corpusid: 218,973,899
citations: []
title: Orthographic Codes and the Neighborhood Effect: Lessons from Information Theory
May 2020
Stéphan Tulkens
CLiPS - Computational Linguistics Group, Department of Linguistics
University of Antwerp
Dominiek Sandra
CLiPS - Computational Linguistics Group, Department of Linguistics
University of Antwerp
Walter Daelemans
CLiPS - Computational Linguistics Group, Department of Linguistics
University of Antwerp
Orthographic Codes and the Neighborhood Effect: Lessons from Information Theory
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
The 12th Conference on Language Resources and Evaluation (LREC 2020), Marseille, May 2020. Keywords: Cognitive Methods, Lexical Database
We consider the orthographic neighborhood effect: the effect that words with more orthographic similarity to other words are read faster. The neighborhood effect serves as an important control variable in psycholinguistic studies of word reading, and explains variance in addition to word length and word frequency. Following previous work, we model the neighborhood effect as the average distance to neighbors in feature space for three feature sets: slots, character ngrams and skipgrams. We optimize each of these feature sets and find evidence for language-independent optima, across five megastudy corpora from five alphabetic languages. Additionally, we show that weighting features using the inverse of mutual information (MI) improves the neighborhood effect significantly for all languages. We analyze the inverse feature weighting, and show that, across languages, grammatical morphemes get the lowest weights. Finally, we perform the same experiments on Korean Hangul, a non-alphabetic writing system, where we find the opposite results: slower responses as a function of denser neighborhoods, and a negative effect of inverse feature weighting. This raises the question of whether this is a cognitive effect, or an effect of the way we represent Hangul orthography, and indicates more research is needed.
Introduction
One of the core issues in contemporary models of word reading is the representation of orthography. Orthography, in this case, can be understood as the visual information of a word as it is represented by the brain. It therefore does not generally refer to the actual visual presentation of a word, such as the font and case, but to a more abstract visual representation (Dehaene et al., 2005; Dehaene, 2009). One of the reasons orthography is such a core issue is that older models, such as the Interactive Activation (McClelland and Rumelhart, 1981) and DISLEX (Miikkulainen, 1993; Miikkulainen, 1997) models, assumed that words were represented as an array of slots, an assumption that has recently been shown to be false. As readers are remarkably flexible at decoding orthographic information from jumbled strings (Schoonbaert and Grainger, 2004; Perea et al., 2008), the positional information in orthographic representations has to be flexible to some degree; if this were not the case, readers would not be able to decode transposition neighbors, e.g., JUGDE - JUDGE, efficiently. These, and other, phenomena have motivated the search for various models or featurizations of orthography that can accurately capture empirical data, which has also been dubbed the search for an orthographic code (Grainger, 2008; Grainger, 2018). Examples of such feature sets, or orthographic codes, are character ngrams, or wickelgraphs (Wickelgren, 1969), fourteen segment coding (Rumelhart and Siple, 1974), slot-based coding (McClelland and Rumelhart, 1981), and various open bigram schemes (Whitney, 2001; Schoonbaert and Grainger, 2004; Whitney and Cornelissen, 2008). Another issue related to the representation of orthography is the discovery that orthographic similarity plays an important role in how quickly words are identified; words that look more like other words are, generally, identified more quickly (Coltheart, 1977; Andrews, 1989; Grainger, 1990; Yarkoni et al., 2008; Perea, 2015). Orthographic codes, or feature sets, thus have an important role; they constrain the inferences models can make about the similarity of words, while also defining the orthographic similarity between words. While many feature schemes have been contrasted (Davis and Bowers, 2006; Grainger, 2008; Davis, 2010; Kinoshita and Norris, 2013), there is little consensus about how words are represented, or which orthographic code is to be preferred over the others. In this paper, we contrast various features on their ability to explain variance in reaction times on lexical decision tasks, and show that optimizing these feature sets leads to gains in explained variance.
The neighborhood effect
We use the neighborhood effect as a task on which to test the fit of orthographic codes. As described above, the neighborhood effect is the effect that words which have more orthographic similarity to other words are read faster (Andrews, 1989; Perea, 2015). It is often thought to involve effects related to co-activation between word representations, a theoretical position mainly inspired by the Interactive Activation model (McClelland and Rumelhart, 1981). That is, the neighborhood effect elicited by a word such as PLEAD is a function of the orthographic similarity of PLEAD to other words in the lexicon. Because the similarity structure of a space depends on the features used, changing feature sets or feature weighting will then also impact the neighborhood effect. For example, if a feature set explicitly models letter order, PLEAD and LEAD have zero similarity, because they do not share any letters in any position. Similarly, if a feature set allows for transposition, words such as COLD and CLOD will have higher similarity than in feature sets that do not model transposition. Hence, the assumption that the neighborhood effect is highly dependent on orthographic similarity gives us a way of objectively evaluating different feature sets. Feature sets whose neighborhoods explain more variance are, in our reasoning, more plausible.
Main Contributions
In this paper, we analyze three different feature sets and their various parameterizations, and show that the way these feature sets are currently used is suboptimal, for two reasons: first, the parameterizations of the feature sets used in other papers are not ideal. Second, the number of nearest neighbors considered in calculating the neighborhood effect is not ideal. Furthermore, we show that the optimal parameters for these metrics differ from the parameters used in psycholinguistic research on word reading. We directly compare our results to previous work, i.e., Tulkens et al. (2018a), and show that the conclusions drawn from this work are incomplete because the research was carried out with a suboptimal number of nearest neighbors and parameters. We thus conclude that many different orthographic codes are equally feasible as far as the neighborhood effect is concerned, but only when properly optimized, which was not the case before. In a second experiment, we show that the inverse of mutual information weighting improves the explained variance of the neighborhood effect for all feature sets, and that weighing with regular mutual information almost removes the neighborhood effect. This implies that features that frequently occur with a smaller set of words are less important for calculating the neighborhood effect. We show that these features generally correspond to bound morphemes, such as plural suffixes. Additionally, we present the first results on modeling the orthographic code of Korean, a non-alphabetic language.
Materials and Methods
In this section, we introduce the metric we use to measure the similarity of words in feature space, the features, and the corpora used in this study.
Metrics for Measuring Neighborhoods
The standard metric for measuring neighborhoods is OLD20 (Yarkoni et al., 2008), which is defined as the mean Levenshtein distance (Levenshtein, 1966) to the 20 closest neighbors. As OLD20 operates on string representations, alternative feature representations cannot be easily adapted to, or incorporated in, OLD20. Feature sets, on the other hand, are more flexible, and can represent, for example, discontinuous regularities in words, transpositions, alignment, or lack thereof (Whitney, 2001). To bridge the neighborhood effect and string metrics, Tulkens et al. (2018a) introduced rd, a metric that calculates the neighborhood density for arbitrary feature spaces. Mathematically, rd is similar to OLD20 (Yarkoni et al., 2008), as it is the sum of the cosine distances to the k closest featurized neighbors. 1 Following Tulkens et al. (2018a), rd is described as follows:
rd(x, X, k) = \sum_{i=1}^{k} \cos(x, X)_i \qquad (1)
where cos is the cosine distance, x is a featurized item, and X is the set of all featurized items, which may or may not include x. We assume that the output of cos is sorted, so that the function returns the sum over the k closest items.
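To make the definition concrete, the following is a minimal sketch of rd using numpy and scipy; the toy feature matrix and variable names are illustrative assumptions and not the Wordkit implementation used in the paper.

```python
# A minimal sketch of the rd metric (Eq. 1): the sum of cosine distances
# from a featurized word to its k closest featurized neighbors.
import numpy as np
from scipy.spatial.distance import cdist


def rd(x, X, k):
    """Sum of cosine distances from vector x to its k nearest rows of X."""
    # cdist returns a (1, |X|) matrix of cosine distances.
    distances = cdist(x[None, :], X, metric="cosine")[0]
    # Sort ascending and keep the k smallest distances.
    return np.sort(distances)[:k].sum()


# Toy example with random binary feature vectors.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 50)).astype(float)
print(rd(X[0], X[1:], k=20))
```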
In their original experiments, Tulkens et al. (2018a) set k to 20, to explicitly compare to OLD20. In this paper, we relax this requirement, and investigate whether different values of k work better for different neighborhoods. Because Tulkens et al. (2018a) showed that rd outperformed OLD20, and because OLD20 can not be feature weighted in Experiment 2, we leave it out of the current discussion. 2
Featurizations and Their Parameterizations
We use several existing feature sets, all of which have been implemented in Wordkit (Tulkens et al., 2018b). An overlooked aspect of these feature sets in psycholinguistic research is that almost all of them are amenable to parameterization. As an example, consider the well-known open bigram encoding (Schoonbaert and Grainger, 2004;Grainger and Van Heuven, 2004;Whitney, 2001). There is no reason, be it computational, cognitive, empirical, or otherwise, to restrict ourselves to using bigrams instead of, say, trigrams. We show that, for most implemented feature sets, there exist multiple parameter settings which can be explored.
Slot-based Encoding
A slot-based encoding consists of orthogonal vectors, aligned in slots. Such an encoding is identical to the letter layer used in the original Interactive Activation model (McClelland and Rumelhart, 1981). This encoding assumes that all characters are completely orthogonal, in the sense that any character is equally different from any other character, regardless of its visual characteristics. Research has shown that transposition (Perea and Lupker, 2004; Perea et al., 2008) and deletion (Van Assche and Grainger, 2006) neighbors are more easily decoded than substitution neighbors. As slot-based codes assign equal similarity to transposition and substitution neighbors, e.g., sim(PAWN, PWAN) = sim(PAWN, PXYN), slot-based codes are thought to be insufficient. Nevertheless, Tulkens et al. (2018a) showed that this encoding can still account for the most explained variance when its neighborhood is entered into a linear regression.
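A minimal sketch of such a slot-based code is given below; the alphabet, padding symbol, and maximum number of slots are illustrative assumptions rather than Wordkit's exact implementation.

```python
# Minimal sketch of a slot-based encoding: one orthogonal (one-hot) vector
# per letter position, concatenated over a fixed number of slots.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
MAX_LEN = 10  # number of slots; shorter words are padded with spaces


def slot_encode(word):
    vec = np.zeros((MAX_LEN, len(ALPHABET)))
    for slot, char in enumerate(word.ljust(MAX_LEN)[:MAX_LEN]):
        vec[slot, ALPHABET.index(char)] = 1.0
    return vec.ravel()


# PAWN/PWAN and PAWN/PXYN overlap in the same number of slots, illustrating
# why slot codes treat transpositions like substitutions.
print(slot_encode("pawn") @ slot_encode("pwan"))  # 2 shared letter slots (p, n) + padding slots
print(slot_encode("pawn") @ slot_encode("pxyn"))  # also 2 shared letter slots + padding slots
```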
ngrams
We also consider character ngrams, also known as wickelgraphs (Wickelgren, 1969), as a feature set. The wickelgraph encoding decomposes a word into a set of character ngrams, where n is a free parameter. ngrams do not encode order, which causes words that are embedded in other words, e.g., PAN and SPAN, to still share some similarity. All words are padded before calculating the ngrams, because otherwise words shorter than n cannot be accurately featurized. In all our experiments, we use n ∈ {2, 3, 4}. Note that we explicitly model ngrams as multisets, and the generated vectors thus contain the count of each feature. This is necessary, because otherwise ngrams would not be able to represent the difference between, for example, BANANA and BANANANANA, as these contain the same ngram types, but in different frequencies.
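A short sketch of this multiset ngram featurization follows; the "#" padding symbol is an illustrative assumption.

```python
# Sketch of a character-ngram (wickelgraph) featurization as a multiset:
# the representation holds the *count* of each ngram, and words are padded
# so that words shorter than n can still be featurized.
from collections import Counter


def char_ngrams(word, n):
    padded = "#" * (n - 1) + word + "#" * (n - 1)
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))


print(char_ngrams("banana", 3))
# BANANA and BANANANANA share the same ngram *types* but not the same counts,
# which is why counts (a multiset) rather than binary indicators are used.
print(char_ngrams("banana", 3) == char_ngrams("banananana", 3))  # False
```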
Constrained Open ngrams
The constrained open ngram encoding (Schoonbaert and Grainger, 2004) is a refinement of the open ngram encoding (Whitney, 2001; Grainger and Van Heuven, 2004). An open ngram encoding is defined as the set of n-combinations of the letters of a word, where the letters within a combination are ordered by their occurrence in the word. That is, the word SWAN generates the following open 2-gram (or bigram) features: {SW, SA, SN, WA, WN, AN}. One issue with the original open ngram encoding is that it does not put a constraint on the generated combinations, which is problematic both theoretically and practically. As an example, the word QUARANTINE, in the unconstrained setting, generates 45 bigrams, many of which are separated by more than 4 intervening letters. Because of these bigrams with wide gaps, performance with unconstrained open ngrams tends to suffer. 3 To remedy this, constrained open ngrams were introduced; these only allow for the construction of an ngram within some pre-specified window, which we call w. We use n ∈ {2, 3} and w ∈ {2, 3, 4, 5}, with the added constraint that w may not be smaller than n. When w = (n−1), constrained open ngrams reduce to regular ngrams. Constrained open ngram encoding leads to more parsimonious representations, especially for larger values of n, as the number of possible ngrams decreases because of the window constraints. Like the ngrams above, we also represent these as multisets, and add padding.
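The sketch below generates (constrained) open ngrams under one plausible reading of the window constraint, namely that the first and last letter of a combination may be at most w positions apart; this formalization is an assumption of the sketch, chosen so that w = n − 1 reduces to contiguous ngrams, as described above.

```python
# Sketch of (constrained) open ngrams: all ordered n-combinations of a word's
# letters, optionally restricted so that the outermost letters of a
# combination are at most w positions apart.
from itertools import combinations


def open_ngrams(word, n, w=None):
    grams = []
    for positions in combinations(range(len(word)), n):
        if w is None or positions[-1] - positions[0] <= w:
            grams.append("".join(word[i] for i in positions))
    return grams


print(open_ngrams("swan", 2))                    # ['sw', 'sa', 'sn', 'wa', 'wn', 'an']
print(len(open_ngrams("quarantine", 2)))         # 45 unconstrained bigrams
print(len(open_ngrams("quarantine", 2, w=3)))    # far fewer with a window constraint
```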
Corpora
Following previous research into the neighborhood effect, e.g., Yarkoni et al. (2008), we measure the neighborhood effect using Reaction Times (RT) on a lexical decision task, while controlling for log frequency and length, two other control variables. We use various corpora from five different languages: British English, US English, Dutch, French, and Spanish. In all cases, we start from a megastudy of lexical decision RT measurements, to which we then add frequency measurements from subtitle corpora, if necessary. For all corpora, we apply the following preprocessing steps: first, we remove all words which are not lower-cased, and words without RT or frequency measurements. We remove the non-lower-cased words because not all databases have mixed-case words, and because we have no easy way of determining what to do with uppercase letters with regard to the neighborhood effect. Second, we remove all words which contain non-alphabetic characters, such as the genitive marker, space, or dash. The American English corpus was constructed using the English Lexicon Project (ELP) (Balota et al., 2007) and the SUBTLEX-US database (Brysbaert and New, 2009). The British English corpus was constructed using the British Lexicon Project (BLP) (Keuleers et al., 2012) and the SUBTLEX-UK database (Van Heuven et al., 2014). The French corpus was constructed from the Lexique database (New et al., 2001; New et al., 2007) and the French Lexicon Project (Ferrand et al., 2010). The Dutch corpus was constructed using the Dutch Lexicon Project (DLP) (Keuleers et al., 2010b) and SUBTLEX-NL (Keuleers et al., 2010a). Finally, the Spanish corpus was constructed from the SPALex database (Aguasvivas et al., 2018). Because SPALex already includes frequency counts, we did not need an auxiliary corpus for frequency counts. Note that, while SPALex involves lexical decision, the task used in the construction of that corpus did not explicitly ask participants to respond as quickly as possible. Nevertheless, the results can be reinterpreted as being largely equivalent to those of lexical decision (Aguasvivas et al., 2018). For the French and Spanish corpora, we had to decide whether to keep or remove words that contained letters with diacritic markers, such as TRÈS, as the status of these markers with regard to their orthographic decomposition is unclear. As the removal of diacritic markers led to the loss of a lot of words, and created the conundrum of us having to decide what to do with duplicate forms, we chose to keep them.
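A small sketch of these preprocessing filters is shown below; the pandas column names and toy rows are illustrative assumptions, not the actual corpus files.

```python
# Sketch of the preprocessing applied to each corpus: keep only lower-cased,
# purely alphabetic words that have both an RT and a frequency measurement.
import pandas as pd

df = pd.DataFrame({
    "word": ["plead", "Plead", "o'clock", "swan", "très"],
    "rt": [612.0, 640.0, 580.0, None, 630.0],
    "frequency": [120.0, 120.0, 800.0, 50.0, 9000.0],
})

mask = (
    df["word"].str.islower()
    & df["word"].str.isalpha()          # drops apostrophes, spaces, dashes
    & df["rt"].notna()
    & df["frequency"].notna()
)
print(df[mask])  # 'plead' and 'très' survive; diacritics are kept
```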
Experiment 1: Optimization
In the first experiment, we consider all possible featurizations and their parameterizations. We then use cross-validation to estimate the optimal parameters for the various feature sets we have. Similarly, we also use the same search procedure to estimate the optimal number of nearest neighbors for each feature set. The search procedure is performed on a training set, which consists of 90% of our data. The other 10% is held out as a test set, and is only used to test our final models, and not in any of the optimization procedures. Both the cross-validation and initial train-test split were stratified by length and binned log frequency.
Model Selection
As noted above, rd has a parameter, k, that determines the number of nearest neighbors to take into consideration. In previous work, e.g., Yarkoni et al. (2008) and Tulkens et al. (2018a) and the papers that deploy these metrics as control variables, k has been set to 20. As we do not know what the effect of k is for our different feature sets, we use cross-validation and held-out test data to search for the optimal value of k for each feature set on each dataset. For all ngram feature sets, we also use the same cross-validation loop to find the best feature set parameters, e.g., the optimal values of w and n. We use a 90%-10% train-test split, stratified by log frequency and length. Then, for each model, we perform 10-fold cross-validation, again stratified by log frequency and length, on the training set to estimate the best k and other parameters. Within each fold, we fit the following regression model:
rt \approx \log_{10}(\text{frequency}) + \text{length} + rd + \epsilon
where rd is the representation distance from each word to its k closest neighbors, as detailed above. Using the fitted model, we calculate the R² on the held-out data of that fold. We compare all our models to a baseline model:
rt \approx \log_{10}(\text{frequency}) + \text{length} + \epsilon

That is, the model above, but without the neighborhood effect added in.
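As an illustration, the sketch below fits the two regression models and compares their explained variance with scikit-learn; the toy data, column names, and evaluation on training data are illustrative assumptions, not the paper's cross-validation pipeline.

```python
# Sketch of the per-fold regression: RT modeled from log10 frequency, length,
# and rd, compared against a baseline model without rd.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical per-word measurements.
rt = np.array([612.0, 540.0, 700.5, 655.0, 580.0])
frequency = np.array([120.0, 4300.0, 15.0, 60.0, 900.0])
length = np.array([5, 4, 8, 7, 5])
rd_scores = np.array([3.1, 2.2, 6.4, 5.0, 2.9])

X_base = np.column_stack([np.log10(frequency), length])
X_full = np.column_stack([np.log10(frequency), length, rd_scores])

base = LinearRegression().fit(X_base, rt)
full = LinearRegression().fit(X_full, rt)

# In the actual procedure the R^2 of the two models would be compared on the
# held-out data of each fold; here we score the training data for brevity.
print(r2_score(rt, base.predict(X_base)), r2_score(rt, full.predict(X_full)))
```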
Results
For each feature set, we selected the version with the highest mean test R² over all folds, and ran this model on the test data. As noted above, we jointly select k and the optimal parameters for both feature sets. The selected parameters are listed in Table 2, while the optimal values of k are listed in Table 3. The distribution over adjusted R² scores for all featurizers, and all values of k over all languages, is shown in Figure 1. Both outcomes provide a stark contrast with the way neighborhood metrics have been deployed so far. First, as mentioned above, all metrics so far tended to use a k of 20; as the values in Table 3 indicate, this is far from optimal, as all the values are lower than 10, and sometimes as low as 2, which indicates that 20, as an arbitrary number, was far too high. Second, the optimal parameters for the ngrams and open ngrams are also different from those typically deployed in the literature, and are consistent across languages. That these parameters converge across languages provides strong evidence in favor of the proposed optimizations: feature sets and models should be optimized and cross-validated on a diverse set of data to obtain support for specific parameters. The R² scores of the regular and optimized models are shown in Table 4, which shows that the predictive power of all models increases with optimization, although much more so for the ngrams and open ngrams than for the slot-based features. To confirm this pattern, we bootstrapped the differences between R² scores over 10,000 samples (Efron and Tibshirani, 1994), which allows us to compare distributions over the differences using parametric statistics, such as a t-test. Paired t-tests revealed that differences were significant at p < .0001, indicating that all optimized models significantly outperformed the baseline models, even when correcting using Bonferroni correction (Bonferroni, 1936). This shows that the feature sets that have been in use so far were probably suboptimal.
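The sketch below illustrates one way such a bootstrap comparison might be run; the per-fold R² values and the choice of resampling unit are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of bootstrapping differences between R^2 scores of two models:
# resample the paired differences with replacement, then inspect the
# bootstrap distribution and run a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical paired R^2 scores for an optimized and a regular model.
r2_optimized = np.array([.402, .398, .410, .395, .405, .400, .399, .403, .401, .397])
r2_regular = np.array([.391, .388, .395, .385, .393, .390, .387, .392, .389, .386])

diffs = r2_optimized - r2_regular
boot = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                 for _ in range(10_000)])

print(boot.mean(), np.percentile(boot, [2.5, 97.5]))   # bootstrap mean and 95% CI
print(stats.ttest_rel(r2_optimized, r2_regular))       # paired t-test
```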
Experiment 2: Entropy
In a second experiment, we investigate the impact of weighting individual orthographic features. While weighted open bigrams (Whitney and Cornelissen, 2008) introduce a weighting scheme based on the number of intervening letters, there has been very little work on applying feature weighting to the neighborhood effect. Feature weighting has a long history, and has been shown to be effective when employed with kNN models in a Memory-Based Language Processing framework (Daelemans and Van den Bosch, 2005). In the TIMBL toolkit (Daelemans et al., 1998), for example, the IGtree algorithm (Daelemans et al., 1997) uses the information gain (IG) feature weighting metric to construct a tree, which can then be used to more efficiently select the neighbors relevant for classification. Another example of such a feature weighting technique is MVDM (Stanfill and Waltz, 1986), which weights distances based on the similarity between feature values. As the neighborhood models we use are de facto kNN models, we investigate whether feature weighting is an effective way of increasing the performance of our models.
Entropy weighting
We use entropy (H) as a feature weighting scheme. For a discrete probability distribution, entropy (H) is given by the following equation:
H(X) = \sum_{i=1}^{|X|} -P(X_i) \log_2 P(X_i) \qquad (2)
Calculating the entropy separately for each column in a D-dimensional feature matrix leads to a vector of weights, W ∈ ℝ^D, which we use to weight the features. As we only use binary feature indicators, each feature can either occur or not occur; low entropy values are thus associated with a feature occurring with no words, or with all words. The highest entropy value occurs only when a feature occurs in exactly half of all types in our corpus, i.e., when the distribution is uniform. We use entropy over the more common Mutual Information (MI), which also takes class distributions into account, because in our case the class distribution is uniform. During experimentation, we found that using entropy as a feature weighting measure completely removed any neighborhood effect, and reduced any linear regression models to baseline level. Hence, we hypothesized that one of the drivers of the neighborhood effect is the complement of entropy. Two candidates for such a complement are extropy (Lad et al., 2015) and negentropy (J) (Schrödinger, 1944; Brillouin, 1953). As extropy and entropy are identical for binary values (Lad et al., 2015), we experiment with negentropy. In the continuous case, negentropy is defined as the difference between the entropy of a normal distribution with the same mean and variance and the entropy of the distribution itself (Brillouin, 1953). Note that, for continuous distributions, the normal distribution leads to a maximized entropy. As such, negentropy is always non-negative, and 0 if and only if the distribution is normal. Hence, in accordance with the definition for continuous distributions, for a D-dimensional discrete distribution, we define negentropy as the difference between the entropy of the D-dimensional uniform distribution, which is the situation in which entropy is maximized, and the entropy of the distribution. As such, negentropy over a discrete distribution is always non-negative, and 0 if and only if the distribution is uniform, leading to the same constraints as continuous negentropy. If U is the uniform distribution for a given dimensionality, then negentropy is defined as follows:
J(X) = H(U) - H(X) \qquad (3)
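As an illustration of how these per-feature weights might be computed, the sketch below derives H and J over the columns of a binary word-by-feature matrix; the binarization and the random toy matrix are assumptions of the sketch.

```python
# Sketch of per-feature entropy H and negentropy J = H(U) - H(X) over the
# columns of a binary word-by-feature matrix, used as feature weights.
import numpy as np


def column_entropy(F):
    """Entropy of each binary feature column (two outcomes: present/absent)."""
    p = F.mean(axis=0)                      # P(feature present)
    probs = np.stack([p, 1.0 - p])
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(probs > 0, -probs * np.log2(probs), 0.0)
    return terms.sum(axis=0)


def column_negentropy(F):
    h_uniform = np.log2(2.0)                # entropy of the uniform binary distribution
    return h_uniform - column_entropy(F)


F = (np.random.default_rng(1).random((1000, 20)) < 0.1).astype(float)
weights = column_negentropy(F)
F_weighted = F * weights                    # features occurring in ~half of the words get weight ~0
print(weights.min(), weights.max())
```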
We use the same experimental setting as before. Instead of separately optimizing the entropy-weighted feature sets, we simply apply the entropy functions to the best-performing models in cross-validation, and contrast these to the models on the test set. We also apply regular entropy, and investigate whether the scores of neighborhoods are equal to the baseline.
Results
The results are listed in Figure 2, which shows the distribution of adjusted R² bootstrapped over 10,000 samples. As the Figure shows, the models weighted with H are near baseline performance, while the models weighted with J almost all outperform their non-weighted counterparts. We confirmed this by calculating the 95% CI of differences between the weighted and non-weighted variants of the feature sets. This again led to significant differences between all optimized feature sets and their feature-weighted counterparts; for all feature sets except the slot-based feature set, negentropy weighting outperformed entropy weighting. For slot-based feature sets, only the US English slot-based feature set outperformed the optimized counterpart. In terms of absolute performance, the negentropy-weighted feature set showed the highest performance in all cases, thus questioning the earlier conclusion of Tulkens et al. (2018a) that one-hot encoded representations are the best performers. As Table 5 shows, negentropy, when applied to character trigrams, assigns lower weights to common word endings and bound morphemes, i.e., the '-s' and '-ed' suffixes marking the simple and perfect past, and '-ing' marking the present continuous for English, and the '-nt' and '-ent' suffixes, marking the third person plural in French. We observe similar results for the other languages, where, for example, the morphemes '-en' and '-ado' are weighted down heavily for Dutch and Spanish, respectively. The only models for which the weighting consistently does not improve performance are the slot-based models. An explanation for this lies in the independence of the one-hot encoded representations: using ngram features, it is very clear whether a given feature is a suffix or not, because it occurs at the end of a word, no matter the length of that word. Slot-based features, on the other hand, do not have information about the length of a word, and assign the same representation to the letter S in, e.g., WALKS and the letter S in EURASIA. Note that the lower weights for bound morphemes can only be related to their relative frequency of occurrence, not their status as morphemes, as feature weighting has no information about morphological segmentation. The frequency of occurrence of these morphemes thus has to be the driver of the weighting. What is peculiar, however, is that the downweighting of morphemes leads to better performance across all languages. Whether this is an effect of letter entropy or frequency, or actually related to morphological processing, is an open question.
Experiment 3: Korean
We now apply the strategy and methodology of the previous two experiments to Korean. As argued by Frost (2012), the results of experiments such as the ones we performed above carry with them a bias towards alphabetic writing systems, in which it is easy to confuse and transpose letters. In Arabic and Hebrew, for example, which have more rigid position coding, transposition effects and substitution effects are processed differently, and do not lead to the strong priming effects seen in alphabetic languages (Velan and Frost, 2007; Velan and Frost, 2009; Perea et al., 2010; Velan et al., 2013). The Korean alphabet, also called Hangul, is interesting in this regard, as it has alternatively been characterized as featural (Sampson, 1985), syllabic, and alphabetic (Pae, 2011). Like an alphabetic writing system, words are made up out of letters, and are separated by spaces. The letters, however, correspond to syllables instead of phonemes, as in a syllabic writing system. These letters are composed out of sub-letters, called jamo, which correspond to individual phonemes. Jamo can, again, be decomposed into visual features that carry articulatory information about the phonemes, which is a characteristic of a featural writing system. The word 한글 (Hangul), for example, consists of two syllable blocks, 한 (han) and 글 (gul). The first block consists of three jamo: ㅎ (h), ㅏ (a), and ㄴ (n). As such, Hangul is a highly decomposable orthography, although there is little consensus about whether this decomposability has an effect on how readers of Korean behave. Rastle et al. (2019) found that readers of Hangul show no transposition effects in a masked priming task, showing that Hangul is likely not processed like a purely alphabetic language. This still leaves the question of whether Korean shows a neighborhood effect, and, if we find such an effect, whether it behaves like an alphabetic language. We attempt to clarify this issue from a computational point of view, by applying the methods from the previous experiments to a large corpus of Korean lexical decision RT judgments on Hangul words (Yi et al., 2017).
Data and Preprocessing
We used the data of the Korean Lexicon Project (Yi et al., 2017), which consists of lexical decision judgments on 30,930 Korean words and non-words. As mentioned above, these words are written in syllable characters, or syllable blocks, which are built up out of a smaller set of phoneme letters, called jamo. Because there are many possible syllable blocks, e.g., the dataset we use contains 1391 unique blocks, we choose to decompose the syllable blocks into their constituent jamo, using the jamo Python package, 4 which led to a much smaller set of 66 jamo. For each word, we then simply concatenate the sets of jamo. As each syllable block can contain either two or three jamo 5 , we chose to pad all blocks containing two jamo with a space character, because otherwise longer words would no longer be aligned. We also experimented with using the bare syllable blocks, and decomposing them into jamo without padding, both of which gave worse results.
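The paper performs this decomposition with the jamo Python package; purely as an illustration, the sketch below does the same decomposition using the standard Unicode arithmetic for precomposed Hangul syllables, padding blocks without a final consonant with a space so that blocks stay aligned. The helper names and padding choice are assumptions of the sketch.

```python
# Sketch of decomposing Hangul syllable blocks into jamo using the standard
# Unicode arithmetic for precomposed syllables (U+AC00..U+D7A3); the paper
# itself uses the jamo Python package (see footnote 4).
LEADS = [chr(0x1100 + i) for i in range(19)]
VOWELS = [chr(0x1161 + i) for i in range(21)]
TAILS = [""] + [chr(0x11A8 + i) for i in range(27)]


def decompose(word, pad=" "):
    jamo = []
    for block in word:
        index = ord(block) - 0xAC00
        lead, rem = divmod(index, 21 * 28)
        vowel, tail = divmod(rem, 28)
        # Blocks with no final consonant receive a padding character.
        jamo.extend([LEADS[lead], VOWELS[vowel], TAILS[tail] or pad])
    return "".join(jamo)


print(list(decompose("한글")))  # two blocks -> six aligned jamo positions
```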
Experiments
For the sake of convenience, we report on Experiments 1 and 2 simultaneously. As before, we optimized the parameters and the optimal k in a first step, the results of which are shown in Table 6. The results of this analysis differ radically from the results on the alphabetic languages. First, for the alphabetic languages, a low number of neighbors worked well. Korean, instead, seems to favor a very high number of neighbors for the ngram-based models, and a very low number of neighbors for the slot-based models. Note that the k value reported for the ngram and open ngram models has an artificial plateau; we tested with values between 1 and 500 nearest neighbors, and found that the score of the ngram and open ngram models kept increasing, but very gradually so.

Figure 3: The adjusted R² over various values of k. As the figure shows, the entropy weighting did not have any effect. Note that the values of k for the Korean dataset are much higher.

The scores of the models on the test set are shown in Table 7. This shows another difference between datasets: the relative contribution of the baseline models seems to be relatively low, i.e., while logged frequency and length explain between .37 and .56 of the variance in RT for the other languages, they only explain .197 of the variance in this case. The contribution of the non-optimized models is much smaller for Korean than for the other languages, implying that the neighborhood effect, while present, is expressed differently for Korean. Figure 3 shows the distribution over k for all Korean models. This shows the gradual upwards trend over increasing k, and also shows that feature weighting, both using H and J, has a negative effect for Korean. To confirm the trends, we again calculated 10,000 resampled bootstrap samples, which were compared in a pairwise fashion. First, this showed significant differences between all baselines and optimized models. Second, we also compared all optimized models to all weighted models, which showed that all weighted models were significantly worse than the optimized models. Weighting thus had the opposite effect for the Korean models, indicating another difference between Korean and the other languages. Figure 4 shows the distribution of the 10,000 bootstrapped resamples, showing that, in contrast to the other languages, the slot-based codes do not outperform other codes.
Conclusion
In conclusion, Table 8 shows the final regression coefficients for the best models. As shown, ngram models obtain the highest scores across all languages, indicating that this was an overlooked option in previous research. We demonstrate that it is possible to discover the neighborhood effect using a variety of feature sets, and that the feature sets which were in use before were probably not the best feature sets for discovering this effect. Although the differences between the highest scores are relatively small, especially for Spanish and US English, we do obtain significantly better, and consistent, results across languages. This indicates that conceptualizing the orthographic neighborhood effect in terms of feature sets is a fruitful research direction; perhaps other feature sets or other weighting methods are yet to be discovered. This also shows yet another contrast between Korean and the other languages: while the neighborhood effect is a positive effect in the alphabetic languages, it is negative in Korean; words with more neighbors are read more slowly. This pattern has previously been observed in Chinese (Li et al., 2011; Wang et al., 2014; Perea, 2015; Chang et al., 2016), where the orthographic neighborhood effect, both when it is based on characters and when it is based on strokes, has been observed to be negative. This shows that the neighborhood effect, although perhaps based on general cognitive principles such as co-activation of orthographically similar representations, cannot be considered to be a completely general effect. One possible confound, as noted by Perea (2015), is that the calculation of the neighborhood effect is highly dependent on the way words are conceptualized. For example, the concept of a word is less clear, and the task of word segmentation more difficult, for languages with syllabic or logographic writing systems, such as Japanese and Chinese. While the issue of word boundaries does not present itself in Hangul, the measurement of the neighborhood effect is highly dependent on whether it is analyzed as a syllabary or as an alphabetic writing system. In future work, we would like to investigate this in more detail; one interesting direction would be to train representations of orthography learned directly from the visual modality, similar to the way the Triangle model (Harm and Seidenberg, 2004) was trained by Chang et al. (2019). From a computational point of view, it would be interesting to discover a less expensive way to calculate the neighborhood effect. Currently, the neighborhood effect relies on calculating the cosine distance between all words in the lexicon, which takes a lot of time and scales quadratically with the number of words in the lexicon. In terms of general Natural Language Processing work, the discovery of new features for Hangul could perhaps be of use in training machine translation systems, as has been done for Chinese-Japanese-English machine translation (Zhang and Komachi, 2018).
Acknowledgments
We would like to thank Martina Pastore for preliminary work on Korean. The first author is supported by a PhD scholarship from the FWO Research Foundation -Flanders.
Figure 1: The adjusted R² scores of the optimized models on the full dataset over different values of k. The featurizations ending with "ent" are the feature-weighted counterparts of the regular feature sets. The black line indicates a simple regression model with only length and log frequency as predictors.

Figure 2: The distribution over adjusted R² on all data for all feature sets in their regular, optimized, weighted, and inversely weighted variants. The plots show an effect of weighting, especially for the ngram models. For all feature sets in all languages, the weighted variant is low, and near baseline level.

Figure 4: The adjusted R², bootstrapped over 10,000 samples.
Table 1: The number of words left in each of the corpora after preprocessing.

            FR       SP       NL       UK       US
  # words   38,335   44,853   24,530   28,480   30,639
Table 2: The optimal parameters for the two feature sets that had parameters to optimize. Note the lack of variation across parameterizations.

                   FR   SP   NL   UK   US
  ngram        n    4    4    4    4    4
  Open ngram   n    3    3    3    3    3
               w    3    3    3    4    3
Table 3: The optimal values of k for the optimized models.

                FR   SP   NL   UK   US
  Slots          6    8    4    2    6
  ngram          4    6    5    5    7
  Open ngram     3    6    5    2    6
Table 4: The test R² on the base and optimized models. The R² scores in italics denote the best unoptimized (regular) models, while the bold-faced scores denote the best models for each language.

        Slots          ngrams         Open ngrams
        Reg    Opt     Reg    Opt     Reg    Opt
  FR    .391   .402    .371   .390    .355   .386
  SP    .600   .603    .590   .595    .572   .588
  NL    .321   .324    .307   .322    .283   .310
  UK    .356   .367    .357   .360    .350   .366
  US    .506   .506    .501   .509    .481   .501
Table 5: The top 10 trigrams with the lowest negentropy for US English and French, respectively. Notice how most of these consist of padding ngrams, corresponding to common bound morphemes and word endings, such as 'ing' in English and 'en' in French.
Table 6: The optimal values for k and the various parameters in Korean.

                 k     n    w
  Slots          1     -    -
  ngram          500   3    -
  Open ngram     500   3    3

Table 7: Test scores (R²) of the baseline model and of the regular and optimized models for Korean.

        baseline   Slots          ngrams         Open ngrams
        -          Reg    Opt     Reg    Opt     Reg    Opt
        .197       .194   .228    .197   .240    .198   .238
Table 8: The final regression models. The top rows indicate the parameters of these models, the presence of weighting. The second part of the table shows their coefficients, while the bottom rows show their explained variance and change in explained variance.

               Fra       Spa        Nld      Eng-UK    Eng-US    Kor
  Features     ngram     ngram      ngram    ngram     ngram     ngram
  Weighting    J         J          J        J         J         -
  intercept    739.98    1060.055   585.06   650.67    762.18    632.56
  freq         -39.36    -102.13    -29.89   -44.68    -47.32    -39.15
  length       37.38     36.07      6.72     13.72     56.26     -21.47
  rd           27.49     33.38      16.90    14.96     28.85     -19.18
  R²           .399      .605       .341     .363      .519      .223
  ΔR²          .015      .005       .022     .014      .007      .018
  ΔR² base     .062      .041       .056     .025      .046      .040
1. Although OLD20 uses the mean distance, instead of the sum, this does not have any bearing on the fit to the data, as the denominator in the mean is k for all items. The sum has the further advantage of not arbitrarily leaving out the item itself, which is required in the definition of OLD20.
2. We did carry out experiments, and separately optimized k, using OLD, but this did not show anything interesting; rd using one-hot encoded strings still outperforms OLD.
3. This is also what we observed in our experiments, where regular open ngrams performed far worse than their constrained alternatives.
4. https://github.com/JDongian/python-jamo
5. In non-computational terms, a syllable block can contain more than three jamo. In the Unicode standard, however, some jamo characters are represented as complex characters, which we also adopted.
References

Aguasvivas, J., Carreiras, M., Brysbaert, M., Mandera, P., Keuleers, E., and Duñabeitia, J. A. (2018). SPALex: A Spanish lexical decision database from a massive online data collection. Frontiers in Psychology, 9:2156.

Andrews, S. (1989). Frequency and neighborhood effects on lexical access: Activation or search? Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(5):802.

Balota, D. A., Yap, M. J., Hutchison, K. A., Cortese, M. J., Kessler, B., Loftis, B., Neely, J. H., Nelson, D. L., Simpson, G. B., and Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39(3):445-459.

Bonferroni, C. (1936). Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commericiali di Firenze, 8:3-62.

Brillouin, L. (1953). The negentropy principle of information. Journal of Applied Physics, 24(9):1152-1163.

Brysbaert, M. and New, B. (2009). Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods, 41(4):977-990.

Chang, Y.-N., Welbourne, S., and Lee, C.-Y. (2016). Exploring orthographic neighborhood size effects in a computational model of Chinese character naming. Cognitive Psychology, 91:1-23.

Chang, Y.-N., Furber, S., Ralph, M. L., and Welbourne, S. (2019). A computational model of normal and impaired lexical decision: Graded semantic effects. BioRxiv, page 708156.

Coltheart, M. (1977). Access to the internal lexicon. The Psychology of Reading.

Daelemans, W. and Van den Bosch, A. (2005). Memory-Based Language Processing. Cambridge University Press.

Daelemans, W., Van den Bosch, A., and Weijters, T. (1997). IGTree: Using trees for compression and classification in lazy learning algorithms. In Lazy Learning, pages 407-423. Springer.

Daelemans, W., Zavrel, J., Van der Sloot, K., and Van den Bosch, A. (1998). TiMBL: Tilburg Memory-Based Learner, version 1.0, Reference Guide.

Davis, C. J. and Bowers, J. S. (2006). Contrasting five different theories of letter position coding: Evidence from orthographic similarity effects. Journal of Experimental Psychology: Human Perception and Performance, 32(3):535.

Davis, C. J. (2010). The spatial coding model of visual word identification. Psychological Review, 117(3):713.

Dehaene, S., Cohen, L., Sigman, M., and Vinckier, F. (2005). The neural code for written words: a proposal. Trends in Cognitive Sciences, 9(7):335-341.

Dehaene, S. (2009). Reading in the Brain: The New Science of How We Read. Penguin.

Efron, B. and Tibshirani, R. J. (1994). An Introduction to the Bootstrap. CRC Press.

Ferrand, L., New, B., Brysbaert, M., Keuleers, E., Bonin, P., Méot, A., Augustinova, M., and Pallier, C. (2010). The French Lexicon Project: Lexical decision data for 38,840 French words and 38,840 pseudowords. Behavior Research Methods, 42(2):488-496.

Frost, R. (2012). A universal approach to modeling visual word recognition and reading: Not only possible, but also inevitable. Behavioral and Brain Sciences, 35(5):310-329.

Grainger, J. and Van Heuven, W. J. (2004). Modeling letter position coding in printed word perception. In Patrick Bonin, editor, Mental Lexicon: "Some Words to Talk about Words", pages 1-23. Nova Science Publishers.

Grainger, J. (1990). Word frequency and neighborhood frequency effects in lexical decision and naming. Journal of Memory and Language, 29(2):228-244.

Grainger, J. (2008). Cracking the orthographic code: An introduction. Language and Cognitive Processes, 23(1):1-35.

Grainger, J. (2018). Orthographic processing: A "mid-level" vision of reading. The Quarterly Journal of Experimental Psychology, 71(2):335-359.

Harm, M. W. and Seidenberg, M. S. (2004). Computing the meanings of words in reading: Cooperative division of labor between visual and phonological processes. Psychological Review, 111(3):662.

Keuleers, E., Brysbaert, M., and New, B. (2010a). SUBTLEX-NL: A new measure for Dutch word frequency based on film subtitles. Behavior Research Methods, 42(3):643-650.

Keuleers, E., Diependaele, K., and Brysbaert, M. (2010b). Practice effects in large-scale visual word recognition studies: A lexical decision study on 14,000 Dutch mono- and disyllabic words and nonwords. Frontiers in Psychology, 1:174.

Keuleers, E., Lacey, P., Rastle, K., and Brysbaert, M. (2012). The British Lexicon Project: Lexical decision data for 28,730 monosyllabic and disyllabic English words. Behavior Research Methods, 44(1):287-304.

Kinoshita, S. and Norris, D. (2013). Letter order is not coded by open bigrams. Journal of Memory and Language, 69(2):135-150.

Lad, F., Sanfilippo, G., and Agro, G. (2015). Extropy: Complementary dual of entropy. Statistical Science, 30(1):40-58.

Levenshtein, V. I. (1966). Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, volume 10, pages 707-710.

Li, Q.-L., Bi, H.-Y., Wei, T.-Q., and Chen, B.-G. (2011). Orthographic neighborhood size effect in Chinese character naming: Orthographic and phonological activations. Acta Psychologica, 136(1):35-41.

McClelland, J. L. and Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88(5):375.

Miikkulainen, R. (1993). Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory. MIT Press.

Miikkulainen, R. (1997). Dyslexic and category-specific aphasic impairments in a self-organizing feature map model of the lexicon. Brain and Language, 59(2):334-366.

New, B., Pallier, C., Ferrand, L., and Matos, R. (2001). Une base de données lexicales du français contemporain sur internet: Lexique / A lexical database for contemporary French: Lexique. L'année psychologique, 101(3):447-462.

New, B., Brysbaert, M., Veronis, J., and Pallier, C. (2007). The use of film subtitles to estimate word frequencies. Applied Psycholinguistics, 28(4):661-677.

Pae, H. K. (2011). Is Korean a syllabic alphabet or an alphabetic syllabary? Writing Systems Research, 3(2):103-115.

Perea, M. and Lupker, S. J. (2004). Can CANISO activate CASINO? Transposed-letter similarity effects with nonadjacent letter positions. Journal of Memory and Language, 51(2):231-246.

Perea, M., Duñabeitia, J. A., and Carreiras, M. (2008). Transposed-letter priming effects for close versus distant transpositions. Experimental Psychology, 55(6):384-393.

Perea, M., Abu Mallouh, R., and Carreiras, M. (2010). The search for an input-coding scheme: Transposed-letter priming in Arabic. Psychonomic Bulletin & Review, 17(3):375-380.

Perea, M. (2015). Neighborhood effects in visual word recognition and reading. The Oxford Handbook of Reading, page 76.

Rastle, K., Lally, C., and Lee, C. H. (2019). No flexibility in letter position coding in Korean. Journal of Experimental Psychology: Human Perception and Performance, 45(4):458.

Rumelhart, D. E. and Siple, P. (1974). Process of recognizing tachistoscopically presented words. Psychological Review, 81(2):99.

Sampson, G. (1985). Writing Systems. London, U.K.: Hutchinson.

Schoonbaert, S. and Grainger, J. (2004). Letter position coding in printed word perception: Effects of repeated and transposed letters. Language and Cognitive Processes, 19(3):333-367.

Schrödinger, E. (1944). What is Life? The Physical Aspect of the Living Cell. Cambridge University Press.

Stanfill, C. and Waltz, D. L. (1986). Toward memory-based reasoning. Communications of the ACM, 29(12):1213-1228.

Tulkens, S., Sandra, D., and Daelemans, W. (2018a). From strings to other things: Linking the neighborhood and transposition effects in word reading. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 75-85.

Tulkens, S., Sandra, D., and Daelemans, W. (2018b). Wordkit: A Python package for orthographic and phonological featurization. In Nicoletta Calzolari, et al., editors, Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France, May. European Language Resources Association (ELRA).

Van Assche, E. and Grainger, J. (2006). A study of relative-position priming with superset primes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(2):399.

Van Heuven, W. J., Mandera, P., Keuleers, E., and Brysbaert, M. (2014). SUBTLEX-UK: A new and improved word frequency database for British English. Quarterly Journal of Experimental Psychology, 67(6):1176-1190.

Velan, H. and Frost, R. (2007). Cambridge University versus Hebrew University: The impact of letter transposition on reading English and Hebrew. Psychonomic Bulletin & Review, 14(5):913-918.

Velan, H. and Frost, R. (2009). Transposition effects are not universal: The impact of transposing letters in Hebrew. Journal of Memory and Language, 61(3):285-302.

Velan, H., Deutsch, A., and Frost, R. (2013). The flexibility of letter-position flexibility: Evidence from eye movements in reading Hebrew. Journal of Experimental Psychology: Human Perception and Performance, 39(4):1143.

Wang, J., Tian, J., Han, W., Liversedge, S. P., and Paterson, K. B. (2014). Inhibitory stroke neighbour priming in character recognition and reading in Chinese. The Quarterly Journal of Experimental Psychology, 67(11):2149-2171.

Whitney, C. and Cornelissen, P. (2008). SERIOL reading. Language and Cognitive Processes, 23(1):143-164.

Whitney, C. (2001). How the brain encodes the order of letters in a printed word: The SERIOL model and selective literature review. Psychonomic Bulletin & Review, 8(2):221-243.

Wickelgren, W. A. (1969). Context-sensitive coding, associative memory, and serial order in (speech) behavior. Psychological Review, 76(1):1.

Yarkoni, T., Balota, D., and Yap, M. (2008). Moving beyond Coltheart's N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15(5):971-979.

Yi, K., Koo, M., Nam, K., Park, K., Park, T., Bae, S., Lee, C. H., Lee, H.-W., and Cho, J.-R. (2017). The Korean Lexicon Project: A lexical decision study on 30,930 Korean words and nonwords. The Korean Journal of Cognitive and Biological Psychology, pages 395-410.

Zhang, L. and Komachi, M. (2018). Neural machine translation of logographic languages using sub-character level information. arXiv preprint arXiv:1809.02694.
||
5,562,790 | DATE: A Dialogue Act Tagging Scheme for Evaluation of Spoken Dialogue Systems | This paper describes a dialogue act tagging scheme developed for the purpose of providing finer-grained quantitative dialogue metrics for comparing and evaluating DARPA COMMUNICATOR spoken dialogue systems. We show that these dialogue act metrics can be used to quantify the amount of effort spent in a dialogue maintaining the channel of communication or, establishing the frame for communication, as opposed to actually carrying out the travel planning task that the system is designed to support. We show that the use of these metrics results in a 7% improvement in the fit in models of user satisfaction. We suggest that dialogue act metrics can ultimately support more focused qualitative analysis of the role of various dialogue strategy parameters, e.g. initiative, across dialogue systems, thus clarifying what development paths might be feasible for enhancing user satisfaction in future versions of these systems. | [
9889475,
3258280,
29816847,
3069430,
2716607
] | DATE: A Dialogue Act Tagging Scheme for Evaluation of Spoken Dialogue Systems
Marilyn Walker walker@research.att.com
AT&T Shannon Labs
180 Park Ave., Florham Park, N.J. 07932
Rebecca Passonneau
AT&T Shannon Labs
180 Park Ave., Florham Park, N.J. 07932
DATE: A Dialogue Act Tagging Scheme for Evaluation of Spoken Dialogue Systems
This paper describes a dialogue act tagging scheme developed for the purpose of providing finer-grained quantitative dialogue metrics for comparing and evaluating DARPA COMMUNICATOR spoken dialogue systems. We show that these dialogue act metrics can be used to quantify the amount of effort spent in a dialogue maintaining the channel of communication or, establishing the frame for communication, as opposed to actually carrying out the travel planning task that the system is designed to support. We show that the use of these metrics results in a 7% improvement in the fit in models of user satisfaction. We suggest that dialogue act metrics can ultimately support more focused qualitative analysis of the role of various dialogue strategy parameters, e.g. initiative, across dialogue systems, thus clarifying what development paths might be feasible for enhancing user satisfaction in future versions of these systems.
INTRODUCTION
Recent research on dialogue is based on the assumption that dialogue acts provide a useful way of characterizing dialogue behaviors in human-human dialogue, and potentially in human-computer dialogue as well [16,27,11,7,1]. Several research efforts have explored the use of dialogue act tagging schemes for tasks such as improving recognition performance [27], identifying important parts of a dialogue [12], and as a constraint on nominal expression generation [17]. This paper reports on the development and use of a dialogue act tagging scheme for a rather different task: the evaluation and comparison of spoken dialogue systems in the travel domain. We call this scheme DATE: Dialogue Act Tagging for Evaluation.
Our research on the use of dialogue act tagging for evaluation focuses on the corpus of DARPA COMMUNICATOR dialogues collected in the June 2000 data collection [28]. This corpus consists of 662 dialogues from 72 users calling the nine different COMMUNICATOR travel planning systems. Each system implemented a logfile standard for logging system behaviors and calculating a set of core metrics. Each system utterance and each recognizer result was logged, and user utterances were transcribed and incorporated into the logfiles. The logfile standard supported the calculation of metrics that were hypothesized to potentially affect the user's perception of the system; these included task duration, per turn measures, response latency measures and ASR performance measures. Each dialogue was also hand labelled for task completion.
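The logfile standard itself is not reproduced here, but the kind of per-dialogue core metrics it supports can be illustrated with a small sketch. The record format, the field names, and the metric set below are assumptions for illustration, not the actual COMMUNICATOR logging specification.

```python
# Hedged sketch of per-dialogue core metrics (task duration, turn counts, mean system
# turn length). The turn record format here is an assumption, not the real log format.
def core_metrics(turns):
    """turns: list of dicts like {"speaker": "SYS"|"USER", "text": str, "start": float, "end": float}."""
    sys_turns = [t for t in turns if t["speaker"] == "SYS"]
    user_turns = [t for t in turns if t["speaker"] == "USER"]
    duration = turns[-1]["end"] - turns[0]["start"] if turns else 0.0
    mean_sys_words = (sum(len(t["text"].split()) for t in sys_turns) / len(sys_turns)
                      if sys_turns else 0.0)
    return {"task_duration_s": duration,
            "system_turns": len(sys_turns),
            "user_turns": len(user_turns),
            "mean_system_turn_words": mean_sys_words}

example = [{"speaker": "SYS", "text": "What city are you flying to?", "start": 0.0, "end": 2.1},
           {"speaker": "USER", "text": "Boston", "start": 2.5, "end": 3.0}]
print(core_metrics(example))
```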
The hypothesis underlying our approach is that a system's dialogue behaviors have a strong effect on the user's perception of the system. Yet the core metrics that were collected via the logfile standard represent very little about dialogue behaviors. For example, the logging counts system turns and tallies their average length, but doesn't distinguish turns that reprompt the user, or give instructions, from those that present flight information. Furthermore, each COMMUNICATOR system had a unique dialogue strategy and a unique way of achieving particular communicative goals. Thus, in order to explore our hypothesis about the differential effect of these strategies, we needed a way to characterize system dialogue behaviors that would capture such differences yet be applied uniformly to all nine systems. While some sites logged system dialogue behaviors using site-specific dialogue act naming schemes, there existed no scheme that could be applied across sites.
Our goal was thus to develop a dialogue act tagging scheme that would capture important distinctions in this set of dialogues; these distinctions must be useful for testing particular hypotheses about differences among dialogue systems. We also believed that it was important for our tagging scheme to allow for multiple views of each dialogue act. This would allow us, for example, to investigate what part of the task an utterance contributes to separately from what speech act function it serves. A central claim of the paper is that these goals require a tagging scheme that makes distinctions within three orthogonal dimensions of utterance classification: (1) a SPEECH-ACT dimension; (2) a TASK-SUBTASK dimension; and (3) a CONVERSATIONAL-DOMAIN dimension. Figure 1 shows a COMMUNICATOR dialogue with each system utterance classified on these three dimensions. The labels on each utterance are fully described in the remainder of the paper. Sections 2, 3, and 4 describe the three dimensions of DATE. In these sections, we describe two aspects of our annotation scheme that are not captured in existing tagging schemes, which we believe are important for characterizing how much effort in a dialogue is devoted to the task versus different kinds of dialogue maintenance. Section 5 describes how the dialogue act labels are assigned to system utterances and section 6 discusses results showing that the DATE dialogue act metrics improve models of user satisfaction by an absolute 7% (an increase from 38% to 45%). The dialogue act metrics that are important predictors of user satisfaction are various kinds of meta-dialogue, apologies and acts that may be landmarks for achieving particular dialogue subtasks. In section 7 we summarize the paper, discuss our claim that a dialogue annotation scheme is a partial model of a natural class of dialogues, and discuss the ways in which the DATE scheme may be generalizable to other dialogue corpora.
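As a concrete illustration of these three orthogonal dimensions, the sketch below shows one way a per-utterance DATE label could be represented in code. The class and enumeration names are ours, not part of the original annotation tools; the speech-act and domain inventories follow the dimensions described in the remainder of the paper.

```python
# Minimal sketch (not the authors' implementation) of a per-utterance DATE label:
# every system utterance gets one value on each of the three orthogonal dimensions.
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    ABOUT_TASK = "about-task"
    ABOUT_COMMUNICATION = "about-communication"
    ABOUT_SITUATION_FRAME = "about-situation-frame"

class SpeechAct(Enum):
    REQUEST_INFO = "request-info"
    PRESENT_INFO = "present-info"
    OFFER = "offer"
    ACKNOWLEDGMENT = "acknowledgment"
    STATUS_REPORT = "status-report"
    EXPLICIT_CONFIRM = "explicit-confirm"
    IMPLICIT_CONFIRM = "implicit-confirm"
    INSTRUCTION = "instruction"
    APOLOGY = "apology"
    OPENING_CLOSING = "opening/closing"

@dataclass
class DateLabel:
    speech_act: SpeechAct
    task: str          # e.g. "destination"; "nil" for utterances not tied to a subtask
    domain: Domain

label = DateLabel(SpeechAct.REQUEST_INFO, "destination", Domain.ABOUT_TASK)
print(label)
```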
CONVERSATIONAL DOMAINS
The CONVERSATIONAL-DOMAIN dimension characterizes each utterance as primarily belonging to one of three arenas of conversational action. The first arena is the domain task, which in this case is air travel booking, and which we refer to below as ABOUT-TASK. The second domain of conversational action is the management of the communication channel, which we refer to as ABOUT-COMMUNICATION. This distinction has been widely adopted [19,2,9]. In addition, we identify a third domain of talk that we refer to as ABOUT-SITUATION-FRAME. This domain is particularly relevant for distinguishing human-computer from human-human dialogues, and for distinguishing dialogue strategies across the 9 COMMUNICATOR systems. Each domain is described in this section.
About-Task
The ABOUT-TASK domain reflects the fact that many utterances in a task-oriented dialogue originate because the goal of the dialogue is to complete a particular task to the satisfaction of both participants. Typically an about-task utterance directly asks for or presents task-related information, or offers a solution to a task goal.
As Figure 1 shows, most utterances are in the ABOUT-TASK dimension, reflecting the fact that the primary goal of the dialogue is to collaborate on the task of making travel arrangements. The task column of Figure 1 specifies the subtask that each task-related utterance contributes to. DATE includes a large inventory of subtasks in the task/subtask dimension in order to make fine-grained distinctions regarding the dialogue effort devoted to the task or its subcomponents. Section 4 will describe the task model in more detail.
About-Communication
The ABOUT-COMMUNICATION domain reflects the system goal of managing the verbal channel and providing evidence of what has been understood [29,8,25]. Although utterances of this type occur in human-human dialogue, they are more frequent in human-computer dialogue, where they are motivated by the need to avoid potentially costly errors arising from imperfect speech recognition. In the COMMUNICATOR corpus, many systems use a conservative strategy of providing feedback indicating the system's understanding of the information provided by the user after each user turn. A typical example is the repetition of the origin and destination cities in Figures 1 and 6. This type of repetition is the IMPLICIT-CONFIRMATION speech-act (see Section 3 below). Some systems used a variable confirmation strategy where some information items may be confirmed as they are understood, but the system requests explicit confirmation of all task parameters before searching the database for matching flights. An example is in Figure 2.
Here the system asks for explicit confirmation in SYS3 before going to the database. This is the first opportunity that the user has for making a correction, which he does in USER3. The system then again asks for explicit confirmation of its new understanding, which the user provides in USER4. After the user informs the system that it is a one-way flight in USER6, the system accesses the database. These explicit confirmations have the goal of avoiding a costly database lookup, where the retrieval is conditioned on the wrong parameters.
All implicit and explicit confirmation speech-acts are categorized as ABOUT-COMMUNICATION because they are motivated by the potential errors that the system might make in understanding the caller, or in diagnosing the causes of misunderstandings. In general, any utterance that reflects the system's understanding of something the user said is classified as ABOUT-COMMUNICATION. A second set of ABOUT-COMMUNICATION utterances are APOLOGIES that the system makes for misunderstandings (see Section 3 below), i.e. utterances such as I'm sorry. I'm having trouble understanding you., or My mistake again. I didn't catch that., or I can see you are having some problems. The last category of ABOUT-COMMUNICATION utterances are the OPENINGS/CLOSINGS by which the system greets or says goodbye to the caller. (Again, see Section 3 below.)
About-Situation-Frame
The SITUATION-FRAME domain pertains to the goal of managing the culturally relevant framing expectations. The term is inspired by Goffman's work on the organization and maintenance of social interaction [13,14]. An obvious example of a framing assumption is that the language of the interaction will be English [13,14]. Another is that there is an asymmetry between the knowledge and/or agency of the system (or human travel agent) and that of the user (or caller): the user cannot issue an airline ticket.
In developing the DATE tagging scheme, we compared human-human travel planning dialogues collected by CMU with the human-machine dialogues of the June 2000 data collection and noticed a striking difference in the ABOUT-FRAME dimension. Namely, very few ABOUT-FRAME utterances occur in the human-human dialogues, whereas they occur frequently enough in human-computer dialogues that to ignore them is to risk obscuring significant differences in habitability of different systems. In other words, certain differences in dialogue strategies across sites could not be fully represented without such a distinction. Figure 3 provides examples motivating this dimension.
Dialogue acts that are ABOUT-FRAME are cross-classified as one of three types of speech-acts, PRESENT-INFO, INSTRUCTION or APOLOGY. They are not classified as having a value on the TASK-SUBTASK dimension. Most of the ABOUT-FRAME dialogue acts fall into the speech-act category of INSTRUCTIONS, utterances directed at shaping the user's behavior and expectations about how to interact with a machine. Sites differ regarding how much instruction is provided up-front versus within the dialogue; most sites have different utterance strategies for dialogue-initial versus dialogue-medial instructions.
Figure 3: Example About-Frame Utterances
One site gives minimal up-front framing information; further, the same utterances that can occur up-front also occur dialogue-medially. A second site gives no up-front framing information, but it does provide framing information dialogue-medially. Yet a third site gives framing information dialogue-initially, but not dialogue-medially. The remaining sites provide different kinds of general instructions dialogue-initially, e.g. (Welcome. ...You may say repeat, help me out, start over, or, that's wrong, you can also correct and interrupt the system at any time.) versus dialogue-medially: (Try changing your departure dates or times or a nearby city with a larger airport.) This category also includes statements to the user about the system's capabilities. These occur in response to a specific question or task that the system cannot handle: I cannot handle rental cars or hotels yet. Please restrict your requests to air travel. See Figure 3.
Another type of ABOUT-FRAME utterance is the system's attempt to disambiguate the user's utterance; in response to the user specifying Springfield as a flight destination, the system indicates that this city name is ambiguous (I know of three Springfields, in Missouri, Illinois and Ohio. Which one do you want?). The system's utterance communicates to the user that Springfield is ambiguous, and goes further than a human would to clarify that there are only three known options. It is important for evaluation purposes to distinguish the question and the user's response from a simple question-answer sequence establishing a destination. A direct question, such as What city are you flying to?, functions as a REQUEST-INFO speech act and solicits information about the task. The context here contrasts with a direct question in that the system has already asked for and understood a response from the caller about the destination city. Here, the function of the system turn is to remediate the caller's assumptions about the frame by indicating the system's confusion about the destination. Note that the question within this pattern could easily be reformulated as a more typical instruction statement, such as Please specify which Springfield you mean, or Please say Missouri, Illinois or Ohio..
THE SPEECH-ACT DIMENSION
The SPEECH-ACT dimension characterizes the utterance's communicative goal, and is motivated by the need to distinguish the communicative goal of an utterance from its form. As an example, consider the functional category of a REQUEST for information, found in many tagging schemes that annotate speech-acts [24,18,6]. Keeping the functional category of a REQUEST separate from the sentence modality distinction between question and statement makes it possible to capture the functional similarity between question and statement forms of requests, e.g., Can you tell me what time you would like to arrive? versus Please tell me what time you would like to arrive.
In DATE, the speech-act dimension has ten categories. We use familiar speech-act labels, such as OFFER, REQUEST-INFO, PRESENT-INFO, ACKNOWLEDGMENT, and introduce new ones designed to help us capture generalizations about communicative behavior in this domain, on this task, given the range of system and human behavior we see in the data. One new one, for example, is STATUS-REPORT, whose speech-act function and operational definition are discussed below. Examples of each speech-act type are in Figure 4.
Figure 4: Example Speech Acts
REQUEST-INFO: And, what city are you flying to?
PRESENT-INFO: The airfare for this trip is 390 dollars.
OFFER: Would you like me to hold this option?
ACKNOWLEDGMENT: I will book this leg.
STATUS-REPORT: Accessing the database; this might take a few seconds.
EXPLICIT-CONFIRM: You will depart on September 1st. Is that correct?
IMPLICIT-CONFIRM: Leaving from Dallas.
INSTRUCTION: Try saying a short sentence.
APOLOGY: Sorry, I didn't understand that.
OPENINGS/CLOSINGS: Hello. Welcome to the C M U Communicator.
In this domain, the REQUEST-INFO speech-acts are designed to solicit information about the trip the caller wants to book, such as the destination city (And what city are you flying to?), the desired dates and times of travel (What date would you like to travel on), or information about ground arrangements, such as hotel or car rental (Will you need a hotel in Chicago?).
The PRESENT-INFO speech-acts also often pertain directly to the domain task of making travel arrangements: the system presents the user with a choice of itinerary (There are several flights from Dallas Fort Worth to Salisbury Maryland which depart between eight in the morning and noon on October fifth. You can fly on American departing at eight in the morning or ten thirty two in the morning, or on US Air departing at ten thirty five in the morning.), as well as a ticket price (Ticket price is 495 dollars), or hotel or car options.
OFFERS involve requests by the caller for a system action, such as to pick a flight (I need you to tell me whether you would like to take this particular flight) or to confirm a booking (If this itinerary meets your needs, please press one; otherwise, press zero.) They typically occur after the prerequisite travel information has been obtained, and choices have been retrieved from the database.
The ACKNOWLEDGMENT speech act characterizes system utterances that follow a caller's acceptance of an OFFER, e.g. I will book this leg or I am making the reservation.
The STATUS-REPORT speech-act is used to inform the user about the status of the part of the domain task pertaining to the database retrieval, and can include apologies, mollification, requests to be patient, and so on. Their function is to let the user know what is happening with the database lookup, whether there are problems with it, and what types of problems. While the form of these acts is typically a statement, their communicative function is different from typical presentations of information; they typically function to keep the user apprised of progress on aspects of the task that the user has no direct information about, e.g. Accessing the database; this might take a few seconds. There is also a politeness function to utterances like Sorry this is taking so long, please hold., and they often provide the user with error diagnostics: The date you specified is too far in advance.; or Please be aware that the return date must be later than the departure date.; or No records satisfy your request.; or There don't seem to be any flights from Boston.
The speech-act inventory also includes two types of speech acts whose function is to confirm information that has already been provided by the caller. In order to identify and confirm the parameters of the trip, systems may ask the caller direct questions, as in SYS3 and SYS4 in Figure 2. These EXPLICIT-CONFIRM speech acts are sometimes triggered by the system's belief that a misunderstanding may have occurred. A typical example is Are you traveling to Dallas?. An alternative form of the same EXPLICIT-CONFIRM speech-act type asserts the information the system has understood and asks for confirmation in an immediately following question: I have you arriving in Dallas. Is that correct? In both cases, the caller is intended to provide a response.
A less intrusive form of confirmation, which we tag as IMPLICIT-CONFIRM, typically presents the user with the system's understanding of one travel parameter immediately before asking about the next parameter. Depending on the site, implicit information can either precede the new request for information, as in Flying to Tokyo. What day are you leaving?, or can occur within the same utterance, as in What day do you want to leave London? More rarely, an implicit confirmation is followed by PRESENT-INFO: a flight on Monday September 25. Delta has a flight departing Atlanta at nine thirty. One question about the use of implicit confirmation strategy is whether the caller realizes they can correct the system when necessary [10]. Although IMPLICIT-CONFIRMS typically occur as part of a successful sequence of extracting trip information from the caller, they can also occur in situations where the system is having trouble understanding the caller. In this case, the system may attempt to instruct the user on what it is doing to remediate the problem in between an IMPLICIT-CONFIRM and a REQUEST-INFO: So far, I have you going from Tokyo. I am trying to assemble enough information to pick a flight. Right now I need you to tell me your destination.
We have observed that INSTRUCTIONS are a speech-act type that distinguishes these human-computer travel planning dialogues from corresponding human-human travel planning dialogues. Instructions sometimes take the form of a statement or an imperative, and are characterized by their functional goal of clarifying the system's own actions, correcting the user's expectations, or changing the user's future manner of interacting with the system. Dialogue systems are less able to diagnose a communication problem than human travel agents, and callers are less familiar with the capabilities of such systems. As noted above, some systems resort to explicit instructions about what the system is doing or is able to do, or about what the user should try in order to assist the system: Try asking for flights between two major cities; or You can cancel the San Antonio, Texas, to Tampa, Florida flight request or change it. To change it, you can simply give new information such as a new departure time. Note that INSTRUCTIONS, unlike the preceding dialogue act types, do not directly involve a domain task.
Like the INSTRUCTION speech-acts, APOLOGIES do not address a domain task. They typically occur when the system encounters problems, for example, in understanding the caller (I'm sorry, I'm having trouble understanding you), in accessing the database (Something is wrong with the flight retrieval), or with the connection (Sorry, we seem to have a bad connection. Can you please call me back later?). The OPENING/CLOSING speech act category characterizes utterances that open and close the dialogue, such as greetings or goodbyes [26]. Most of the dialogue systems open the interactions with some sort of greeting-Hello, welcome to our Communicator flight travel system, and end with a sign-off or salutation-Thank you very much for calling. This session is now over. We distinguish these utterances from other dialogue acts, but we do not tag openings separate from closings because they have a similar function, and can be distinguished by their position in the discourse. We also include in this category utterances in which the systems survey the caller as to whether s/he got the information s/he needed or was happy with the system.
THE TASK-SUBTASK DIMENSION
The TASK-SUBTASK dimension refers to a task model of the domain task that the system is designed to support and captures distinctions among dialogue acts that reflect the task structure. 1 Our domain is air travel reservations, thus the main communicative task is to specify information pertaining to an air travel reservation, such as the destination city. Once a flight has been booked, ancillary tasks such as arranging for lodging or a rental car become relevant. The fundamental motivation for the TASK-SUBTASK dimension in the DATE scheme is to derive metrics related to subtasks in order to quantify how much effort a system expends on particular subtasks. 2 This dimension distinguishes among 13 subtasks, some of which can also be grouped at a level below the top level task. The subtasks and examples are in Figure 5. The TOP-LEVEL-TRIP task describes the task which contains as its subtasks the ORIGIN, DESTINATION, DATE, TIME, AIRLINE, TRIP-TYPE, RETRIEVAL and ITINERARY tasks. The GROUND task includes both the HOTEL and CAR subtasks.
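The two-level grouping of subtasks can be written down directly; the sketch below simply encodes the grouping described in this section and is illustrative only (the exact internal representation used by the dialogue parser is not specified here).

```python
# Illustrative encoding of the two-level task model: TOP-LEVEL-TRIP and GROUND group
# the remaining subtasks. The dictionary layout is an assumption for illustration.
TASK_MODEL = {
    "top-level-trip": ["origin", "destination", "date", "time", "airline",
                       "trip-type", "retrieval", "itinerary"],
    "ground": ["hotel", "car"],
}

def parent_task(subtask):
    """Return the parent task of a subtask, or the subtask itself if it is top-level."""
    for parent, children in TASK_MODEL.items():
        if subtask in children:
            return parent
    return subtask

print(parent_task("hotel"))   # -> ground
print(parent_task("date"))    # -> top-level-trip
```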
Typically each COMMUNICATOR dialogue system acts as though it utilizes a task model, in that it has a particular sequence in which it will ask for task information if the user doesn't take the initiative to volunteer this information. For example, most systems ask first for the origin and destination cities, then for the date and time. Some systems ask about airline preference and others leave it to the caller to volunteer this information. A typical sequence of tasks for the flight planning portion of the dialogue is illustrated in Figure 6.
As Figure 6 illustrates, any subtask can involve multiple speech acts. For example, the DATE subtask can consist of acts requesting, or implicitly or explicitly confirming the date. A similar example is provided by the subtasks of CAR (rental) and HOTEL, which include dialogue acts requesting, confirming or acknowledging arrangements to rent a car or book a hotel room on the same trip.
(Footnote 1) This dimension is used as an elaboration of each speech-act type in other tagging schemes [24]. (Footnote 2) It is tempting to also consider this dimension as a means of inferring discourse structure on the basis of utterance level labels, since it is widely believed that models of task structure drive the behavior of dialogue systems [23,3,22], and the relationship between discourse structure and task structure has been a core topic of research since Grosz's thesis [15]. However, we leave the inference of discourse structure as a topic for future work because the multifunctionality of many utterances suggests that the correspondence between task structure and dialogue structure may not be as straightforward as has been proposed in Grosz's work [30].
Figure 6: Dialogue Illustrating a Typical Task Sequence
There are also differences in how each site's dialogue strategy reflects its conceptualization of the travel planning task. For example, some systems ask the user explicitly for their airline preferences whereas others do not (the systems illustrated in Figures 1 and 6 do not, whereas the one in Figure 2 does). Another difference is whether the system asks the user explicitly whether s/he wants a round-trip ticket. Some systems ask this information early on, and search for both the outbound and the return flights at the same time. Other systems do not separately model round-trip and multi-leg trips. Instead they ask the user for information leg by leg, and after requesting the user to select an itinerary for one leg of the flight, they ask whether the user has an additional destination.
A final difference was that, in the June 2000 data collection, some systems such as the one illustrated in Figure 1 included the ground arrangements subtasks, and others did not.
IMPLEMENTATION
Our focus in this work is in labelling the system side of the dialogue; our goal was to develop a fully automatic 100% correct dialogue parser for the limited range of utterances produced by the 9 COMMUNICATOR systems. While we believe that it would be useful to be able to assign dialogue acts to both sides of the conversation, we expect that to require hand-labelling [1]. We also believe that in many cases the system behaviors are highly correlated with the user behaviors of interest; for example when a user has to repeat himself because of a misunderstanding, the system has probably prompted the user multiple times for the same item of information and has probably apologized for doing so. Thus this aspect of the dialogue would also be likely to be captured by the APOLOGY dialogue act and by counts of effort expended on the particular subtask.
We implemented a pattern matcher that labels the system side of each dialogue. An utterance or utterance sequence is identified automatically from a database of patterns that correspond to the dialogue act classification we arrived at in cooperation with the site developers. Where it simplifies the structure of the dialogue parser, we assign two adjacent utterances that are directed at the same goal the same DATE label, thus ignoring the utterance level segmentation, but we count the number of characters used in each act. Since some utterances are generated via recursive or iterative routines, some patterns involve wildcards.
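A minimal sketch of this style of pattern-based labelling is shown below. The regular expressions and the labels attached to them are invented for illustration; the actual system used a much larger database of site-specific patterns, including wildcards for recursively generated utterances.

```python
# Toy pattern-based labeller: return the first (speech-act, task, domain) label whose
# pattern matches a system utterance. Patterns and labels are illustrative only.
import re

PATTERNS = [
    (re.compile(r"^what city are you (flying|leaving)", re.I),
     ("request-info", "destination", "about-task")),
    (re.compile(r"^accessing the database", re.I),
     ("status-report", "retrieval", "about-task")),
    (re.compile(r"^sorry", re.I),
     ("apology", "nil", "about-communication")),
    (re.compile(r"^(hello|welcome|thank you .* for calling)", re.I),
     ("opening/closing", "nil", "about-communication")),
]

def label_utterance(utterance):
    """Return the label of the first matching pattern, or a fallback label."""
    for pattern, label in PATTERNS:
        if pattern.search(utterance):
            return label
    return ("unknown", "nil", "unknown")

print(label_utterance("What city are you flying to?"))
```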
The current implementation labels the utterances with tags that are independent of any particular markup-language or representation format. We have written a transducer that takes the labelled dialogues and produces HTML output for the purpose of visualizing the distribution of dialogue acts and meta-categories in the dialogues. An additional summarizer program is used to produce a summary of the percentages and counts of each dialogue act as well as counts of meta-level groupings of the acts related to the different dimensions of the tagging scheme. We intend to use our current representation to generate ATLAS compliant representations [4].
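The summarizer can likewise be sketched as a small tallying routine over the labelled system utterances. The function below assumes labels are (speech-act, task, domain) triples and reports counts and percentages per dimension; it is an illustration, not the original program.

```python
# Sketch of a per-dialogue summarizer: tally counts and percentages of each dialogue
# act value along the three DATE dimensions. Input format is an assumption.
from collections import Counter

def summarize(labels):
    """labels: list of (speech_act, task, domain) triples for one dialogue."""
    summary = {}
    for dim, name in [(0, "speech-act"), (1, "task"), (2, "domain")]:
        counts = Counter(label[dim] for label in labels)
        total = sum(counts.values())
        summary[name] = {value: (n, 100.0 * n / total) for value, n in counts.items()}
    return summary

labels = [("request-info", "destination", "about-task"),
          ("implicit-confirm", "destination", "about-communication"),
          ("apology", "nil", "about-communication")]
for dim, table in summarize(labels).items():
    print(dim, table)
```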
RESULTS
Our primary goal was to achieve a better understanding of the qualitative aspects of each system's dialogue behavior. We can quantify the extent to which the dialogue act metrics have the potential to improve our understanding by applying the PARADISE framework to develop a model of user satisfaction and then examining the extent to which the dialogue act metrics improve these models [31]. In other work, we show that given the standard metrics collected for the COMMUNICATOR dialogue systems, the best model accounts for 38% of the variance in user satisfaction [28].
When we retrain these models with the dialogue act metrics extracted by our dialogue parser, we find that many metrics are significant predictors of user satisfaction, and that the model fit increases from 38% to 45%. When we examine which dialogue metrics are significant, we find that they include several types of meta-dialogue such as explicit and implicit confirmations of what the user said, and acknowledgments that the system is going to go ahead and do the action that the user has requested. Significant negative predictors include apologies. One interpretation of many of the significant predictors is that they are landmarks in the dialogue for achievement of particular subtasks. However the predictors based on the core metrics included a ternary task completion metric that captures succinctly whether any task was achieved or not, and whether the exact task that the user was attempting to accomplish was achieved. A plausible explanation for the increase in the model fits is that user satisfaction is sensitive to exactly how far through the task the user got, even when the user did not in fact complete the task. The roles of the other significant dialogue metrics are plausibly interpreted as acts important for error minimization. As with the task-related dialogue metrics, there were already metrics related to ASR performance in the core set of metrics. However, several of the important metrics count explicit confirmations, one of the desired date of travel, and the other of all information before searching the database, as in utterances SYS3 and SYS4 in Figure 2.
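The kind of analysis reported here can be sketched as an ordinary least-squares regression, comparing the variance explained (R^2) with and without the dialogue act metrics. The feature names and the synthetic data below are placeholders, not the COMMUNICATOR corpus or the exact PARADISE procedure.

```python
# Hedged sketch: regress a satisfaction score on per-dialogue metrics and compare fit
# with and without dialogue act counts. Data here is synthetic, for illustration only.
import numpy as np

def r_squared(X, y):
    """Fit ordinary least squares with an intercept and return R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    pred = X1 @ beta
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
core = rng.normal(size=(200, 3))        # e.g. task completion, duration, ASR errors
date_acts = rng.normal(size=(200, 4))   # e.g. counts of apologies, confirmations, ...
y = core @ [1.0, -0.5, -1.0] + date_acts @ [0.3, 0.4, -0.6, 0.2] + rng.normal(size=200)

print("core metrics only:", r_squared(core, y))
print("core + dialogue act metrics:", r_squared(np.hstack([core, date_acts]), y))
```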
DISCUSSION
This paper has presented DATE, a dialogue act tagging scheme developed explicitly for the purpose of comparing and evaluating spoken dialogue systems. We have argued that such a scheme needs to make three important distinctions in system dialogue behaviors and we are investigating the degree to which any given type of dialogue act belongs in a single category or in multiple categories.
We also propose the view that a tagging scheme be viewed as a partial model of a natural class of dialogues. It is a model to the degree that it represents claims about what features of the dialogue are important and are sufficiently well understood to be operationally defined. It is partial in that the distributions of the features and their relationship to one another, i.e., their possible manifestations in dialogues within the class, are an empirical question.
The view that a dialogue tagging scheme is a partial model of a class of dialogues implies that a pre-existing tagging scheme can be re-used on a different research project, or by different researchers, only to the degree that it models the same natural class with respect to similar research questions, is sufficient for expressing observations about what actually occurs within the current dialogues of interest, and is sufficiently well-defined that high reliability within and across research sites can be achieved. Thus, our need to modify existing schemes was motivated precisely to the degree that existing schemes fall short of these requirements. Other researchers who began with the goal of re-utilizing existing tagging schemes have also found it necessary to modify these schemes for their research purposes [11,18,7].
The most substantial difference between our dialogue act tagging scheme and others that have been proposed is in our expansion of the two-way distinction between dialogue tout simple vs. meta-dialogue, into a three-way distinction among the immediate dialogue goals, meta-dialogue utterances, and meta-situation utterances. Depending on further investigation, we might decide these three dimensions have equal status within the overall tagging scheme (or within the overall dialogue-modeling enterprise), or that there are two types of meta-dialogue: utterances devoted to maintaining the channel, versus utterances devoted to establishing/maintaining the frame. Further, in accord with our view that a tagging scheme is a partial model, and that it is therefore necessarily evolving as our understanding of dialogue evolves, we also believe that our formulation of any one dimension, such as the speech-act dimension, will necessarily differ from other schemes that model a speech-act dimension.
Furthermore, because human-computer dialogue is at an early stage of development, any such tagging scheme must be a moving target, i.e., the more progress is made, the more likely it is we may need to modify along the way the exact features used in an annotation scheme to characterize what is going on. In particular, as system capabilities become more advanced in the travel domain, it will probably be necessary to elaborate the task model to capture different aspects of the system's problem solving activities. For example, our task model does not currently distinguish between different aspects of information about an itinerary, e.g. between presentation of price information and presentation of schedule information.
We also expect that some domain-independent modifications are likely to be necessary as dialogue systems become more successful, for example to address the dimension of "face", i.e. the positive politeness that a system shows to the user [5]. As an example, consider the difference between the interpretation of the utterance, There are no flights from Boston to Boston, when produced by a system vs. when produced by a human travel agent. If a human said this, it would be interpretable by the recipient as an insult to their intelligence. However when produced by a system, it functions to identify the source of the misunderstanding. Another distinction that we don't currently make which might be useful is between the initial presentation of an item of information and its re-presentation in a summary. Summaries arguably have a different communicative function [29,7]. Another aspect of function our representation doesn't capture is rhetorical relations between speech acts [20,21].
While we developed DATE to answer particular research questions in the COMMUNICATOR dialogues, there are likely to be aspects of DATE that can be applied elsewhere. The task dimension tagset reflects our model of the domain task. The utility of a task model may be general across domains and for this particular domain, the categories we employ are presumably typical of travel tasks and so, may be relatively portable.
The speech act dimension includes categories typically found in other classifications of speech acts, such as REQUEST-INFO, OFFER, and PRESENT-INFO. We distinguish information presented to the user about the task, PRESENT-INFO, from information provided to change the user's behavior, INSTRUCTION, and from information presented in explanation or apology for an apparent interruption in the dialogue, STATUS-REPORT. The latter has some of the flavor of APOLOGIES, which have an inter-personal function, along with OPENINGS/CLOSINGS. We group GREETINGS and SIGN-OFFS into the single category of OPENINGS/CLOSINGS on the assumption that politeness forms make less contribution to perceived system success than the system's ability to carry out the task, to correct misunderstandings, and to coach the user.
Our third dimension, conversational-domain, adds a new category, ABOUT-SITUATION-FRAME, to the more familiar distinction between utterances directed at a task goal vs. utterances directed at maintaining the communication. This distinction supports the separate classification of utterances directed at managing the user's assumptions about how to interact with the system on the air travel task. As we mention above, the ABOUT-SITUATION-FRAME utterances that we find in the human-computer dialogues typically did not occur in human-human air travel dialogues. In addition, as we note above, one obvious difference in the dialogue strategies implemented at different sites had to do with whether these utterances occurred upfront, within the dialogue, or both.
In order to demonstrate the utility of dialogue act tags as metrics for spoken dialogue systems, we show that the use of these metrics in the application of PARADISE [31] improves our model of user satisfaction by an absolute 7%, from 38% to 45%. This is a large increase, and the fit of these models is very good for models of human behavior. We believe that we have only begun to discover the ways in which the output of the dialogue parser can be used. In future work we will examine whether other representations derived from the metrics we have applied, such as sequences or structural relations between various types of acts, might improve our performance model further. We are also collaborating with other members of the COMMUNICATOR community who are investigating the use of dialogue act and initiative tagging schemes for the purpose of comparing human-human to human-computer dialogues [1].
ACKNOWLEDGMENTS
This work was supported under DARPA GRANT MDA 972 99 3 0003 to AT&T Labs Research. Thanks to Payal Prabhu and Sungbok Lee for their assistance with the implementation of the dialogue parser. We also appreciate the contribution of J. Aberdeen, E. Bratt, S. Narayanan, K. Papineni, B. Pellom, J. Polifroni, A. Potamianos, A. Rudnicky, S. Seneff, and D. Stallard who helped us understand how the DATE classification scheme applied to their COMMUNICATOR systems' dialogues.
Figure 2: Dialogue Illustrating Variable Confirmation Strategy
Figure 3: Example About-Frame Utterances
INSTRUCTION: I heard you ask about fares. I can only price an itinerary. I cannot provide information on published fares for individual flights.
INSTRUCTION: First, always wait to hear the beep before you say anything.
INSTRUCTION: You can always start over again completely just by saying: start over.
INSTRUCTION: Before we begin, let's go over a few simple instructions. Please remember to speak after the tone. If you get confused at any point you can say start over to cancel your current itinerary.
APOLOGY: Sorry, an error has occurred. We'll have to start over.
APOLOGY: I am sorry I got confused. Thanks for your patience. Let us try again.
APOLOGY: Something is wrong with the flight retrieval.
APOLOGY: I have trouble with my script.
Figure 5: Example Utterances for each Subtask
TOP-LEVEL-TRIP: What are your travel plans?
ORIGIN: And, what city are you leaving from?
DESTINATION: And, where are you flying to?
DATE: What day would you like to leave?
TIME: Departing at what time?
AIRLINE: Did you have an airline preference?
TRIP-TYPE: Will you return to Boston from San Jose?
RETRIEVAL: Accessing the database; this might take a few seconds.
ITINERARY: The airfare for this trip is 390 dollars.
GROUND: Did you need to make any ground arrangements?
HOTEL: Would you like a hotel near downtown or near the airport?
CAR: Do you need a car in San Jose?
Figure 6: Dialogue Illustrating a Typical Task Sequence
SYS: Welcome .... What trip would you like to take? (Task: TOP-LEVEL-TRIP)
USER: STARTING FROM ATLANTA GEORGIA TO LONDON ENGLAND
SYS: flying from atlanta to london. (Task: ORIGIN, DESTINATION)
SYS: What day are you leaving Atlanta? (Task: DATE)
USER: MONDAY SEPTEMBER TWENTY FIVE
SYS: on monday, september twenty fifth. (Task: DATE)
SYS: About what time do you want to leave? (Task: TIME)
USER: [UH] DAYTIME
SYS: Leaving in the daytime. (Task: TIME)
SYS: Please stand by while I look up the flight schedules on the web. (Task: RETRIEVAL)
SYS: OK, got them. I have 13 flights. The first flight is on American at six fifty nine eh M, arriving at ten forty five PM, with a connection in Chicago. Is that OK? (Task: ITINERARY)
USER: [UH] YES OKAY
SYS: Do you want a return flight from London to Atlanta? (Task: TRIP-TYPE)
USER: [UH] YES
SYS: Returning from london to atlanta. (Task: TRIP-TYPE)
SYS: What day are you leaving London? (Task: DATE)
USER: [UH] LEAVING [UM] MONDAY OCTOBER THE SECOND
SYS: on monday, october second. (Task: DATE)
SYS: About what time do you want to leave? (Task: TIME)
USER: ANY TIME'S OKAY
SYS: Please stand by while I look up the flight schedules on the web... OK, got them. (Task: RETRIEVAL, ITINERARY)
Figure 1 (fragment; columns: Speaker, Utterance, Speech Act, Task, Conversational Domain):
SYS: Welcome. (opening/closing, nil, task)
SYS: You are logged in as a guest user. (instruct, nil, frame)
SYS: You may say repeat, help me out, start over, or, that's wrong, (instruct, nil, frame)
SYS: you can also correct and interrupt the system at any time.
[1] J. Aberdeen and C. Doran. Human-computer and human-human dialogues. DARPA Communicator Principle Investigators Meeting (Philadelphia, PA USA).
[2] J. Allen and M. Core. Draft of DAMSL: Dialog act markup in several layers. Coding scheme developed by the MultiParty group, 1st Discourse Tagging Workshop, University of Pennsylvania, March 1996, 1997.
[3] J. F. Allen. Recognizing intentions from natural language utterances. In M. Brady and R. Berwick, editors, Computational Models of Discourse. MIT Press, 1983.
[4] S. Bird and M. Liberman. A formal framework for linguistic annotation. Speech Communication, 33(1,2):23-60, 2001.
[5] P. Brown and S. Levinson. Politeness: Some universals in language usage. Cambridge University Press, 1987.
[6] J. C. Carletta, A. Isard, S. Isard, J. C. Kowtko, G. Dowerty-Sneddon, and A. H. Anderson. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23-1:13-33, 1997.
[7] R. Cattoni, M. Danieli, A. Panizza, V. Sandrini, and C. Soria. Building a corpus of annotated dialogues: the ADAM experience. In Proc. of the Conference Corpus-Linguistics-2001, Lancaster, U.K., 2001.
[8] H. H. Clark and E. F. Schaefer. Contributing to discourse. Cognitive Science, 13:259-294, 1989.
[9] S. L. Condon and C. G. Cech. Functional comparison of face-to-face and computer-mediated decision-making interactions. In S. Herring, editor, Computer-Mediated Conversation. John Benjamins, 1995.
[10] M. Danieli and E. Gerbino. Metrics for evaluating dialogue strategies in a spoken language system. In Proceedings of the 1995 AAAI Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, pages 34-39, 1995.
[11] B. Di Eugenio, P. W. Jordan, J. D. Moore, and R. H. Thomason. An empirical investigation of collaborative dialogues. In ACL-COLING98, Proceedings of the Thirty-sixth Conference of the Association for Computational Linguistics, 1998.
[12] M. Finke, M. Lapata, A. Lavie, L. Levin, L. M. Tomokiyo, T. Polzin, K. Ries, A. Waibel, and K. Zechner. Clarity: Inferring discourse structure from speech. In American Association for Artificial Intelligence (AAAI) Symposium on Applying Machine Learning to Discourse Processing Proceedings, Stanford, California, March 1998.
[13] E. Goffman. Frame Analysis: An Essay on the Organization of Experience. Harper and Row, New York, 1974.
[14] E. Goffman. Forms of Talk. University of Pennsylvania Press, Philadelphia, Pennsylvania, USA, 1981.
[15] B. J. Grosz. The representation and use of focus in dialogue understanding. Technical Report 151, SRI International, 333 Ravenswood Ave, Menlo Park, Ca. 94025, 1977.
[16] A. Isard and J. C. Carletta. Replicability of transaction and action coding in the map task corpus. In M. Walker and J. Moore, editors, AAAI Spring Symposium: Empirical Methods in Discourse Interpretation and Generation, pages 60-67, 1995.
[17] P. W. Jordan. Intentional Influences on Object Redescriptions in Dialogue: Evidence from an Empirical Study. PhD thesis, Intelligent Systems Program, University of Pittsburgh, 2000.
[18] D. Jurafsky, E. Shriberg, and D. Biasca. Swbd-damsl labeling project coder's manual. Technical report, University of Colorado, 1997. Available as http://stripe.colorado.edu/jurafsky/manual.august1.html.
[19] D. Litman. Plan recognition and discourse analysis: An integrated approach for understanding dialogues. Technical Report 170, University of Rochester, 1985.
[20] D. Marcu. Perlocutions: The achilles' heel of speech act theory. Journal of Pragmatics, 1999.
[21] M. G. Moser, J. Moore, and E. Glendening. Instructions for coding explanations: Identifying segments, relations and minimal units. Technical Report 96-17, University of Pittsburgh, Department of Computer Science, 1996.
[22] R. Perrault and J. Allen. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6:167-182, 1980.
[23] R. Power. A Computer Model of Conversation. PhD thesis, University of Edinburgh, 1974.
[24] N. Reithinger and E. Maier. Utilizing statistical speech act processing in verbmobil. In ACL 95, 1995.
[25] D. R. Traum and E. A. Hinkelman. Conversation acts in task-oriented spoken dialogue. Computational Intelligence, 8(3):575-599, 1992.
[26] E. A. Schegloff and H. Sacks. Opening up closings. Semiotica, 8:289-327, 1977.
[27] E. Shriberg, P. Taylor, R. Bates, A. Stolcke, K. Ries, D. Jurafsky, N. Coccaro, R. Martin, M. Meteer, and C. V. Ess-Dykema. Can prosody aid the automatic classification of dialog acts in conversational speech. Language and Speech: Special Issue on Prosody and Conversation, 2000.
[28] M. Walker, J. Aberdeen, J. Boland, E. Bratt, J. Garofolo, L. Hirschman, A. Le, S. Lee, S. Narayanan, K. Papineni, B. Pellom, J. Polifroni, A. Potamianos, P. Prabhu, A. Rudnicky, G. Sanders, S. Seneff, D. Stallard, and S. Whittaker. Darpa communicator dialog travel planning systems: The june 2000 data collection. In Submitted to EUROSPEECH 2001, 2001.
[29] M. A. Walker. Redundancy in collaborative dialogue. In Fourteenth International Conference on Computational Linguistics, pages 345-351, 1992.
[30] M. A. Walker. Limited attention and discourse structure. Computational Linguistics, 22-2:255-264, 1996.
[31] M. A. Walker, C. A. Kamm, and D. J. Litman. Towards developing general models of usability with PARADISE. Natural Language Engineering: Special Issue on Best Practice in Spoken Dialogue Systems, 2000. |
8,457,271 | Factored Language Models and Generalized Parallel Backoff | We introduce factored language models (FLMs) and generalized parallel backoff (GPB). An FLM represents words as bundles of features (e.g., morphological classes, stems, data-driven clusters, etc.), and induces a probability model covering sequences of bundles rather than just words. GPB extends standard backoff to general conditional probability tables where variables might be heterogeneous types, where no obvious natural (temporal) backoff order exists, and where multiple dynamic backoff strategies are allowed. These methodologies were implemented during the JHU 2002 workshop as extensions to the SRI language modeling toolkit. This paper provides initial perplexity results on both CallHome Arabic and on Penn Treebank Wall Street Journal articles. Significantly, FLMs with GPB can produce bigrams with significantly lower perplexity, sometimes lower than highly-optimized baseline trigrams. In a multi-pass speech recognition context, where bigrams are used to create first-pass bigram lattices or N-best lists, these results are highly relevant. | [] | Factored Language Models and Generalized Parallel Backoff
Jeff A Bilmes bilmes@ssli.ee.washington.edu
Dept. of Electrical Engineering
SSLI-LAB
University of Washington
Katrin Kirchhoff katrin@ssli.ee.washington.edu
Dept. of Electrical Engineering
SSLI-LAB
University of Washington
Factored Language Models and Generalized Parallel Backoff
We introduce factored language models (FLMs) and generalized parallel backoff (GPB). An FLM represents words as bundles of features (e.g., morphological classes, stems, data-driven clusters, etc.), and induces a probability model covering sequences of bundles rather than just words. GPB extends standard backoff to general conditional probability tables where variables might be heterogeneous types, where no obvious natural (temporal) backoff order exists, and where multiple dynamic backoff strategies are allowed. These methodologies were implemented during the JHU 2002 workshop as extensions to the SRI language modeling toolkit. This paper provides initial perplexity results on both CallHome Arabic and on Penn Treebank Wall Street Journal articles. Significantly, FLMs with GPB can produce bigrams with significantly lower perplexity, sometimes lower than highly-optimized baseline trigrams. In a multi-pass speech recognition context, where bigrams are used to create first-pass bigram lattices or N-best lists, these results are highly relevant.
Introduction
The art of statistical language modeling (LM) is to create probability models over words and sentences that tradeoff statistical prediction with parameter variance. The field is both diverse and intricate (Rosenfeld, 2000;Chen and Goodman, 1998;Jelinek, 1997;Ney et al., 1994), with many different forms of LMs including maximumentropy, whole-sentence, adaptive and cache-based, to name a small few. Many models are simply smoothed conditional probability distributions for a word given its preceding history, typically the two preceding words.
In this work, we introduce two new methods for language modeling: the factored language model (FLM) and generalized parallel backoff (GPB). An FLM considers a word as a bundle of features, and GPB is a technique that generalizes backoff to arbitrary conditional probability tables. While these techniques can be considered in isolation, the two methods seem particularly suited to each other; in particular, the method of GPB can greatly facilitate the production of FLMs with better performance.
Factored Language Models
In a factored language model, a word is viewed as a vector of K factors, so that w_t ≡ {f_t^1, f_t^2, ..., f_t^K}. Factors can be anything, including morphological classes, stems, roots, and other such features in highly inflected languages (e.g., Arabic, German, Finnish, etc.), or data-driven word classes or semantic features useful for sparsely inflected languages (e.g., English). Clearly, a two-factor FLM generalizes standard class-based language models, where one factor is the word class and the other is the words themselves. An FLM is a model over factors, i.e., p(f_t^{1:K} | f_{t-1:t-n}^{1:K}), that can be factored as a product of probabilities of the form p(f | f_1, f_2, ..., f_N). Our task is twofold: 1) find an appropriate set of factors, and 2) induce an appropriate statistical model over those factors (i.e., the structure learning problem in graphical models (Bilmes, 2003; Friedman and Koller, 2001)).
Generalized Parallel Backoff
An individual FLM probability model can be seen as a directed graphical model over a set of N+1 random variables, with child variable F and N parent variables F_1 through F_N (if factors are words, then F = W_t and F_i = W_{t-i}). Two features make an FLM distinct from a standard language model: 1) the variables {F, F_1, ..., F_N} can be heterogeneous (e.g., words, word clusters, morphological classes, etc.); and 2) there is no obvious natural (e.g., temporal) backoff order as in standard word-based language models. With word-only models, backoff proceeds by dropping first the oldest word, then the next oldest, and so on until only the unigram remains. In p(f | f_1, f_2, ..., f_N), however, many of the parent variables might be the same age. Even if the variables have differing seniorities, it is not necessarily best to drop the oldest variable first.
Figure 1: A backoff graph for F with three parent variables F_1, F_2, F_3. The graph shows all possible single-step backoff paths, where exactly one variable is dropped per backoff step. The SRILM-FLM extensions, however, also support multi-level backoff.
We introduce the notion of a backoff graph (Figure 1) to depict this issue; it shows the various backoff paths from the all-parents case (top graph node) to the unigram (bottom graph node). Many possible backoff paths could be taken. For example, when all variables are words, the path A - B - E - H corresponds to a trigram with the standard oldest-first backoff order. The path A - D - G - H is a reverse-time backoff model. This can be seen as a generalization of lattice-based language modeling (Dupont and Rosenfeld, 1997), where factors consist of words and hierarchically derived word classes.
In our GPB procedure, either a single distinct path is chosen for each gram or multiple parallel paths are used simultaneously. In either case, the set of backoff path(s) that are chosen are determined dynamically (at "run-time") based on the current values of the variables. For example, a path might consist of nodes A − (BCD) − (EF) − G where node A backs off in parallel to the three nodes BCD, node B backs off to nodes (EF), C backs off to (E), and D backs off to (F).
This can be seen as a generalization of the standard backoff equation. In the two parents case, this becomes:
p_GBO(f | f_1, f_2) = d_{N(f,f_1,f_2)} · p_ML(f | f_1, f_2)   if N(f, f_1, f_2) > τ
p_GBO(f | f_1, f_2) = α(f_1, f_2) · g(f, f_1, f_2)             otherwise
where d_{N(f,f_1,f_2)} is a standard discount (determining the smoothing method), p_ML is the maximum likelihood distribution, α(f_1, f_2) are backoff weights, and g(f, f_1, f_2) is an arbitrary non-negative backoff function of its three factor arguments. Standard backoff occurs with g(f, f_1, f_2) = p_BO(f | f_1), but the GPB procedures can be obtained by using different g-functions. For example, g(f, f_1, f_2) = p_BO(f | f_2) corresponds to a different backoff path, and parallel backoff is obtained by using an appropriate g (see below). As long as g is non-negative, the backoff weights are defined as follows:
α(f_1, f_2) = ( 1 − Σ_{f : N(f,f_1,f_2) > τ} d_{N(f,f_1,f_2)} · p_ML(f | f_1, f_2) ) / ( Σ_{f : N(f,f_1,f_2) ≤ τ} g(f, f_1, f_2) )
This equation is non-standard only in the denominator, where one may no longer compute the sum using only the factors f with counts greater than τ. This is because g is not necessarily a distribution (i.e., it does not sum to unity). Therefore, backoff weight computation can indeed be more expensive for certain g-functions, but this appears not to be prohibitive, as demonstrated in the next few sections.
SRILM-FLM extensions
During the recent 2002 JHU workshop (Kirchhoff et al., 2003), significant extensions were made to the SRI language modeling toolkit (Stolcke, 2002) to support arbitrary FLMs and GPB procedures. This uses a graphical-model-like specification language, and many different backoff functions (19 in total) were implemented. Other features include: 1) all SRILM smoothing methods at every node in a backoff graph; 2) graph-level skipping; and 3) up to 32 possible parents (e.g., a 33-gram). Two of the backoff functions are (in the three-parent case):
g(f, f_1, f_2, f_3) = p_GBO(f | f_{m_1}, f_{m_2}),
where (m_1, m_2) = argmax_{(m_1, m_2)} |{f : N(f, f_{m_1}, f_{m_2}) > 0}|
(call this g_2), where N(·) is the count function. Implemented backoff functions include maximum/minimum (normalized) counts/backoff probabilities, products, sums, mins, maxs, (weighted) averages, and geometric means.
Results
GPB-FLMs were applied to two corpora and their perplexity was compared with standard optimized vanilla bi- and trigram language models. In the following, we consider as a "bigram" a language model with a temporal history that includes information from no longer than one previous time-step into the past. Therefore, if factors are deterministically derivable from words, a "bigram" might include both the previous words and previous factors as a history. From a decoding state-space perspective, any such bigram would be relatively cheap.
In CallHome-Arabic, words are accompanied with deterministically derived factors: morphological class (M), stems (S), roots (R), and patterns (P). Training data consisted of the official training portions of the LDC CallHome ECA corpus plus the CallHome ECA supplement (100 conversations). For testing we used the official 1996 evaluation set. Results are given in Table 1 and show perplexity for: 1) the baseline 3-gram; 2) an FLM 3-gram using morphs and stems; 3) a GPB-FLM 3-gram using morphs, stems and backoff function g_1; 4) the baseline 2-gram; 5) an FLM 2-gram using morphs; 6) an FLM 2-gram using morphs and stems; and 7) a GPB-FLM 2-gram using morphs and stems. Backoff path(s) are depicted by listing the parent number(s) in backoff order. As can be seen, the FLM alone might increase perplexity, but the GPB-FLM decreases it. Also, it is possible to obtain a 2-gram with lower perplexity than the optimized baseline 3-gram.
The Wall Street Journal (WSJ) data is from the Penn Treebank 2 tagged ('88-'89) WSJ collection. Word and POS tag information (T_t) was extracted. The sentence order was randomized to produce 5-fold cross-validation results using (4/5)/(1/5) training/testing sizes. Other factors included the use of a simple deterministic tagger obtained by mapping a word to its most frequent tag (F_t), and word classes obtained using SRILM's ngram-class tool with 50 (C_t) and 500 (D_t) classes. Results are given in Table 2. The table shows the baseline 3-gram and 2-gram perplexities, and three GPB-FLMs. Model A uses the true by-hand tag information from the Treebank. To simulate conditions during first-pass decoding, Model B shows the results using the most frequent tag, and Model C uses only the two data-driven word classes. As can be seen, the bigram perplexities are significantly reduced relative to the baseline, almost matching that of the baseline trigram. Note that none of these reduced-perplexity bigrams were possible without using one of the novel backoff functions.
Discussion
The improved perplexity bigram results mentioned above should ideally be part of a first-pass recognition step of a multi-pass speech recognition system. With a bigram, the decoder search space is not large, so any appreciable LM perplexity reductions should yield comparable word error reductions for a fixed set of acoustic scores in a firstpass. For N-best or lattice generation, the oracle error should similarly improve. The use of an FLM with GPB in such a first pass, however, requires a decoder that supports such language models. Therefore, FLMs with GPB will be incorporated into GMTK (Bilmes, 2002), a general purpose graphical model toolkit for speech recognition and language processing. The authors thank Dimitra Vergyri, Andreas Stolcke, and Pat Schone for useful discussions during the JHU'02 workshop.
Table 1: CallHome Arabic Results.

LM               parents              backoff function/path(s)      ppl
3-gram           w_1, w_2             - / temporal [2, 1]           173
FLM 3-gram       w_1, w_2, m_1, s_1   - / [2, 1, 4, 3]              178
GPB-FLM 3-gram   w_1, w_2, m_1, s_1   g_1 / [2, 1, (3, 4), 3, 4]    166
2-gram           w_1                  - / temporal [1]              175
FLM 2-gram       w_1, m_1             - / [2, 1]                    173
FLM 2-gram       w_1, m_1, s_1        - / [1, 2, 3]                 179
GPB-FLM 2-gram   w_1, m_1, s_1        g_1 / [1, (2, 3), 2, 3]       167
Table 2: Penn Treebank WSJ Results.

LM                 parents          Backoff function/path(s)                            ppl (±std. dev.)
3-gram             w_1, w_2         - / temporal [2, 1]                                 258 (±1.2)
2-gram             w_1              - / temporal [1]                                    320 (±1.3)
GPB-FLM 2-gram A   w_1, d_1, t_1    g_2 / [(1, 2, 3), (1, 2), (2, 3), (3, 1), 1, 2, 3]  266 (±1.1)
GPB-FLM 2-gram B   w_1, d_1, f_1    g_2 / [2, 1]                                        276 (±1.3)
GPB-FLM 2-gram C   w_1, d_1, c_1    g_2 / [1, (2, 3), 2, 3]                             275 (±1.2)
J. Bilmes. 2002. The GMTK documentation. http://ssli.ee.washington.edu/~bilmes/gmtk.
J. A. Bilmes. 2003. Graphical models and automatic speech recognition. In R. Rosenfeld, M. Ostendorf, S. Khudanpur, and M. Johnson, editors, Mathematical Foundations of Speech and Language Processing. Springer-Verlag, New York.
S. F. Chen and J. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University, Cambridge, Massachusetts, August.
P. Dupont and R. Rosenfeld. 1997. Lattice based language models. Technical Report CMU-CS-97-173, Carnegie Mellon University, Pittsburgh, PA 15213, September.
N. Friedman and D. Koller. 2001. Learning Bayesian networks from data. In NIPS 2001 Tutorial Notes. Neural Information Processing Systems, Vancouver, B.C., Canada.
F. Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press.
K. Kirchhoff et al. 2003. Novel approaches to Arabic speech recognition: Report from the 2002 Johns-Hopkins summer workshop. In Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Hong Kong.
H. Ney, U. Essen, and R. Kneser. 1994. On structuring probabilistic dependencies in stochastic language modelling. Computer Speech and Language, 8:1-38.
R. Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proceedings of the IEEE, 88(8).
A. Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proc. Int. Conf. on Spoken Language Processing, Denver, Colorado, September. |
258,890,929 | TransPerfect's Private Neural Machine Translation Portal | We will present our solution to replace the usage of publicly available machine translation (MT) services in companies where privacy and confidentiality are key. Our MT portal can translate across a variety of languages using neural machine translation, and supports an extensive number of file types. Corporations are using it to enable multilingual communication everywhere. | [] | TransPerfect's Private Neural Machine Translation Portal
Diego Bartolomé dbartolome@translations.com
TransPerfect, Passeig de Gràcia 11
Esc. B 5è 2a, 08007 Barcelona, Spain
José Masa jmasa@translations.com
TransPerfect, Passeig de Gràcia 11
Esc. B 5è 2a, 08007 Barcelona, Spain
TransPerfect's Private Neural Machine Translation Portal
We will present our solution to replace the usage of publicly available machine translation (MT) services in companies where privacy and confidentiality are key. Our MT portal can translate across a variety of languages using neural machine translation, and supports an extensive number of file types. Corporations are using it to enable multilingual communication everywhere.
Introduction
Machine translation (MT) is widespread today 1 . Companies are using it extensively both for productivity increases, and thus turnaround time and cost reduction, and also for gisting or understandability in many situations such as e-discovery. At TransPerfect, we have developed a neural machine translation platform that can be installed on premises or on our own cloud to guarantee data confidentiality and control, link client-specific neural MT engines to it, and enable supervised and unsupervised learning 2 .
Access to the platform
The access is through a URL (to be presented at the conference), and can be customized for each client. Our main features are:
Single Sign On: no need for specific usernames or passwords; users at our clients can access with their company e-mail and password.
IP address range restriction: only users accessing through a pre-defined range of IP addresses are allowed into the system. This is essential for security in our top clients like banks or pharma companies.
Real-time translation of plain text and documents: users can translate plain text and also more than 40 file types, including scanned PDFs and Office documents.
Neural MT engines: neural MT engines are available in more than 25 languages, with a supervised and unsupervised learning option. Supervised means that the engines learn from linguists' feedback, and unsupervised refers to self-learning capabilities. A functionality to suggest a better translation is available, as well as automated language detection.
Reporting: powerful reporting is available to enable real-time tracking of the number of processed words, quality of the engines, and other business KPIs.
Data storage: we delete data after 24 hours, and some clients have even more restrictive policies to delete translated plain text immediately and documents after they are downloaded.
Additional features
Besides the above, we are currently integrating additional features that have been commonly requested such as customization of glossaries and do not translate lists, seamless integration with our human post-editing services, and addition of speech-to-text and text-to-speech as input and output modes, respectively.
The Year of Artificial Intelligence in Translation. Transperfect, TransPerfect, The Year of Artificial Intelligence in Translation, http://www.transperfect.com/blog/the- year-of-AI-translation.
Robot Intelligence Technology with a Human Touch. M T Transperfect, TransPerfect blog. TransPerfect, MT: Robot Intelligence Technology with a Human Touch, TransPerfect blog, http://www.transperfect.com/blog/machine-
In: Pérez-Ortiz, Sánchez-Martínez, Esplà-Gomis, Popović, Rico, Martins, Van den Bogaert, Forcada (eds.), Proceedings of the 21st Annual Conference of the European Association for Machine Translation, p. 341, Alacant, Spain, May 2018. |
5,546,656 | Using a Mixture of N-Best Lists from Multiple MT Systems in Rank-Sum-Based Confidence Measure for MT Outputs * | This paper addresses the problem of eliminating unsatisfactory outputs from machine translation (MT) systems. The authors intend to eliminate unsatisfactory MT outputs by using confidence measures. Confidence measures for MT outputs include the rank-sum-based confidence measure (RSCM) for statistical machine translation (SMT) systems. RSCM can be applied to non-SMT systems but does not always work well on them. This paper proposes an alternative RSCM that adopts a mixture of the N-best lists from multiple MT systems instead of a single-system's N-best list in the existing RSCM. In most cases, the proposed RSCM proved to work better than the existing RSCM on two non-SMT systems and to work as well as the existing RSCM on an SMT system. | [
5284722,
122099,
1559412,
14852917,
13537374
] | Using a Mixture of N-Best Lists from Multiple MT Systems in Rank-Sum-Based Confidence Measure for MT Outputs *
Yasuhiro Akiba yasuhiro.akiba@atr.jp
Graduate School of Informatics
Kyoto University, Yoshida-Honmachi
Sakyo-ku, 606-8501 Kyoto, Japan
Eiichiro Sumita eiichiro.sumita@atr.jp
Hiromi Nakaiwa
Seiichi Yamamoto
Hiroshi G Okuno okuno@i.kyoto-u.ac.jp
Graduate School of Informatics
Kyoto University, Yoshida-Honmachi
Sakyo-ku, 606-8501 Kyoto, Japan
†ATR Spoken Language Translation Research Laboratories
Keihanna Science City
2-2-2 Hikaridai, 619-0288 Kyoto, Japan
Using a Mixture of N-Best Lists from Multiple MT Systems in Rank-Sum-Based Confidence Measure for MT Outputs *
This paper addresses the problem of eliminating unsatisfactory outputs from machine translation (MT) systems. The authors intend to eliminate unsatisfactory MT outputs by using confidence measures. Confidence measures for MT outputs include the rank-sum-based confidence measure (RSCM) for statistical machine translation (SMT) systems. RSCM can be applied to non-SMT systems but does not always work well on them. This paper proposes an alternative RSCM that adopts a mixture of the N-best lists from multiple MT systems instead of a single-system's N-best list in the existing RSCM. In most cases, the proposed RSCM proved to work better than the existing RSCM on two non-SMT systems and to work as well as the existing RSCM on an SMT system.
Introduction
This paper addresses the challenging problem of eliminating unsatisfactory outputs from machine translation (MT) systems, which are subsystems of a speech-to-speech machine translation (S2SMT) system. The permissible range of translation quality by MT/S2SMT systems depends on the user. Some users permit only perfect translations, while other users permit even translations with flawed grammar. Unsatisfactory MT outputs are those whose translation quality is worse than the level the user can permit.
In this paper, the authors intend to eliminate unsatisfactory outputs by using confidence measures for MT outputs. The confidence measures 1 indicate how perfect/satisfactory the MT outputs are. In the discipline of MT, confidence measures for MT outputs have rarely been investigated.
The few existing confidence measures include the rank-sum-based confidence measure (RSCM) for statistical machine translation (SMT) systems, C rank in (Ueffing et al., 2003). The basic idea of this confidence measure is to roughly calculate the word posterior probability by using ranks of MT outputs in an N-best list from an SMT system. In the discipline of non-parametric statistical test, ranks of numerical values are commonly used instead of the numerical values themselves for statistical tests. In the case of the existing RSCM, the ranks of probabilities of MT outputs in the N-best list were used instead of the probabilities of the outputs themselves. The existing RSCM scores each word in an MT output by summing the complemented ranks of candidates in the N-best list that contain the same word in a Levenshtein-aligned position (Levenshtein, 1966). When the confidence values of all words in the MT output are larger than a fixed threshold, the MT output is judged as correct/perfect. Otherwise, the output is judged as incorrect/imperfect. The existing RSCM does not always work well on types of MT systems other than SMT systems. Figure 1 shows the differences among the performances, indicated by the Receiver Operating Characteristics (ROC) curve (Section 4.1), of the existing RSCM on each of three MT systems (Section 4.2.1): D 3 , HPAT, and SAT (Doi and Sumita, 2003;Imamura et al., 2003;Watanabe et al., 2003). Only SAT is an SMT system; the others are not. The ideal ROC curve is a square (0,1), (1,1), (1,0); thus, the closer the curve is to a square, the better the performance of the RSCM is. The performances of the existing RSCM on the non-SMT systems, D 3 and HPAT, are much worse than that on the SMT system, SAT. The performance of the existing RSCM depends on the goodness/density of MT outputs in the Nbest list from the system. However, the system's N-best list does not always give a good approximation of the total summation of the probability of all candidate translations given the source sentence/utterance. The N-best list is expected to approximate the total summation as closely as possible.
This paper proposes a method that eliminates unsatisfactory top output by using an alternative RSCM based on a mixture of N-best lists from multiple MT systems (Figure 2). The elimination system is intended to be used in the selector architecture, as in (Akiba et al., 2002). The total translation quality of the selector architecture proved to be better than the translation quality of each element MT system. The final output from the selection system is the best among the satisfactory top outputs from the elimination system. In the case of Figure 2, the selection system can receive zero to three top MT outputs. When the selection system receives fewer than two top MT outputs, the selection system merely passes a null output or the one top MT output.
The proposed RSCM differs from the existing RSCM in its N-best list. The proposed RSCM receives an M-best list from each element MT system. (To distinguish it from the best output of the selection system, the MT output in first place in each N-best list, e.g., N-best list_a in Figure 2, is referred to as the top MT output.) Next, it sorts the mixture of the MT outputs in all M-best lists in the order of the average product (Section 3.2) of the scores of a language model and a translation model (Akiba et al., 2002). This sorted mixture is used instead of the system's N-best list in the existing RSCM.
To experimentally evaluate the proposed RSCM, the authors applied the proposed RSCM and the existing RSCM to a test set of the Basic Travel Expression Corpus (Takezawa et al., 2002). The proposed RSCM proved to work better than the existing RSCM on the non-SMT systems and to work as well as the existing RSCM on the SMT system. The next section outlines the existing RSCM. Section 3 proposes our RSCM. Experimental results are shown and discussed in Section 4. Finally, our conclusions are presented in Section 5.
The Existing RSCM
The existing confidence measures include the ranksum-based confidence measure (RSCM) for SMT systems (Ueffing et al., 2003). The basic idea of this RSCM is to roughly calculate the word posterior probability by using ranks of MT outputs in the N-best list of an SMT system. That is, the ranks of probabilities of MT outputs in the N-best list were used instead of the probabilities of the outputs themselves, as in the non-parametric statistical test.
Hereafter, ê_1^I and w_1^{I_n} denote the top output and the n-th best output in the N-best list, respectively. ê_i denotes the i-th word in the top MT output ê_1^I. L_i(ê_1^I, w_1^{I_n}) denotes the Levenshtein alignment 3 (Levenshtein, 1966) of ê_i on the n-th best output w_1^{I_n} according to the top output ê_1^I. The existing RSCM of the word ê_i is the sum of the ranks of MT outputs in an N-best list containing the word ê_i in a position that is aligned to i in the Levenshtein alignment, normalized by the total rank sum:
C_rank(ê_i) = ( Σ_{n=1}^{N} (N − n) · δ(ê_i, L_i(ê_1^I, w_1^{I_n})) ) / ( N(N + 1)/2 ),
where δ(·, ·) is the Kronecker function, that is, if words/morphemes x and y are the same, δ(x, y) = 1; otherwise, δ(x, y) = 0. Thus, only in the case where ê_i and L_i(ê_1^I, w_1^{I_n}) are the same is the rank of the MT output w_1^{I_n}, N − n, summed. In the calculation of C_rank, N − n is summed instead of the rank n because ranks near the top of the N-best list contribute more to the score C_rank.
In this paper, the calculation of C rank is slightly modified to sum N − n + 1 so that the total summation is equal to N (N + 1)/2. Moreover, when there are MT outputs that have the same score, such MT outputs are assigned the average rank as in the discipline of non-parametric statistical test.
As shown in Section 1, the existing RSCM does not always work well on types of MT systems other than SMT systems. This is because the system's N-best list does not always give a good approximation of the total summation of the probability of all candidate translations given the source sentence/utterance. The N-best list is expected to approximate the total summation as closely as possible.
Proposed Method
In this section, the authors propose a method that eliminates unsatisfactory top output by using an alternative RSCM based on a mixture of N-best lists from multiple MT systems. The judgment that the top output is satisfactory is based on the same threshold comparison as the judgment that the top output is perfect, as mentioned in Section 1. The elimination system and the alternative RSCM are explained in Sections 3.1 and 3.2, respectively.
Elimination system
This section proposes a method that eliminates unsatisfactory top outputs by using an alternative RSCM based on a mixture of N-best lists from multiple MT systems ( Figure 3). This elimination system is intended to be used in the selector architecture ( Figure 2). The elimination system receives an M-best list from each element MT system and outputs only top 2 outputs whose translation quality is better than or as good as that which the user can permit. In the case of Figure 3, the number of MT systems is three; thus, the elimination system can output zero to three top MT outputs, which depends on the number of the eliminated top outputs.
The proposed elimination system judges whether a top output is satisfactory by using a threshold comparison, as in (Ueffing et al., 2003). When the confidence values of all words in the top output, which are calculated by using the alternative RSCM explained in Section 3.2, are larger than a fixed threshold, the top output is judged as satisfactory. Otherwise, the top output is judged as unsatisfactory. The threshold was optimized on a development corpus.
The proposed RSCM
The proposed RSCM is an extension of the existing RSCM outlined in Section 2. The proposed RSCM differs from the existing RSCM in the adopted Nbest list (Figure 3). The proposed RSCM receives an M-best list from each element MT system. Next the proposed RSCM sorts the mixture of all the MT outputs in the order of the average product of the scores of a language model and a translation model (Akiba et al., 2002). This sorted mixture is alternatively used instead of the system's N-best list in the existing RSCM. That is, the proposed RSCM checks whether it accepts/rejects each top MT output in the original M-best lists by using the sorted mixture; on the other hand, the existing RSCM checks whether it accepts/rejects the top MT output in the system's N-best list by using the system's N-best.
For scoring MT outputs, the proposed RSCM uses a score based on a translation model called IBM4 (Brown et al., 1993) (TM-score) and a score based on a language model for the translation target language (LM-score). As Akiba et al. (2002) reported, the products of TM-scores and LM-scores are statistical variables. Even in the case where the translation model (TM) and the language model for the translation target language (LM) are trained on a sub-corpus of the same size, changing the training corpus also changes the TM-score, the LM-score, and their product. Each pair of TM-score and LMscore differently order the MT outputs.
For robust scoring, the authors adopt the multiple scoring technique presented in (Akiba et al., 2002). The multiple scoring technique prepares multiple subsets of the full parallel corpus according to k-fold cross validation (Mitchell, 1997) and trains both TM and LM on each subset. Each MT output is scored in k ways. For example, the full parallel corpus C is divided into three subsets V_i (i = 0, 1, 2). For each i, the proposed method trains a translation model TM_i on C_i (= C − V_i) and a language model LM_i on the target-language part of C_i (Figure 4). MT outputs in the mixture are sorted by using the average of the product scores by TM_i and LM_i for each i. In (Akiba et al., 2002), this multiple scoring technique was shown to select the best translation better than a single scoring technique that uses TM and LM trained from a full corpus.
Experimental Comparison
The authors conducted an experimental comparison between the proposed RSCM and the existing RSCM in the framework of the elimination system. The task of both RSCMs was to judge whether each top 2 MT output from an MT system is satisfactory, that is, whether the translation quality of the top MT output is better than or as good as that which the user can permit.
In this experiment, the translation quality of MT outputs was assigned one of four grades: A, B, C, or D as follows: (A) Perfect: no problems in either information or grammar; (B) Fair: easy-tounderstand, with either some unimportant information missing or flawed grammar; (C) Acceptable: broken, but understandable with effort; (D) Nonsense: important information has been translated incorrectly. This evaluation standard was introduced by Sumita et al. (1999) to evaluate S2SMT systems. In advance, each top MT output was evaluated by nine native speakers of the target language, who were also familiar with the source language, and then assigned the median grade of the nine grades.
To conduct a fair comparison, the number of MT outputs in the system's N-best list and the number of MT outputs in the mixture are expected to be the same. Thus, the authors used either a three-best list from each of three MT systems or a five-best list from each of two non-SMT MT systems for the proposed RSCM, and a ten-best list for the existing RSCM. Naturally, this setting 4 is not disadvantageous for the existing RSCM.
Evaluation metrics
The performances of both RSCMs were evaluated by using three different metrics: ROC Curve, H-mean, and Accuracy. For each MT system, these metrics were separately calculated by using a confusion matrix (Table 1). For example, for J2E D 3 (Section 4.2.1), the proposed RSCM checked each top MT output from J2E D 3 by using the input mixture of three-best lists from the three J2E MT systems (Section 4.2.1); on the other hand, the existing RSCM checked each top MT output from J2E D 3 by using the input ten-best list from J2E D 3. For J2E D 3, the results were counted up into the confusion matrix of each RSCM, and the metrics were calculated as follows:
ROC Curve plots the correct acceptance rate versus the correct rejection rate for different values of the threshold. Correct acceptance rate (CAR) is defined as the number of satisfactory outputs that have been accepted, divided by the total number of satisfactory outputs, that is, V_{s,a}/V_s (Table 1). Correct rejection rate (CRR) is defined as the number of unsatisfactory outputs that have been rejected, divided by the total number of unsatisfactory outputs, that is, V_{u,r}/V_u (Table 1).
H-mean is defined as the harmonic mean 5 of the CAR and the CRR (Table 1): 2 · CAR · CRR / (CAR + CRR).
Accuracy is defined as a weighted mean 6 of the CAR and the CRR (Table 1): (V_s · CAR + V_u · CRR) / (V_s + V_u) = (V_{s,a} + V_{u,r}) / (V_s + V_u).
For each of the H-mean and Accuracy performances, 10-fold cross validation was conducted. The threshold was fixed such that the performance was maximized on each non-held-out subset, and the performance was calculated on the corresponding held-out subset. To statistically test the differences in performance (H-mean or Accuracy) between the confidence measures, the authors conducted a pairwise t-test (Mitchell, 1997), which was based on the results of 10-fold cross validation. When the difference in performance meets the following condition, the difference is statistically significant at a confidence level of 1-α%:
|p_pro − p_ext| > t_(α, 10−1) · S / √10,
where p_pro and p_ext, respectively, denote the average performance of the proposed RSCM and the existing RSCM, t_(α, 10−1) denotes the upper α point of the Student's t-distribution with (10 − 1) degrees of freedom, and S denotes the estimated standard deviation of the average difference in performance.
Experimental conditions
MT systems
Three English-to-Japanese (E2J) MT systems and three Japanese-to-English (J2E) MT systems of the three types described below were used. Table 2 shows the performances of these MT systems.
D 3 (DP-match Driven transDucer) is an example-based MT system using online-generated translation patterns (Doi and Sumita, 2003).
HPAT (Hierarchical Phrase Alignment based Translation) is a pattern-based system using automatically generated syntactic transfer (Imamura et al., 2003).
SAT (Statistical ATR Translator) is an SMT system using a retrieved seed translation as the start point for decoding/translation (Watanabe et al., 2003).
Test set
The test set used consists of five hundred and ten pairs of English and Japanese sentences, which were randomly selected from the Basic Travel Expression Corpus (BTEC) (Takezawa et al., 2002). BTEC contains a variety of expressions used in a number of situations related to overseas travel.
Training TMs and LMs
The corpora used for training TMs and LMs described in Section 3.2 were merged corpora ( Table 3). The number of trained TMs/LMs was three. The translation models and language models were learned by using GIZA++ and the CMU-Cambridge Toolkit (Clarkson and Rosenfeld, 1997), respectively.
Experimental results and discussion
ROC Curve
In order to plot the ROC Curve, the authors conducted the same experiment as shown in Figure 1. That is, in the case where the grade of satisfactory translations is only grade A, each of the proposed and existing RSCMs tried to accept grade A MT outputs and to reject grade B, C, or D MT outputs. Figures 5 to 7 show the ROC Curves for each of the three J2E MT systems (D 3, HPAT, and SAT). The curves with diamond marks, cross marks, triangle marks, and circle marks show the ROC Curves for the existing RSCM, the proposed RSCM using the mixture of three-best lists from D 3, HPAT and SAT, the proposed RSCM using the mixture of five-best lists from D 3 and HPAT, and the existing RSCM with reordering, respectively. In the existing RSCM with reordering, the system's original N-best list was sorted by using the average of the product scores from the multiple scoring technique described in Section 3.2, and the existing RSCM with reordering used this sorted system's N-best list instead of the system's original N-best list. The dotted lines indicate the contours of H-mean from 0.7 to 0.8. The ideal ROC curve is a square (0, 1), (1, 1), (1, 0); thus, the closer the curve is to a square, the better the performance of the RSCM is. In Figures 5 and 6, the curves of the proposed RSCM using the mixture of three-best lists from the three MT systems are much closer to a square than that of the existing RSCM; moreover, the curves of the proposed RSCM using the mixture of five-best lists from the two MT systems are much closer to a square than that of the existing RSCM. Note that the superiority of the proposed RSCM to the existing RSCM is maintained even in the case where an M-best list from the SMT system was not used. The curves of the existing RSCM with reordering are closer to a square than those of the existing RSCM. Thus, the performance of the proposed RSCM on the non-SMT systems, D 3 and HPAT, is much better than that of the existing RSCM. The difference between the performance of the proposed and existing RSCMs is due to both re-sorting the MT outputs and using a mixture of N-best lists.
In Figure 7, the curve of the proposed RSCM is a little closer when CRR is larger than CAR; and the curve of the existing RSCM is a little closer when CAR is larger than CRR. Thus, the performance of the proposed RSCM on the SMT system, SAT, is a little better than that of the existing RSCM in the case where CRR is regarded as important; similarly, the performance of the proposed RSCM on the SMT system is a little worse than that of the existing RSCM in the case where CAR is regarded as important.
H-mean and Accuracy
Tables 4 and 5 show the experimental results of ten-fold cross-validated pairwise t-tests of the performance of H-mean and Accuracy, respectively.
On the non-SMT systems, Table 4 shows that at every level of translation quality that the user would permit, the H-mean of the proposed RSCM is significantly better than that of the existing RSCM. On the SMT system, Table 4 shows that at every permitted level of translation quality, there is no significant difference between the H-mean of the proposed RSCM and that of the existing RSCM except for two cases: "ABC | D" for E2J-SAT and "AB | CD" for J2E-SAT. Table 5 shows almost the same tendency as Table 4. As for the difference, in the case where the translation quality that the user would permit is better than D, there is no significant difference between the Accuracy of the proposed RSCM and that of the existing RSCM except in the one case of "ABC | D" for E2J-HPAT.
As defined in Section 4.1, Accuracy is an evaluation metric whose value is sensitive/inclined to the ratio of the number of satisfactory translations and unsatisfactory translations. H-mean is an evaluation metric whose value is independent/natural to this ratio. We need to use these different evaluation metrics according to the situations encountered. For general purposes, the natural evaluation metric, Hmean, is better. In the case where the test set reflects special situations encountered, Accuracy is useful.
Regardless of whether we encounter any special situation, in most cases on a non-SMT system, the proposed RSCM proved to be significantly better than the existing RSCM. In most cases on an SMT system, the proposed RSCM proved to be as good in performance as the existing RSCM.
This paper reports a case study in which a mixture of N-best lists from multiple MT systems boosted the performance of the RSCM for MT outputs. The authors believe the proposed RSCM will work well only when each of the element MT systems complements the others, but the authors leave the question of the best combination of complementary MT systems open for future study.
Conclusions
This paper addressed the problem of eliminating unsatisfactory outputs from MT systems. It proposed a method that eliminates unsatisfactory outputs by using an alternative RSCM based on a mixture of N-best lists from multiple MT systems. The authors compared the proposed and existing RSCMs in the framework of an elimination system. When the number of MT outputs both in the N-best list for the existing RSCM and in the mixture of N-best lists for the proposed RSCM is almost the same number, i.e. ten, in most cases, the proposed RSCM proved to work better than the existing RSCM on two non-SMT systems and to work as well as the existing RSCM on an SMT system.
In the future, the authors will conduct the following experiments: (1) investigating how the proposed RSCM works when the size of the M-best lists is increased, and (2) seeing how the proposed RSCM influences the performance of the selection system.
Figure 1: Performance of the existing RSCM on three different types of Japanese-to-English (J2E) MT systems: D 3, HPAT, and SAT. The existing RSCM tried to accept perfect MT outputs (grade A in Section 4) and to reject imperfect MT outputs (grades B, C, and D in Section 4).
Figure 2: Image of our eliminator
Figure 3: Proposed RSCM
Figure 4: Method for training multiple pairs of Language Models (LMs) and Translation Models (TMs) (Akiba et al., 2002)
Figure 7: ROC Curves of both RSCMs for J2E-SAT
Table 1: Confusion matrix

                  Accept   Reject   Subtotal
Satisfactory      V_s,a    V_s,r    V_s (= V_s,a + V_s,r)
Unsatisfactory    V_u,a    V_u,r    V_u (= V_u,a + V_u,r)
Table 2: Performance of MT systems: Each number in the AB row indicates the ratio of A-or-B-graded translations by each MT system. Each number in the other rows similarly indicates corresponding ratios.

        J2E MT systems          E2J MT systems
        D 3    HPAT   SAT       D 3    HPAT   SAT
A       63.7   42.5   67.2      58.4   59.6   69.8
AB      72.1   63.7   74.7      72.9   75.4   81.1
ABC     78.8   79.0   82.5      83.3   86.8   88.0
Table 3: Corpora for training TMs and LMs: Basic Travel Expression Corpus Nos. 1-3 (Takezawa et al., 2002), Travel Reservation Corpus (Takezawa, 1999), and MT-Aided Dialogue Corpus No. 1 (Kikui et al., 2003).

Table 4: Ten-fold cross-validated pairwise t-test of H-mean: Each set of three columns corresponds to the experimental results of each of the three MT systems: D 3, HPAT, and SAT. Each floating number in the first to third column of each MT system indicates the average performance of the proposed RSCM, the average difference of the performance of the proposed RSCM from that of the existing RSCM, and the t-value of the left-next difference, respectively. The bold floating numbers indicate that the left-next difference is significant at a confidence level of 95%. The floating numbers on the three rows for each MT system, whose row heads are "A | BCD", "AB | CD", or "ABC | D", correspond to the three types of experiments in which each RSCM tried to accept/reject the MT output assigned one of the grades left/right of "|", respectively.

Table 5: Ten-fold cross-validated pairwise t-test of Accuracy: The description of this table is the same as that of Table 4, except that Accuracy is used instead of H-mean.

Separating point    E2J-D 3                  E2J-HPAT                 E2J-SAT
                    Ave.   Diff.  T-val.     Ave.   Diff.  T-val.     Ave.   Diff.  T-val.
A | BCD             77.4   10.5   4.354      71.1   15.4   5.667      76.4    1.1   1.000
AB | CD             78.2    4.9   2.953      78.2    2.5   2.176      81.1    0.0   0.000
ABC | D             85.0    1.3   1.172      84.1   -2.9   2.182      88.0    0.0   0.000

Separating point    J2E-D 3                  J2E-HPAT                 J2E-SAT
                    Ave.   Diff.  T-val.     Ave.   Diff.  T-val.     Ave.   Diff.  T-val.
A | BCD             78.8   15.8   8.243      76.2   18.2   8.118      76.4    3.1   1.041
AB | CD             77.8    4.1   3.279      72.7    8.8   3.288      77.6   -1.5   0.537
ABC | D             83.3    2.9   1.771      77.4   -1.7   1.646      82.7    0.1   0.428
This is the word on the n-th best output w_1^{I_n}, aligned with the i-th word ê_i, in the calculation of edit distance from the top MT output ê_1^I to the n-th best output w_1^{I_n}.
In the future, we will conduct a large-scale experiment to investigate how both RSCMs work while increasing the size of the system's N-best list and the mixture of M-best lists.
This harmonic mean is used for summarizing two measures, each of which has a trade-off relationship with the other. For example, F-measure is the harmonic mean of precision and recall, which is widely used in the discipline of Information Retrieval.
This weighted mean is used for evaluating classification tasks in the discipline of Machine Learning.
Yasuhiro Akiba, Taro Watanabe, and Eiichiro Sumita. 2002. Using language and translation models to select the best among outputs from multiple MT systems. In Proc. COLING-2002, pages 8-14.
Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
Philip Clarkson and Ronald Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Proc. EUROSPEECH-1997, pages 2707-2710.
Takao Doi and Eiichiro Sumita. 2003. Input sentence splitting and translating. In Proc. the HLT-NAACL 2003 Workshop on DDMT, pages 104-110.
Kenji Imamura, Eiichiro Sumita, and Yuji Matsumoto. 2003. Feedback cleaning of machine translation rules using automatic evaluation. In Proc. ACL-2003, pages 447-454.
Genichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In Proc. EUROSPEECH-2003, volume 1, pages 381-384.
Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, 10(8):707-710.
Tom M. Mitchell. 1997. Machine Learning. The McGraw-Hill Companies Inc., New York, USA.
Sonja Niessen, Franz J. Och, G. Leusch, and Hermann Ney. 2000. An evaluation tool for machine translation: Fast evaluation for machine translation research. In Proc. LREC-2000, pages 39-45.
Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proc. ACL-2000, pages 440-447.
Kishore A. Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, Yorktown Heights, NY, pages 257-258.
Eiichiro Sumita, Setsuo Yamada, Kazuhiro Yamamoto, Michael Paul, Hideki Kashioka, Kai Ishikawa, and Satoshi Shirai. 1999. Solutions to problems inherent in spoken-language translation: The ATR-MATRIX approach. In Proc. MT Summit VII, pages 229-235.
Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proc. LREC-2002, pages 147-152.
Toshiyuki Takezawa. 1999. Building a bilingual travel conversation database for speech translation research. In Proc. the Oriental COCOSDA Workshop-1999, pages 17-20.
Nicola Ueffing, Klaus Macherey, and Hermann Ney. 2003. Confidence measures for statistical machine translation. In Proc. MT Summit IX, pages 394-401.
Taro Watanabe, Eiichiro Sumita, and Hiroshi G. Okuno. 2003. Chunk-based statistical translation. In Proc. MT Summit IX, pages 410-417. |
445,754 | Underspecifying and Predicting Voice for Surface Realisation Ranking | This paper addresses a data-driven surface realisation model based on a large-scale reversible grammar of German. We investigate the relationship between the surface realisation performance and the character of the input to generation, i.e. its degree of underspecification. We extend a syntactic surface realisation system, which can be trained to choose among word order variants, such that the candidate set includes active and passive variants. This allows us to study the interaction of voice and word order alternations in realistic German corpus data. We show that with an appropriately underspecified input, a linguistically informed realisation model trained to regenerate strings from the underlying semantic representation achieves 91.5% accuracy (over a baseline of 82.5%) in the prediction of the original voice. | [
13804679,
2680971,
11182883,
59940,
9107244,
5494958,
6215855,
17175727,
243261,
15950784,
2038617,
16796126,
13466080,
1783652,
8431601,
63346,
2381180,
8493310,
737023
] | Underspecifying and Predicting Voice for Surface Realisation Ranking
June 19-24, 2011
Sina Zarrieß sina.zarriess@ims.uni-stuttgart.de
Institut für maschinelle Sprachverarbeitung Universität Stuttgart
Germany
Aoife Cahill aoife.cahill@ims.uni-stuttgart.de
Institut für maschinelle Sprachverarbeitung Universität Stuttgart
Germany
Jonas Kuhn jonas.kuhn@ims.uni-stuttgart.de
Institut für maschinelle Sprachverarbeitung Universität Stuttgart
Germany
Underspecifying and Predicting Voice for Surface Realisation Ranking
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics
Portland, Oregon, June 19-24, 2011
This paper addresses a data-driven surface realisation model based on a large-scale reversible grammar of German. We investigate the relationship between the surface realisation performance and the character of the input to generation, i.e. its degree of underspecification. We extend a syntactic surface realisation system, which can be trained to choose among word order variants, such that the candidate set includes active and passive variants. This allows us to study the interaction of voice and word order alternations in realistic German corpus data. We show that with an appropriately underspecified input, a linguistically informed realisation model trained to regenerate strings from the underlying semantic representation achieves 91.5% accuracy (over a baseline of 82.5%) in the prediction of the original voice.
Introduction
This paper 1 presents work on modelling the usage of voice and word order alternations in a free word order language. Given a set of meaning-equivalent candidate sentences, such as in the simplified English Example (1), our model makes predictions about which candidate sentence is most appropriate or natural given the context.
(1) Context: The Parliament started the debate about the state budget in April.
a. It wasn't until June that the Parliament approved it.
b. It wasn't until June that it was approved by the Parliament.
c. It wasn't until June that it was approved.
We address the problem of predicting the usage of linguistic alternations in the framework of a surface realisation ranking system. Such ranking systems are practically relevant for the real-world application of grammar-based generators that usually generate several grammatical surface sentences from a given abstract input, e.g. (Velldal and Oepen, 2006). Moreover, this framework allows for detailed experimental studies of the interaction of specific linguistic features. Thus it has been demonstrated that for free word order languages like German, word order prediction quality can be improved with carefully designed, linguistically informed models capturing information-structural strategies (Filippova and Strube, 2007;Cahill and Riester, 2009). This paper is situated in the same framework, using rich linguistic representations over corpus data for machine learning of realisation ranking. However, we go beyond the task of finding the correct ordering for an almost fixed set of word forms. Quite obviously, word order is only one of the means at a speaker's disposal for expressing some content in a contextually appropriate form; we add systematic alternations like the voice alternation (active vs. passive) to the picture. As an alternative way of promoting or demoting the prominence of a syntactic argument, its interaction with word ordering strategies in real corpus data is of high theoretical interest (Aissen, 1999;Aissen, 2003;Bresnan et al., 2001).
Our main goals are (i) to establish a corpus-based surface realisation framework for empirically investigating interactions of voice and word order in German, (ii) to design an input representation for generation capturing voice alternations in a variety of contexts, (iii) to better understand the relationship between the performance of a generation ranking model and the type of realisation candidates available in its input. In working towards these goals, this paper addresses the question of evaluation. We conduct a pilot human evaluation on the voice al-ternation data and relate our findings to our results established in the automatic ranking experiments.
Addressing interactions among a range of grammatical and discourse phenomena on realistic corpus data turns out to be a major methodological challenge for data-driven surface realisation. The set of candidate realisations available for ranking will influence the findings, and here, existing surface realisers vary considerably. Previous work points out the differences across approaches in the type of syntactic and semantic information present and absent in the input representation; and it is the type of underspecification that determines the number (and character) of available candidate realisations and, hence, the complexity of the realisation task.
We study the effect of varying degrees of underspecification explicitly, extending a syntactic generation system by a semantic component capturing voice alternations. In regeneration studies involving underspecified underlying representations, corpusoriented work reveals an additional methodological challenge. When using standard semantic representations, as common in broad-coverage work in semantic parsing (i.e., from the point of view of analysis), alternative variants for sentence realisation will often receive slightly different representations: In the context of (1), the continuation (1-c) is presumably more natural than (1-b), but with a standard sentence-bounded semantic analysis, only (1-a) and (1-b) would receive equivalent representations.
Rather than waiting for the availability of robust and reliable techniques for detecting the reference of implicit arguments in analysis (or for contextually aware reasoning components), we adopt a relatively simple heuristic approach (see Section 3.1) that approximates the desired equivalences by augmented representations for examples like (1-c). This way we can overcome an extremely skewed distribution in the naturally occurring meaning-equivalent active vs. passive sentences, a factor which we believe justifies taking the risk of occasional overgeneration.
The paper is structured as follows: Section 2 situates our methodology with respect to other work on surface realisation and briefly summarises the relevant theoretical linguistic background. In Section 3, we present our generation architecture and the design of the input representation. Section 4 describes the setup for the experiments in Section 5. In Section 6, we present the results from the human evaluation.
Related Work
Generation Background
The first widely known data-driven approach to surface realisation, or tactical generation, (Langkilde and Knight, 1998) used language-model ngram statistics on a word lattice of candidate realisations to guide a ranker. Subsequent work explored ways of exploiting linguistically annotated data for trainable generation models (Ratnaparkhi, 2000;Marciniak and Strube, 2005;Belz, 2005, a.o.). Work on data-driven approaches has led to insights into the importance of linguistic features for sentence linearisation decisions (Ringger et al., 2004;Filippova and Strube, 2009). The availability of discriminative learning techniques for the ranking of candidate analyses output by broad-coverage grammars with rich linguistic representations, originally in parsing (Riezler et al., 2000;Riezler et al., 2002), has also led to a revival of interest in linguistically sophisticated reversible grammars as the basis for surface realisation (Velldal and Oepen, 2006;Cahill et al., 2007). The grammar generates candidate analyses for an underlying representation and the ranker's task is to predict the contextually appropriate realisation.
The work that is most closely related to ours is Velldal (2008). He uses an MRS representation derived by an HPSG grammar that can be underspecified for information status. In his case, the underspecification is encoded in the grammar and not directly controlled. In multilingually oriented linearisation work, Bohnet et al. (2010) generate from semantic corpus annotations included in the CoNLL'09 shared task data. However, they note that these annotations are not suitable for full generation since they are often incomplete. Thus, it is not clear to which degree these annotations are actually underspecified for certain paraphrases.
Linguistic Background
In competition-based linguistic theories (Optimality Theory and related frameworks), the use of argument alternations is construed as an effect of markedness hierarchies (Aissen, 1999; Aissen, 2003). Argument functions (subject, object, . . . ) on the one hand and the various properties that argument phrases can bear (person, animacy, definiteness) on the other are organised in markedness hierarchies. Wherever possible, there is a tendency to align the hierarchies, i.e., use prominent functions to realise prominently marked argument phrases. For instance, Bresnan et al. (2001) find that there is a statistical tendency in English to passivise a verb if the patient is higher on the person scale than the agent, even though an active is grammatically possible. Bresnan et al. (2007) correlate the use of the English dative alternation with a number of features such as givenness, pronominalisation, definiteness, constituent length, and animacy of the involved verb arguments. These features are assumed to reflect the discourse accessibility of the arguments.
Interestingly, the properties that have been used to model argument alternations in strict word order languages like English have been identified as factors that influence word order in free word order languages like German, see Filippova and Strube (2007) for a number of pointers. Cahill and Riester (2009) implement a model for German word order variation that approximates the information status of constituents through morphological features like definiteness, pronominalisation etc. We are not aware of any corpus-based generation studies investigating how these properties relate to argument alternations in free word order languages.
Generation Architecture
Our data-driven methodology for investigating factors relevant to surface realisation uses a regeneration set-up 2 with two main components: a) a grammar-based component used to parse a corpus sentence and map it to all its meaning-equivalent surface realisations, b) a statistical ranking component used to select the correct, i.e. contextually most appropriate surface realisation. Two variants of this set-up that we use are sketched in Figure 1.
We generally use a hand-crafted, broad-coverage LFG for German (Rohrer and Forst, 2006) to parse a corpus sentence into an f(unctional) structure 3 and generate all surface realisations from a given f-structure, following the generation approach of Cahill et al. (2007). F-structures are attribute-value matrices representing grammatical functions and morphosyntactic features; their theoretical motivation lies in the abstraction over details of surface realisation. The grammar is implemented in the XLE framework (Crouch et al., 2006), which allows for reversible use of the same declarative grammar in the parsing and generation direction.
To obtain a more abstract underlying representation (in the pipeline on the right-hand side of Figure 1), the present work uses an additional semantic construction component (Crouch and King, 2006; Zarrieß, 2009) to map LFG f-structures to meaning representations. For the reverse direction, the meaning representations are mapped to f-structures which can then be mapped to surface strings by the XLE generator (Zarrieß and Kuhn, 2010).
For the final realisation ranking step in both pipelines, we used SVMrank, a Support Vector Machine-based learning tool (Joachims, 2006). The ranking step is thus technically independent of the LFG-based component. However, the grammar is used to produce the training data: pairs of corpus sentences and their possible alternations.
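To make the ranking set-up concrete, the sketch below shows how the candidate realisations for one input could be written out in the SVMlight/SVMrank training file format. This is a minimal sketch: the feature names, the toy candidates, and the conversion from rank categories to target values are illustrative assumptions, not the exact encoding used in the system.

# Minimal sketch of preparing SVMrank-style training data for realisation ranking.
# Each candidate is assumed to be a dict of feature values plus a rank category
# ("1" best ... "5" worst), in the spirit of the labelling described in Section 4.

def write_svmrank_file(inputs, feature_index, path):
    """inputs: list of (qid, candidates); candidates: list of (rank_category, features)."""
    with open(path, "w", encoding="utf-8") as out:
        for qid, candidates in inputs:
            for rank_category, features in candidates:
                # Within one qid, SVMrank prefers candidates with larger target
                # values, so the rank category (1 = best) is inverted here.
                target = 6 - rank_category
                feats = sorted((feature_index[name], value)
                               for name, value in features.items())
                cols = " ".join(f"{fid}:{value}" for fid, value in feats)
                out.write(f"{target} qid:{qid} {cols}\n")

# Toy example: one underlying representation (qid 1) with two candidate realisations.
feature_index = {"agent<patient": 1, "passive": 2, "subj_is_singular": 3}
inputs = [(1, [(1, {"agent<patient": 1, "subj_is_singular": 1}),   # the corpus sentence
               (4, {"agent<patient": 0, "passive": 1, "subj_is_singular": 1})])]
write_svmrank_file(inputs, feature_index, "train.dat")

Grouping candidates by qid is what makes the learner compare realisations of the same input against each other rather than across inputs, which matches the ranking (rather than classification) formulation of the task.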
The two pipelines allow us to vary the degree to which the generation input is underspecified. An f-structure abstracts away from word order, i.e. the candidate set will contain just word order alternations. In the semantic input, syntactic function and voice are underspecified, so a larger set of surface realisation candidates is generated. Figure 2 illustrates the two representation levels for an active and a passive sentence. The subject of the passive and the object of the active f-structure are mapped to the same role (patient) in the meaning representation.
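To make the two representation levels concrete, the toy encoding below replays Figure 2 in Python. The dictionary format is invented for illustration and is not the actual XLE or transfer representation; note that the two naive semantic representations still differ in the agent slot ("one" vs. "implicit"), which is exactly the gap closed by the heuristics of Section 3.1.

# Toy replay of Figure 2: two distinct f-structures, mapped to role-based semantics.
fs_active  = {"PRED": "see<SUBJ,OBJ>", "SUBJ": "one", "OBJ": "chancellor",
              "TOPIC": "one", "PASS": False}
fs_passive = {"PRED": "see<NULL,SUBJ>", "SUBJ": "chancellor",
              "TOPIC": "chancellor", "PASS": True}

def to_semantics(fs):
    """The active OBJ and the passive SUBJ both end up as the patient role."""
    if fs["PASS"]:
        agent, patient = "implicit", fs["SUBJ"]
    else:
        agent, patient = fs["SUBJ"], fs["OBJ"]
    return {"HEAD": "see", "TENSE": "past",
            "ROLE": {("agent", "see"): agent, ("patient", "see"): patient}}

print(to_semantics(fs_active))   # agent = "one"
print(to_semantics(fs_passive))  # agent = "implicit"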
Issues with "naive" underspecification
In order to create an underspecified voice representation that does indeed leave open the realisation options available to the speaker/writer, it is often not sufficient to remove just the syntactic function information. For instance, the subject of the active sentence (2) is an arbitrary reference pronoun man "one", which cannot be used as an oblique agent in a passive; sentence (2-b) is ungrammatical.
(2) a. Man sah den Kanzler.
       one saw the chancellor
    b. *Der Kanzler wurde von man gesehen.
       the chancellor was by one seen

So, when combined with the grammar, the meaning representation for (2) in Figure 2 contains implicit information about the voice of the original corpus sentence; the candidate set will not include any passive realisations. However, a passive realisation without the oblique agent in the by-phrase, as in Example (3), is a very natural variant.
(3) Der Kanzler wurde gesehen.
    the chancellor was seen
    'The chancellor was seen.'
The reverse situation arises frequently too: passive sentences where the agent role is not overtly realised. Given the standard, "analysis-oriented" meaning representation for Sentence (3) in Figure 2, the realiser will not generate an active realisation since the agent role cannot be instantiated by any phrase in the grammar. However, depending on the exact context, there are typically options for realising the subject phrase in an active with very little descriptive content.
Ideally, one would like to account for these phenomena in a meaning representation that underspecifies the lexicalisation of discourse referents, and also captures the reference of implicit arguments. Especially the latter task has hardly been addressed in NLP applications (but see Gerber and Chai (2010)). In order to work around that problem, we implemented some simple heuristics which underspecify the realisation of certain verb arguments. These rules define:
1. a set of pronouns (generic and neutral pronouns, universal quantifiers) that correspond to "trivial" agents in active and implicit agents in passive sentences;
2. a set of prepositional adjuncts in passive sentences that correspond to subjects in active sentences (e.g. causative and instrumental prepositions like durch "by means of");
3. certain syntactic contexts where special underspecification devices are needed, e.g. coordinations or embeddings; see Zarrieß and Kuhn (2010) for examples.
In the following, we will distinguish 1-role transitives where the agent is "trivial" or implicit from 2-role transitives with a non-implicit agent.
By means of the extended underspecification rules for voice, the sentences in (2) and (3) receive an identical meaning representation. As a result, our surface realiser can produce an active alternation for (3) and a passive alternation for (2). In the following, we will refer to the extended representations as SEM h ("heuristic semantics"), and to the original representations as SEM n ("naive semantics").
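The following sketch illustrates the spirit of these heuristics on the role-based meaning representations from Figure 2. The pronoun list and the flat dictionary encoding of roles are simplifications chosen for illustration; rules 2 and 3 (agentive prepositional adjuncts and special syntactic contexts) are omitted here.

# Illustrative sketch of the SEM h heuristics: "trivial" agents of actives and
# implicit agents of passives are both mapped to an underspecified agent slot.

TRIVIAL_AGENTS = {"man", "jemand", "alle", "one"}   # illustrative list of generic pronouns / quantifiers

def heuristic_semantics(roles):
    """roles: naive semantics, e.g. {"agent": "one", "patient": "chancellor"}.
    Returns the SEM h version, underspecifying trivial or implicit agents."""
    sem_h = dict(roles)
    agent = sem_h.get("agent")
    if agent is None or agent == "implicit" or agent in TRIVIAL_AGENTS:
        # Treat trivial and implicit agents alike, so that the active in (2)
        # and the agentless passive in (3) receive an identical representation.
        sem_h["agent"] = "underspecified"
    return sem_h

# Examples (2) and (3) collapse to the same SEM h representation:
print(heuristic_semantics({"agent": "one", "patient": "chancellor"}))
print(heuristic_semantics({"agent": "implicit", "patient": "chancellor"}))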
We are aware of the fact that these approximations introduce some noise into the data and do not always represent the underlying referents correctly. For instance, the implicit agent in a passive need not be "trivial" but can correspond to an actual discourse referent. However, we consider these heuristics as a first step towards capturing an important discourse function of the passive alternation, namely the deletion of the agent role. If we did not treat the passives with an implicit agent on a par with certain actives, we would have to ignore a major portion of the passives occurring in corpus data. Table 1 summarises the distribution of the voices for the heuristic meaning representation SEM h on the data-set we will introduce in Section 4, with the distribution for the naive representation SEM n in parentheses.
Experimental Set-up
Data To obtain a sizable set of realistic corpus examples for our experiments on voice alternations, we created our own dataset of input sentences and representations, instead of building on treebank examples as Cahill et al. (2007) do. We extracted 19,905 sentences, all containing at least one transitive verb, from the HGC, a huge German corpus of newspaper text (204.5 million tokens). The sentences are automatically parsed with the German LFG grammar. The resulting f-structure parses are transferred to meaning representations and mapped back to f-structure charts. For our generation experiments, we only use those f-structure charts that the XLE generator can map back to a set of surface realisations. This results in a total of 1236 test sentences and 8044 sentences in our training set. The data loss is mostly due to the fact that the XLE generator often fails on incomplete parses, and on very long sentences. Nevertheless, the average sentence length (17.28) and number of surface realisations (see Table 2) are higher than in Cahill et al. (2007).
Labelling For the training of our ranking model, we have to tell the learner how closely each surface realisation candidate resembles the original corpus sentence. We distinguish the rank categories: "1" identical to the corpus string, "2" identical to the corpus string ignoring punctuation, "3" small edit distance (< 4) to the corpus string ignoring punctuation, "4" different from the corpus sentence. In one of our experiments (Section 5.1), we used the rank category "5" to explicitly label the surface realisations derived from the alternation f-structure that does not correspond to the parse of the original corpus sentence. The intermediate rank categories "2" and "3" are useful since the grammar does not always regenerate the exact corpus string, see Cahill et al. (2007) for explanation.
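A sketch of this labelling step is given below. Computing the edit distance over tokens rather than characters, and the exact way punctuation is stripped, are assumptions made for illustration.

import re

def tokens(sentence, ignore_punct=True):
    toks = sentence.split()
    if ignore_punct:
        toks = [re.sub(r"[^\w]", "", t) for t in toks]
        toks = [t for t in toks if t]
    return toks

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance over token sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def rank_category(candidate, reference):
    if candidate == reference:
        return 1        # identical to the corpus string
    if tokens(candidate) == tokens(reference):
        return 2        # identical ignoring punctuation
    if edit_distance(tokens(candidate), tokens(reference)) < 4:
        return 3        # small edit distance to the corpus string
    return 4            # different from the corpus sentence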
Features
The linguistic theories sketched in Section 2.2 correlate morphological, syntactic and semantic properties of constituents (or discourse referents) with their order and argument realisation. In our system, this correlation is modelled by a combination of linguistic properties that can be extracted from the f-structure or meaning representation and of the surface order that is read off the sentence string. Standard n-gram features are also used. 4 The feature model is built as follows: for every lemma in the f-structure, we extract a set of morphological properties (definiteness, person, pronominal status etc.), the voice of the verbal head, its syntactic and semantic role, and a set of information status features following Cahill and Riester (2009). These properties are combined in two ways: a) Precedence features: relative order of properties in the surface string, e.g. "theme < agent in passive", "1st person < 3rd person"; b) "scale alignment" features (ScalAl.): combinations of voice and role properties with morphological properties, e.g. "subject is singular", "agent is 3rd person in active voice" (these are surface-independent, identical for each alternation candidate).
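As a rough illustration of how the two feature families could be instantiated, the sketch below derives precedence and scale-alignment feature strings from a toy encoding of one candidate. The attribute names and the candidate encoding are invented for the example and do not mirror the actual f-structure attributes or feature templates.

# Toy sketch of the two feature families: surface-dependent precedence features
# and surface-independent "scale alignment" features.

def extract_features(candidate):
    """candidate: {"voice": "active"/"passive",
                   "args": [{"role": ..., "person": ..., "num": ..., "def": ...,
                             "pron": ..., "position": surface position}, ...]}"""
    feats = set()
    voice = candidate["voice"]
    args = sorted(candidate["args"], key=lambda a: a["position"])

    # Precedence features: relative order of properties in the surface string.
    for i, left in enumerate(args):
        for right in args[i + 1:]:
            feats.add(f"{left['role']}<{right['role']}_in_{voice}")
            feats.add(f"{left['person']}p<{right['person']}p")

    # Scale alignment features: voice/role properties combined with morphology
    # (identical for every word order variant of the same alternation candidate).
    for arg in candidate["args"]:
        feats.add(f"{arg['role']}_is_{arg['num']}")
        feats.add(f"{arg['role']}_{arg['person']}p_in_{voice}")
        if arg["def"]:
            feats.add(f"{arg['role']}_definite_in_{voice}")
        if arg["pron"]:
            feats.add(f"{arg['role']}_pronominal_in_{voice}")
    return feats

example = {"voice": "passive",
           "args": [{"role": "patient", "person": "3", "num": "sg", "def": True,
                     "pron": False, "position": 0},
                    {"role": "agent", "person": "3", "num": "pl", "def": False,
                     "pron": False, "position": 1}]}
print(sorted(extract_features(example)))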
The model for which we present our results is based on sentence-internal features only; as Cahill and Riester (2009) showed, these features carry a considerable amount of implicit information about the discourse context (e.g. in the shape of referring expressions). We also implemented a set of explicitly inter-sentential features, inspired by Centering Theory (Grosz et al., 1995). This model did not improve over the intra-sentential model.
Evaluation Measures
In order to assess the general quality of our generation ranking models, we use several standard measures: a) exact match: how often does the model select the original corpus sentence, b) BLEU: n-gram overlap between top-ranked and original sentence, c) NIST: modification of BLEU giving more weight to less frequent n-grams. Second, we are interested in the model's performance wrt. specific linguistic criteria. We report the following accuracies: d) Voice: how often does the model select a sentence realising the correct voice, e) Precedence: how often does the model generate the right order of the verb arguments (agent and patient), and f) Vorfeld: how often does the model correctly predict the verb arguments to appear in the sentence initial position before the finite verb, the so-called Vorfeld. See Sections 5.3 and 6 for a discussion of these measures.
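A possible implementation of the linguistically targeted accuracies (d-f) is sketched below. Representing each realisation as a dict with its voice, argument order and Vorfeld occupant is an assumed simplification; BLEU and NIST would in practice be computed with standard MT evaluation scripts.

def accuracies(predictions, references):
    """predictions/references: lists of dicts like
       {"string": ..., "voice": "active", "arg_order": ["agent", "patient"], "vorfeld": "agent"}."""
    n = len(references)
    match = sum(p["string"] == r["string"] for p, r in zip(predictions, references)) / n
    voice = sum(p["voice"] == r["voice"] for p, r in zip(predictions, references)) / n
    prec  = sum(p["arg_order"] == r["arg_order"] for p, r in zip(predictions, references)) / n
    vf    = sum(p["vorfeld"] == r["vorfeld"] for p, r in zip(predictions, references)) / n
    return {"exact match": match, "voice": voice, "precedence": prec, "vorfeld": vf}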
Experiments
Exp. 1: Effect of Underspecified Input
We investigate the effect of the input's underspecification on a state-of-the-art surface realisation ranking model. This model implements the entire feature set described in Section 4 (it is further analysed in the subsequent experiments). We built 3 datasets from our alternation data: FS -candidates generated from the f-structure; SEM n -realisations from the naive meaning representations; SEM h -candidates from the heuristically underspecified meaning representation. Thus, we keep the set of original corpus sentences (=the target realisations) constant, but train and test the model on different candidate sets.
In Table 2, we compare the performance of the linguistically informed model described in Section 4 on the candidate sets against a random choice and a language model (LM) baseline. The differences in BLEU between the candidate sets and models are statistically significant (according to a bootstrap resampling test, p < 0.05). In general, the linguistic model largely outperforms the LM and is less sensitive to the additional confusion introduced by the SEM h input. Its BLEU score and match accuracy decrease only slightly (though statistically significantly). In Table 3, we report the performance of the linguistic model on the different candidate sets with respect to voice accuracy. Since the candidate sets differ in the proportion of items that underspecify the voice (see "Voice Spec." in Table 3), we also report the accuracy on the SEM n * test set, which is a subset of SEM n excluding the items where the voice is specified. Table 3 shows that the proportion of active realisations for the SEM n * input is very high, and the model does not outperform the majority baseline (which always selects active). In contrast, the SEM h model clearly outperforms the majority baseline.
Example (4) is a case from our development set where the SEM n model incorrectly predicts an active (4-a), and the SEM h model correctly predicts a passive (4-b). This prediction is in accordance with the markedness hierarchy: the patient is singular and definite, the agent is plural and indefinite. Counterexamples are possible, but there is a clear statistical preference - which the model was able to pick up. On the one hand, the rankers can cope surprisingly well with the additional realisations obtained from the meaning representations. According to the global sentence overlap measures, their quality is not seriously impaired. On the other hand, the design of the representations has a substantial effect on the prediction of the alternations. The SEM n model does not seem to learn certain preferences because of the extremely imbalanced distribution in the input data. This confirms the hypothesis sketched in Section 3.1, according to which the degree of the input's underspecification can crucially change the behaviour of the ranking model.
Exp. 2: Word Order and Voice
We examine the impact of certain feature types on the prediction of the variation types in our data. We are particularly interested in the interaction of voice and word order (precedence) since linguistic theories (see Section 2.2) predict similar information-structural factors guiding their use, but usually do not consider them in conjunction.
In Table 4, we report the performance of ranking models trained on the different feature subsets introduced in Section 4. The union of the features corresponds to the model trained on SEM h in Experiment 1. At a very broad level, the results suggest that the precedence and the scale alignment features interact both in the prediction of voice and word order.
The most pronounced effect on voice accuracy can be seen when comparing the precedence model to the union model. Adding the surface-independent scale alignment features to the precedence features leads to a big improvement in the prediction of word order. This is not a trivial observation since a) the surface-independent features do not discriminate between the word orders and b) the precedence features are built from the same properties (see Section 4). Thus, the SVM learner discovers dependencies between relative precedence preferences and abstract properties of a verb argument which cannot be encoded in the precedence alone.
It is worth noting that the precedence features improve the voice prediction. This indicates that wherever the application context allows it, voice should not be specified at a stage prior to word order. Example (5) is taken from our development set, illustrating a case where the union model predicted the correct voice and word order (5-a), and the scale alignment model top-ranked the incorrect voice and word order. The active verb arguments in (5-b) are both case-ambiguous and placed in the non-canonical order (object < subject), so the semantic relation can be easily misunderstood. The passive in (5-a) is unambiguous since the agent is realised in a PP (and placed in the Vorfeld). Moreover, our results confirm Filippova and Strube (2007), who find that it is harder to predict the correct Vorfeld occupant in a German sentence than to predict the relative order of the constituents.
Exp. 3: Capturing Flexible Variation
The previous experiment has shown that there is a certain inter-dependence between word order and voice. This experiment addresses this interaction by varying the way the training data for the ranker is labelled. We contrast two ways of labelling the sentences (see Section 4): a) all sentences that are not (nearly) identical to the reference sentence have the rank category "4", irrespective of their voice (referred to as unlabelled model), b) the sentences that do not realise the correct voice are ranked lower than sentences with the correct voice ("4" vs. "5"), referred to as labelled model. Intuitively, the latter way of labelling tells the ranker that all sentences in the incorrect voice are worse than all sentences in the correct voice, independent of the word order. Given the first labelling strategy, the ranker can decide in an unsupervised way which combinations of word order and voice are to be preferred. In Table 5, it can be seen that the unlabelled model improves over the labelled model on all the sentence overlap measures. The improvements are statistically significant. Moreover, we compare the n-best accuracies achieved by the models for the joint prediction of voice and argument order. The unlabelled model is very flexible with respect to the word order-voice interaction: the accuracy dramatically improves when looking at the top 3 sentences. Table 5 also reports the performance of an unlabelled model that additionally integrates LM scores. Surprisingly, these scores have a very small positive effect on the sentence overlap measures and no positive effect on the voice and precedence accuracy. The n-best evaluations even suggest that the LM scores negatively impact the ranker: the accuracy for the top 3 sentences increases much less as compared to the model that does not integrate LM scores. 6 The n-best performance of a realisation ranker is practically relevant for re-ranking applications such as Velldal (2008). We think that it is also conceptually interesting. Previous evaluation studies suggest that the original corpus sentence is not always the only optimal realisation of a given linguistic input (Cahill and Forst, 2010; Belz and Kow, 2010). Humans seem to have varying preferences for word order contrasts in certain contexts. The n-best evaluation could reflect the behaviour of a ranking model with respect to the range of variations encountered in real discourse. The pilot human evaluation in the next Section deals with this question.
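The n-best accuracies referred to here can be computed as sketched below, again assuming the simplified candidate encoding used in the earlier sketches.

def nbest_accuracy(ranked_candidates_per_item, references, n=3):
    """ranked_candidates_per_item: for each test item, the model's candidates sorted best-first.
    An item counts as correct if any of the top n candidates realises the reference
    voice and argument order."""
    correct = 0
    for candidates, ref in zip(ranked_candidates_per_item, references):
        top = candidates[:n]
        if any(c["voice"] == ref["voice"] and c["arg_order"] == ref["arg_order"] for c in top):
            correct += 1
    return correct / len(references)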
Human Evaluation
Our experiment in Section 5.3 has shown that the accuracy of our linguistically informed ranking model dramatically increases when we consider the three best sentences rather than only the top-ranked sentence. This means that the model sometimes predicts almost equal naturalness for different voice realisations. Moreover, in the case of word order, we know from previous evaluation studies that humans sometimes prefer realisations other than the original corpus sentences. This section investigates agreement in human judgements of voice realisation.
Whereas previous studies in generation mainly used human evaluation to compare different systems, or to correlate human and automatic evaluations, our primary interest is the agreement or correlation between human rankings. In particular, we explore the hypothesis that this agreement is higher in certain contexts than in others. In order to select these contexts, we use the predictions made by our ranking model.
The questionnaire for our experiment comprised 24 items falling into 3 classes: a) items where the 3 best sentences predicted by the model have the same voice as the original sentence ("Correct"), b) items where the 3 top-ranked sentences realise different voices ("Mixed"), c) items where the model predicted the incorrect voice in all 3 top sentences ("False"). Each item is composed of the original sentence, the 3 top-ranked sentences (if not identical to the corpus sentence) and 2 further sentences such that each item contains different voices. For each item, we presented the previous context sentence.
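The three item classes can be derived automatically from the ranker output as sketched below; the simple dict encoding of candidates is again an illustrative assumption.

def item_class(top3_candidates, original):
    """Classify a questionnaire item from the 3 top-ranked candidates:
    'Correct' if all realise the original voice, 'False' if none does, 'Mixed' otherwise."""
    hits = [c["voice"] == original["voice"] for c in top3_candidates]
    if all(hits):
        return "Correct"
    if not any(hits):
        return "False"
    return "Mixed"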
The experiment was completed by 8 participants, all native speakers of German; 5 had a linguistic background. The participants were asked to rank each sentence on a scale from 1 to 6 according to its naturalness and plausibility in the given context. The participants were explicitly allowed to use the same rank for sentences they found equally natural. The participants made heavy use of this option: out of the 192 annotated items, only 8 are ranked such that no two sentences have the same rank.
We compare the human judgements by correlating them with Spearman's ρ. This measure is considered appropriate for graded annotation tasks in general (Erk and McCarthy, 2009), and has also been used for analysing human realisation rankings (Velldal, 2008; Cahill and Forst, 2010). We normalise the ranks according to the procedure in Velldal (2008). In Table 6, we report the correlations obtained from averaging over all pairwise correlations between the participants and the correlations restricted to the item and sentence classes. We used bootstrap re-sampling on the pairwise correlations to test that the correlations on the different item classes significantly differ from each other.
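A sketch of this agreement computation with SciPy is given below: pairwise Spearman correlations are averaged per item, and a bootstrap interval is drawn over a collection of such correlations. The toy rank vectors are invented, and the rank normalisation of Velldal (2008) is omitted for brevity.

from itertools import combinations
import random
from scipy.stats import spearmanr

def item_agreement(annotations):
    """annotations: {annotator: [rank of sentence 1, rank of sentence 2, ...]} for one item."""
    rhos = []
    for a, b in combinations(sorted(annotations), 2):
        rho, _ = spearmanr(annotations[a], annotations[b])
        rhos.append(rho)
    return sum(rhos) / len(rhos)

def bootstrap_interval(values, samples=1000, seed=0):
    """Bootstrap re-sampling over a list of (pairwise or per-item) correlations."""
    rng = random.Random(seed)
    means = []
    for _ in range(samples):
        draw = [rng.choice(values) for _ in values]
        means.append(sum(draw) / len(draw))
    means.sort()
    return means[int(0.025 * samples)], means[int(0.975 * samples)]  # 95% interval

item = {"ann1": [1, 2, 2, 4, 5], "ann2": [1, 3, 2, 4, 4], "ann3": [2, 2, 1, 5, 5]}
print(item_agreement(item))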
The correlations in Table 6 suggest that the agreement between annotators is highest on the false items, and lowest on the mixed items. Humans tended to give the best rank to the original sentence more often on the false items (91%) than on the others. Moreover, the agreement is generally higher on the sentences realising the correct voice.
These results seem to confirm our hypothesis that the general level of agreement between humans differs depending on the context. However, one has to be careful in relating the effects in our data solely to voice preferences. Since the sentences were chosen automatically, some examples contain very unnatural word orders that probably guided the annotators' decisions more than the voice. This is illustrated by Example (6) showing two passive sentences from our questionnaire which differ only in the position of the adverb besser "better". Sentence (6-a) is completely implausible for a native speaker of German, whereas Sentence (6-b) sounds very natural. This observation brings us back to our initial point that the surface realisation task is especially challenging due to the interaction of a range of semantic and discourse phenomena. Obviously, this interaction makes it difficult to single out preferences for a specific alternation type. Future work will have to establish how this problem should be dealt with in the design of human evaluation experiments.
Conclusion
We have presented a grammar-based generation architecture which implements the surface realisation of meaning representations abstracting from voice and word order. In order to be able to study voice alternations in a variety of contexts, we designed heuristic underspecification rules which establish, for instance, the alternation relation between an active with a generic agent and a passive that does not overtly realise the agent. This strategy leads to a better balanced distribution of the alternations in the training data, such that our linguistically informed generation ranking model achieves high BLEU scores and accurately predicts active and passive. In future work, we will extend our experiments to a wider range of alternations and try to capture inter-sentential context more explicitly. Moreover, it would be interesting to carry over our methodology to a purely statistical linearisation system where the relation between an input representation and a set of candidate realisations is not so clearly defined as in a grammar-based system.
Our study also addressed the interaction of different linguistic variation types, i.e. word order and voice, by looking at different types of linguistic features and exploring different ways of labelling the training data. However, our SVM-based learning framework is not well-suited to directly assess the correlation between a certain feature (or feature combination) and the occurrence of an alternation. Therefore, it would be interesting to relate our work to the techniques used in theoretical papers, e.g. (Bresnan et al., 2007), where these correlations are analysed more directly.
[Figure 1: Generation pipelines. Left-hand pipeline: the LFG grammar parses a corpus sentence into an f-structure FS a, from which it generates the candidate realisations Snt a1 ... Snt am; these are ranked by the SVM ranker. Right-hand pipeline: semantic rules map the f-structure to an underspecified meaning representation SEM; reverse semantic rules map SEM to a set of f-structures FS a, FS b, ..., and the LFG grammar generates candidate realisations from each of them, which are again ranked by the SVM ranker.]
Table 1: Distribution of voices in SEM h (SEM n)

                 Active       Passive
2-role trans.    71% (82%)    10% (2%)
1-role trans.    11% (0%)      8% (16%)
Figure 2: F-structure pair for passive-active alternation

f-structure for Example (2) (active):
PRED 'see<(↑ SUBJ)(↑ OBJ)>'; SUBJ [PRED 'one']; OBJ [PRED 'chancellor']; TOPIC 'one'; PASS -

f-structure for Example (3) (passive):
PRED 'see<NULL (↑ SUBJ)>'; SUBJ [PRED 'chancellor']; TOPIC 'chancellor'; PASS +

semantics for Example (2):
HEAD(see), PAST(see), ROLE(agent,see,one), ROLE(patient,see,chancellor)

semantics for Example (3):
HEAD(see), PAST(see), ROLE(agent,see,implicit), ROLE(patient,see,chancellor)
Table 3: Accuracy of Voice Prediction by Ling. Model in Experiment 1

                  FS      SEM n    SEM h    SEM n*
All Trans.
  Voice Acc.      100     98.06    91.05    97.59
  Voice Spec.     100     22.8     0        0
  Majority BL     -       -        82.4     98.1
2-role Trans.
  Voice Acc.      100     97.7     91.8     97.59
  Voice Spec.     100     8.33     0        0
  Majority BL     -       -        88.5     98.1
1-role Trans.
  Voice Acc.      100     100      90.0     -
  Voice Spec.     100     100      0        -
  Majority BL     -       -        53.9     -
(4) a. 26 kostspielige Studien erwähnten die Finanzierung.
       26 expensive studies mentioned the funding
    b. Die Finanzierung wurde von 26 kostspieligen Studien erwähnt.
       the funding was by 26 expensive studies mentioned
       'The funding was mentioned by 26 expensive studies.'
Table 4: Evaluation of Experiment 2

Features   Match   BLEU   Voice   Prec.   VF
Prec.      16.3    0.70   88.43   64.1    59.1
ScalAl.    10.4    0.64   90.37   58.9    56.3
Union      26.4    0.75   91.50   80.2    70.9
Table 5: Evaluation of Experiment 3
Table 6: Human Evaluation
This work has been supported by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) in SFB 732 Incremental specification in context, project D2 (PIs: Jonas Kuhn and Christian Rohrer).
Footnotes:
2. Compare the bidirectional competition set-up in some Optimality-Theoretic work, e.g., (Kuhn, 2003).
3. The choice among alternative f-structures is done with a discriminative model (Forst, 2007).
4. The language model is trained on the German data release for the 2009 ACL Workshop on Machine Translation shared task, 11,991,277 sentences in total.
6. (Nakanishi et al., 2005) also note a negative effect of including LM scores in their model, pointing out that the LM was not trained on enough data. The corpus used for training our LM might also have been too small or distinct in genre.
Judith Aissen. 1999. Markedness and subject choice in optimality theory. Natural Language and Linguistic Theory, 17(4):673-711.
Judith Aissen. 2003. Differential Object Marking: Iconicity vs. Economy. Natural Language and Linguistic Theory, 21:435-483.
Anja Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language evaluation. In Proceedings of the 6th International Natural Language Generation Conference (INLG'10).
Anja Belz, Mike White, Josef van Genabith, Deirdre Hogan, and Amanda Stent. 2010. Finding common ground: Towards a surface realisation shared task. In Proceedings of the 6th International Natural Language Generation Conference (INLG'10).
Anja Belz. 2005. Statistical generation: Three methods compared and evaluated. In Proceedings of the Tenth European Workshop on Natural Language Generation (ENLG-05), pages 15-23.
Bernd Bohnet, Leo Wanner, Simon Mill, and Alicia Burga. 2010. Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), Beijing, China.
Joan Bresnan, Shipra Dingare, and Christopher D. Manning. 2001. Soft Constraints Mirror Hard Constraints: Voice and Person in English and Lummi. In Proceedings of the LFG '01 Conference.
Joan Bresnan, Anna Cueni, Tatiana Nikitina, and Harald Baayen. 2007. Predicting the Dative Alternation. In G. Boume, I. Kraemer, and J. Zwarts, editors, Cognitive Foundations of Interpretation. Royal Netherlands Academy of Science, Amsterdam.
Aoife Cahill and Martin Forst. 2010. Human Evaluation of a German Surface Realisation Ranker. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 112-120, Athens, Greece. Association for Computational Linguistics.
Aoife Cahill and Arndt Riester. 2009. Incorporating Information Status into Generation Ranking. In Proceedings of the 47th Annual Meeting of the ACL, pages 817-825, Suntec, Singapore. Association for Computational Linguistics.
Aoife Cahill, Martin Forst, and Christian Rohrer. 2007. Stochastic realisation ranking for a free word order language. In Proceedings of the Eleventh European Workshop on Natural Language Generation, pages 17-24, Saarbrücken, Germany. DFKI GmbH, Document D-07-01.
Dick Crouch and Tracy Holloway King. 2006. Semantics via F-Structure Rewriting. In Miriam Butt and Tracy Holloway King, editors, Proceedings of the LFG06 Conference.
Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2006. XLE Documentation. Technical report, Palo Alto Research Center, CA.
Katrin Erk and Diana McCarthy. 2009. Graded Word Sense Assignment. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 440-449, Singapore.
Katja Filippova and Michael Strube. 2007. Generating constituent order in German clauses. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 07), Prague, Czech Republic.
Katja Filippova and Michael Strube. 2009. Tree linearization in English: Improving language model based approaches. In Companion Volume to the Proceedings of the Human Language Technologies Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 09), Boulder, Colorado.
Martin Forst. 2007. Filling Statistics with Linguistics - Property Design for the Disambiguation of German LFG Parses. In Proceedings of the ACL 2007 Workshop on Deep Linguistic Processing, pages 17-24, Prague, Czech Republic. Association for Computational Linguistics.
Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A study of implicit argumentation for nominal predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics.
Barbara J. Grosz, Aravind Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.
Thorsten Joachims. 2006. Training linear SVMs in linear time. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD).
Jonas Kuhn. 2003. Optimality-Theoretic Syntax - A Declarative Approach. CSLI Publications, Stanford, CA.
Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of ACL/COLING-98, pages 704-710, Montreal, Quebec.
Tomasz Marciniak and Michael Strube. 2005. Using an annotated corpus as a knowledge source for language generation. In Proceedings of the Workshop on Using Corpora for Natural Language Generation, pages 19-24, Birmingham, UK.
Hiroko Nakanishi, Yusuke Miyao, and Junichi Tsujii. 2005. Probabilistic models for disambiguation of an HPSG-based chart generator. In Proceedings of IWPT 2005.
Adwait Ratnaparkhi. 2000. Trainable methods for surface natural language generation. In Proceedings of NAACL 2000, pages 194-201, Seattle, WA.
Stefan Riezler, Detlef Prescher, Jonas Kuhn, and Mark Johnson. 2000. Lexicalized stochastic modeling of constraint-based grammars using log-linear measures and EM training. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL'00), pages 480-487, Hong Kong.
Stefan Riezler, Dick Crouch, Ron Kaplan, Tracy King, John Maxwell, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'02), Philadelphia, Pennsylvania.
Eric K. Ringger, Michael Gamon, Robert C. Moore, David Rojas, Martine Smets, and Simon Corston-Oliver. 2004. Linguistically Informed Statistical Models of Constituent Structure for Ordering in Sentence Realization. In Proceedings of the 2004 International Conference on Computational Linguistics, Geneva, Switzerland.
Christian Rohrer and Martin Forst. 2006. Improving coverage and parsing quality of a large-scale LFG for German. In Proceedings of LREC-2006.
Erik Velldal and Stephan Oepen. 2006. Statistical ranking in tactical generation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia.
Erik Velldal. 2008. Empirical Realization Ranking. Ph.D. thesis, University of Oslo, Department of Informatics.
Sina Zarrieß and Jonas Kuhn. 2010. Reversing F-structure Rewriting for Generation from Meaning Representations. In Proceedings of the LFG10 Conference, Ottawa.
Sina Zarrieß. 2009. Developing German Semantics on the basis of Parallel LFG Grammars. In Proceedings of the 2009 Workshop on Grammar Engineering Across Frameworks (GEAF 2009), pages 10-18, Suntec, Singapore. Association for Computational Linguistics. |
12,258,794 | Classification procedures for software evaluation | We outline a methodological classification for evaluation approaches of software in general. This classification was initiated partly owing to involvement in a biennial European competition (the European Academic Software Award, EASA) which was held for over a decade. The evaluation grid used in EASA gradually became obsolete and inappropriate in recent years, and therefore needed to be revised. In order to do this, it was important to situate the competition in relation to other software evaluation procedures. A methodological perspective for the classification is adopted rather than a conceptual one, since a number of difficulties arise with the latter. We focus on three main questions: what to evaluate? how to evaluate? and who evaluates? The classification is therefore hybrid: it allows one to account for the most common evaluation approaches and is also an observatory. Two main approaches are differentiated: system and usage. We conclude that any evaluation always constructs its own object, and the objects to be evaluated only partially determine the evaluation which can be applied to them. Generally speaking, this allows one to begin apprehending what type of knowledge is objectified when one or another approach is chosen. | [] | Classification procedures for software evaluation
Muriel Amar muriel.amar@enc.sorbonne.fr
Urfist de Paris -École nationale des Chartes
17, rue des Bernardins, 75005 Paris, France
Sophie David sophie.david@u-paris10.fr
UMR 7114
CNRS
MoDyCo
Université Paris 10
bât. L-R12A
200, av. de la République, 92001 Nanterre cedex, France
Rachel Panckhurst rachel.panckhurst@univ-montp3.fr
Praxiling UMR 5267
CNRS
Université Paul-Valéry Montpellier 3
34199 Montpellier cedex 5, France
Lisa Whistlecroft lisa.whistlecroft@lancaster.ac.uk
PALATINE
The Great Hall
Lancaster University
Lancaster LA1 4YW, UK
Classification procedures for software evaluation
Evaluation, Methodological classification, Software, Competitions, TREC, MUC, EASA, Epistemology
We outline a methodological classification for evaluation approaches of software in general. This classification was initiated partly owing to involvement in a biennial European competition (the European Academic Software Award, EASA) which was held for over a decade. The evaluation grid used in EASA gradually became obsolete and inappropriate in recent years, and therefore needed to be revised. In order to do this, it was important to situate the competition in relation to other software evaluation procedures. A methodological perspective for the classification is adopted rather than a conceptual one, since a number of difficulties arise with the latter. We focus on three main questions: what to evaluate? how to evaluate? and who evaluates? The classification is therefore hybrid: it allows one to account for the most common evaluation approaches and is also an observatory. Two main approaches are differentiated: system and usage. We conclude that any evaluation always constructs its own object, and the objects to be evaluated only partially determine the evaluation which can be applied to them. Generally speaking, this allows one to begin apprehending what type of knowledge is objectified when one or another approach is chosen.
Introduction
Over approximately the past twenty years, the domain of evaluation has attempted to become an independent field, through international conferences (e.g. TREC, Text Retrieval Conference; MUC, Message Understanding Conference; LREC, Language Resources and Evaluation Conference), competitions (e.g. EASA, European Academic Software Award )), publications (e.g. Sparck-Jones & Gallier, 1996;Chaudiron, 2004), and international agencies (e.g. ELDA, a French Agency for evaluation & distribution of linguistic resources; NIST National Institute of Standards and Technology). The production of new software devices has increased; it has mainly emerged in response to professional demand, e.g. in natural language processing or engineering: spelling and grammar checkers, tokenisers, machine translation systems, voice recognisers, etc.; but technological developments also emerged early on in information retrieval (IR) (Chaudiron, 2004). At the same time, social demands have made it necessary to account for the appropriateness of this research, and evaluation procedures have been developed, thereby extending longstanding traditions of evaluation principles for software devices both in linguistics and IR (cf. for the first evaluation reports in linguistics and data processing, Bar-Hillel, 1960, ALPAC report, 1966, and for a state of the art historical perspective, . Much work has been done, and evaluation approaches can now be studied as such: i) procedures can be classified, owing to their relative diversity, which is now well-documented; ii) the way to characterise objects to be evaluated can be queried.
This work includes several aims: 1) produce a classifica-tion, which is methodological in nature; 2) focus on the complex nature of all evaluation approaches; 3) start stipulating what type of knowledge is objectified throughout all evaluation approaches. First, we situate the context of our study (the EASA competition, which partly provided the initial impetus at the onset of our research) and the issues at stake ( § 2.). Then we defend the relevance of a methodological classification for evaluation approaches of software in general ( § 3.), before proposing different elements to produce a classification of the most common evaluation approaches ( § 4.). We finally discuss the two fundamental types of approach which emerge from this classification ( § 5.).
The context
2.1. The EASA Competition
The European Academic Software Award (EASA) was initiated in 1994 and the last competition was held in 2004. It was a biennial competition which was organised by the European Knowledge Media Association (EKMA). Academics and students were able to submit software they had developed which was then evaluated by a team of European jurors. After an expert juror evaluation process of 150 to 200 submissions, 30 to 35 items were selected to proceed to the third and final stage. The finalists' submissions were evaluated once more and 10 prizes were then allocated to the winners. Over the years some aspects of the evaluation process and criteria became inappropriate or obsolete, due to several factors:
• a very wide scope of entries (in later years, EASA implicitly became a competition including not only soft-ware but also virtual learning environments (VLEs) and pedagogical innovations using VLEs); • technical improvements became standard (it was not relevant to evaluate these as they no longer allowed appropriate differentiation); • some of the questions in the evaluation grid became spurious and/or ambiguous, etc.
Three of the authors were therefore commissioned by EKMA to conduct a revision of the whole procedure, but in order to do so, they realised that EASA needed to be situated in relation to other software evaluation procedures, namely: to improve comparisons between competitions; to put emphasis on EASA's original elements and to confront solutions adopted within other competitions in order to improve the weak points of the EASA procedure. This research is partly based on previous work (e.g., the distinctions proposed by Sparck-Jones & Gallier, 1996), but it also integrates other procedures, e.g., work on usage, or the EASA competition, which includes several original elements. Following the initial impetus of this research (to improve the procedure of the EASA competition), the objects subjected to evaluation procedures that we want to characterise remain systems in a broad sense: software, VLEs, etc. Static resources (corpora, databases, etc.) are not considered in this paper 1 .
Issues at stake
It appears to us that the field of evaluation was initially posited in a problematic way, by considering that it could be described conceptually as a discipline, and by positioning itself as an autonomous science with its own concepts, methods and rules (e.g., Ellis, 1992 quoted by Chaudiron, 2004;Sparck-Jones & Gallier, 1996). Needs for evaluation, from an industrial or research perspective, the necessity for rigour and systematism which accompany these projects, and assets in terms of results do not imply that evaluation should be considered an autonomous science as such. Evaluation is a methodological step of every system, for every project. It seems that this conceptual status is often a posteriori reconsidered. Our current research goes in the opposite direction to much former work, by establishing methodological distinctions (cf. ( § 3.)).
In addition, in a world where evaluation has become increasingly important, and in which its results have consequences at different levels (professional recognition of work, leading to research funding, commercialisation of products, etc.), it is important to characterise the context of an evaluation and the adopted procedures at their best. One often observes that evaluation is ultimately founded on measures (cf. Sparck-Jones & Gallier, 1996, p. 20-21, measures for evaluation of tokenisers, Adda et al., 2000). Even though it is trivial to state that what we measure is measurable, it is much less trivial to discuss the meaning of what is measured. The measure is no longer a simple measure, it becomes an indicator. The measure then shifts from its calculus space -its context of production -to 1. Temperature recordings: the same temperature is interpreted differently according to the season, the geographical situation, etc. 2. In companies, the absenteeism rate is admittedly measurable, but its meaning is not fixed for all companies, nor is it for a specific company, because it always depends on a particular context and can alter over time, owing to: relational deficiencies between managers and employees, within a specific professional group, problems about hygiene or security, anxiogene social pressure, etc. Absenteeism as such is easy to measure; however, the interpretation of this as an indicator is a much more complex question. 3. The number of visitors at the BPI (Bibliothèque Publique d'Information, the biggest library in Paris): the BPI registers 6,000 entries per day, except on Sundays, where this number drops to 4,000 entries (figures are approximate). As there is a limit of 2,000 people in the library at one time, the library "fills" 3 times every day, but only twice on Sundays, when the queue is surprisingly lengthy. Another indicator is necessary in this case: the duration of the visit, which is longer on Sundays. Which indicator is the most reliable to account for what happens on Sundays? It depends on the question: accounting for a phenomenon (a social sciences approach) or measuring the conformity of an event (even a social event) to a law (a reference, a scientific approach, etc.).
The indicator (built on a measure) is thus necessarily interpreted by some indices which are not part of the calculated elements. The dimension of this interpretation requires another framework, distinct from a theoretical framework on which the measure is founded (for example physical measures vs. climatological interpretations). Finally, it is important to underline that what is at stake in an evaluation is crucial when one knows the ability of humans (including researchers) to adapt easily to evaluation procedures. If, on the one hand, clarification must be transparent, on the other hand one must be aware of the impact that procedures can have on designing systems. In other words, differentiate between what a community is able to build in terms of objects, and what a community of experts is able to evaluate. And it is unfortunate that, for non-scientific reasons, the constructed objects suffer from constructed evaluation procedures (e.g. in one of the past TREC competitions, answers were limited to 50 characters maximum, Lavenus & Lapalme, 2002). More fundamentally, our research tries to characterise the type of knowledge which can be addressed when we posit evaluation procedures (in a similar way to Pariente's work (1973) about conceptual knowledge). This aim exceeds the framework stipulated in the current paper by far. However, the classification we propose is a first step. It begins to establish that:
1. every evaluation always builds its own object of study;
2. objects to be evaluated partially determine the procedure which can be applied to them. This is what one can explicitly perceive in the examples associated with the classification, where the same object can be evaluated according to different procedures.
A methodological classification
The classification proposed below (§ 4.) relies on a methodological approach to evaluation. Classifying approaches with different aims does not allow the development of conceptual observatories as such. This is due to the following facts:

(i) evaluation aims may include differing scientific, social, financial, etc. considerations (Habermas' (1973) definition of a practice is more relevant here than that of a conceptual domain);

(ii) evaluation is applied to fundamentally multidisciplinary objects (in computer science, linguistics, IR, communication, learning, etc.). If these objects were conceptually characterised, one would have, at best, a set of concepts elaborated in all implied disciplines, but this set could not form an integrated theory. Furthermore, it would become necessary to articulate the conceptual framework which produces the data with the conceptual framework which produces their interpretation (see § 2.2. for examples), which is not a trivial problem;

(iii) outside the classification framework, can a specific evaluation be linked with conceptual knowledge? If this is the case, concepts and theories need to be determined; nothing of the kind has been convincingly demonstrated so far, including for specific elements of the domain: for instance, glassbox/blackbox are not concepts that belong to a particular theory (or theories) of evaluation but are only methodological notions. Such notions are nevertheless presented as concepts by many authors (e.g. Sparck-Jones & Gallier, 1996: "this section introduces some basic, general evaluation concepts" (p. 19); "The main problem in evaluation is finding measures, i.e. concepts which are both instantiations of generic notions and are operable as measures" (p. 20); or Chaudiron (2004), who extends Ellis' work (1992) by using the term "paradigm"; see also Chaudiron & Mustafa el Hadi, 2007, for usage of this term). It is not sufficient to name a notion a "concept" for it to really become one: the concept would have to be integrated into a conceptual network, and be defined according to a "study object";

(iv) the multidisciplinarity of the objects to be evaluated prevents one from giving a stabilised definition of what could be a "study object" of evaluation understood as a conceptual discipline. More precisely, a definition of a "study object" as such does not exist, in the way it does in linguistics for instance ("characterise 'language' in relation to 'non-language'", according to Milner's (1989) research programme), or even in information science, where the study object is the study of the "process of research and exploitation of intentional information" (or "communicated knowledge", which differs from "news-information", "data-information" and "knowledge-information", Fondin 2006).

For these reasons, could a theory exist which dominates all the other theories implied in both the building of measures and of interpretations? Facing this epistemological issue, we have chosen a stance which solely posits "methodological distinctions". As these "methodological distinctions" can be applied to all approaches, they can be compared. And, as is shown below, methodological questioning allows the construction of an observatory (Milner, 1989). Three questions are sufficient for classifying the most common approaches (though not for describing each one in detail, which is not our purpose). These three questions are: What to evaluate? How to evaluate? Who evaluates?
Elements of the classification
We now review the different elements and sub-elements that we have posited. The general classification appears in Table 1 (see Appendix).

4.1. What to evaluate?

4.1.1. Objects to be evaluated

This indicates whether the evaluation primarily takes into account the software, or primarily considers usage:

1. The objects which are evaluated are items of software, isolated from their context of use. They may consist of one or several items of software. The latter may consist of the same or different types of software.

2. Another method is centred on usage. The item of software is evaluated in its context of use. The evaluation must therefore take into account many other factors, which form a complex device (purpose, users, expectations, etc.). Research conducted by Le Marec (2004) on evaluation in the context of museums is an example. She illustrates how computerised information points in museums are used, and that they are only one factor among many which form a complex device of institutional communication, including expectations, itineraries, pieces of information appearing near the information points, etc. In actual fact, it may not be the computerised information point as such which should be evaluated, she stresses, but rather the situation as a whole.
Access
The evaluator engages with different elements of the software depending on the type of access. Two methods appear:
1. The glassbox method implies that the evaluator has access to the whole computing process (structure, algorithms, programming). It includes detailed evaluation, and is often accompanied by measures of the intrinsic performance of the software. The reasons and causes of errors/bugs are investigated from a computer programming perspective. Several key stages are analysed and the results influence later development; the consequences are both financial and human. The developer often conducts this sort of evaluation (Falkedal, 1998).

2. The blackbox method focuses solely on input and output. The evaluator does not have access to any details of the computing process, which remains a black box. This method is generally used when there is intellectual or commercial copyright, and is often used in competitions (e.g., TREC, MUC, EASA, etc.).
4.2. How to evaluate?

4.2.1. Object distribution: individual or comparative
This refers to the evaluation of multiple items.
1. The items of software are evaluated one by one. The evaluation procedure (which may be fairly detailed) is applied to each item of software individually. This is the most common method used in competitions (e.g., TREC, MUC, EASA, etc.).

2. The items of software are evaluated comparatively, together. A common point of view is established, allowing similarities and differences to be identified. This perspective is not detailed and is always ad hoc, since it is constructed on the basis of the participating items of software (considered, in a sense, as tokens rather than as instances of a type). Compared to 1), only a small number of items of software may be evaluated. This method is usually used in order to create connections between software developers. For instance, this was the initial framework chosen for the project evaluating information extraction tools (funded by the Agence universitaire pour la francophonie, AUF; Amar & David, 2001).
Resources
This aspect refers to the means used during the evaluation:
1. Referentials are used when stable, consensual and normed knowledge exists, or when expected results can be stated in advance; referentials give a form of external calibration (for instance spelling and grammatical rules for a spelling and grammar checker). The results produced by the software are considered to be correct or incorrect. This method is often used when a ranking of software is required, since the referentials are used to make comparisons between items of competing software.

2. No referentials are used when stable and normed knowledge does not exist, or when expected results cannot be stated in advance. This is often the case for situations which are only more or less consensual, or when one focuses on needs which can change according to differing practice and context. Instructional software may be typical of this sort of approach, but also software for indexing (Amar & David, 2001) or machine translation (King & Falkedal, 1990; Nübel & Seewald-Heeg, 1998).
Measures
Quantitative vs. qualitative methods can be applied.
1. In quantitative methods, a mark is attributed to evaluated aspects (via sets of tests/questions about content, interface ergonomics, etc.). Marks are usually associated with true/false answers or check-boxes on a grid. Quantitative methods are often used in competitions since marks are then ranked. Gold-standard methods can be included here: the software is measured against a given gold standard (which is established from a set of expected answers). 2. A qualitative method refers to a particular issue; in this instance, a methodology and a questionnaire are often used. The result is usually a report including recommendations. This does not mean that all aspects are excluded from any sort of measure, but simply that the measure is never seen to be the final result of the evaluation (Le Marec, 2004).
Evaluation distribution
This is where we consider the number of evaluations and the ways in which the evaluators work. 1. Evaluator and developer: evaluator and developer (of the object being evaluated) are rarely combined, except in glassbox methods. In competitions, ethics require these to be two different people. 2. Evaluator and user: (i) if the evaluator observes the user of the software in situation, evaluator and user are never the same person; (ii) the evaluator can temporarily adopt the position of user.
Evaluator expertise
Expertise is a complex notion, since one can be an expert in a particular domain (rarely in several), and even within a specific area there are variable degrees of expertise.
1. Non-expert evaluators are often used in methods with referentials, as they are given a set of points which need to be checked and then indicate the answers that match appropriately.
2. Expert evaluators usually intervene in methods with or without referentials, and they judge the quality/relevance of the answer in the given context.
In methods without referentials, expert evaluators will normally be required. However, it may be appropriate for the evaluators to be expert in evaluation but non-expert in the subject domain.
Conclusion
Our classification in § 3. is based on three questions (What?, How?, Who?). Each question consists of different sub-elements, for which distinct answers can be given. This may lead to a very high number of possibilities, if each combination of parameters is envisaged. One could object that we have envisaged an exceedingly high number of procedures: (i) this in fact reflects the astounding abundance of the parameters which have been used in different frameworks; (ii) in practice the number is smaller, because some choices imply other choices de facto: glassbox access is compatible only with experts as users; the evaluation of a practice is compatible only with blackbox access, etc. In the same way, the framework of a specific evaluation can significantly reduce the possibilities. If competitions are considered, some aspects are necessarily quasi-immutable: a competition which evaluates many entries is necessarily situated in the system approach (cf. infra), uses a blackbox method and applies quantitative measures. These methodological distinctions allow a classification of approaches to be constructed. This classification actually has a hybrid status:
1. It is a tool which helps when revising or inventing evaluation procedures; one is obliged to stipulate the major elements about which the evaluation procedure needs to formulate an opinion. This is what we experienced during our work on revising the EASA grid. It was fruitful for determining the nature of the objects to be evaluated, for eliminating spurious or ambiguous formulations and inappropriate criteria, and for proposing new ones (see David et al., 2005a, 2005b, for details on both the former and revised evaluation grids).

2. It is also an observatory of the knowledge constructed by the evaluation procedure: it indicates a way to apprehend and to reason about objects. This is particularly apparent when the consequences of different choices are explored and updated (cf. (ii) supra).

Table 1 is an exemplification of several approaches. It is not globally exhaustive: it shows neither all of the possibilities, nor all of those which currently exist, nor a fortiori those which do not (yet) exist. It is also not locally exhaustive, because it does not describe in detail the specificities of each procedure. But it clearly shows two major things:
1. Evaluations always build points of view, which are always limited by the different chosen parameters. But choosing one or another parameter is justified by multiple reasons of differing natures (cf. § 2.). Consequently, every time an evaluation is conducted, it constructs its own object. The same spelling checker evaluated according to a developer procedure or in a competition will be observed in different ways. The chosen dimensions provide limited pieces of knowledge. 2. The objects to be evaluated only partially determine the evaluation which can be applied to them. The same spelling checker could be evaluated according to a developer procedure, or compete at TREC or EASA, or be evaluated according to practices and usage (that is why, in the table, we posit the same objects under all of the procedures).
Finally, the exemplification of the procedures, such as can be observed in Table 1, allows one to reflect upon the resemblances and differences between procedures. We shall now proceed with the general classification as such.
5. General classification: system vs. usage

Two major approaches can be identified: system and usage.
1. In the system approach (white background in Table 1), the intrinsic performance of the software prevails; evaluation of usage within a real context (professional, private, collective, etc.) is excluded, and neither the user, nor the diversity of users (employees, students, etc.), nor the diversity of usages (occasional, regular, etc.) is taken into account. This does not imply that aspects which concern users directly are not covered (interface ergonomics, installation, etc.), but they are fairly limited and, if the user is indeed considered, it is always from the standpoint of a potential user. In this approach, one focuses on an ideal/norm, and each object is placed at a certain distance from this ideal/norm. Objects can then be compared (when there is only one object, the comparison is of course lost). The norm can be represented by referentials or qualitative judgements. The objects to be evaluated are reduced to aspects that are measurable, comparable, and that generally belong to one field. Only very few dimensions are considered, so evaluation procedures often "abolish" the complexity of objects. All evaluations conducted in competitions use the system approach.

2. In the usage approach (grey background in Table 1), thorough preliminary reflection on the "objects to be evaluated" is crucial. The item of software itself may not be directly considered; rather, the more general practices surrounding the usage of the software are addressed (the question marks after the names of the systems in Table 1 refer to this). One then focuses on the complexity of the situation (including the object): the multidisciplinary aspects, the specific tasks aimed at specified users, the interactive properties, etc. In this case, objects are considered as practical complex devices, i.e., a complex set of social and technical relationships established between groups or individuals and technical objects, including representations, norms, and habits (Amar, 2000; Le Marec, 2001). This approach can be used when questions related to user practice within a given context are addressed (e.g. museums, or educational situations in which the pedagogical and relational approach is also studied). The perspective here is radically different from the system approach: it is a different type of knowledge which is exhibited.
To illustrate these two types of knowledge, one can think of the spelling checker in Microsoft Word™. Everyone has experienced its shortcomings. In a developer approach or in a competition, one could exhibit them precisely, and perhaps be tempted to assign a negative judgement. On the other hand, in a usage approach, one could exhibit its utility and its context of use, also including the reasons why it is used in spite of its defects. One perceives with this example how different knowledge is objectified and how difficulties are encountered when choosing an approach, precisely because specific points of view are constructed: either the tool is "invalidated" for (very) good reasons, even if it is the most widely used globally; or it is "validated" despite its faults. In both cases, the objectified knowledge is situated within two radically different perspectives.
General conclusion
The outlines presented here may make a helpful addition to general classification techniques for evaluation procedures. As the classification is a methodological one, comparisons become possible, and the most common evaluation approaches can then be analysed. We have shown that evaluation always constructs a point of view: because this point of view is limited (it chooses some dimensions, but never all of them) and because the objects to be evaluated are complex, the latter can be submitted to different approaches. In this sense, any evaluation always constructs its own object, and the objects to be evaluated only partially determine the evaluation which can be applied to them. We have also addressed the issue of the epistemological nature of evaluation. If one agrees that evaluation is a technique, and that it may become the subject of applied research, what can one conclude? Three attitudes seem feasible: consider evaluation to be an engineering science, a plain science, or a methodological branch of a science. In this paper, we have chosen to explore the third attitude. We have clarified some of the problems, but further in-depth research is necessary in order to specify more precisely the epistemological status of evaluation.
Acknowledgements
The following institutions have sponsored this work: CNRS, EKMA, The Higher Education Academy, The Joint Information Systems Committee, Lancaster University, Universités Paris 10 & Montpellier 3. We would especially like to thank the three anonymous referees and Jean-Luc Minel, whose questions helped us rethink certain aspects of our work.
Appendix

Table 1. Exemplification of some approaches.

| Question | Criterion | developer | TREC | EASA | usage |
|---|---|---|---|---|---|
| What | Evaluation object | one item of software | several items of the same type of software | several items of different types of software | practice |
| What | Access | glassbox | blackbox | blackbox | for the software: blackbox |
| How | Object distribution | individual | individual | individual | for the software: individual |
| How | Resources | with referentials | with referentials | without referentials | for the software: without referentials |
| How | Measures | quantitative measures (true/false answers) | quantitative measures (true/false answers) | quantitative measures (grid) | surveys |
| Who | Evaluation distribution | single | single | aggregated (stage 2) and collective (stage 3, finals) | collective |
| Who | Evaluator / user | evaluator ≠ user | evaluator ≠ user | evaluator ≠ user, but temporarily so (stage 2) | evaluator ≠ user |
| Who | Evaluator position | evaluator = developer | evaluator ≠ developer | evaluator ≠ developer | evaluator ≠ developer |
| Who | Expertise | experts | non experts | experts (stages 2 & 3) and non experts (stage 3, finals) | experts |
|  | Type of software (which could be) evaluated | all software | spelling checkers, QA* systems, MT** systems | spelling checkers, QA systems, MT systems | spelling checkers?, QA systems?, MT systems?, interactive information points (museums)? |

*QA: question/answering; **MT: machine translation; ?: the software may not be the primary focus of evaluation; grey background (usage column) = usage approach; white background = system approach.
It is of course clear that more precise observation of the procedures that the community has proposed for these types of resources would be an asset.
We prefer the term usage to that of user: the former implies the latter, and puts more emphasis on social practices rather than on individual or cognitive characteristics.
Abbou A. (2000), « Évaluation des résumeurs automatiques disponibles », La Tribune des industries de la langue et du multimédia, 35-36, 2-7.

Adda G., Lecompte J., Mariani J., Paroubek P., Rajman M. (2000), « Les procédures de mesure automatique de l'action GRACE pour l'évaluation des assignateurs de partie du discours pour le français », in Chibout K., Mariani J., Masson N., Néel F. (éds), Ressources et évaluation en ingénierie de la langue, Paris : Duculot, p. 645-664.

Amar M. (2000), Les fondements théoriques de l'indexation : une approche linguistique, Paris : Éditions de l'ADBS.

Amar M., David S. (2001), Évaluation de logiciels d'extraction dans les champs de l'indexation, la traduction et la terminologie. Corpus INRA. Rapport de recherche, Action de recherche concertée (n° X/7.10.04/llec.A3o), AUF & CNRS UMR 8529 (Cersates, Université Lille 3), 109 p.

Bar-Hillel Y. (1960), "The Present Status of Automatic Translation of Language", Advances in Computers, Vol. 1, New York: Academic Press, 91-141.

Chaudiron S. (1999), « Réflexions préalables à l'analyse qualité des logiciels d'ingénierie linguistique », Bulag, 24, 153-168.

Chaudiron S. (éd.) (2004), Évaluation des systèmes de traitement de l'information, Paris : Hermès.

Chaudiron S., Mustafa el Hadi W. (2007), « L'évaluation des outils d'acquisition de ressources terminologiques : problèmes et enjeux », in Actes de la première conférence TOTH, Annecy, 2007, 163-179, http://www.porphyre.org/toth/07/actes

Cori M., David S., Léon J. (2002), « Pourquoi un travail épistémologique sur le TAL », TAL : Problèmes épistémologiques, Cori M., David S., Léon J. (éds), 43 (3), 7-22.

Cori M., Léon J. (2002), « La constitution du TAL. Étude historique des dénominations et des concepts », TAL : Problèmes épistémologiques, 43 (3), 21-55.

David S., Panckhurst R. (2004), "Comments on the current EASA evaluation process", talk at the European workshop Evaluation in e-learning: review & future directions, Montpellier, France, November 2004, http://www.univ-montp3.fr/~rachel/spip/article.php3?id_article=3

David S., Panckhurst R., Whistlecroft L. (2005a), "Many Forms of the Future. A report on future options for the organisation of EASA", report submitted to EKMA, Oxford, 11 April 2005, 45 p.

David S., Panckhurst R., Whistlecroft L. (2005b), "Revising the Evaluation Procedure of the European Academic Software Award", Proceedings of the EUNIS 2005 Conference, European University Information Systems, 20-24 June 2005, The University of Manchester, http://www.mc.manchester.ac.uk/eunis2005/medialibrary/papers/paper_111.pdf

EASA: http://www.easa-award.net (this official weblink is no longer valid; the most recent EASA competition website can be viewed at http://www.bth.se/llab/easa.nsf).

Ellis D. (1992), "The Physical and Cognitive Paradigm in Information Retrieval Research", Journal of Documentation, 48 (1), 45-64.

Falkedal K. (1998), "Evaluation Problems from a Developer's Point of View", in Nübel R., Seewald-Heeg U. (eds), Evaluation of the Linguistic Performance of Machine Translation Systems, St-Augustin: Gardez! Verlag, 137-150.

Fondin H. (2006), « La science de l'information ou le poids de l'histoire », article inédit diffusé le 24 mars 2006, disponible en ligne : http://w3.u-grenoble3.fr/les_enjeux/2005/Fondin/index.php (consulté le 18 mars 2008).

Habermas J. (1973 [1968]), La technique et la science comme « idéologie », Paris : Gallimard.

King M., Falkedal K. (1990), "Using Test Suites in Evaluation of Machine Translation Systems", Coling, vol. 2, 211-216.

Language and Machines. Computers in translation and linguistics (1966), A report by the Automatic Language Processing Advisory Committee (ALPAC), National Academy of Sciences, National Research Council.

Lavenus K., Lapalme G. (2002), « Évaluation des systèmes de question réponse. Aspects méthodologiques », TAL : Problèmes épistémologiques, 43 (3), 181-208.

Le Marec J. (2001), « L'usage et ses modèles : quelques réflexions méthodologiques », Spirale, 28, 105-122.

Le Marec J. (2004), « Les études d'usage », in Chaudiron S. (éd.), Évaluation des systèmes de traitement de l'information, Paris : Hermès, 353-372.

Medida Prix: http://www.medidaprix.org

Milner J.-Cl. (1989), Introduction à une science du langage, Paris : Le Seuil.

Minel J.-L. (2004), « Évaluation des systèmes de résumé automatique », in Chaudiron S. (éd.), Évaluation des systèmes de traitement de l'information, Paris : Hermès, p. 171-184.

MUC: http://www-nlpir.nist.gov/related_projects/muc/

Nübel R., Seewald-Heeg U. (éds) (1998), Evaluation of the Linguistic Performance of Machine Translation Systems, St-Augustin : Gardez! Verlag.

Panckhurst R., David S., Whistlecroft L. (eds) (2004), Evaluation in e-learning: the European Academic Software Award, Montpellier : Publications de l'université Paul-Valéry, http://www.pulm.fr/evaluation-in-e-learning-easa

Pariente J.-Cl. (1973), Le langage et l'individuel, Paris : Armand Colin.

Sparck-Jones K., Gallier J. R. (1996), Evaluating Natural Language Processing Systems: an Analysis and Review, Berlin: Springer-Verlag.

TREC: http://trec.nist.gov/ (especially: http://trec.nist.gov/pubs/trec15/t15_proceedings.html: Voorhees E. M., Overview of TREC 2006). |
44,855,702 | An Innovative Distributed Speech Recognition Platform for Portable, Personalized and Humanized Wireless Devices | In recent years, the rapid growth of wireless communications has undoubtedly increased the need for speech recognition techniques. In wireless environments, the portability of a computationally powerful device can be realized by distributing data/information and computation resources over wireless networks. Portability can then evolve through personalization and humanization to meet people's needs. An innovative distributed speech recognition (DSR) [ETSI, 1998], [ETSI, 2000] platform, configurable DSR (C-DSR), is thus proposed here to enable various types of wireless devices to be remotely configured and to employ sophisticated recognizers on servers operated over wireless networks. For each recognition task, a configuration file, which contains information regarding types of services, types of mobile devices, speaker profiles and recognition environments, is sent from the client side with each speech utterance. Through configurability, the capabilities of configuration, personalization and humanization can be easily achieved by allowing users and advanced users to be involved in the design of unique speech interaction functions of wireless devices. | [] | An Innovative Distributed Speech Recognition Platform for Portable, Personalized and Humanized Wireless Devices
August 2004
Yin-Pin Yang
An Innovative Distributed Speech Recognition Platform for Portable, Personalized and Humanized Wireless Devices
Computational Linguistics and Chinese Language Processing
Vol. 9, No. 2, August 2004, p. 77. Keywords: distributed speech recognition, configurable, wireless, portable, personalized, humanized
In recent years, the rapid growth of wireless communications has undoubtedly increased the need for speech recognition techniques. In wireless environments, the portability of a computationally powerful device can be realized by distributing data/information and computation resources over wireless networks. Portability can then evolve through personalization and humanization to meet people's needs. An innovative distributed speech recognition (DSR) [ETSI, 1998], [ETSI, 2000] platform, configurable DSR (C-DSR), is thus proposed here to enable various types of wireless devices to be remotely configured and to employ sophisticated recognizers on servers operated over wireless networks. For each recognition task, a configuration file, which contains information regarding types of services, types of mobile devices, speaker profiles and recognition environments, is sent from the client side with each speech utterance. Through configurability, the capabilities of configuration, personalization and humanization can be easily achieved by allowing users and advanced users to be involved in the design of unique speech interaction functions of wireless devices.
Introduction
In the current wireless era, cellular phones have become daily-life necessities. People carry their own handsets and make phone calls anytime, everywhere, while public payphones have almost disappeared. Inspired by this vast number of mobile phone users, the wireless communication industry is developing wireless data services to create more profit. Wireless devices can be treated as terminals of an unbounded information/data network - the Internet.
However, the small screen sizes of mobile devices discourage users from surfing the Internet in mobile situations. Wireless data services are not as attractive as was expected, and this is one of the major reasons for the so-called "3G Bubble" [Baker, 2002] [Reinhardt et al, 2001].
On the other hand, the handset market is still blooming. Personal, stylish and fashionable features, such as ring tones, color screen displays, covers, and so on, are all very popular, especially among teenagers. Functionally speaking, portable devices, such as PDAs, pocket/palm PCs and digital cameras, are now integrated with handsets. Many interesting applications, such as portable electronic dictionaries, map navigators, and mobile learning, can be built into mobile devices. However, these functions or services still cannot create serious business opportunities for telecom companies.
What will the appealing services for cell phones be in the future? "Talking to a machine," or interacting with a machine, might be a candidate. That is, besides talking to human beings through voice channels, people may like to talk to machines and access the Internet through data channels. The possibilities are unlimited. Handsets may thus evolve into personal "intimate pets" that people will use from childhood to adulthood. In this scenario, speech interaction will play an important part in humanizing devices [Hiroshi et al. 2003]. However, due to the limitations of current state-of-the-art speech recognition techniques, the robustness issue [Wu et al. 2003] [Lee 1998] is always a bottleneck in commercializing speech recognition products. This imperfection reveals the importance of configurability. In the following paragraphs, the relationships among configurability, personalization, and wireless environments will be explored.
Speech Recognition and Wireless Environments
How does a speech recognition system fit into a wireless network? In this paper, we will primarily highlight two key terms: "distributed" and "configurable." The term "distributed"
can be interpreted as follows: computation distributed and data distributed. As for the former, speech recognition functions are normally needed in mobile situations, and devices are usually thin and lacking in computational power. It would be much easier to design speech recognition functions if the computation involved in recognition processes were distributed over wireless networks by means of a client-server architecture. As for the latter, speech recognition is by nature a pattern matching process, which needs to acquire utterances within a given application domain. For example, a speaker-independent (SI) continuous digit recognizer targeting the Taiwanese market needs to acquire a large number of sample continuous digit utterances covering all dialects in this market. The representativeness and quality of the sample utterances used for training or adaptation largely determine the performance of a speech recognizer. If a wireless network is used, speech data acquisition can be done in a much more efficient and systematic way. More importantly, the acquired speech data, labeled by means of a speaker profile, recognition environment, and device/microphone type, can be kept on the server. Speech data will thus not be abandoned when particular applications or services are discontinued.
From the above, we can conclude that we need a centralized speech recognition server embedded in the wireless infrastructure. When we say "talking to a machine", the "machine" is actually the entire wireless network. People talk to the same lifetime recognizer, and the recognizer evolves continuously. This speech recognition server can provide any type of speech recognition service (computation distributed) for all classes of wireless mobile devices.
These services continuously acquire speech data from all locations (data distributed), and adapt the engine performance all the time. For each recognition task, there is a "configuration file" (or, say, a tag) to record all of the information regarding types of services, speaker profiles, recognition environments, etc. We call this type of server a configurable distributed speech recognition (C-DSR) server.
In the following, the history of DSR developed by ETSI/Aurora will be briefly described.
Then, the innovative C-DSR platform will be introduced.
Distributed Speech Recognition (DSR) developed by ETSI/Aurora
Instead of squeezing the whole recognizer into a thin device, it seems more reasonable to host recognition tasks on a server and exchange information between the client and server.
However, due to the low bit-rates of speech coders (note that coders are designed for humans, not recognizers), the speech recognition performance can be significantly degraded. The DSR, proposed by ETSI Aurora, overcomes these problems by distributing the recognition process between the client and server, by using an error protected data channel to send parameterized speech features.
From DSR to Configurable DSR (C-DSR)
Aurora DSR can be seen as a speech "coder" [Digalakis et al. 1999] designed to enable handset users to talk to their recognizers. Besides handsets, there are many other mobile devices that need DSR services, and they all operate in different environments and in different recognition modes. Each combination or, say, configuration, needs its own "coder" to achieve better performance. Based on these needs, C-DSR was built as an integrated client-server platform which not only offers a convenient way to construct speech recognition functions on various client devices, but also provides powerful utilities/tools to assist each configuration to obtain its own coder in order to increase the overall recognition task completion rate. To achieve these goals, C-DSR maximizes the advantages of data channels and centralized servers by means of its "configurable" capability - configurability. The configurability can be considered from two points of view.
The C-DSR Client
From the client-side viewpoint, speech recognition processing is configurable to work with: (1) various kinds of thin to heavy mobile devices, ranked according to their computational power, noting that most devices do not have sufficient computational power to perform the feature extraction process proposed by ETSI Aurora; (2) various types of recognition environments, such as offices, homes, streets, cars, airports, etc.; this information about recognition environments can help recognition engines achieve greater accuracy; (3) various types of recognition services, such as command-based, grammar-based, speaker-independent/dependent mixed mode, and dialogue-style services; (4) various speaker profiles, since speaker information can help recognizers achieve higher recognition rates 1 and is required by recognition applications such as speaker adaptation [Lee et al. 1999] [Chen et al. 1990] and speaker verification/identification [Siohan et al. 1998]. The C-DSR platform provides a faster and more flexible way to construct various speech recognition functions for various mobile devices used in various recognition environments. One of the major missions of C-DSR is to increase the frequency of speech recognition use in daily life.
The C-DSR Server
From the viewpoint of the centralized server, the C-DSR server collects from all of the registrant clients speech utterances or formatted speech feature arrays along with their configuration tags. The basic idea is to formalize the life cycle of a speech recognition product/task from the deployment phase, to the diagnostic phase, tuning phase, and upgrading phase. Also, similar tasks can share corresponding information and adaptation data located on the server. The C-DSR server offers the following mechanisms to fully take advantage of these categorized speech and configuration data: (1) the C-DSR server can decide which recognition engine or which acoustic HMM model to employ according to the history log;
(2) the C-DSR server can balance the trade-offs among communication bandwidth, system load and recognition accuracy; (3) the categorized and organized speech database can be utilized to create diagnostic tools that can be used to tune-up recognition engines and to perform all kinds
of adaptation, such as speaker adaptation, channel adaptation [Siohan et al.1995] and background noise adaptation [Kristjansson et al.2001].
In summary, C-DSR is a generic speech recognition engine in which all of the information, or parameters, concerning application-dependent user profiles and device profiles are kept in a configuration file which is initiated at the client side. In technical terms, C-DSR is a platform, while from the customer's point of view, C-DSR is a personally owned, lifetime speech recognition engine. The C-DSR platform is embedded in the wireless network, in contrast to conventional speech recognizers that are treated as input interfaces for portable devices (see Figure 1).
Overview
In the following, the architecture of C-DSR is described in section II. Then in section III, a demo system is presented to demonstrate the unique capability, that is, configurability, of C-DSR. Some conclusions are drawn in the final section.
Figure 1(A). Conventionally, speech recognition is simply one of the available input interfaces for portable devices.
Figure 1(B). Illustration of the innovative C-DSR platform: (1) the user always talks to the same recognizer through different client devices; (2) the centralized server makes maintenance, tuning and upgrading easier; (3) the platform provides all kinds of services, and users may even design their own interaction scenarios.
The C-DSR Architecture
The function blocks of the C-DSR development platform are shown in Figure 2. A wireless device equipped with the C-DSR Client connects to a remote C-DSR Server using the C-DSR Protocol through a wireless network. The C-DSR Protocol sends speech data and parameters.
The speech data can be in the form of PCM raw speech, ADPCM or pre-defined compressed speech features, depending on the computation power and bit-rate (communication bandwidth).
The configuration file with speech data prepared by the client is thus transmitted by the C-DSR Protocol through wireless networks. For now, the C-DSR Protocol is implemented on top of TCP/IP or RTP (Real-time Transport Protocol). After parsing the received protocol, the Configuration Controller (CC) decides how to configure the recognition engine (C-DSR Engine) and Dialogue System (DS) to accomplish the recognition task. The C-DSR engine and DS engine are composed of modularized components such that switches inside the engines can be shifted to corresponding components to perform the functionalities requested by the configuration. The recognition results are then logged and organized in the History Log Center
(HLC), resulting in a formatted package, or a database. The package is then passed to the Diagnostic Center (DC), where diagnostic tools are used to tune up the engines and provide adaptation data for various kinds of adaptation schemes, such as speaker/channel adaptation.
Figure 2. The function blocks of the C-DSR Platform.
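The paper does not specify a wire format for the C-DSR Protocol beyond the fact that it runs on top of TCP/IP or RTP and carries speech data together with the configuration file. As a purely illustrative sketch (the framing and field names below are assumptions, not part of the original design), a client request could be packaged in Python as follows:

import json
import socket
import struct

def _recv_exact(sock, n):
    # Read exactly n bytes from the socket.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def send_cdsr_request(host, port, speech_bytes, config):
    """Send one utterance plus its configuration file (CF) to a C-DSR server.

    The framing used here (length-prefixed JSON header followed by the raw
    speech payload) is a hypothetical choice; only the content - speech data
    plus configuration - follows the description in the text.
    """
    header = json.dumps(config).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(header)) + header)
        sock.sendall(struct.pack("!I", len(speech_bytes)) + speech_bytes)
        result_len = struct.unpack("!I", _recv_exact(sock, 4))[0]
        return json.loads(_recv_exact(sock, result_len).decode("utf-8"))

# Example call (host, port and the PCM buffer are placeholders):
# result = send_cdsr_request("cdsr.example.org", 9000, pcm_bytes,
#                            {"deviceType": "thin", "NoiseType": "in-vehicle",
#                             "RecognitionStyle": "Command-based"})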
A. Configuration File and Configuration Controller, CC
The configuration of the C-DSR platform is stored in a Configuration File (CF). Each field in the CF can be categorized into three attributes, rSR, rDSR, and Interactive Design-It-Yourself (I-DIY), which are explained below.
i. rSR is the difference between the perfect and a current practical state-of-art speech recognition engine. The SR engine is never perfect. However, if we can restrict the speaking styles of the users, the recognition accuracy will be higher.
ii. rDSR refers to those configurable parameters which can minimize the degradation due to wireless transmission loss.
iii. IDIY means Interactive Design-It-Yourself. We can never expect a machine to act
exactly like a human being. The philosophy behind C-DSR is to make human-machine interaction simple and easy. The best way to achieve this goal is to involve users in design. Thus, we provide DIY tools to enable users to design their own ways of talking to machines.
The original design principle behind the C-DSR platform is to remotely configure the speech recognition engine on the server from the client side. It is the client device that initially prepares all of the configuration files. However, some of the configuration parameters may not be fully determined by the client device or may even be totally empty. In this case, the CC of the server should be able to append or modify these parameters by utilizing all available resources, including the historical configurations or statistics located in the server.
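A minimal sketch of this completion step, assuming the server keeps a per-user history log of past configurations (the field names follow Table 1; the priority rule below is only one plausible choice, not the paper's algorithm):

SERVER_DEFAULTS = {"NoiseType": "office", "speakingSpeed": "medium", "speakingAccent": "Taiwan"}

def complete_configuration(client_cf, user_history):
    """Fill empty or missing CF fields on the server side (illustrative).

    Priority: value sent by the client > most frequent value in the user's
    history log > a server-side default.
    """
    cf = dict(client_cf)
    for field, default in SERVER_DEFAULTS.items():
        if cf.get(field):                       # the client already decided
            continue
        past = [entry[field] for entry in user_history if entry.get(field)]
        cf[field] = max(set(past), key=past.count) if past else default
    return cf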
B. Configurable Engine
The configurable engine is the heart of the C-DSR server. As the name indicates, a configurable engine is an SR engine which is modularized and can be configured according to the different requests from the various types of clients; Figure 3 shows the modules of
the engine, which is a generalized SR engine on which state-of-the-art SR techniques can be employed. The typical modules of a traditional/generalized SR engine are listed in Table 2. Each module has a well-defined interface, and for a particular module several implementations are available. To each implementation, one CF name is attached, and it can be switched or configured. For instance, in the End Point Detection (EPD) module, there are three options, EPD_NONE, EPD_VFR and EPD_ENG, each representing a different algorithm used to implement the EPD function. The C-DSR platform also allows the system maintainer to adopt a new method for each module.
Intermediate data between modules are also generated to provide "symptoms" useful for diagnostic purposes. These symptoms include:

- speech segmentation boundaries,
- the Viterbi resulting path,
- likelihood trajectories along the time axis on the resulting path,
- a histogram of observations (feature vectors) for a particular Gaussian mixture,
- a histogram of the observing likelihood of a particular HMM state.
C. The Dialogue System, DS
A generic DS is, firstly, responsible for preparing a grammar, including vocabulary needed for the next speech recognition process. Then, the grammar with the incoming utterance is fed to the recognizer. The recognition result, a recognized key word, is then sent back to the DS. The DS then updates the "dialog status" records and determines the grammar for the next recognition.
Currently, we support AIML (Artificial Intelligence Markup Language, www.alicebot.org) and the simplified VoiceXML format used to describe dialogue scripts (see Figure 4).
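The following sketch only illustrates the control flow just described (choose a grammar, recognize, update the dialogue status, pick the next grammar); the script contents and the recognizer call are placeholders, not part of the paper:

class DialogueSystem:
    """Minimal dialogue loop in the style described above (illustrative)."""

    def __init__(self, script, start_state="start"):
        # script maps a dialogue state to (grammar file, {keyword: next state}).
        self.script = script
        self.state = start_state

    def turn(self, utterance, recognize):
        grammar, transitions = self.script[self.state]
        keyword = recognize(utterance, grammar)      # handled by the C-DSR engine
        self.state = transitions.get(keyword, self.state)
        return keyword, self.state

# Hypothetical two-state script in the spirit of the tourist-guide example below.
script = {
    "start": ("howRU.gram", {"fine": "ask_topic"}),
    "ask_topic": ("tourinfo.gram", {"resource": "start", "facilities": "start", "spot": "start"}),
}
ds = DialogueSystem(script)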
D. The Diagnostic Center, DC
As described earlier, the so-called symptoms are intermediate data obtained while the engine is running and sent to the DC. The main purpose of the DC is to analyze and diagnose these symptoms in order to make suggestions. The C-DSR server is faced with various types of services, various environment configurations, and various types of client devices and speakers, so we want to make the engine core as generalized as possible. Currently, all of the diagnostics are done manually, which means that the DC only displays the symptoms to users or C-DSR server maintainers. We plan to make the DC automatic in the next generation of C-DSR.
E. The History Log Center (HLC)
The HLC is responsible for collecting and logging all of the corresponding information for each recognition service. The information collected includes speech utterances, formatted feature arrays, configuration files and the intermediate data, that is, symptoms and recognition results, and is saved to a corresponding user directory according to the user registration ID.
The HLC serves as a database manager, whose functions include: (i) maintaining the database and, if necessary, creating a mechanism to eliminate garbage data; (ii) building data links to prepare adaptation data for various types of adaptation algorithms, such as speaker or channel adaptation; (iii) preparing intermediate data for the DC to diagnose, so that the DC can provide data for the C-DSR engine to tune its algorithms and improve recognition accuracy.
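A sketch of the kind of record the HLC might keep for each recognition task (the on-disk layout is an assumption; only the logged content - utterance, configuration, symptoms and result, grouped by registration ID - follows the description above):

import json
import time
from pathlib import Path

def log_recognition(root, registration_id, utterance_bytes, config, symptoms, result):
    """Append one recognition task to the user's directory (illustrative)."""
    user_dir = Path(root) / registration_id
    user_dir.mkdir(parents=True, exist_ok=True)
    stamp = str(int(time.time() * 1000))
    (user_dir / (stamp + ".pcm")).write_bytes(utterance_bytes)
    record = {"config": config, "symptoms": symptoms, "result": result}
    (user_dir / (stamp + ".json")).write_text(json.dumps(record, ensure_ascii=False))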
Figure 4. Diagram of the Dialogue System, DS.
C-DSR Demo System
In a laboratory, several speech recognition applications may be emulated by a PDA serving as a client on the C-DSR platform. These recognition applications are usually realized on stand-alone portable devices or server-based dialogue systems. Now, using the proposed C-DSR solutions, thin client devices can take advantage of powerful, wireless servers to perform sophisticated speech recognition functions (see Figure 5), including the following:

- car agent: retrieving map/travel/hotel information through a GPRS network in a car;
- personal inquiry system: a portable device which can retrieve stock/weather information anywhere through a GPRS network;
- general-purpose remote control: in a WLAN 802.11b environment, a remote control which can be used to control a TV, stereo, air-conditioner, etc., through infrared rays by using natural language commands;
- Sim-Librarian: a portable device which, when a person walks into a library, can be used to ask for directions or for the location of a book the person is searching for.

Each application uses its own configuration, specified according to (1) the device type: from thin to heavy, 8-bit 8051 class, DSP, PDA class; (2) the recognition style: command-based, natural, or dialogue; (3) the recognition environment: in a car, home, or open space. Two configuration files are presented below to illustrate how configuration files are used to realize a speech recognition application. Note that, normally, there are two types of speech recognition applications: voice command-based and dialogue style.

[Example 1] Voice-controlled home appliances

[Scenario] The user may use his wireless portable device with the installed C-DSR client to control a TV, lamp, or other home appliances within a WLAN environment.

[Configuration Settings]

1. Speech Feature Compression Format: this may be PCM, 8051-class, LPC-based Cepstrum, or MFCC (Mel-Frequency Cepstral Coefficients), depending on the computational cost and communication bandwidth (bit-rates) of the client device.

2. Environmental noise: this can be Quiet or Noisy. If this is skipped, the C-DSR Server will make a decision according to the history log.

3. Speaking speed: the speaking speed of a normal person is around five words per second. The user can determine his range of speaking speed, for instance, from three words per second to six words per second. If this is skipped, the C-DSR server will use default values.
4. Gender/Age/Accent: gender, age and accent information are very helpful for improving recognition performance. The C-DSR Client will retrieve and pass these pieces of information from the user/speaker profile to C-DSR Server for reference purposes. If this is skipped, the C-DSR Server will employ default models.
5. The Number of Results: the user may configure the number of recognition results, say N. The C-DSR Client will then display the first N most likely candidates to the user.
6. Recognition Style: this can be Command-based or Dialogue. The grammar format for the Command-based style uses the ABNF format shown in the following:

#ABNF 1.0
$prefiller = 請 | 麻煩 | 你 | 我要
$action1 = 開{open} | 關{close}
$keyword1 = ( 燈{light} | 電燈{light} | 風扇{fan} | 電風扇{fan} | 電視{tv} | 收音機{radio} )
[The Setup at the C-DSR Server]
When the configuration file and speech data are received from the client, the C-DSR Server performs recognition tasks according to the configuration. In the grammar example shown above, exactly one keyword from the $action1 group (open/close) and $keyword1 (light/fan/tv/radio) group will be generated. The Action Center on the C-DSR Server will perform a corresponding action, such as "turn on light."
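A sketch of such an action dispatch (the device-control call is a placeholder; only the open/close and light/fan/tv/radio tags come from the grammar above):

def send_to_appliance_gateway(command):
    # Placeholder for the WLAN/infrared gateway used by the home-appliance demo.
    print("sending command:", command)

def action_center(action_tag, keyword_tag):
    """Map the recognized {action, keyword} tags to a device command (illustrative)."""
    verb = {"open": "turn on", "close": "turn off"}.get(action_tag, action_tag)
    command = verb + " " + keyword_tag          # e.g. "turn on light"
    send_to_appliance_gateway(command)
    return command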
[Example No.2] Tourist Information Guide of Yu-Shan National Park
[Scenario] Users may use their own PDAs or smart phones to access this service when entering the area covered by the WLAN.
[Configuration Settings]
As in the previous case, we only need to change the field RecognitionStyle from
Command-based to Dialogue-ProvidedByServer.
[The Setup at the C-DSR Server]
This example shows a dialogue system for a tourist guide. The content was provided by
Yu-Shan National Park. The C-DSR Platform provides several Dialogue Scripts. Here, we use VoiceXML as an example.
[CDSR_VXML]
<form id="tourinfo_agent">
  <field name="hello">
    <prompt>您好,旅遊導覽精靈在此為您服務</prompt>
    <grammar src="howRU.gram" type="application/grammar+xml"/>
  </field>
  <field name="caragent">
    <prompt>很好,謝謝</prompt>
    <prompt>馬馬呼呼啦,謝謝</prompt>
    <prompt>可以啦,謝謝</prompt>
    <grammar src="tourinfo.gram" type="application"/>
    <filled>
      <if cond="caragent == 'resource'">
        <prompt>森林型態以柳杉,天然闊葉林樟樹,台灣杜鵑為主</prompt>
        <prompt>動物型態有松鼠,穿山甲,台灣獼猴,台灣野兔等</prompt>
        <prompt>鳥類型態有綠繡眼,小鶯,巨嘴鴉,五色鳥等</prompt>
        <clear namelist="tourinfo.gram"/>
      </if>
      <if cond="caragent == 'facilities'">
        <prompt>遊客中心:內設餐飲部,會議室,多媒體簡報室及生態教育展示館</prompt>
        <prompt>餐廳:除可同時供一百人用餐外,並可作為大型會議室,教室使用</prompt>
        <prompt>行政管理中心:為本區工作人員處理行政事務的辦公地點</prompt>
        <clear namelist="tourinfo.gram"/>
      </if>
      <if cond="caragent == 'spot'">
        <prompt>化石區:此生痕化石是大約三萬年前蝦,蟹類進行築穴工事時所遺留而成</prompt>
        <prompt>造林紀念石:為前新竹山林管理所大溪分所,在民國四十四年為紀念東眼山造林工作實績而建</prompt>
        <prompt>親子峰:在林道終點上方,有大小雙峰,猶如慈母帶著小孩,故名親子峰</prompt>
        <clear namelist="tourinfo.gram"/>
      </if>
    </filled>
  </field>
  <block>
    <submit next="theTop.vxml" namelist="city state"/>
    <prompt>好吧,掰</prompt>
    <prompt>好吧,下次再聊囉,掰</prompt>
  </block>
</form>
Conclusions
Speaking of wireless mobile devices, conventionally, speech recognition is considered to be one of the available input methods for these devices. In this paper, we have presented the client-server C-DSR platform which is a centralized speech recognition server embedded in the wireless infrastructure. By using C-DSR, people talk to the same lifetime speech recognition system. Speech data and the corresponding configuration, which keeps all the records about recognition environments, device information, dialogue scripts, recognition results, and so on, will not be abandoned when particular applications or services are discontinued. The speech recognition server provides many types of services for all classes of wireless mobile devices, and these services continuously acquire speech data from all locations, and adapt the engine performance all the time.
Personalization and humanization are essential. We have seen many successful products come on the market. A humanized device does not have to be "intelligent." As long as it "looks" intelligent and people find it interesting, we do not really need to make such a machine act exactly like a human being. People like to have their own ways to interact with their own personal devices/pets. Perhaps the Design-It-Yourself approach, getting people involved in the design process, is one good solution, and the "configurability" of C-DSR can surely provide such a platform to meet these needs.
Figure 3. Modules of the C-DSR engine.
Figure 5. Illustration of the C-DSR implementation.
Table 1. Configuration file

| Attribute | Configurable parameter | Values |
|---|---|---|
| rDSR | SrCoder | Bit-rate: 0.1/1/4/16/64/128 Kbps |
| rDSR | computationPower (deviceType) | very-thin/thin/medium/heavy |
| rDSR | Expandable | … |
| rSR | NoiseType | home/office/street/in-vehicle |
| rSR | microphoneType | US5/US20/US100/US200 |
| rSR | searchEngine | Full/Fast-mode |
| rSR | voiceActivated | Yes/No |
| rSR | endPointDetect | Yes/No |
| rSR | speakingSpeed | Fast/medium/slow |
| rSR | speakingAccent | Taiwan/china/foreigner |
| rSR | Expandable | … |
| I-DIY | vocabularySetUp | Vocabulary |
| I-DIY | grammarSetUp | Grammar (ABNF/cdsr_format) |
| I-DIY | dialogueSetUp | Dialogue Script (VXML/AIML/cdsr_format) |
| I-DIY | PersonalitySettings | Talkative/Quiet/Shy/… |
| I-DIY | Expandable | … |
Table 2. Configurable modules in the C-DSR Engine with their parameters (Configurable Module / Parameter)
  Energy Normalization: None / FRAME_ENG_NORM
  Front-end filter: FF_NONE / FF_LOW_PASS / FF_1POLE_1ZERO
  Feature Extraction (if needed): FE_NONE / FE_MFCC / FE_LPC_CEP / FE_8051_CLASS
  End Point Detection: EP_NONE / EP_VFR / EP_ENG
  Engine Type: EG_DIGITSTRING / EG_COMMAND / EG_KEYWORDSPOT / G_LVCSR
  Mean Subtraction Computing: MS_NONE / MS_STD
  HMM Adjustments: HJ_NONE / HJ_PMC
  HMM Adaptations: HP_NONE / HP_ADAPT_DEV / HP_ADAPT_SPKR
  Viterbi Searching: VS_FULL_PATH / VS_NBEST / VS_BEAM
  Post Operations: PO_NONE / PO_STD
As shown in Table 2 above, each module has a well-defined interface and, for a particular module, several implementations are available. To each implementation, one CF name is attached, and it can be switched or configured. For instance, in the End Point Detection (EPD) module, there are three options, EPD_NONE, EPD_VFR and EPD_ENG.
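A minimal sketch of this CF-name switching idea is given below; only the CF names come from Table 2, while the toy detector functions are illustrative stand-ins rather than the real modules.

def ep_none(frames):
    # No end-point detection: keep every frame.
    return frames

def ep_eng(frames, threshold=0.5):
    # Toy energy-based end-point detection: keep frames above an energy threshold.
    return [f for f in frames if f >= threshold]

END_POINT_DETECTORS = {"EP_NONE": ep_none, "EP_ENG": ep_eng}

def run_front_end(frames, config):
    # Pick the implementation named by the configuration's CF value.
    detector = END_POINT_DETECTORS[config.get("endPointDetect", "EP_NONE")]
    return detector(frames)

if __name__ == "__main__":
    print(run_front_end([0.1, 0.7, 0.9, 0.2], {"endPointDetect": "EP_ENG"}))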
For example, if gender information from the speaker profile is provided to the speech recognizer, even a first-time speaker can obtain higher recognition accuracy.
231,643,687 | [] | Where Bears Have the Eyes of Currant: Towards a Mansi WordNet
Csilla Horváth horvathcs@ieas-szeged.hu
University of Szeged
Institute of English-American Studies, Egyetem u. 2, 6720 Szeged, Hungary
Ágoston Nagy nagyagoston@lit.u-szeged.hu
University of Szeged
Institute of English-American Studies, Egyetem u. 2, 6720 Szeged, Hungary
Norbert Szilágyi norbertszilagyi91@gmail.com
Department of Finno-Ugrian Studies
University of Szeged
Egyetem u. 2, 6720 Szeged, Hungary
Veronika Vincze vinczev@inf.u-szeged.hu
Research Group on Artificial Intelligence Tisza Lajos krt. 103
Hungarian Academy of Sciences
6720 Szeged, Hungary
Where Bears Have the Eyes of Currant: Towards a Mansi WordNet
Here we report the construction of a wordnet for Mansi, an endangered minority language spoken in Russia. We will pay special attention to challenges that we encountered during the building process, among which the most important ones are the low number of native speakers, the lack of thesauri and the bear language. We will discuss our solutions to these issues, which might have some theoretical implications for the methodology of wordnet building in general.
Introduction
Wordnets are lexical databases that are rendered according to semantic and lexical relations between groups of words. They are supposed to reflect the internal organization of the human mind (Miller et al., 1990). The first wordnet was constructed for English (Miller et al., 1990) and since that time, wordnets have been built for several languages including several European languages, mostly in the framework of EuroWordNet and BalkaNet (Alonge et al., 1998;Tufiş et al., 2004) and other languages such as Arabic, Chinese, Persian, Hindi, Tulu, Dravidian, Tamil, Telugu, Sanskrit, Assamese, Filipino, Gujarati, Nepali, Kurdish, Sinhala (Tanács et al., 2008;Bhattacharyya et al., 2010;Fellbaum and Vossen, 2012;Orav et al., 2014). Synsets within wordnets for different languages are usually linked to each other, so concepts from one language can be easily mapped to those in another language. Wordnets can be beneficial for several natural language processing tasks, be it mono-or multilingual: for instance, in machine translation, information retrieval and so on.
In this paper, we aim at constructing a wordnet for Mansi, an indigenous language spoken in Russia. Mansi is an endangered minority language, with less than 1000 native speakers. Most often, minority languages are not recognized as official languages in their respective countries, where there is an official language (in this case, Russian) and there is one or there are several minority languages (e.g. Mansi, Nenets, Saami etc.). Hence, the speakers of minority languages are bilingual, and usually use the official or majority language in their studies and work, and the language of administration is the majority language as well. However, the minority language is typically restricted to the private sphere, i.e. among family members and friends, and thus it is mostly used in oral communication, with only sporadic examples of writing in the minority language (Vincze et al., 2015). Also, the cultural and ethnographic background of Mansi people may affect language use: certain artifacts used by Mansi people that are unknown to Western cultures have their own vocabulary items in Mansi and vice versa, certain concepts used by Western people are unknown to Mansi people, therefore there are no lexicalized terms for them.
The construction of a Mansi wordnet helps us explore how a wordnet can be built for a minority language that is, at the same time, an endangered language. Thus, we will investigate the following issues in this paper:
• What are the specialties of constructing a wordnet for a minority language?
• What are the specialties of constructing a wordnet for an endangered language?
• What are the specialties of constructing a wordnet for Mansi?
The paper has the following structure. First, the Mansi language will be shortly presented from linguistic, sociolinguistic and language policy perspectives. Then our methods to build the Mansi wordnet will be discussed, with special emphasis on specific challenges as regards endangered and minority languages in general and Mansi in particular. Later, statistical data will be analysed and our results will be discussed in detail. Finally, a summary will conclude the paper.
The Mansi Language
Mansi (former term: Vogul) is an extremely endangered indigenous Uralic (more precisely Finno-Ugric, Ugric, Ob-Ugric) language, spoken in Western Siberia, especially on the territory of the Khanty-Mansi Autonomous Okrug. Among the approximately 13,000 people who declared themselves ethnic Mansi in the latest Russian federal census in 2010, only 938 stated that they could speak the Mansi language.
The Mansi have traditionally lived from hunting and fishing and, to a lesser extent, from reindeer breeding; they became acquainted with agriculture and the urban lifestyle essentially during the Soviet period. The principles of Soviet linguistic policy according to which the Mansi literary language has been designed kept changing from time to time. After using a Latin transcription for a short period, Mansi language planners had to switch to the Cyrillic transcription in 1937. While until the 1950s the more general tendency was to create new Mansi words to describe formerly unknown phenomena, later on the usage of Russian loanwords became more dominant. As a result of these tendencies, some of the terms describing the contemporary environment, urban lifestyle and the Russian-dominated culture are Russian loanwords, while others are Mansi neologisms created by Mansi linguists and journalists. It is not uncommon to find two or even three different synonyms describing the same phenomenon (for example, hospital): borrowing the word from Russian (больница), using the Russian loanword in a form adapted to Mansi phonology (пӯльница), or using a Mansi neologism to describe it (ма̄хум пусмалтан кол, 'a house for healing people, hospital', as opposed to ня̄врам пусмалтан кол 'children's hospital, children's clinic' or ӯйхул пусмалтан кол 'veterinary clinic').
Semi-automatic construction of the Mansi WordNet
In this section, we will present our methods to construct the Mansi WordNet. We will also pay special attention to the most challenging issues concerning wordnet building.
Low number of native speakers
The first and greatest problem we met while creating the Mansi wordnet was that only a handful of native speakers have been trained in linguistics. Thus, we worked with specialists of the Mansi language who have been trained in linguistics and technology, but do not have native competence in Mansi.
As it is not cost-effective to build a WordNet from scratch and as our annotators are native speakers of Hungarian, we used the Hungarian WordNet (Miháltz et al., 2008) as a starting point. First, we decided to include basic synsets, and the number of synsets is planned to be expanded continuously later on. We used Basic Concepts -already introduced in EuroWordNet -as a starting point: this set of synsets contains the synsets that are considered the most basic conceptual units universally.
Already existing resources
In order to accelerate the whole task and to ease the work of Mansi language experts, the WordNet creating process was carried out semi-automatically. Since there is no native speaker available who could solve the problems requiring native competence, we were forced to utilize the available sources as creatively as possible.
First, the basic concept sets of the Hungarian WordNet XML file were extracted and at the same time, the non-lexicalized elements were filtered as in this phase, we intend to focus only on lexicalized elements.
Second, we used a Hungarian-Mansi dictionary to create possible translations for the members of the synsets. The dictionary we use in the process is based on different Mansi-Russian dictionaries (e.g. Rombandeeva (2005), Balandin and Vahruševa (1958), Rombandeeva and Kuzakova (1982)). The translation of all Mansi entries to Hungarian and to English in the new dictionary is being done independently of WordNet developing (Vincze et al., 2015).
In order not to get all Hungarian entries of the WordNet translated to Mansi again, a program code was developed to replace the Hungarian terms with the already existing translations from the dictionary. Only literals are replaced, definitions and examples are left untouched, so that the linguists can check the actual meaning and can replace them with their Mansi equivalents. The Mansi specialists' role is to check the automatic replacement and to give new term candidates if there is no proper automatic translation.
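A simplified sketch of this replacement step might look as follows; the data structures and the sample dictionary entry are assumptions for illustration, not the project's actual code.

# Hypothetical Hungarian->Mansi dictionary extracted from the bilingual resources.
hu_mansi_dictionary = {
    "kutya": ["а̄мп"],   # illustrative entry only
}

def translate_synset(hu_literals):
    # Replace Hungarian literals with already available Mansi translations;
    # literals without a translation are left for the Mansi experts to fill in.
    translated, pending = [], []
    for literal in hu_literals:
        if literal in hu_mansi_dictionary:
            translated.extend(hu_mansi_dictionary[literal])
        else:
            pending.append(literal)
    return translated, pending

if __name__ == "__main__":
    print(translate_synset(["kutya", "eb"]))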
In this work phase, as there are no synonym dictionaries or thesauri available for the Mansi language, the above-mentioned bilingual student dictionaries are used as primary resources. These dictionaries were designed to be used during school classes; they rarely contain any synonyms, antonyms or hypernyms, and hardly any phrases or standing locutions. (Most of these dictionaries were written by the same authors, thus -besides the inconsistent marking of vowel length -fortunately we do not have to pay special attention to possible contradictions or incoherence.) Hence the unbalanced situation in which we are missing either the Mansi translation or the Mansi definition belonging to the same code, and we are able to present the translation, the definition and the examples of usage only in a few exceptional instances. The sentences illustrating usage in the synsets come from our Mansi corpus, built from articles from the Mansi newspaper called Luima Seripos, published online semimonthly at http://www.khanty-yasang.ru/luima-seripos.
In its final version, our corpus will contain above 1,000,000 tokens, roughly 400,000 coming from the online publications and the rest from the archived PDF files.
Even if based on the Hungarian WordNet, the elements of the Mansi WordNet can be matched to the English ones and those of other wordnets since the Hungarian WN itself is paired with the Princeton WordNet (Miller et al., 1990).
Bear language
Another very special problem occurred during wordnet building for Mansi, namely the question of the so-called "bear language". The bear is a prominently sacred animal venerated by the Mansi, bearing great mythical and ritual significance, and it is also surrounded by a detailed taboo language. Since the bear is believed to understand human speech (and also to have sharp ears), it is respectful and cautious to use taboo words while speaking about the bear, the parts of its body, or any activity connected with the bear (especially bear hunting), so that the bear would not understand it. The taboo words of this "bear language" may be divided into two major subgroups: Mansi words which have a different, special meaning when used in connection with the bear (e.g. сосыг 'currant', but also meaning 'eye' when speaking of the bear's eyes), and those which may be used solely in connection with the bear (e.g. хащлы 'to be angry', as opposed to кантлы 'to be angry' speaking of a human). Even the word for bear belongs to the taboo words and has only periphrastic synonyms like f© oðò© oëí© oèêà 'an old man from the forest' etc.
As a first approach, taboo words were included as literals in the synsets because their usage is restricted in the sense that they can solely be used in connection with bears. Hence, we first marked the special status of these literals, for which purpose we applied the note "bear". However, it would have also been practical to clearly differentiate the synsets that are connected to "bears". This can be realized in many ways: for example, the "bear" variants of the notions could be made hyponyms of their respective notions, like хащлы 'to be angry', which can be considered a hyponym of кантлы 'to be angry' speaking of a human. However, this solution is not a perfect one since (i) this is not a widespread method in WordNets of other languages and therefore it would not facilitate WordNet-based dictionaries, and (ii) it is not a true hyponym, that is, a real subtype of its respective notion connected to humans. Finally, we decided to put these notions in separate synsets, which has the advantage that these notions are grouped together and it is easier to do a targeted search on these expressions.
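The sketch below illustrates this final solution: bear-specific notions live in their own synsets, linked to their "human" counterparts through a dedicated "bear" relation, which also makes a targeted search over the bear vocabulary easy. The synset representation is a simplification for illustration.

# Two toy synsets: the general notion and its bear-language counterpart.
synsets = [
    {"id": "mns-1", "literals": ["кантлы"], "definition": "to be angry (of a human)", "relations": {}},
    {"id": "mns-2", "literals": ["хащлы"], "definition": "to be angry (of the bear)",
     "relations": {"bear": "mns-1"}},
]

def bear_synsets(all_synsets):
    # Return every synset linked to another one via the "bear" relation.
    return [s for s in all_synsets if "bear" in s["relations"]]

if __name__ == "__main__":
    for s in bear_synsets(synsets):
        print(s["literals"], "->", s["relations"]["bear"])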
The manual correction of the automatically translated Basic Concept Set 1 is in progress. Currently, the online XML file contains 300 synsets. These synsets have altogether 410 literals, thus a synset has 1.37 literals on average; this proportion was 1.88 in the original Hungarian WordNet XML file. Concerning the proportion of the two part-of-speech categories, nouns prevail over verbs, with 210 nouns (70%) and 90 verbs (30%).
Presumably 40% of all lexicon entries are multiword expressions, regardless of word class or derivational processes. In many cases, when the Russian word refers to a special post or profession, the proper Mansi word is a roundabout phrase. For example, учитель 'schoolteacher (masc.)' can be translated as ня̄врамыт ханисьтан хум, built up of the elements 'children-teaching man', and the feminine counterpart учительница 'schoolteacher (fem.)' as ня̄врамыт ханисьтан нэ̄, 'children-teaching woman'. The multi-word expressions are highly variable in their elements, replacing the dedicated parts with synonyms or adding new ones to enrich the layers of senses. The number of multi-word expressions in this version of the Mansi WordNet is 74, that is, 18% of all literals.
Section 3.2 enumerated some challenges about transforming an already existing WordNet to Mansi. Some synsets in the Basic Concept Set also have proved to be difficult to handle. For example, the Mansi language is only occasionally (if ever) used in scientific discourse. Therefore, the terms 'unconscious process', 'physiology' or 'geographical creature' cannot have any Mansi equivalents and therefore can be included in the Mansi WordNet only as non-lexicalized items. The number of such literals is 34, that is 16% of all literals.
Discussion
Building a wordnet for a minority or endangered language can have several challenges. Some of these are also relevant for dead languages, however, wordnets for e.g. Latin (Minozzi, 2009), Ancient Greek (Bizzoni et al., 2014) and Sanskrit (Kulkarni et al., 2010) prove that these facts do not necessarily mean an obstacle for wordnet construction. Here we summarize the most important challenges and how we solved them while constructing the Mansi wordnet.
Wordnet construction for minority and endangered languages
First, linguistic resources, e.g. mono-and multilingual dictionaries may be at our disposal only to a limited extent and second, there might be some areas of daily life where only the majority language is used, hence the minority language has only a limited vocabulary in that respect. As for the first challenge, we could rely on the Mansi-Russian-English-Hungarian dictionary under construction, which is itself based on Mansi-Russian dictionaries (see above) and we made use of its entries in the semi-automatic building process. However, if there are no such resources available, wordnets for minority languages should be constructed fully manually. For dead languages which are well-documented and have a lot of linguistic descriptions and dictionaries (like Latin and Ancient Greek), this is a less serious problem.
As for the second challenge, we applied two strategies: we introduced non-lexicalized synsets for those concepts that do not exist in the Mansi language or we included an appropriate loanword from Russian.
Besides being a minority language, Mansi is also an endangered language. Almost none of its native speakers have been trained in linguistics, which fact rules out the possibility of having native speakers as annotators. Thus, linguist experts specialized in the Mansi language have been employed as wordnet builders and in case of need, they can contact native speakers for further assistance. This problem is also relevant for dead languages, where there are no native speakers at all, however, we believe that linguists with advanced knowledge of the given language can also fully contribute to wordnet building.
Specialties of wordnet construction for Mansi
Wordnet building for Mansi also led to some theoretical innovations. As there is a subvocabulary of the Mansi language related to bears (see above), we intended to reflect this distinction in the wordnet too. For that reason, we introduced the novel relation "bear", which connects synsets that are only used in connection with bears and synsets that include their "normal" equivalents. All this means that adding new languages to the spectrum may also have theoretical implications which contribute to the linguistic richness of wordnets.
Conclusions
In this paper, we reported the construction of a wordnet for Mansi, an endangered minority language spoken in Russia. As we intend to make the Mansi wordnet freely available for everyone, we hope that this newly created language resource will contribute to the revitalization of the Mansi language.
In the future, we would like to extend the Mansi wordnet with new synsets. Moreover, we intend to create applications that make use of this language resource, for instance, online dictionaries and linguistic games for learners of Mansi.
Acknowledgments
This work was supported in part by the Finnish Academy of Sciences and the Hungarian National Research Fund, within the framework of the project Computational tools for the revitalization of endangered Finno-Ugric minority languages (FinUgRevita). Project number: OTKA FNN 107883; AKA 267097.
Alonge, Antonietta, Nicoletta Calzolari, Piek Vossen, Laura Bloksma, Irene Castellon, Maria Antonia Marti, and Wim Peters. 1998. The Linguistic Design of the EuroWordNet Database. Computers and the Humanities. Special Issue on EuroWordNet, 32(2-3):91-115.
Balandin, A.N. and M.I. Vahruševa. 1958. Mansijski-russkij slovar' s leksičeskimi paralelljami iz južno-mansijskogo (kondinskogo) dialekta. Prosvešenije, Leningrad.
Bhattacharyya, Pushpak, Christiane Fellbaum, and Piek Vossen, editors. 2010. Principles, Construction and Application of Multilingual Wordnets. Proceedings of GWC 2010. Narosa Publishing House, Mumbai, India.
Bizzoni, Yuri, Federico Boschetti, Harry Diakoff, Riccardo Del Gratta, Monica Monachini, and Gregory Crane. 2014. The making of Ancient Greek WordNet. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1140-1147, Reykjavik, Iceland. European Language Resources Association (ELRA).
Fellbaum, Christiane and Piek Vossen, editors. 2012. Proceedings of GWC 2012. Matsue, Japan.
Kulkarni, M., C. Dangarikar, I. Kulkarni, A. Nanda, and P. Bhattacharya. 2010. Introducing Sanskrit WordNet. In Principles, Construction and Application of Multilingual Wordnets. Proceedings of the Fifth Global WordNet Conference (GWC 2010), Mumbai, India. Narosa Publishing House.
Miháltz, Márton, Csaba Hatvani, Judit Kuti, György Szarvas, János Csirik, Gábor Prószéky, and Tamás Váradi. 2008. Methods and Results of the Hungarian WordNet Project. In Proceedings of the Fourth Global WordNet Conference (GWC 2008), pages 311-320, Szeged. University of Szeged.
Miller, George A., Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: an on-line lexical database. International Journal of Lexicography, 3(4):235-244.
Minozzi, Stefano. 2009. The Latin WordNet project. In Peter Anreiter and Manfred Kienpointner, editors, Latin Linguistics Today. Akten des 15. Internationalen Kolloquiums zur Lateinischen Linguistik, volume 137 of Innsbrucker Beiträge zur Sprachwissenschaft, pages 707-716.
Orav, Heili, Christiane Fellbaum, and Piek Vossen, editors. 2014. Proceedings of GWC 2014. Tartu, Estonia.
Rombandeeva, E.I. and E.A. Kuzakova. 1982. Slovar' mansijsko-russkij i russko-mansijskij. Prosvešenije, Leningrad.
Rombandeeva, E.I. 2005. Russko-mansijskij slovar'. Mirall, Sankt-Peterburg.
Tanács, Attila, Dóra Csendes, Veronika Vincze, Christiane Fellbaum, and Piek Vossen, editors. 2008. Proceedings of GWC 2008. University of Szeged, Department of Informatics, Szeged, Hungary.
Tufiş, Dan, Dan Cristea, and Sofia Stamou. 2004. BalkaNet: Aims, Methods, Results and Perspectives. Romanian Journal of Information Science and Technology. Special Issue on BalkaNet, 7(1-2):9-43.
Vincze, Veronika, Ágoston Nagy, Csilla Horváth, Norbert Szilágyi, István Kozmács, Edit Bogár, and Anna Fenyvesi. 2015. FinUgRevita: Developing Language Technology Tools for Udmurt and Mansi. In Proceedings of the First International Workshop on Computational Linguistics for Uralic Languages, Tromsø, Norway, January.
||
15,631,550 | The Lefff , a freely available and large-coverage morphological and syntactic lexicon for French | In this paper, we introduce the Lefff , a freely available, accurate and large-coverage morphological and syntactic lexicon for French, used in many NLP tools such as large-coverage parsers. We first describe Alexina, the lexical framework in which the Lefff is developed as well as the linguistic notions and formalisms it is based on. Next, we describe the various sources of lexical data we used for building the Lefff , in particular semi-automatic lexical development techniques and conversion and merging of existing resources. Finally, we illustrate the coverage and precision of the resource by comparing it with other resources and by assessing its impact in various NLP tools. | [
11976514,
2071563,
383404,
15293618,
219301234,
7609686,
232021879,
1458424,
6486942,
17834270
] | The Lefff , a freely available and large-coverage morphological and syntactic lexicon for French
Benoît Sagot benoit.sagot@inria.fr
INRIA Paris-Rocquencourt
Alpage
Université Paris
Domaine de Voluceau -Rocquencourt
BP 105, 78153 Le Chesnay Cedex, France
The Lefff , a freely available and large-coverage morphological and syntactic lexicon for French
In this paper, we introduce the Lefff , a freely available, accurate and large-coverage morphological and syntactic lexicon for French, used in many NLP tools such as large-coverage parsers. We first describe Alexina, the lexical framework in which the Lefff is developed as well as the linguistic notions and formalisms it is based on. Next, we describe the various sources of lexical data we used for building the Lefff , in particular semi-automatic lexical development techniques and conversion and merging of existing resources. Finally, we illustrate the coverage and precision of the resource by comparing it with other resources and by assessing its impact in various NLP tools.
Introduction
Many Natural Language Processing (NLP) tools require or benefit from reliable linguistic resources, such as lexicons and grammars. In particular, for tasks such as parsing, a morphological and syntactic lexicon is a highly valuable source of information. However, such a lexicon needs (1) to have a large coverage, (2) to guarantee a high level of quality, (3) to be directly usable in NLP tools,and (4) to be available to all its potential users. Such resources now exist for English, but are often lacking or incomplete for other languages, even major ones. For example, for French, several lexical resources exist that contain syntactic information, such as Lexicon-Grammar tables (Gross, 1975), Dicovalence (van den Eynde and Mertens, 2006) or Les Verbes Français (Dubois and Dubois-Charlier, 1997), but none of them combines satisfactorily the four above-mentioned properties. These properties are the basis of our lexical development work. In this paper, we introduce both our lexical formalism, named Alexina, and the most advanced lexical resource developed within this framework, the Lefff (Lexique des Formes Fléchies du Français -Lexicon of French inflected forms), now in its third version (3.0.1). The Lefff is a widely-used and freely available 1 large-coverage morphological and syntactic lexicon for French. Apart from the Lefff , other Alexina lexicons are being developed, in particular the Leffe for Spanish (Molinero et al., 2009), and resources for Galician, Polish (Sagot, 2007), Slovak (Sagot, 2005), Persian , Sorani Kurdish and soon English. 2 1 The Lefff is distributed under the LGPL-LR license. See http://alpage.inria.fr/ ∼ sagot/lefff.html or the web page of the Alexina project: https://gforge. inria.fr/projects/alexina. 2 Moreover, other freely redistributable lexicons have been converted into Alexina morphological lexicons and are accessible on the web page of the Alexina project. This includes the Morphit! lexicon for Italian (Zanchetta and Baroni, 2005) and the Dutch lexicon distributed with the Alpino parser (van Noord, 2007). Since this latter lexicon also contains syntactic information, a full
The Alexina framework
Alexina is a lexical modeling and acquisition framework that covers both the morphological and syntactic levels.
Alexina allows to represent lexical information in a complete, efficient and readable way, that is meant to be independent of the language and of any grammatical formalism (Sagot, 2005;Sagot, 2007).
Moreover, it is compatible with the LMF 3 standard (Francopoulo et al., 2006). Therefore, an Alexina lexicon can be directly used in NLP tools. Taking parsers as an example, we are aware of Alexina lexicons, and most notably the Lefff , being used in parsers based on LTAG, including LTAGs generated from metagrammars developed in various meta-grammar formalisms (Thomasset and de la Clergerie, 2005), LFG (Boullier and Sagot, 2005), Interaction Grammars (Guillaume and Perrier, 2010), Pre-Group Grammars (Béchet and Foret, 2009), and other less known formalisms. An Alexina lexicon consists of lexical entries that correspond to lexemes, i.e., to a meaning of a lemma that exhibits consistent morphological and syntactic properties. The morphological information in an Alexina lexicon has a simple and standard structure. Each entry is associated with a lemma, a category (or part-of-speech) and an inflection class. The morphological description defines how to build all wordforms (inflected forms, or simply forms) from a given lemma depending on its inflection class and associate with each inflected form a morphological tag (e.g., ms for masculine singular) and a morphosyntactic flag (see below). The morphological formalism used in Alexina for defining inflection classes is described in Section 2.2.. The syntactic level of the Alexina model deserves a more detailed description.
conversion would lead to a morphological and syntactic Alexina lexicon for Dutch. 3 Lexical Markup Framework, the ISO/TC37 standard for NLP lexicons.
Each phrase or pronoun whose existence, type, morphosyntactic properties and distribution is controlled by a given form in a given sentence is considered as the realization of a (syntactic)
argument of this form. In languages such as French or English, such a predicative form may be among others a verb, an adjective or a noun. In most cases, a syntactic argument corresponds to a semantic argument of the form, i.e., a participant to the process expressed by this form. In this case the syntactic argument can also be called an actant of the form. However, a syntactic argument may have no semantic counterpart, and is then called a pseudo-argument. This is for example the case of the se/s' in French pronominal verbs such as s'évanouir (to faint). Conversely, it may be impossible to provide a syntactic counterpart to a semantic argument. This if the case in French in so-called se-moyen constructions (Abeillé, 2002, p. 193). The set of syntactic arguments of a given form and the associated constraints is modeled by the means of a subcategorization frame. It encodes the set of arguments of the form, as well as additional syntactic properties (e.g., control, various constraints on the arguments, etc.). A sub-categorization frame associated with a predicative form is defined as a list of syntactic arguments of this form; each of them is assigned a syntactic function with respect to the predicative form, i.e., a consistent set of morphological and syntactic constraints, as well as the set of its possible realizations; pseudo-arguments are also included in the sub-categorization frame, with no associated syntactic function. The notion of syntactic function (or grammatical function) is widespread across formalisms and approaches (Tesnière, 1959;Kaplan and Bresnan, 1982;Perlmutter and Postal, 1983). We define them on a per-language basis by the mean of several syntactic criteria that can be sketched as follows:
• the commutation principle, taking into account both pronouns and phrases, contrarily to (van den Eynde and Mertens, 2003): if a pronoun or a phrase can be replaced at the same position or another by another pronoun or phrase (both pronouns or phrases being mutually exclusive), without changing the dependency structure underlying the sentence, then they occupy the same syntactic function;
• the unique realization principle: for a given predicate, a syntactic function is realized at most once. 4
With such criteria, the linking between semantic arguments and syntactic functions is not necessarily unique. Several sub-categorization frames may be found among the various forms of a given lexeme, for at least two reasons. First, there are form-dependant specificities in the syntactic behavior (e.g., in French, the infinitive form of a verb may have a non-realized subject). Second, and more importantly, a same inflected form of a given lexeme may have its semantic arguments linked to (final) syntactic functions in many ways: this is the wellstudied phenomenon of regular syntactic alternations (e.g., active, passive or impersonal for French or English verbs). Therefore, inspired by previous work (Perlmutter and Postal, 1983;Candito, 1999), we define initial syntactic functions as follows: 5
• for most lexemes, there exists a non-marked mapping between their semantic arguments and syntactic functions that leads to syntactically non-marked constructions;
• initial syntactic functions are defined as identical to final syntactic functions for the non-marked case;
• therefore, the set of initial syntactic functions is the same as the set of final syntactic functions;
• marked mappings (e.g., passive for French or English verbs) or mappings that lead to syntactically marked constructions (e.g., impersonal constructions for French or English verbs) are defined as redistributions of the set of initial syntactic functions;
• each redistribution must be defined formally on a perlanguage basis; a redistribution may assign a syntactic argument to a different syntactic function than its initial one, affect the list of realizations and/or change some of its properties (optionality, control, etc.).
Given the correspondence between the lemma and the inflected forms of a lexeme, as well as the correspondence between an initial sub-categorization frame of a lexeme and the various final sub-categorization frames its inflected form may receive, the Alexina model is based on a twolevel representation that separates the description of a lexicon from its use:
• The lexicon proper, or intensional lexicon, factorizes the lexical information: each entry corresponds to a lexeme and provides its lemma, morphological class and initial syntactic information; it is used for lexical resource development;
• The extensional lexicon, which is generated automatically by compiling the intensional lexicon, associates each inflected form of a given entry with a detailed structure that represents its morphological information and (some of) its possible syntactic behaviours; it is directly used by NLP tools such as parsers. 6
The set of syntactic functions, the set of possible realizations and the set of redistributions defined for French and used in the Lefff are described in Section 2.1.. Sections 2.2. and 2.3. respectively describe of the formalisms used in Alexina for defining inflection classes and redistributions. Finally, Section 2.4. and 2.5. define and illustrate respectively the format of the intensional and extensional lexicons.
Syntactic functions, realizations and redistributions in the Lefff
For verbs, the Lefff uses the following syntactic functions (defined here in a simplified way):
• Suj for subjects: cliticization with the nominative clitic;
• Obj for direct objects: cliticization with the accusative clitic, commutable with ceci/cela (this/that), impacted by passivization when it is possible;
• Objà for indirect objects canonically introduced by the prepositionà: commutable withà+non-clitic pronoun (in the sense of (van den Eynde and Mertens, 2006)) but not with ici (here) or là(-bas) (there), may be cliticizable into the dative clitic or y;
• Objde for indirect objects introduced by the preposition de: cliticization with en, not commutable with d'ici (from here) or de là (from there),
• Loc for locative arguments: commutable with ici (here) or là(-bas) (there), cliticizable with y;
• Dloc for delocative arguments: commutable with d'ici (from here) or de là (from there), cliticizable with en;
• Att for (subject, object orà-object) attributes and pseudo-objects (e.g., 3 euros in j'ai acheté ceci 3 euros -I bought this 3 euros),
• Obl and Obl2 for other (non-cliticizable) arguments;Obl2 is used for verbs with two oblique arguments, such as plaider auprès de quelqu'un en faveur de quelqu'un d'autre (to plead in front of somebody for somebody else).
For predicative adjectives and nouns, that can be headed respectively by a copula or a support verb, the same set of functions are used. The argument of a preposition is considered as an Obj. Adverbs may have arguments with the syntactic function Objà (contrairement) or Objde (indépendamment). Possible realizations are threefold:
• clitic pronouns: cln (nominative clitic), cla (accusative clitic), cld (dative clitic), y, en, seréfl (reflexive se), seréc (reciprocal se);
• direct phrases: sn (noun phrase), sa (adjectival phrase), sinf (infinitive clause), scompl (completive clause), qcompl (interrogative clause);
• prepositional phrases: a direct phrase introduced by a preposition (e.g.,à-sn, de-scompl, 7 pour-sinf).
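To make the inventory above concrete, here is a minimal sketch (in Python, not part of the Lefff toolchain) of how such syntactic functions and realization lists can be represented in code; the Argument class and the example frame are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Argument:
    function: str            # e.g. "Suj", "Obj", "Objà"
    realizations: tuple      # e.g. ("cln", "sn")
    optional: bool = False

# The canonical frame of a plain transitive verb (subject + optional direct object).
transitive_frame = [
    Argument("Suj", ("cln", "sn")),
    Argument("Obj", ("cla", "sn"), optional=True),
]

if __name__ == "__main__":
    for argument in transitive_frame:
        print(argument)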
For verbs, the inventory of possible redistributions is the following:
• %actif, a dummy "redistribution" that has almost no effect on the initial sub-categorization information; 8
• %passif for the standard passive in par;
• %passif de for the passive in de ("Pierre est aimé de Marie"/"Pierre is loved by Mary");
• %impersonnel for (active) impersonal constructions with inverted subject, if any;
• %passif impersonnel for passive impersonal constructions with inverted subject, if any;
• %se moyen for modelling constructions such as "ce livre se vend bien"/"this book sells good" on the basis of the underlying transitive construction for the same verb; 9
• %se moyen impersonnel, the impersonal counterpart of the previous redistribution (see il se vend beaucoup de livres ici/there are many books sold here);
For adjectives, we have defined two redistributions:
• %adj impersonnel when an adjective is the lexical head of an impersonal construction (see il est difficile de travailler/it is hard to work);
• %adj personnel for other cases.
For now, all other categories only use the %default construction that builds a final sub-categorization frame which is identical to the initial one.
The morphological formalism
In the Alexina formalism, inflection is modelled as the affixation of a prefix and a suffix around a stem, while sandhi phenomena may occur at morpheme boundaries (see below), sometimes conditioned by stem properties. The formalism, which shares some widespread ideas with the DATR formalism (Evans and Gazdar, 1990) , relies on the following scheme:
• The core of a morphological description is a set of inflection tables that define corresponding inflection classes; inflection tables can (partly or completely) inherit from one another,
• Each inflection table defines a set of forms, each one of them being defined by a morphological tag and by a prefix and a suffix that, together with the stem, constitute the sequence of morpheme-like units prefix stem suffix;
• Sandhi phenomena allow to link the inflected form to the underlying prefix stem and stem suffix sequences by applying regular transformations; such rules may use classes of characters (e.g., [:aou:] can be defined as denoting one of the characters a, o or u with or without diacritics, as illustrated in Table 1);
• Forms can be controlled by tests over the stem (e.g., a given rule can apply only if a given regular expression matches the stem and/or if another one does not match the stem);
• Forms can be controlled by "variants" of the inflection classes (e.g., forms can be selected by one or more flags which complement the name of the class).
Tables 1 and 2 illustrate this model by showing respectively a few sandhi rules and an excerpt of a verbal inflection class. Within the Alexina architecture, a morphological description using this formalism can generate two tools:
• an inflection tool that generates all inflected forms of a given lemma according to its morphological class;
• an ambiguous lemmatization tool, that computes for a given form (associated or not with a category) all possible candidate lemmas (existing or not) that <letterclass name="aou" letters="aàâä oôö uûüù"/> <letterclass name="ou" letters="oôö uûüù"/> Table 1: A letter class definition and three sandhi rules from our Alexina description of French morphology (the " " models a morpheme boundary; the first sandhi associates for example mangeons with mang ons, the second associates for example broient with broy ent, the third associates for example jette with jet 2e) <table name="v-er" canonical tag="W" stems="..*"> <form suffix="er" tag="W" synt="Infinitive"/> <alt> <form suffix="2e" tag="PS13s" var="dbl" synt="ThirdSing"/> <form suffix="e" tag="PS13s" rads="..*ay" var="std" synt="ThirdSing"/> <form suffix="e" tag="PS13s" var="std" synt="ThirdSing"/> </alt> . . . <form suffix="a" tag="J3s" synt="ThirdSing"/> <form suffix="ai" tag="J1s"/> . . . Table 2: Excerpts of the inflection class for French regular first group verbs in the Alexina morphological description used by the Lefff . The attributes tag and synt respectively define the morphological tag and the morphosyntactic flag. The attribute var, if present, indicates that the form must be generated only if the inflection class of the input lemma has the corresponding variant (e.g., v-er:std). The attributes rads (resp. except) indicates that the form must be generated only if the stem matches (resp. does not match) the corresponding pattern. Alternatives are represented with alt tags; within an alternative, at least one form must be generated, otherwise an error occurs.
are consistent with the morphological description and have this form among their inflected forms.
The redistribution formalism
For a given language, redistributions are defined formally as a sequence of elementary transformations to be applied on the initial sub-categorization frame in order to produce the final sub-categorization frame. More precisely, each inflected form generated for an intensional entry tries to combine itself with each redistribution associated with this intensional entry. In some cases, it may lead to an incompatibility (e.g., in the Lefff , non-third person singular for the %impersonal redistribution). One of the ways to control this is by the means of the morphosyntactic flag associated with each inflected form (e.g., all past participle forms in the Lefff receive the morphosyntactic flag PastParticiple, which is required by the %passive redistribution in order to apply). Each elementary transformation belongs to one of the following categories
• it may control the compatibility of the inflected form with the redistribution ({Only morphosyntacticFlag} or {Skip morphosyntacticFlag});
• it may add or remove a realization from the realization list of a syntactic function 10 (e.g., {Obj -cla} removes the cla realization -accusative clitic -from the list of realizations of the object syntactic function);
• it may change the optionality status of the surface realization of a syntactic function (e.g., {Obj ()} makes optional the surface realization of the object);
• it may build a list of realizations for a syntactic function (replacing the previous one if any) from an existing list, namely that of the same syntactic function or another one (e.g., the realizations of the subject for the %passive redistribution are built by the elementary transformation {Suj <Obj[cla>cln,de-sinf>sinf,seréfl>,seréc>]});
• it may add a new piece of information in the additional information section (e.g., {Macros @être} adds a macro which means the auxiliary must beêtre, which is the case for example for the passive past participle);
• it may simply replace a regular pattern by another in the additional information section (e.g., {@CtrlObjObjà @CtrlSujObjà} turns a macro expressing the fact that the control of the object on theàobject must be transformed into a control of the subject on theà-object, which is necessary for the %passive redistibution);
Each of these elementary transformations can be controlled by a morphosyntactic flag (e.g., Infinitive:{Suj ()} makes optional the subject only for the inflected form that has the Infinitive flag). Moreover, each elementary transformation may be mandatory (if it is not applicable, it means that the inflected form and the redistribution are incompatible) or optional (it is then prefixed by ?). Finally, any redistribution can be used as an elementary transformation (e.g., %passive impersonal is simply defined as %passif + %impersonnel).
An example of redistribution definition in the Lefff is shown in Table 3. Table 3: The formal definition of the %se moyen redistribution in the Lefff . This redistribution models sentences such as "ce livre se vend bien" ("this book sells good").
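The following sketch mimics, in plain Python, the effect of a few such elementary transformations on a frame, roughly in the spirit of the %passif redistribution; the dictionary-based frame representation and the function are assumptions for illustration only, not the Lefff compiler.

import copy

def apply_passive(frame):
    # The initial object becomes the final subject (with cla replaced by cln),
    # and the demoted agent surfaces as an optional par-sn oblique.
    final = copy.deepcopy(frame)
    obj = final.pop("Obj")
    final["Suj"] = {"realizations": ["cln" if r == "cla" else r for r in obj["realizations"]],
                    "optional": False}
    final["Obl2"] = {"realizations": ["par-sn"], "optional": True}
    return final

initial_frame = {
    "Suj": {"realizations": ["cln", "sn"], "optional": False},
    "Obj": {"realizations": ["cla", "sn"], "optional": True},
}

if __name__ == "__main__":
    print(apply_passive(initial_frame))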
The Intensional format
Each entry in the intensional lexicon corresponds to a unique meaning of the corresponding lemma. It contains the following information:
• a morphological class, which defines the patterns that build its inflected forms (see Section 2.2.);
• a category (or part-of-speech); categories can be divided in two types: open (productive) categories (adjectives, adverbs, verbs, nouns) and closed (grammatical) categories;
• the initial sub-categorization frame;
• additional syntactic information (e.g., control, raising, attributes) represented by macros (e.g., @CtrlSujObj indicates that if it is realized as an infinitive phrase, the object is controlled by the subject);
• the list of possible redistributions.
For example, the intensional entry (slightly simplified for clarity reasons) in the Lefff for the French lemma diagnostiquer 1 /to diagnose is as follows:
diagnostiquer1 v-er:std Lemma;v; <arg0:Suj:cln|sn,arg1:Obj:(cla|sn)>; %actif,%passif,%se moyen It describes a transitive entry with the following information:
• its morphological class is v-er:std, the class of standard first-conjugation verbs (ending -er);
• its semantic predicate can be represented by the Lemma as is, i.e., diagnostiquer;
• its category is verb (v);
• it has two arguments canonically realized by the syntactic functions Suj (subject) and Obj (direct object); each syntactic function is associated with a list of possible realizations, but the Obj is optional as shown by the brackets;
• it allows for three different redistributions: %active, %passive, and %se moyen.
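As an illustration of how such an entry can be consumed programmatically, the sketch below splits the argument description of the example above into records; it is a simplification, not the actual Lefff compiler.

def parse_frame(frame: str):
    # Split an <argN:Function:realizations> description into one record per argument.
    arguments = []
    for spec in frame.strip("<>").split(","):
        arg_id, function, realizations = spec.split(":")
        optional = realizations.startswith("(") and realizations.endswith(")")
        arguments.append({
            "arg": arg_id,
            "function": function,
            "realizations": realizations.strip("()").split("|"),
            "optional": optional,
        })
    return arguments

if __name__ == "__main__":
    print(parse_frame("<arg0:Suj:cln|sn,arg1:Obj:(cla|sn)>"))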
The Extensional format
The compilation process builds one extensional entry for each inflected form and each compatible redistribution, by inflecting the lemma according to the definition of its morphological class and by applying the formalized definitions of these redistributions. For example, the only inflected forms of diagnostiquer that are compatible with the passive redistribution are the past participle forms. The (simplified) extensional passive entry for diagnostiqués/diagnosed is the following (Kms is the morphological tag for past participle masculine plural forms):
diagnostiqués v [pred='diagnostiquer1<arg1:Suj:cln|sn, arg0:Obl2:(par-sn)>',@passive,@pers,@Kms]; %passive
The original direct object (Obj) has been transformed into the passive Subject and an optional Agent (Obl2) realized by a noun phrase preceded by a preposition (par-sn) was added.
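To suggest how an NLP tool can exploit such extensional entries, here is a hedged sketch that indexes a couple of entries by inflected form for lookup during tagging or parsing; the first entry paraphrases the diagnostiqués example above, while the second one and the dict layout are invented for illustration.

from collections import defaultdict

extensional_entries = [
    {"form": "diagnostiqués", "cat": "v", "lemma": "diagnostiquer1",
     "redistribution": "%passive", "tag": "Kms"},
    {"form": "diagnostique", "cat": "v", "lemma": "diagnostiquer1",
     "redistribution": "%actif", "tag": "PS13s"},
]

# Index the extensional lexicon by wordform, as a parser or tagger would.
lexicon = defaultdict(list)
for entry in extensional_entries:
    lexicon[entry["form"]].append(entry)

if __name__ == "__main__":
    for candidate in lexicon["diagnostiqués"]:
        print(candidate["lemma"], candidate["cat"], candidate["redistribution"])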
Lexical data
Sources of lexical information
Lexical information included in the Lefff originate in different works:
• automatic acquisition (with manual validation) thanks to statistical techniques applied on raw corpora (Clément et al., 2004;Sagot, 2005);
• automatic acquisition (with manual validation) of specific syntactic information (Sagot, 2006, ch. 7);
• manual correction and extension guided by automatic techniques, such as simple statistics on tagged corpora (Molinero et al., 2009) or error mining in parsing results ;
• careful linguistic study of some phenomena and their representation in other resources, conversion of (part of) these resources in the Alexina format, and manually validated automatic merging with the Lefff ; we mainly used Lexicon-Grammar Tables (Gross, 1975), Dicovalence (van den Eynde and Mertens, 2006) and the Lexique des Verbes Français (Dubois and Dubois-Charlier, 1997). This was applied among other to impersonal constructions , pronominal constructions , adverbs in -ment (Sagot and Fort, 2007), several classes of frozen verbal expressions (Danlos et al., 2006), verbs in -iser and -ifier (Sagot and Fort, 2009) • finally, a certain amount of nominal and adjectival entries have their origin in the Multext morphological lexicon for French (Veronis, 1998).
Quantitative data
At the extensional level, the current version of the Lefff (3.0.1) contains 536,375 entries corresponding to 110,477 distinct lemmas covering all categories. Detailed figures are given in Table 4. This includes all kinds of conjunctions, determiners, interjections, punctuation marks, pronouns, prefixes and suffixes, as well as special entries for named entities and unknown words.
Evaluation
The evaluation of a lexical resource in itself is not easy, as no gold standard can be used. However, we performed three types of evaluation: (1) quantitative comparison with other resources, (2) comparative evaluation of NLP tools based on a lexicon, depending on whether they use the Lefff or no lexical resource, and (3) comparative evaluation of NLP tools based on a lexicon, depending on whether they use the Lefff or another lexical resource. The two latter types of evaluation are illustrated respectively on a part-of-speech tagger and on a deep parser.
Quantitative comparison with other resources
We provide in Table 5 direct comparison with other lexical resources in terms of morphological coverage (number of distinct lemmas). However, including more and more rare or archaic words in a lexicon may prove inappropriate, for it increases the size of the lexicon and the lexical ambiguity, with a very low improvement in the coverage of real texts. (Denis and Sagot, 2009), the authors compared the performance of a maximum-entropy-based part-of-speech tagger for French depending on the amount of lexical information extracted from the Lefff it relies on. This tagger, trained solely on the French TreeBank (Abeillé et al., 2003), exhibits a 97.0% accuracy (86.1% for words unknown to the training corpus). An additional coupling with the Lefff by adding Lefff -based features to the model increases this figure up to 97.7% (90.1% for words unknown to the training corpus), which is state-of-the-art for French. The explanation for this significant improvements is that the Lefff -based features reduce data sparseness and provide useful information on the right context: first, fewer errors on words unknown to the training corpus (a direct result of the use of a morphosyntactic lexicon) necessarily leads to fewer erroneous contexts for other words, and therefore to better tagging; second, the possible categories of tokens that are on the right of the current tokens are valuable pieces of information, and they are available only from the lexicon. In their study, (Denis and Sagot, 2009) also compare the effect of the use of increasingly large sub-parts of the Lefff in a way that approximately simulates the development of a morphological lexicon, by retaining only the most frequent lemmas (frequency figures come from a large journalistic corpus). The results show that using a morphological lexicon drastically improves the tagging accuracy on unknown words, whatever the development stage. Moreover, for fixed performance levels, the availability of the full lexicon consistently reduces the need for training data by at least one half (and up to two thirds).
4.3. Evaluating a deep parser using the Lefff vs. another lexicon
Comparative experiments with the FRMG parser for French (Thomasset and de la Clergerie, 2005) have been described, depending on the lexicon used by FRMG. FRMG is based on Tree-Adjoining Grammars, and normally relies on the Lefff. However, the lexical information included in Lexicon-Grammar verb tables, a high-quality and large-coverage resource (Gross, 1975), was converted into the Alexina framework (Tolone and Sagot, 2009). This allowed for integrating it within FRMG, by replacing verb entries in the Lefff. Results show that FRMG performs slightly better with the original Lefff: according to the metrics used by the EASy French parsing evaluation campaign (Paroubek et al., 2006), f-measures on "relations" (approx. dependencies between lexical words) drop from 59.9% to 56.6% when replacing Lefff verb entries with entries extracted from Lexicon-Grammar tables. 13 Several explanations can be given for these results. First, despite their fine-grainedness, Lexicon-Grammar tables lack some information, for instance on subject and object attributes and on pronominal redistributions. Second, the limit between arguments and modifiers tends to be different from that used in the Lefff; the higher number of verb arguments listed in Lexicon-Grammar tables has some negative effects on the disambiguation heuristics used by FRMG. Moreover, the higher number of entries in Lexicon-Grammar leads to higher ambiguity levels, which in turn increases parsing times (and therefore the number of sentences that cannot be parsed before a fixed timeout) and affects the precision of disambiguation heuristics. Finally, the Lefff has been developed from the very beginning for and in parallel with NLP applications, which is not the case for Lexicon-Grammar tables.
Conclusion and perspectives
Since its first versions, the Lefff has turned into a widely used morphological and syntactic lexical resource for French. Its lexical framework, Alexina, is used for developing Lefff-like resources for several other languages. Moreover, the lexical data in the Lefff is under continuous improvement thanks to various semi-automatic techniques. The next step of the Lefff's development shall be twofold. First, we intend to carry on the improvement of the precision and coverage through linguistic studies and various semi-automatic techniques, with a strong manual validation effort. Second, we aim at extending the Lefff to the semantic level, by coupling it with semantic resources such as the WOLF (Wordnet Libre du Français - Free French Wordnet) (Sagot and Fišer, 2008).
<sandhi source="g [:aou:]" target="ge [:aou:]"/>
<fusion source="[:ou:]y es$" target="[:ou:]i es$"/>
<fusion source="et 2e$" target="ett e$"/>

%se moyen =
  %active
  + ?{@avoir }              # removes the @avoir macro
  + {Macros @être}
  + {Macros @se moyen}
  + {0 se}
  + {Suj <Obj[cla>,de-sinf>sinf,seréfl>,seréc>]}
  + {Suj )(}                # if the Obj was optional, the Suj is as well,
                            # and must therefore be made mandatory
  + ?{@AttSuj } + ?{@AttObj @AttSuj} + ?{@Ctrl.* } + ?{@Comp.* }
Table 4: Quantitative data about the Lefff
Table 5: Quantitative comparison of the number of unique lemmas in various resources
Note that with such a principle, the clitic subject inversion in French needs an appropriate treatment. Indeed, in sentences such as Ainsi Pierre viendra-t-il demain (Thus Pierre will come tomorrow), one could argue that both Pierre and il should receive the syntactic function subject, which is apparently incompatible with the unique realization principle.
The notion of initial syntactic function is not standard. It is an alternative to the more widespread notion of lexical rule (see for example (Kaplan and Bresnan, 1982)). Lexical rules can be applied iteratively, starting with a base entry, and successively generating derived entries. The difference with the initial vs. final syntactic functions approach is that a lexical rule can be applied to a derived entry, the result being itself a derived entry on which another lexical rule may be applied, and so on. By contrast, a redistribution is a mapping between initial and final syntactic functions that applies to an initial sub-categorization frame, and the resulting final sub-categorization frame cannot undergo another redistribution. This avoids termination issues, and does not prevent defining a redistribution as, e.g., the sequence of two redistributions (see Section 2.3.).
Note that in our approach, redistributions are computed during the compilation process of the lexicon. Therefore, taking parsing as an example, there is no need for on-the-fly transformations (e.g., raising), but at the same time the same lexeme will generate both an active and passive past participle (see below).
de-scompl and à-scompl are exceptions, insofar as they do not correspond to à or de followed by a clause, but to à ce or de ce followed by a clause ("Pierre se souvient de *(ce) que Marie est belle" / "Pierre remembers that Marie is beautiful"). 8 It corresponds to the non-marked case, but is not assigned to all verbs. For example, some meteorological verbs have only the impersonal redistribution. 9 Neutral constructions, with or without se, are considered as corresponding to a different lexeme, and therefore a different although semantically related lexical entry.
Apart from syntactic functions, the special symbol 0 may be used as a syntactic function, in order to add elements that do not realize any syntactic function (e.g., in French, the impersonal pronoun il for the %impersonal redistribution).
Note that in Morphalou, masculine and feminine variants of a noun are considered as two different lemmas, whereas in the Lefff they are forms of the same lemma (e.g. fermier/fermière 'farmer').
Note that these figures cannot be directly compared with classical f-measures based on evalb metrics. FRMG is one of the best performing parsers for French, as proved during the EASy/Passage evaluation campaigns.
Anne Abeillé, Lionel Clément, and François Toussenel. 2003. Building a treebank for French. In Anne Abeillé, editor, Treebanks. Kluwer, Dordrecht.
Anne Abeillé. 2002. Une grammaire électronique du français. CNRS Editions, Paris, France.
Denis Béchet and Annie Foret. 2009. PPQ: a pregroup parser using majority composition. In Timothy Fowler and Gerald Penn, editors, Proceedings of the ESSLLI 2009 Workshop on Parsing with Categorial Grammars, pages 33-37, Bordeaux, France.
Pierre Boullier and Benoît Sagot. 2005. Efficient and robust LFG parsing: SXLFG. In Proceedings of IWPT 2005, pages 1-10, Vancouver, Canada.
Marie-Hélène Candito. 1999. Organisation modulaire et paramétrable de grammaires électroniques lexicalisées. Ph.D. thesis, Université Paris 7.
Lionel Clément, Benoît Sagot, and Bernard Lang. 2004. Morphology based automatic acquisition of large-coverage lexica. In Proceedings of LREC 2004, pages 1841-1844, Lisbon, Portugal.
Laurence Danlos and Benoît Sagot. 2008. Constructions pronominales dans Dicovalence et le lexique-grammaire - intégration dans le Lefff. In Proceedings of the 27th Lexis and Grammar Conference, L'Aquila, Italy.
Laurence Danlos, Benoît Sagot, and Susanne Salmon-Alt. 2006. French frozen verbal expressions: from lexicon-grammar to NLP applications. In Proceedings of the 25th Lexis and Grammar Conference, Palermo, Italy.
Pascal Denis and Benoît Sagot. 2009. Coupling an annotated corpus and a morphosyntactic lexicon for state-of-the-art POS tagging with less human effort. In Proceedings of PACLIC 2009, Hong Kong.
Jean Dubois and Françoise Dubois-Charlier. 1997. Les verbes français. Larousse-Bordas, Paris, France.
Roger Evans and Gerald Gazdar. 1990. The DATR Papers: February 1990. Technical Report CSRP 139, University of Sussex, Brighton, UK.
Gil Francopoulo, Monte George, Nicoletta Calzolari, Monica Monachini, Nuria Bel, Mandy Pet, and Claudia Soria. 2006. Lexical Markup Framework (LMF). In Proceedings of LREC 2006, Genoa, Italy.
Maurice Gross. 1975. Méthodes en syntaxe. Hermann, Paris, France.
Bruno Guillaume and Guy Perrier. 2010. Interaction grammars. Research on Language and Computation. To appear.
Ronald Kaplan and Joan Bresnan. 1982. Lexical-functional grammar: a formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281. MIT Press, Cambridge, MA, USA.
Miguel Ángel Molinero, Benoît Sagot, and Lionel Nicolas. 2009. A morphological and syntactic wide-coverage lexicon for Spanish: the Leffe. In Proceedings of RANLP 2009, Borovets, Bulgaria.
Patrick Paroubek, Isabelle Robba, Anne Vilnat, and Christelle Ayache. 2006. Data, Annotations and Measures in EASy, the Evaluation Campaign for Parsers of French. In Proceedings of the 5th Language Resources and Evaluation Conference (LREC 2006), Genoa, Italy.
David Perlmutter and Paul Postal. 1983. Studies in Relational Grammar 1. University of Chicago Press, Chicago, IL, USA.
Benoît Sagot and Laurence Danlos. 2008. Améliorer un lexique syntaxique à l'aide des tables du lexique-grammaire - Constructions impersonnelles. Cahiers du Cental, 5:107-126.
Benoît Sagot and Éric de La Clergerie. 2006. Error mining in parsing results. In Proceedings of ACL/COLING 2006, pages 329-336, Sydney, Australia.
Benoît Sagot and Darja Fišer. 2008. Building a free French wordnet from multilingual resources. In Proceedings of Ontolex 2008, Marrakech, Morocco.
Benoît Sagot and Karën Fort. 2007. Améliorer un lexique syntaxique à l'aide des tables du lexique-grammaire - adverbes en -ment. In Proceedings of the 26th Lexis and Grammar Conference, Bonifacio, France.
Benoît Sagot and Karën Fort. 2009. Description et analyse des verbes désadjectivaux et dénominaux en -ifier et -iser. In Proceedings of the 28th Lexis and Grammar Conference, Bergen, Norway.
Benoît Sagot and Géraldine Walther. 2010. A morphological lexicon for the Persian language. In Proceedings of LREC 2010, Valetta, Malta.
Benoît Sagot, Lionel Clément, Éric de La Clergerie, and Pierre Boullier. 2006. The Lefff 2 syntactic lexicon for French: architecture, acquisition, use. In Proceedings of the 5th Language Resource and Evaluation Conference, Lisbon, Portugal.
Benoît Sagot. 2005. Automatic acquisition of a Slovak lexicon from a raw corpus. In LNAI 3658, Proceedings of TSD 2005, pages 156-163, Karlovy Vary, Czech Republic.
Benoît Sagot. 2006. Analyse automatique du français: lexiques, formalismes, analyseurs. Ph.D. thesis, Université Paris 7.
Benoît Sagot. 2007. Building a morphosyntactic lexicon and a pre-syntactic processing chain for Polish. In Proceedings of LTC 2005, pages 423-427, Poznań, Poland.
Lucien Tesnière. 1959. Éléments de syntaxe structurale. Klincksieck, Paris, France.
François Thomasset and Éric de la Clergerie. 2005. Comment obtenir plus des méta-grammaires. In Proceedings of TALN 2005, Dourdan, France.
Elsa Tolone and Benoît Sagot. 2009. Using Lexicon-Grammar tables for French verbs in a large-coverage parser. In Proceedings of LTC 2009, Poznań, Poland.
Karel van den Eynde and Piet Mertens. 2003. La valence: l'approche pronominale et son application au lexique verbal. Journal of French Language Studies, 13:63-104.
Karel van den Eynde and Piet Mertens. 2006. Le dictionnaire de valence DICOVALENCE : manuel d'utilisation.
Gertjan van Noord. 2007. Using self-trained bilexical preferences to improve disambiguation accuracy. In Proceedings of the Tenth International Conference on Parsing Technologies (IWPT 2007), pages 1-10, Prague, Czech Republic.
Jean Veronis. 1998. Multext-lexicons, a set of electronic lexicons for European languages. CD-ROM distributed by ELRA/ELDA.
Géraldine Walther and Benoît Sagot. 2010. Developing a large-scale lexicon for a less-resourced language: general methodology and preliminary experiments on Sorani Kurdish. In Proceedings of the 7th SaLTMiL Workshop on Creation and Use of Basic Lexical Resources for Less-Resourced Languages (LREC 2010 Workshop), Valetta, Malta.
Eros Zanchetta and Marco Baroni. 2005. Morph-it! A free corpus-based morphological resource for the Italian language. In Proceedings of Corpus Linguistics 2005, Birmingham, UK. |
17,815,197 | Multi-threaded Interaction Management for Dynamic Spatial Applications | We present a multi-threaded Interaction Manager (IM) that is used to track different dimensions of user-system conversations that are required to interleave with each other in a coherent and timely manner. This is explained in the context of a spoken dialogue system for pedestrian navigation and city question-answering, with information push about nearby or visible points-of-interest (PoI). | [
460839,
266903
] | Multi-threaded Interaction Management for Dynamic Spatial Applications
Srinivasan Janarthanam
Interaction Lab, Heriot-Watt University, Edinburgh
Oliver Lemon
Interaction Lab, Heriot-Watt University, Edinburgh
Multi-threaded Interaction Management for Dynamic Spatial Applications
Proceedings of the EACL 2014 Workshop on Dialogue in Motion (DM), Gothenburg, Sweden, April 26-30
We present a multi-threaded Interaction Manager (IM) that is used to track different dimensions of user-system conversations that are required to interleave with each other in a coherent and timely manner. This is explained in the context of a spoken dialogue system for pedestrian navigation and city question-answering, with information push about nearby or visible points-of-interest (PoI).
Introduction
We present a multi-threaded Interaction Manager (IM) that is used to track different dimensions of user-system conversations and interleave the different conversational threads coherently. The IM that we present interacts with the user in a spatial domain and interleaves navigation information along with historical and cultural information about the entities that users can see around them. In addition, it aims to answer questions that users might have about those entities. This presents a complex conversational situation where several conversational threads have to be interleaved in such a way that the system utterances are presented to the user at the right time but in a prioritised order, and with bridging utterances when threads are interrupted and resumed. For instance, a navigation instruction may be important (since the user is walking up to a junction at which they need to turn) and therefore it needs to be spoken before continuing information presentation about an entity or answering other ongoing questions.
Related work
Previously, multi-threaded interaction was used to handle multiple simultaneous tasks in human-robot interaction (HRI) scenarios (Lemon and Gruenstein, 2004). This idea also turns out to be important for cases where humans are interacting with a variety of different web-services in parallel. Human multitasking in dialogue is discussed in (Yang et al., 2008). (Lemon and Gruenstein, 2004) presented a multi-threaded dialogue management approach for managing several concurrent tasks in an HRI scenario. The robot could, for example, be flying to a location while simultaneously searching for a vehicle, and utterances about both tasks could be interleaved. Here, conversational threads were managed using a representation called the "Dialogue Move Tree", which represented conversational threads as branches of the tree, linked to an "Activity Tree" which represented the states of ongoing robot tasks (deliver medical supplies, fly to a waypoint, search for a truck), which could be active simultaneously. The situation for our pedestrian navigation and information system is similar - concurrent tasks need to be managed coherently via conversation. The approach adopted in this paper is similar to (Lemon and Gruenstein, 2004). However, in this work we separate out a domain-general thread called 'dialogue control' which handles generic issues like clarification of reference across all tasks. This increased modularisation of the dialogue threads makes it possible to learn individual dialogue policies for each one, in future work. (Nakano et al., 2008) presented an approach where one of several expert modules handling different tasks is activated based on the user input, but only one verbal expert is active at any one time. In contrast to this, we present an approach where several thread managers, each handling a different task, can be activated in parallel and their outputs stored and retrieved based on priority.
Multi-threaded IM
The Interaction Manager (IM) is the central component of any spoken dialogue system architecture. Generally, it takes as input the user's utterances in the form of dialogue acts from the parser and identifies the next dialogue action to present to the user. Dialogue about a domain task is managed using a dialogue strategy or policy (e.g. (Young, 2000; Lemon and Pietquin, 2007)). A dialogue policy is a mapping between dialogue states and dialogue actions, which are semantic representations of what the system should say next.
In order to handle multiple tasks simultaneously, we present an architecture for a multi-threaded interaction manager that treats conversation about each domain task as a thread. These conversational threads are interleaved and managed using techniques such as multi-queuing, priority based pushing, and queue revision. We describe these techniques below. The architecture of the Interaction Manager is shown in figure 1.
Multi-threading and queuing
In order to manage complex interactions involving several conversational tasks/topics, we propose that each task be handled by a thread manager within the interaction management framework. Each such manager will handle a conversational thread using a dialogue policy. Each thread manager will be fed with the input from the user and the dialogue actions generated will be stored in separate queues. This approach allows the interaction manager to produce several dialogue actions at the same time although for different conversational tasks.
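As a rough illustration of this design, the sketch below (not the actual implementation; the class names, the policy function and the context fields are invented) shows one thread manager per conversational task, each mapping the latest input and context to dialogue actions that are appended to that thread's own queue.

```python
from collections import deque

class ThreadManager:
    """One conversational thread (e.g. navigation, question answering, PoI pushing)."""
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy      # maps (user_input, context) -> list of dialogue actions
        self.queue = deque()      # dialogue actions waiting to be pushed to the user

    def update(self, user_input, context):
        # every thread manager sees every input; most will simply produce nothing
        for action in self.policy(user_input, context):
            self.queue.append(action)

def navigation_policy(user_input, context):
    # hypothetical policy: emit the next instruction when the user approaches a route node
    if context.get("near_next_node"):
        return [{"thread": "navigation", "act": "instruct", "text": context["next_instruction"]}]
    return []

managers = [ThreadManager("navigation", navigation_policy)]   # plus QA, PoI push, ...
managers[0].update(None, {"near_next_node": True,
                          "next_instruction": "Turn left onto Nicolson Square."})
print(managers[0].queue)
```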
Prioritised Queue Management
Dialogue actions from the several threads are stored in separate queues. The queues can be assigned priorities that decide the order in which items from the queues will be popped. The dialogue actions in the queues are pushed to the user based on an order of priority (see below). This priority can either be fixed or dynamic based on context. The system and user engagement should also be checked so that system utterances are pushed only when the system and user are not speaking already.
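A minimal sketch of this popping step is given below, again with invented names rather than the system's actual code: a priority table decides which non-empty queue is served first, and nothing is popped while either party is already speaking.

```python
from collections import deque

def next_action(queues, priority, system_speaking, user_speaking):
    """Return the most urgent queued dialogue action, or None if the system must stay silent."""
    if system_speaking or user_speaking:
        return None
    for thread in sorted(queues, key=priority.get):   # lower number = more urgent
        if queues[thread]:
            return queues[thread].popleft()
    return None

queues = {"navigation": deque(), "poi_push": deque()}
queues["poi_push"].append({"act": "push_poi", "text": "On your right is the Old College."})
queues["navigation"].append({"act": "instruct", "text": "Turn left onto Nicolson Square."})
print(next_action(queues, {"navigation": 1, "poi_push": 2}, False, False))  # navigation wins
```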
Queue Revision: resuming and bridging
The dialogue actions are generated and stored in queues. There is therefore a delay between the time they are generated and the time they are pushed, so the dialogue actions in the queues are revised periodically to reflect changes in context. Obsolete dialogue actions have to be removed for two reasons. Firstly, pushing them to the user may make the conversation incoherent, because the system may be speaking about an entity that is no longer relevant; secondly, these obsolete dialogue actions may delay other important dialogue actions from being pushed on time. In addition, it may also be useful to edit the dialogue actions to include discourse markers that signify topic change (Yang et al., 2008) and bridge phrases to reintroduce a previous topic. We discuss some examples later in Section 4.3.
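The revision step might look something like the following sketch; the field names, the time-to-live values and the bridging flag are our own illustration, not the SpaceBook data structures.

```python
import time
from collections import deque

def revise(queue, now=None, last_pushed_thread=None):
    """Drop stale actions and flag actions whose thread was interrupted by another thread."""
    now = now if now is not None else time.time()
    kept = deque()
    for action in queue:
        if now - action["created"] > action.get("ttl", float("inf")):
            continue                                  # obsolete, e.g. "you are on X street" after 5 s
        if last_pushed_thread and last_pushed_thread != action["thread"]:
            action["needs_bridge"] = True             # another thread intervened; reintroduce the topic
        kept.append(action)
    return kept

q = deque([{"thread": "poi_push", "act": "elaborate", "topic": "Old College",
            "created": time.time() - 10, "ttl": 60}])
print(revise(q, last_pushed_thread="request_response"))
```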
SPACEBOOK Interaction Manager
As a part of the SpaceBook EU FP7 project, we implemented the above design for a multithreaded interaction manager that presents the user with navigational instructions, pushes PoI information, and manages QA questions (Janarthanam et al., 2013). It receives the user's input in the form of a dialogue act (DA) from the ASR module and the user's location (latitude and longitude), orientation, and speed from the Pedestrian Tracker module. Based on these inputs and the dialogue context, the IM responds with a system output dialogue act. It should be noted that the location coordinates of the user are sent to the IM every 2 seconds. This allows the IM to generate location aware information at a high frequency. In addition, the IM has to deal with incoming requests and responses from the user's spoken inputs. With the possibility of system utterances being generated at a frequency of one every two seconds, there is a need for an efficient mechanism to manage the conversation and reduce the risk of overloading the user with information. These tasks are treated as separate conversational threads.
Conversational Threads
The SpaceBook IM manages the conversation using five conversational threads using dedicated task managers. Three threads: 'navigation', 'question answering' and 'PoI pushing', represent the core tasks of our system. In addition, for handling the issues in dialogue management, we introduce two threads: 'dialogue control' and 'request response'. These different threads represent the state of different dimensions of the user-system conversation that need to interleave with each other coherently. Each of the threads is managed by a thread manager using a dialogue policy. Each thread can generate a dialogue action depending on the context, as described below:
Dialogue Control
During the course of the conversation, the IM uses this thread to manage user requests for repetition, issues with unparsed (i.e. not understood) user utterances, utterances that have low ASR confidence, and so on. The dialogue control thread is also used to manage reference resolution in cases where referring expressions are underspecified.
The IM resolves anaphoric references by keeping a record of entities mentioned in the dialogue context. It stores the name and type information for each entity (such as landmark, building, etc) mentioned in previous utterances by either user or system. Subsequent user references to these entities using expressions such as "the museum", "the cafe", and so on, are resolved by searching for the latest entity of the given type. In cases where the IM cannot resolve the referent, it asks the user to clarify.
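The type-based resolution strategy just described can be sketched in a few lines; the entity names and types below are illustrative.

```python
discourse_entities = []   # appended as the dialogue proceeds: (name, type)

def resolve(entity_type):
    """Return the most recently mentioned entity of the given type, or None to trigger clarification."""
    for name, etype in reversed(discourse_entities):
        if etype == entity_type:
            return name
    return None

discourse_entities.append(("National Museum of Scotland", "museum"))
discourse_entities.append(("The Elephant House", "cafe"))
print(resolve("museum"))   # -> "National Museum of Scotland"
```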
Request Response
The user can also initiate tasks that interest him/her at anytime during the conversation. These tasks include searching for an entity (e.g. a museum or a restaurant), requesting navigation instructions to a destination, and asking questions about the entities in the city database such as their location ("Where is X?", "How far is X?"). During navigation, users might want to ask questions about the destination, ask for next instructions, etc. All these user requests are handled using the request response thread. For instance, when the user asks for directions, the IM resolves the destination entity (perhaps using clarification) in the city model and acknowledges the user request. The task is then further handled using the Navigation thread.
Navigation
The IM identifies the location of the destination entity and queries a city database for a route plan. Using the route plan, the navigation thread presents step-by-step instructions to the user based on the current location and orientation of the user. The IM continuously monitors users to determine if at any time they are deviating from the planned route and provides corrective instructions. As users get near to the next node on the route plan, the next instruction is given. The IM uses highly salient visible landmarks and popular landmarks near the nodes to instruct the user (e.g. "When you reach Clydesdale Bank, turn left on to Nicolson Square"). The IM also informs users when they pass by recognisable landmarks, just to reassure them that they are on the right track (e.g. "You will pass by Tesco on the right"). When the user is close to his/her destination, the IM determines whether the destination is visible to the user, informs the user, and closes the task.
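A simplified sketch of the per-update check is shown below, using planar distances and invented thresholds and field names; the real system works on the latitude, longitude and orientation coming from the pedestrian tracker.

```python
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def navigation_step(user_pos, route_plan, next_idx, near=1.0, deviation=5.0):
    """One monitoring step: corrective instruction, next instruction, or wait."""
    node = route_plan[next_idx]
    d = distance(user_pos, node["location"])
    if d > deviation:
        return "correct", "You seem to have left the route. " + node["instruction"], next_idx
    if d < near:
        return "instruct", node["instruction"], next_idx + 1
    return "wait", None, next_idx

route = [{"location": (10.0, 0.0),
          "instruction": "When you reach Clydesdale Bank, turn left on to Nicolson Square."}]
print(navigation_step((9.5, 0.0), route, 0))   # close to the node -> next instruction
```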
Question Answering
The system also answers ad hoc questions from the user (e.g. "Who is David Hume?", "What is the Old College?", "Who was William Wallace", etc). These are sent to the QA server and answered based on responses from the Question-Answering (QA) server (Janarthanam et al., 2013). The dialogue policy here is to answer the user's question with the first snippet available and ask the user to request for more if more snippets are available and he or she is interested.
Pushing PoI Information
When the user is mobile, the IM identifies popular points of interest (PoI) on the route based on two factors: proximity and visibility. The dialogue policy is to introduce the PoI, query the QA server for snippets and push the first snippet to the user. The user is encouraged to ask for more information if he/she is interested.
Priority assignment in SpaceBook
Priority is assigned to the above dialogue threads as follows:
Priority 1. Dialogue control (repeat requests, clarifications, etc.)
Priority 2. Responding to user requests
Priority 3. System-initiated navigation task actions
Priority 4. Responses to user-initiated QA actions
Priority 5. PoI push actions
For instance, informing the user of a PoI could be delayed if the user needs to be given an instruction to turn at the junction he is approaching.
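Expressed as the kind of priority table assumed in the earlier queue sketch, this fixed ordering would simply be the following; the thread names are ours, not identifiers from the SpaceBook code.

```python
SPACEBOOK_PRIORITY = {
    "dialogue_control": 1,     # repeat requests, clarifications
    "request_response": 2,     # responses to user-initiated requests
    "navigation": 3,           # system-initiated navigation actions
    "question_answering": 4,   # answers to user questions
    "poi_push": 5,             # pushed PoI information
}
```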
Queue revision and bridging utterances
The queues need to be revised at regular intervals in order to keep the information in them relevant to context. For instance, the dialogue action of informing the user of his/her location is deleted after 5 seconds, as this tends to become obsolete. Similarly, dialogue actions corresponding to information segments in PoI and QA queues are edited to inform the utterance generator of other intervening dialogue actions so that it can use appropriate bridge phrases to reintroduce the focus of the conversational thread. For instance, as shown in the example below, the utterance generator inserts a bridge phrase (i.e. "More on Old College") to reintroduce the focus of the PoI push task because of the intervening user request and the subsequent system response.
Conclusion
We presented an architecture for a multi-threaded Interaction Manager that can handle multiple conversational tasks. We also described an implementation of the architecture in a dynamic spatial environment. The SpaceBook IM is a multi-tasking IM that aims to interleave navigation information along with historical information about the entities users can see around them. In addition, it aims to answer questions users might have about those entities.
Figure 1: Interaction Manager Architecture
Usr: I want to go to the National Gallery.
Sys: Ok. I am now looking for directions to the National Gallery.
Sys: The National Gallery is on Queens Street at about 300 meters away. ...
Sys: Can you see the Bank of Scotland about 300 meters away on your left? It has a big green dome.
Usr: Yes. I can see it.
Sys: Walk towards it.
Sys: Can you see the National Gallery in front of you at about 100 meters? It is a building with Roman architecture.
Usr: Yes.
Sys: You have now reached your destination.

Sys: In front of you, about 200 meters away is Old College. It has a grey dome on top.
Sys: Situated on South Bridge, Old College is . . .
Sys: Ask for more information if interested.

Sys: In front of you, about 200 meters away is the Old College. It has a grey dome on top.
User: Where am I?
Sys: You are on Chambers street.
Sys: More on Old College. Situated on South Bridge, the Old College is ......
Acknowledgements. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270019 (SPACEBOOK project, www.spacebook-project.org).
P. Bartie and W. Mackaness. 2013. D3.1.2: The SpaceBook City Model. Technical report, The SPACEBOOK Project (FP7/2011-2014, grant agreement no. 270019).
P. Bartie, W. Mackaness, M. Fredriksson, and J. Konigsmann. 2013. D2.1.2: Final Viewshed Component. Technical report, The SPACEBOOK Project (FP7/2011-2014, grant agreement no. 270019).
S. Janarthanam, O. Lemon, P. Bartie, T. Dalmas, A. Dickinson, X. Liu, W. Mackaness, and B. Webber. 2013. Evaluating a city exploration dialogue system combining question-answering and pedestrian navigation. In Proc. ACL 2013.
Oliver Lemon and Alexander Gruenstein. 2004. Multithreaded context for robust conversational interfaces: context-sensitive speech recognition and interpretation of corrective fragments. ACM Transactions on Computer-Human Interaction (ACM TOCHI), 11(3):241-267.
Oliver Lemon and Olivier Pietquin. 2007. Machine learning for spoken dialogue systems. In Interspeech.
Mikio Nakano, Kotaro Funakoshi, Yuji Hasegawa, and Hiroshi Tsujino. 2008. A framework for building conversational agents based on a multi-expert model. In Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue, SIGdial '08, pages 88-91, Stroudsburg, PA, USA. Association for Computational Linguistics.
Fan Yang, Peter A. Heeman, and Andrew Kun. 2008. Switching to real-time tasks in multi-tasking dialogue. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 1025-1032, Stroudsburg, PA, USA. Association for Computational Linguistics.
Steve Young. 2000. Probabilistic methods in spoken dialogue systems. Philosophical Transactions of the Royal Society (Series A), 358(1769):1389-1402. |
21,696,490 | Enriching a Lexicon of Discourse Connectives with Corpus-based Data | We present the results of the effort of enriching the pre-existing resource LICO, a Lexicon of Italian COnnectives retrieved from lexicographic sources(Feltracco et al., 2016), with real corpus data for connectives marking contrast relations in text. The motivation beyond our effort is that connectives can only be interpreted when they appear in context, that is, in a relation between the two fragments of text that constitute the two arguments of the relation. In this perspective, adding corpus examples annotated with connectives and arguments for the relation allows us to both extend the resource and validate the lexicon. In order to retrieve good corpus examples, we take advantage of the existing Contrast-Ita Bank(Feltracco et al., 2017), a corpus of news annotated with explicit and implicit discourse contrast relations for Italian according to the annotation scheme proposed in the Penn Discourse Tree Bank (PDTB) guidelines(Prasad et al., 2007). We also use an extended -non contrast annotated-version of the same corpus and documents from Wikipedia. The resulting resource represents a valuable tool for both linguistic analyses of discourse relations and the training of a classifier for NLP applications. | [
27865020,
219310229,
18125103
] | Enriching a Lexicon of Discourse Connectives with Corpus-based Data
Anna Feltracco feltracco@fbk.eu
Fondazione Bruno Kessler, Via Sommarive 18, 38100 Trento, Italy
University of Pavia, Strada Nuova 65, 27100 Pavia, Italy
University of Bergamo, Via Salvecchio 19, 24129 Bergamo, Italy
Elisabetta Jezek jezek@unipv.it
University of Pavia, Strada Nuova 65, 27100 Pavia, Italy
Bernardo Magnini magnini@fbk.eu
Fondazione Bruno Kessler, Via Sommarive 18, 38100 Trento, Italy
Enriching a Lexicon of Discourse Connectives with Corpus-based Data
discourse connectives, contrast relation, corpus examples
We present the results of the effort of enriching the pre-existing resource LICO, a Lexicon of Italian COnnectives retrieved from lexicographic sources(Feltracco et al., 2016), with real corpus data for connectives marking contrast relations in text. The motivation beyond our effort is that connectives can only be interpreted when they appear in context, that is, in a relation between the two fragments of text that constitute the two arguments of the relation. In this perspective, adding corpus examples annotated with connectives and arguments for the relation allows us to both extend the resource and validate the lexicon. In order to retrieve good corpus examples, we take advantage of the existing Contrast-Ita Bank(Feltracco et al., 2017), a corpus of news annotated with explicit and implicit discourse contrast relations for Italian according to the annotation scheme proposed in the Penn Discourse Tree Bank (PDTB) guidelines(Prasad et al., 2007). We also use an extended -non contrast annotated-version of the same corpus and documents from Wikipedia. The resulting resource represents a valuable tool for both linguistic analyses of discourse relations and the training of a classifier for NLP applications.
Introduction
Discourse relations and the linguistic elements marking them in text, commonly referred to as discourse connectives, have recently been at the core of several annotation efforts for multiple languages (including English, German, French, Italian, Portuguese; see Stede and Umbach (1998), Roze et al. (2012) among others). In this paper, we present the results of the effort of enriching the pre-existing resource LICO, a lexicon of Italian connectives retrieved from lexicographic sources (Feltracco et al., 2016), with real corpus data, thus allowing us to both extend the resource and validate the lexicon. Our goal in this contribution is limited to the class of connectives marking contrast, and the additional relations such connectives might convey, some of them being polysemous. The motivation behind our effort is that connectives can only be interpreted and disambiguated when they appear in context, that is, in a relation between the two fragments of text that constitute the two arguments of the relation. In order to retrieve good examples, we take advantage of the existing Contrast-Ita Bank (Feltracco et al., 2017), a corpus of news annotated with explicit and implicit discourse contrast relations for Italian according to the annotation scheme proposed in the Penn Discourse Tree Bank (PDTB) guidelines (Prasad et al., 2007). Contrast-Ita Bank contains corpus annotations for 19 discourse connectives of contrast; 14 of them are included in LICO, and these provide the starting point for our work. We pick additional examples from the larger news corpus from which Contrast-Ita Bank is derived. The resulting resource represents a valuable tool for both linguistic analyses and the training of a classifier for NLP applications. The paper is structured as follows. Section 2 reports the definition of discourse connective we assume in our work. Section 3 introduces LICO and related lexica for other languages, while Section 4 reports the methodology and Section 5 presents the results of the effort of enriching the resource. The paper ends with concluding observations and hints for further work.
Discourse connectives
We define discourse connectives as lexical markers that are used to express relations between parts of the discourse. This definition is inspired by Ferrari (Ferrari and Zampese, 2000; Ferrari, 2010): she defines a connective as "each of the invariable forms [...], which introduce relations that structure "logically" the meanings of the sentence and of the text". Ferrari clarifies that relations marked by connectives hold between events or assertions, and includes as arguments for the relation also nominalisations (e.g. "after the pressing invitation ..."), i.e. cases that contain an event introduced through a nominal expression. On the other hand, she excludes those grammatical elements that introduce relative clauses or pronouns (as who in "I don't know who you are.") from being connectives. This is in line with the definition provided for the arguments of a connective in the Penn Discourse Tree Bank (PDTB) 2.0 project, for which connectives relate two events, states, and propositions, which can be realized mostly as clauses, nominalisations, and anaphoric expressions (Prasad et al., 2007). From this group are excluded general cue phrases or discourse markers, i.e. words or phrases that do not have the function of connectives but are used, for instance, to change the topic in a discourse or to initialize it, such as "but" in "But, what are you doing?". According to Ferrari (2010), connectives belong to different syntactic classes, the same as those defined in the PDTB schema: i) subordinating conjunctions or subordinating expressions; ii) coordinating conjunctions or coordinating expressions; iii) adverbs or adverbial expressions; iv) prepositions or prepositional expressions. In line with this definition, Stede (2012) characterises connectives as never inflected, closed-class lexical items, which belong to the above mentioned syntactic categories. He also specifies that these lexical elements can only be interpreted successfully when they appear in a relation between two discourse segments. Ferrari (2010) also proposes a non-hierarchical classification of connectives depending on the "type of logical relation they convey", e.g. temporal and causal. The PDTB 3.0 project (Webber et al., 2016) proposes a hierarchical classification composed of three levels (Table 1) (Webber et al., 2016).
In the first level of the hierarchy, the class level, sense tags are grouped in four major classes (first column of Table 1). The second level of the hierarchy (second column of Table 1) specifies further the semantics of the class level: the type level. For example, the tag TEMPORAL.Synchronous indicates the type Synchronous of the class TEMPORAL and is used for connectives that indicate that the arguments of the relation are simultaneous (e.g. "When she arrived, he was leaving"); differently, the TEMPORAL.Asynchronous tag is used when the connective indicates a before-after relation (e.g. "She arrived before he left"). The third level, the subtype level (third column of Table 1), reflects the direction of the relations. For example, the type CONTINGENCY.Cause represents an asymmetric relation between two arguments: one being the cause, and the other the result. The subtype CONTINGENCY.Cause.Reason is used if the argument introduced by the connective (Arg2) is the reason for the situation in the other argument (Arg1) (e.g. "I stayed at home because it was raining"), while CONTINGENCY.Cause.Result is used if it represents the result/effect (e.g. "It was raining, therefore I stayed at home"). Notice that not every type has a further subtype: for example, the arguments involved in a temporal relation of type Synchronous do not play different roles and no subtype has been proposed.
LICO: Lexicon for Italian Connectives
According to our knowledge, LICO (Feltracco et al., 2016), Lexicon for Italian COnnectives, is the highest coverage resource of discourse connectives available for Italian.
Connectives in LICO.
In LICO connectives are listed together with orthographic, syntactic, semantic information and also possible alignments with lexica of connectives in other languages. LICO is organized in 173 entries, each one corresponding to a connective and its orthographic or lexical variants. In fact, the invariability criterion proposed by Ferrari (2010) which does not include variable forms (i.e. those forms which are subject to morphological modifications) is partially dropped in LICO. Specifically, the resource does not include forms which exhibit morphological inflection or conjugation, but includes connectives which show a certain degree of lexical variability, that is, multiword expressions which are not totally rigid from a lexical point of view (e.g. ad esempio/per esempio 'for example' are both registered in the resource, as two variants of unique entry). Connectives in LICO are retrieved from three sources: i) the list of connectives mentioned by Ferrari for the entry connettivi 2 , ii) the list of connectives tagged as congiunzione testuale in Sabatini Coletti 2006 (Sabatini and Coletti, 2007), except for the ones of literary use, and iii) the list of the equivalent Italian terms of the German connectives in the DimLex resource (Stede, 2002) (see Related Lexica).
LICO Structure. For each entry LICO specifies:
• possible lexical variants (e.g. dopo di ché and dopo di ciò) and orthographic variants (e.g. ciò nonostante and ciononostante);
• whether the connective (or its variants) is composed by a single token or by more than one token;
• whether the connective is composed by correlating parts (e.g. da una parte [..] dall'altra) or not (e.g. ciononostante);
• the syntactic category: adverbs, prepositions, subordinating or coordinating conjunctions;
• the semantic relation(s) which the connective indicates, according to the PDTB 3.0 schema of relations (Webber et al., 2016);
• possible alignments with lexica of connectives in German;
• examples of usage of the connective for each semantic relation it indicates.
The examples in the first version of the resource are translations of the German examples already present in the DimLex resource (Scheffler and Stede, 2016; Stede, 2002; Stede and Umbach, 1998). In adopting a corpus-based approach we aim at enriching LICO with data-driven examples and validating the information in the resource.
Related lexica. LICO has been inspired by the DimLex project for German (Scheffler and Stede, 2016; Stede, 2002; Stede and Umbach, 1998), an XML-encoded resource that provides information on orthographic variants, syntactic category, semantic relations in terms of PDTB 3.0 (Webber et al., 2016) sense tags, and usage examples for 274 connectives. DimLex is used for automatic discourse parsing, and also for semi-automatic text annotation using the ConAno tool (Stede and Heintze, 2004). A similar repository for French is LEXCONN (Roze et al., 2012), which contains more than 300 connectives with their syntactic categories and discourse relations from Segmented Discourse Representation Theory (Asher and Lascarides, 2003). The lexicon has been constructed manually, using a corpus as empirical support. LICO is freely distributed under a CC-BY licence and can be browsed with DIMLEX and LEXCONN at http://connective-lex.info/.
Enriching Connectives of Contrast in LICO
In order to retrieve corpus examples, we exploit Contrast-Ita Bank (Feltracco et al., 2017), a corpus of news documents annotated with explicit and implicit discourse contrast relations for Italian. More specifically, the documents correspond to articles published in the local newspaper "L'Adige" on two different days and include reports, news about politics, news about economics, and sport results. They contain narrations and quotes from oral interviews. Contrast-Ita Bank (henceforth CIB) follows the schema proposed in the PDTB guidelines (Prasad et al., 2007) both in terms of sense tags, i.e., CONTRAST and CONCESSION are tagged in the corpus, and in terms of the information annotated, i.e., for each explicit relation, the connective that conveys the relation is marked together with its arguments (named Arg1 and Arg2). For instance, in Example (1) the connective is underlined, Arg1 is in italics, and Arg2 is in bold.
(1)
Il ministro del Lavoro e delle Pensioni britannico, Andrew Smith, ha rassegnato ieri le dimissioni nonostante i tentativi del premier Tony Blair di convincerlo a rimanere. (Eng. "The British Minister of Work and Pensions, Andrew Smith, resigned yesterday despite the attempts of Prime Minister Tony Blair to convince him to stay.") tag: CONCESSION.Arg1.as.denier
In our work we take advantage of the information associated with the connectives of contrast in CIB. In fact, the annotated connective (marked as CONTRAST or CONCESSION) together with its arguments constitutes the examples we retrieved for the enrichment of LICO. For instance, Example (1) from CIB has been retrieved as an example of nonostante to enrich LICO. Moreover, we can get information about how the connective is used with reference to the relation it conveys; for example, we can inspect whether it is found between the arguments it links or before them, whether it requires a subjunctive form of the verb only in its Arg2, and so on. We keep this data in LICO by reporting the span of text of the two arguments as part of the example and by encoding the token id of the connective as registered in CIB: this works as a pointer for users, who can reconstruct the entire annotation of the contrast relation in CIB.
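A minimal sketch of how such a corpus-derived example might be attached to a LICO entry is given below. The element and attribute names loosely follow the example entries in Figure 1, but the exact LICO schema may differ, and the document id and token positions in the source attribute are invented for illustration; the packed source string is only meant to show how the pointer back into the distributed CIB documents can work.

```python
import xml.etree.ElementTree as ET

# Hypothetical CIB-derived example for the connective "nonostante" (Example (1) above).
example = ET.Element("example", id="1", source="CIB 405635 185 186")
example.text = ("Il ministro del Lavoro e delle Pensioni britannico, Andrew Smith, ha rassegnato "
                "ieri le dimissioni nonostante i tentativi del premier Tony Blair di convincerlo a rimanere.")

# Unpack the pointer: corpus name, document id, token ids of the connective in that document.
corpus, doc_id, *connective_tokens = example.get("source").split()
print(corpus, doc_id, connective_tokens)
```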
Corpus examples picking.
Not all the connectives tagged as CONTRAST and CONCESSION in LICO are present in CIB, and the resource does not provide 5 examples for all the connectives of contrast it contains (some of them appear just once). Moreover, we want to retrieve examples also for the non-contrastive uses of the connectives, and these are not tagged in CIB. To reach our goal, we extend the search to other documents. On one hand, we considered another 357 documents of the newspaper "L'Adige" (the same source as CIB); we will refer to this source as Adige.
Results
The results of our work will be presented in three sections, considering that: the format used for registering the information in LICO has been updated, since we introduce new elements; a new list of connectives has been created, since we validated and modified the first version of LICO; and a revision of the polysemy of the connectives has been carried out with the new data.
The resulting format. Figure 1 shows the connective a dire la verità and its variants in LICO; in the central part of the figure, we can see how examples from the three different sources (i.e. CIB, Adige, Wikipedia) have been reported.
A new list of connectives of contrast. One important benefit of considering CIB, a corpus that has been exhaustively annotated with contrast relations, concerns the enrichment of the list of 38 connectives of contrast in LICO. As can be seen in Table 2, five of the connectives tagged with CONTRAST or CONCESSION, or both relations, in CIB are not present in LICO. More precisely, al contrario di and seppure were not present in LICO as connectives, while e, in realtà and se were in LICO but associated with a non-contrastive sense. On the other side, the corpus investigation led us to eliminate 3 connectives from this list: con tutto questo, a onor del vero, persino. All three were found in the corpus as connectives but not conveying a contrast relation; they have been kept in LICO as connectives of the PDTB sense EXPANSION:Conjunction.
Table 2: Connectives of contrast in Contrast-Ita Bank (CIB, 19 connectives in total) and in LICO (L, 38 in total); three were removed as they do not appear to convey the contrast relation. The connectives listed include: a dire la verità (in verità), a dire il vero, a dispetto di, a onor del vero, ad ogni modo, al contrario, al contrario di, anche, anche se, benché, bensì, ciononostante, cionondimeno, comunque, con tutto questo, se, sebbene, sennonché, seppure, solamente, solo, tuttavia, and viceversa.
We thus update the list of connectives (of contrast) in LICO, including the connectives from Contrast-Ita Bank and discarding those found not to convey this relation in the corpus; the final list contains 40 connectives (38 in LICO - 3 removed + 5 from CIB). This result, even if limited, highlights the importance of a corpus investigation for enriching a lexical resource.
Checking polysemy. The use of corpora not only makes it possible to discover new connectives of contrast, or contrastive uses of connectives that were already in LICO, but it also led us to review the polysemy of the already listed connectives of contrast. For example, we add the sense EXPANSION:Exception:Arg2-as-except to the connectives solamente and solo (both Eng. 'only'), since we find examples such as the following, in which solamente introduces an exception.
(2) L'attaccante quindi genera un certificato server falso, totalmente uguale al certificato vero, solamente che non è firmato dalla stessa CA. 10

Table 3 shows the data of the enrichment. Notice that globally the average number of examples for each entry almost doubled, even though we added only two entries to the resource. As already specified, the corpus analysis also led us to discover new senses for the connectives under examination (for a total of 214 senses over 175 entries, with respect to the previous 205 over 173 connectives).

10 Eng. "The attacker then generates a fake server certificate, totally equal to the true certificate, only that it is not signed by the same CA."
Conclusion and Further works
We have presented our project aiming at enriching the pre-existing resource LICO (Feltracco et al., 2016), a lexicon of discourse connectives for Italian, with real corpus data for connectives marking contrast. The adopted methodology partially takes advantage of a pre-existing resource in which discourse connectives of contrast are annotated, along with the arguments of the discourse relation they make explicit. A complementary investigation has been conducted by picking examples of the connectives in corpora and manually disambiguating their senses in the retrieved textual contexts. In particular, this latter strategy is replicable to enrich LICO with information about discourse connectives that convey relations other than contrast (e.g. temporal, causal), and it can also be adopted to enrich lexica of connectives in other languages. Corpus investigation can also be carried out in a more automatic way: for example, Bourgonje et al. (2017) use a parallel corpus to discover correspondences between connectives in different languages and point to gaps in the examined resources. In particular, the authors report on experiments to validate the list of connectives in DimLex (Stede, 2002) and LICO, in an effort to construct a bilingual lexicon of connectives that are connected via their discourse senses. In our case, since we want to extract clear examples and disambiguate their senses in context, we believe the manual disambiguation of connectives was necessary. The resulting resource represents a valuable tool for both linguistic analyses and the training of a classifier for NLP applications.
Figure 1: The connective a dire la verità and its variants in LICO.

8 Documents are from Italian Wikipedia, February 2010.
Table 1: The PDTB 3.0 hierarchy of relations (I level: CLASS; II level: TYPES; III level: SUBTYPES):

TEMPORAL: Synchronous; Asynchronous (Precedence, Succession)
CONTINGENCY: Cause (Reason, Result); Condition (Arg1-as-cond, Arg2-as-cond); Negative Condition (Arg1-as-negcond, Arg2-as-negcond); Purpose (Arg1-as-goal, Arg2-as-goal)
COMPARISON: Contrast; Similarity; Concession (Arg1-as-denier, Arg2-as-denier)
EXPANSION: Conjunction; Disjunction; Equivalence; Instanciation; Level-of-detail (Arg1-as-detail, Arg2-as-detail); Substitution (Arg1-as-subst, Arg2-as-subst); Exception (Arg1-as-except, Arg2-as-except); Manner (Arg1-as-manner, Arg2-as-manner)
Data                                           Pre     Post
# Connectives of Contrast in LICO              38      40
Average polysemy of the connectives in LICO    1.18    1.22
# examples per connective sense (average)      1.73    3.37

Table 3: Data on connectives of contrast in LICO pre and post corpus enrichment.
Original text: "Il termine connettivo indica in linguistica ciascuna delle forme invariabili[...], che indicano relazioni che strutturano 'logicamente' i significati della frase e del testo"(Ferrari, 2010).
http://www.treccani.it/enciclopedia/connettivi (Enciclopedia dell'Italiano)/
3 https://github.com/discourse-lab/dimlex/
4 http://www.linguist.univ-paris-diderot.fr/~croze/D/Lexconn.xml/
5 https://hlt-nlp.fbk.eu/technologies/lico
Specifically, Arg2 is the argument that is syntactically bound to the connective, and Arg1 the other one (Prasad et al., 2007).
7 Eng: Secretary of State for Work and Pensions Andrew Smith resigned yesterday, despite Prime Minister Tony Blair's attempts to persuade him to stay.
A deep examination highlights that the first two were retrieved from the resource Sabatini Coletti, which does not specify the relation they convey. The assignment to the contrast sense is probably derived from the fact that their synonyms (i.e. nonostante tutto for con tutto questo and a dire la verità for a onor del vero) are in fact conveying the contrast relation; in context, however, they seem to convey more the EXPANSION:Conjunction relation. For what concerns persino, it has been retrieved as a translation of the German auch, which however is not associated with the contrast relation in the DimLex lexicon; we did not find examples of its contrastive use.
Asher, N. and Lascarides, A. (2003). Logics of conversation. Cambridge University Press.
Bazzanella, C. (1995). I segnali discorsivi. Grande grammatica italiana di consultazione, 3:225-257.
Bourgonje, P., Grishina, Y., and Stede, M. (2017). Toward a bilingual lexical database on connectives: Exploiting a German/Italian parallel corpus. In Proceedings of the Fourth Italian Conference on Computational Linguistics (CLiC-it 2017).
Feltracco, A., Jezek, E., Magnini, B., and Stede, M. (2016). LICO: A Lexicon of Italian Connectives. In Proceedings of the Third Italian Conference on Computational Linguistics (CLiC-it 2016), page 141.
Feltracco, A., Magnini, B., and Jezek, E. (2017). Contrast-Ita Bank: A corpus for Italian annotated with discourse contrast relations. In Proceedings of the Fourth Italian Conference on Computational Linguistics (CLiC-it 2017).
Ferrari, A. and Zampese, L. (2000). Dalla frase al testo: una grammatica per l'italiano. Zanichelli.
Ferrari, A. (2010). Connettivi. In Enciclopedia dell'Italiano, diretta da Raffaele Simone, con la collaborazione di Gaetano Berruto e Paolo D'Achille. Roma, Istituto della Enciclopedia Italiana.
Prasad, R., Miltsakaki, E., Dinesh, N., Lee, A., Joshi, A., Robaldo, L., and Webber, B. L. (2007). The Penn Discourse Treebank 2.0 Annotation Manual.
Roze, C., Danlos, L., and Muller, P. (2012). LEXCONN: a French lexicon of discourse connectives. Discours. Revue de linguistique, psycholinguistique et informatique, (10).
Sabatini, F. and Coletti, V. (2007). Dizionario della lingua italiana 2008. Milano: Rizzoli Larousse.
Scheffler, T. and Stede, M. (2016). Adding Semantic Relations to a Large-Coverage Connective Lexicon of German. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC '16), Portoroz, Slovenia, May.
Stede, M. and Heintze, S. (2004). Machine-assisted rhetorical structure annotation. In Proceedings of the 20th International Conference on Computational Linguistics, pages 425-431, Geneva.
Stede, M. and Umbach, C. (1998). DiMLex: A lexicon of discourse markers for text generation and understanding. In Proceedings of the 17th International Conference on Computational Linguistics, Volume 2, pages 1238-1242. Association for Computational Linguistics.
Stede, M. (2002). DiMLex: A lexical approach to discourse markers. In Exploring the Lexicon - Theory and Computation. Edizioni dell'Orso, Alessandria.
Stede, M. (2012). Discourse processing. Morgan & Claypool Publishers.
Webber, B., Prasad, R., Lee, A., and Joshi, A. (2016). A Discourse Annotated Corpus of Conjoined VPs. LAW X, page 22. |
26,582,300 | Spell-Checking based on Syllabification and Character-level Graphs for a Peruvian Agglutinative Language | There are several native languages in Peru which are mostly agglutinative. These languages are transmitted from generation to generation mainly in oral form, causing different forms of writing across different communities. For this reason, there are recent efforts to standardize the spelling in the written texts, and it would be beneficial to support these tasks with an automatic tool such as a spell-checker. In this way, this spelling corrector is being developed based on two steps: an automatic rule-based syllabification method and a character-level graph to detect the degree of error in a misspelled word. The experiments were realized on Shipibo-konibo, a highly agglutinative and Amazonian language, and the results obtained have been promising in a dataset built for the purpose. | [] | Spell-Checking based on Syllabification and Character-level Graphs for a Peruvian Agglutinative Language
Carlo Alva (carlo.alva@pucp.pe) and Arturo Oncevay-Marcos
Facultad de Ciencias e Ingeniería / Departamento de Ingeniería, GRPIAA, Pontificia Universidad Católica del Perú

Spell-Checking based on Syllabification and Character-level Graphs for a Peruvian Agglutinative Language

Proceedings of the First Workshop on Subword and Character Level Models in NLP, Copenhagen, Denmark, September 7, 2017. Association for Computational Linguistics.
There are several native languages in Peru which are mostly agglutinative. These languages are transmitted from generation to generation mainly in oral form, causing different forms of writing across different communities. For this reason, there are recent efforts to standardize the spelling in the written texts, and it would be beneficial to support these tasks with an automatic tool such as a spell-checker. In this way, this spelling corrector is being developed based on two steps: an automatic rule-based syllabification method and a character-level graph to detect the degree of error in a misspelled word. The experiments were realized on Shipibo-konibo, a highly agglutinative and Amazonian language, and the results obtained have been promising in a dataset built for the purpose.
Introduction
In Peru, there are several native languages spoken throughout the diverse native communities in the Amazonian region, such as Asháninka, Kakataibo, and Shipibo-konibo, among others (Rivera, 2001). These languages, in spite of being very different from each other (47 languages in 21 linguistic families), share some features related to their morphology and the context in which they are used.
Regarding the morphology of the Amazonian languages, they are highly agglutinative, where suffixes predominates over prefixes. This characteristic distances them a lot from Spanish, the main official language in the country, and even the structural order is also different.
On the other hand, these languages are used and transmitted mainly in an oral way, such as in story-telling, poetry and everyday life in the native communities. This causes differences in the way of writing between communities, and even among people in the same community (Aikman, 1999). For this reason, the texts that were written in these languages did not have an orthographic standard to guide them.
Thus, it is a must to support the educational process of these languages for these communities, and from the computational side, the main way to help them would be through the development of automatic tools or functions that process the specific language, in order to assist tasks related to generating written material such as educational books.
In that way, this project aims to develop a spell-checker focused on Shipibo-konibo, an Amazonian language that is one of the most studied by linguists (Valenzuela, 2003); there are also efforts from the computer science field to develop a basic toolkit for it (Pereira et al., 2017).
As this kind of language possesses a rich morphology, the spelling corrector focuses on processing subword parts, such as syllables and characters, developing data structures and functions that can help in the process of identifying a misspelled word and suggesting potential corrected alternatives.
This study is organized as follows. In the next section, we describe studies related to the implementation of spelling correctors focusing on low-resource languages or language-independent models. Then, the subword approach for the resources used (data structures and functions) is detailed. After that, Section 4 describes the proposed spelling corrector, while Section 5 presents the experimentation and results obtained. Finally, the conclusions and future work are discussed in Section 6.
Related work
The related work focuses on studies regarding the development of spell-checkers in a low-resource scenario or with a language-independent approach.
Firstly, Barari and QasemiZadeh presented a tool called "CloniZER Spell Checker Adaptive". It consists of an adaptive method that uses internal error pattern recognition based on a ternary search tree. Thanks to this approach, the spell-checker is independent of the language, because it is not based on specific rules from a particular language or corpus (these can be replaced). An interesting part of this approach resides in the support of the tree with variable weighted edges. The weight modifications are made through interaction with a user: the method learns error patterns, and thus the suggestion of solutions keeps improving.
Another relevant work is by Abdullah and Rahman, who developed a generic spelling correction engine for South Asian languages. It uses circular lists in which words are grouped by phonetic similarity, and an algorithm adapted from Recursive Simulation (Lee, 1997) constructs a possibly correct word that is similar to the misspelled word. An interesting additional resource they used was a dictionary of words in which they kept the misspelled words that users wrote. This method is favorable for languages that are phonetically similar to others, such as the Panoan language family, to which Shipibo-konibo belongs.
Aduriz et al. presented a corrector based on morphology, in which morphological analysis is used to perform two-level morphological decomposition for spelling errors, while for typographical errors a recognition of morphemes is used in the generation of correct words. This approach is interesting since they additionally use a lexicon that they keep improving and a set of rules that help to map the lexical level and the surface level due to morphological transformation (the phonological representation of the morphemes).
Finally, Wasala et al. presented a proofreader for the Sinhala language. It uses a statistical model based on n-grams; this approach is based on assigning probabilities to a sequence of words, where the sequence is determined by the n-gram order. An example of a 2-gram, or bigram, is: "try this". The chosen approach offers relative ease of construction and avoids the problem of having few linguistic resources. The algorithm that is proposed for the orthographic correction uses 4 modules: pre-processing, generation of permutations, selection of best suggestions, and post-processing. The interesting thing about this algorithm is that, after pre-processing, it performs a search for words with similar sounds or phonemes, thus generating possible solutions, which are then improved based on n-gram statistics, applying unigrams to trigrams.
Subword Resources
As a first step, it is necessary to specify what type of resources, such as data structures, were used for the development of the spell checker.
Character-level Graph
The first structure that is needed is a graph, as shown in Figure 1. The nodes represent the characters that are used in the Shipibo-konibo (SHP) vocabulary, while the edges are the (weighted) relationships between each pair of them (this information is extracted from a corpus). Specifically, the alphabet of SHP contains 4 vowels and 15 consonants, as can be seen in Table 1. It is important to note that not all of the nodes will be used in the whole process, due to the proposed n-gram based approach.
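As an illustration, the weighted character graph can be built from corpus counts as in the following Python sketch. This is not the authors' implementation; the function and variable names are hypothetical, and the example tokens are only illustrative.

```python
from collections import defaultdict

def build_character_graph(corpus_words):
    """Build a character-level graph: nodes are the characters observed in
    the corpus, and each directed edge (c1, c2) is weighted by how often
    c2 follows c1 (i.e. a character-bigram count)."""
    edge_weights = defaultdict(int)   # (c1, c2) -> count
    node_counts = defaultdict(int)    # character -> frequency
    for word in corpus_words:
        for char in word:
            node_counts[char] += 1
        for c1, c2 in zip(word, word[1:]):
            edge_weights[(c1, c2)] += 1
    return node_counts, edge_weights

# Illustrative usage with a few SHP-like tokens.
nodes, edges = build_character_graph(["jakon", "joni", "noa"])
print(edges[("j", "o")])  # how often 'o' follows 'j' in the corpus
```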
Syllable-level Graph
Another structure, needed to improve the selection of possible solutions by the algorithm, is a syllable-level graph. The nodes are the syllables that can be formed in the SHP grammar, and the edges represent the potential proximity relationship between two syllables (also extracted from a corpus). The number of grammatically correct syllables in SHP is 576, and with these syllables, all the word entries of a SHP dictionary can be generated.

Table 2: Syllabic patterns in SHP.
Syllabification function
There are 4 syllabic patterns in the SHP language, and these are represented in Table 2. There are 3 positions named Attack, Core and Coda.
The syllabification function helps to improve the selection of solutions in the correction algorithm. It has been developed based on the existing rules of Shipibo-konibo grammar, and it supports the process because it allows each word of the dictionary to be separated into syllables in order to create the syllable graph. In addition, the syllabification function is used as a filter to identify whether a word is well formed or not. A minimal sketch of such a rule-based syllabifier is given below.
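The following greedy sketch, in Python, assumes that the four syllabic patterns of Table 2 are V, CV, VC and CVC and that the SHP vowels are a, e, i, o; both assumptions are ours (the original table content is not reproduced here), and digraph consonants are ignored for simplicity.

```python
VOWELS = set("aeio")  # assumption: the four SHP vowels are a, e, i, o

def syllabify(word, patterns=("CVC", "CV", "VC", "V")):
    """Greedy, longest-first rule-based syllabification. Returns the list
    of syllables, or None if the word cannot be exhaustively split
    (which makes it usable as a well-formedness filter)."""
    def shape(chunk):
        return "".join("V" if ch in VOWELS else "C" for ch in chunk)

    syllables, i = [], 0
    while i < len(word):
        for size in (3, 2, 1):
            chunk = word[i:i + size]
            if len(chunk) == size and shape(chunk) in patterns:
                syllables.append(chunk)
                i += size
                break
        else:
            return None  # no pattern matched: word rejected by the filter
    return syllables

print(syllabify("jakon"))  # ['jak', 'on'] under this greedy strategy
```

A fuller implementation would add backtracking and handle digraphs, but the sketch shows how the pattern table drives both syllabification and filtering.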
Dictionary for previous corrections
As another needed resource, there is a custom-built dictionary structure that stores misspelled words that have already been corrected, in order to avoid correcting the same error again.
The key is the misspelled word, and this is associated with a list of words corrected previously as is shown in Figure 2.
Proposed Spelling Corrector
First, the text is tokenized. After that, there is a verification process for each word to determine whether the term has been corrected before, belongs to the Spanish language, or exceeds the threshold of a modified language model. In those cases the suggestion is sent directly, as can be appreciated in Figure 3. Otherwise, the word goes to the correction process, in which a graph is traversed in order to identify possible correct words. A filter is also applied in which the word must be syllabified, and finally a score is computed while traversing a graph of syllables, together with another score based on edit distance. The results are ranked by score and assigned as suggestions for each corrected word. A sketch of this top-level flow is shown below.
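The Python sketch below mirrors the decision steps just described. It is not the authors' implementation: the parameter names (corrected_before, spanish_words, lm_score, correct) are placeholders for the structures and routines described in the rest of this section.

```python
def check_text(tokens, corrected_before, spanish_words, lm_score, threshold, correct):
    """Top-level flow: pass a token through directly if it was corrected
    before, is Spanish, or scores above the language-model threshold;
    otherwise send it to the correction routine. `lm_score` and `correct`
    are the syllable language model and correction step, passed in as
    functions."""
    output = []
    for token in tokens:
        if token in corrected_before:
            output.append(corrected_before[token])  # reuse a past correction
        elif token in spanish_words:
            output.append(token)                    # Spanish word, keep as is
        elif lm_score(token) >= threshold:
            output.append(token)                    # looks well formed
        else:
            output.append(correct(token))           # needs correction
    return output
```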
4.1 Spell-checking algorithm

4.1.1 Identifying a misspelled word

First, the text is tokenized, and the numbers and punctuation are removed. The positions of these filtered characters are saved, in order to replace them after the correction process is complete. Besides, the text is transformed to lowercase letters and the accent marks are removed.
As there is a dictionary structure that stores previous corrections, a search of the misspelled word is performed.
If the terms are not found in the previous dictionary structure, it is necessary to identify if they belong to the Spanish language using a corpus (Davies, 2002), a feature shared with other native languages in Perú.
Words that are not found in this Spanish corpus are evaluated by the syllable language model. This model sums up the weights assigned to each syllable found in the word; if this value does not surpass an empirically calculated threshold, the word is marked as misspelled and becomes the input for the next stage. A sketch of this scoring is shown below.
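A minimal sketch of the syllable-based scoring follows, assuming that the syllable weights are the counts collected from the corpus and that the syllabifier returns None for unsyllabifiable words; the actual threshold value is not reproduced here.

```python
def syllable_lm_score(word, syllable_weights, syllabify):
    """Sum of the corpus weights of the syllables in the word. A word
    whose score does not reach the calibrated threshold is flagged as
    misspelled by the caller."""
    syllables = syllabify(word)
    if syllables is None:      # cannot be syllabified at all
        return 0.0
    return sum(syllable_weights.get(s, 0.0) for s in syllables)

# e.g.: flagged = syllable_lm_score(word, weights, syllabify) < THRESHOLD
```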
Correcting a misspelled word
For this stage, both the letter and syllable graphs described in the previous section are loaded, and each word that has to be corrected is evaluated. The number of vowels in the word allows approximating the number of syllables that can be identified. Since the absence or addition of a vowel may be an error, a slightly wider range of syllable counts is considered for the word: [number of syllables - 1; number of syllables + 1]. This helps in the generation of the possible correct solutions for each word. The next step is the solution search. This is done by recursively traversing the misspelled word letter by letter. In this way, each possible syllable combination is contrasted against the rule-based syllabification model and the considered range of syllables. Also, each solution receives a value calculated as the sum of the repetitions of each of its letters in the dictionary and the value of the connections in the graph. While traversing the misspelled word, when a mistake is found, different paths are generated: the first one comes from deleting the wrong letter, the next from swapping the letter with the previous one, and the other paths are generated by replacing the wrong letter with related letters in the graph. In this way, by creating several paths, different solutions are generated. A simplified sketch of this candidate generation is given below.
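The sketch below illustrates the three operations (deletion, transposition with the previous character, substitution with related characters) with a cutoff depth. For brevity it applies the operations at every position, whereas the system described above only branches where the syllabification filter signals an error; all names are illustrative.

```python
def generate_candidates(word, related, max_depth=2):
    """Generate candidate corrections by deleting a character, swapping it
    with the previous one, or replacing it with characters related to it
    in the character graph. `related` maps a character to the characters
    it is connected to; `max_depth` is the recursion cutoff that keeps
    edit paths short."""
    candidates = set()

    def expand(current, depth):
        if depth > max_depth or not current:
            return
        candidates.add(current)
        for i in range(len(current)):
            # deletion of the character at position i
            expand(current[:i] + current[i + 1:], depth + 1)
            # transposition with the previous character
            if i > 0:
                swapped = (current[:i - 1] + current[i]
                           + current[i - 1] + current[i + 1:])
                expand(swapped, depth + 1)
            # substitution with related characters from the graph
            for r in related.get(current[i], ()):
                expand(current[:i] + r + current[i + 1:], depth + 1)

    expand(word, 0)
    return candidates
```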
Once the search for correct solutions is completed, they are evaluated through a modified language model. For that purpose, the second graph, containing syllables and the counts of the connections between syllables, is used. To start the process, each possible solution is separated into syllables and the sum of the connections of these syllables is calculated using the graph. Additionally, the Damerau-Levenshtein edit distance is calculated. Once the two values are computed, a weight of 90% is applied to the Damerau-Levenshtein distance and 10% to the sum of syllabic relationships; these values of 90% and 10% were chosen after testing to identify which combination optimized the results. After the calculations for each word are finalized, the three possible correct words with the best values are chosen to be returned as the solution. A sketch of this ranking step follows.
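The following sketch ranks the candidates with the 90%/10% combination just described. The jellyfish library is used here only as a convenient source of the Damerau-Levenshtein distance, and the exact way the two weighted terms are combined (lower distance preferred, higher syllable connectivity preferred) is our reading of the description rather than the original implementation.

```python
import jellyfish  # assumption: external library providing Damerau-Levenshtein

def rank_candidates(misspelled, candidates, syllable_pair_counts, syllabify):
    """Score each candidate with 0.9 * edit distance minus 0.1 * syllable
    connectivity, and return the three best-scoring words."""
    def score(candidate):
        distance = jellyfish.damerau_levenshtein_distance(misspelled, candidate)
        syllables = syllabify(candidate) or []
        syll_score = sum(syllable_pair_counts.get(pair, 0)
                         for pair in zip(syllables, syllables[1:]))
        # lower distance is better; higher syllable connectivity is better
        return 0.9 * distance - 0.1 * syll_score

    return sorted(candidates, key=score)[:3]
```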
Suggestions component
The suggestions component has been defined so that the user can make corrections to the solution that the application provides, allowing the results to improve, since the correction will not be totally accurate, and adding new elements to the corrected-list structure. It starts when the user selects the corrected word, activating a menu with the additional suggested words that were obtained when performing the correction. Then, the user selects the option that seems more precise and it is changed in the corrected text. Once this change is made, an additional change is made to the internal structures: the correct word is added to the corrected-list structure and the values are updated in the internal graph that is used by the correction algorithm.
Experimentation and Results
An experiment was carried out to establish the effectiveness of the spell-checker, using metrics proposed for the evaluation of this type of project (van Huyssteen et al., 2004).
Dataset
To construct the dataset, there was more than one source. The first is a Shipibo-konibo-Spanish dictionary which, through pre-processing, has been updated to the new writing rules and consists of 5486 words. The second source is a set of texts separated by domain (educational, legal and religious) that were translated from Spanish to Shipibo-konibo. The educational domain consists of 2097 sentences with 16789 words, the legal domain consists of 957 sentences with 16794 words, and the religious domain consists of 13482 sentences with 212921 words. This is the initial dataset that helps to construct the graphs that are needed in the correction algorithm.
The dataset is available at a website project. 1
Design of the experiment
To perform the experiment, in which the effectiveness of the algorithm is tested, it was decided to use different sentences extracted from the Shipibo-konibo dictionary (Lauriout et al., 1993). These sentences correspond to the examples of each dictionary entry. As the dictionary does not follow the new official changes, we proceeded to perform a pre-processing step to update all the words in the example sentences. After cleaning, 2 types of tests were generated. The first test randomly adds a character to some words in the sentence. The second test adds, deletes or changes characters of some words at random in the sentence.
Three columns are created in a table: the first column contains the original sentence, which is already cleaned; the second column contains the sentence with the first type of error; and the last column contains the sentence with more than one type of error. Examples are shown in Tables 3 and 4. The file with the generated tables is used as input to perform the experiment.
Results
The correction of the 2 types of test was done with 5121 sentences. In order to evaluate the correct functioning, the Recall, Precision, measure of suggestions and general performance metrics proposed for the evaluation of spell-checkers (van Huyssteen et al., 2004) were used. When counting the number of words in the sentences, the result was 55786 words; these included correct words and misspelled words, which allows a better evaluation of the metrics. To calculate the recall and precision of the results, the following counts are used:
• True positives (Tp): misspelled words that are well corrected.
• True negatives (Tn): misspelled words that are poorly corrected.
• False positives (Fp): correct words that are well corrected.
• False negatives (Fn): correct words that are poorly corrected.
To calculate the measure of suggestions, a score is assigned to each correction depending on the position of the correct word among the suggestions. Upon completion, the sum of all the scores obtained from the corrections is divided by the number of positives to find the value of the suggestion measure. The scores used are:

• Correction in the first position of the suggestions: 1 point
• Correction in some position of the suggestions: 0.5 points
• No correct suggestion: -0.5 points
• No suggestion: 0 points

With the values in Tables 5 and 6, it was possible to calculate the recall and precision metrics with formulas 1 and 2:
recall = Tp / (Tp + Fp)    (1)
precision = Tp / (Tp + Tn)    (2)
Finally, to find the value of overall performance of the corrector, the following formula 3 is used:
overall = (Tp + Fp) / (Tp + Tn + Fp + Fn)    (3)
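For concreteness, the three formulas (using the authors' definitions of Tp, Tn, Fp and Fn) can be computed as below; the example uses the type-2 counts reported in Table 6.

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Formulas (1)-(3) above."""
    recall = tp / (tp + fp)
    precision = tp / (tp + tn)
    overall = (tp + fp) / (tp + tn + fp + fn)
    return recall, precision, overall

print(evaluation_metrics(3504, 9769, 14242, 111))
# roughly (0.20, 0.26, 0.64), close to the values reported in Table 8
```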
With the results obtained in Table 7 and Table 8, the values of the recall and precision metrics are low; however, this is because the spell-checker has yet to be improved to better identify the errors. What would improve its effectiveness is the ability to better detect words that do not really need to be corrected, because, as can be appreciated, many of the words that are corrected are words that did not need it. An approach to face this problem is to take advantage of the available corpus to perform a search for the word and identify whether it already exists in the corpus. This would avoid unnecessary corrections, but would mean an increase in time due to a search for each word which, although it would make little difference in short texts, would be noticeable in longer texts. Despite the low numbers in recall and precision, as can be seen in Figure 5, the spell-checker gets a good result at the general level, because both misspelled and well-written words that are handled correctly count as correct results. In addition, it can be seen that more than half the time a good correction proposal has been found, and 25% of the time these corrections are in the first position as a suggestion, in the two types of tests performed.
Another experiment examined the ranking of the suggestions; for this case we use four elements: the first position, the top-3, the top-5 and the top-7. These results can be seen in Table 9. In most cases, the corrected suggestions are in the first positions, with low values in the following positions, in both types of errors.

Top    Type of errors 1    Type of errors 2
1      13400               12695
3      722                 376
5      665                 337
7      6572                4338

Table 9: Number of words in top suggestions by type of errors.
Conclusions and Future Work
In this study, a hybrid approach was proposed for the development of a spell-checker for Shipibo-konibo. This method was supported by the implementation of linguistic rules (in the syllabification process) and the information obtained from different text corpora for the language. One of the difficult tasks was the use of recursion when following different paths after finding an error in a word (since it is possible to change a character, add one, or remove one). That is why a cutoff depth was used, which helps to avoid creating long paths.
Finally, the results obtained were not very promising, because the approach followed is not enough to obtain a precise correction. However, as future work, a context analysis will be integrated, using word embeddings with a character-level model to identify terms that should not be corrected, because they may be well written but not suitable in the context of a sentence.
Figure 1: Character-level graph.
Figure 2: Sample of a misspelled-corrected dictionary.
Figure 3: Identifying a misspelled word.
Figure 4: Correcting a misspelled word.
Figure 5: Metrics by type of error.
Table 1: Vowels and consonants in Shipibo-konibo.
Sentence                            Sentence for test type 1
nokon wáióroa pekaora ea náshiai    nyokon wái aóroa poekaora eda náshiai
eara nénoa iki                      eara nénopa iki
rámara títa ka cónko iki            rámara títaa ka cónko ikii

Table 3: Example of sentences for the test 1.

Sentence                            Sentence for test type 2
nokon wáióroa pekaora ea náshiai    nokon wájióra dpekaora a náshiai
eara nénoa iki                      eaora néoa oiki
rámara títa ka cónko iki            rámara títa k cónko iki

Table 4: Example of sentences for the test 2.
Table 5: Experiment type 1 data.

Data                                                 Value
True positive                                        3504
True negative                                        9769
False positive                                       14242
False negative                                       111
Correction in the first position of the suggestion   12695
Correction in some position of the suggestions       5051
No correct suggestion                                9980
No suggestion                                        510

Table 6: Experiment type 2 data.
Data                   Value
Recall                 0.33
Precision              0.53
Suggested Measure      14117.5 points
Overall Performance    0.76

Table 7: Resulting metric type 1.

Data                   Value
Recall                 0.19
Precision              0.26
Suggested Measure      10230.5 points
Overall Performance    0.64

Table 8: Resulting metric type 2.
1 chana.inf.pucp.edu.pe/resources
Acknowledgments

For this study, the authors acknowledge the support of the "Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica" (CONCYTEC Perú) under the contract 225-2015-FONDECYT.
ABA Abdullah and Ashfaq Rahman. 2003. A generic spell checker engine for South Asian languages. In Conference on Software Engineering and Applications (SEA 2003), pages 3-5.
Itziar Aduriz, Miriam Urkia, Inaki Alegria, Xabier Artola, Nerea Ezeiza, and Kepa Sarasola. 1997. A spelling corrector for Basque based on morphology. Literary and Linguistic Computing, 12(1):31-38.
Sheila Aikman. 1999. Intercultural education and literacy: An ethnographic study of indigenous knowledge and learning in the Peruvian Amazon, volume 7. John Benjamins Publishing.
Loghman Barari and Behrang QasemiZadeh. 2005. CloniZER spell checker: adaptive, language-independent spell checker. In AIML 2005 Conference CICC, Cairo, Egypt, pages 19-21.
M. Davies. 2002. Corpus del español: 100 million words, 1200s-1900s. Available online at http://www.corpusdelespanol.org.
Erwin Lauriout, Dwight Day, and James Loriot. 1993. Diccionario shipibo-castellano.
Lung-fei Lee. 1997. A smooth likelihood simulator for dynamic disequilibrium models. Journal of Econometrics, 78(2):257-294.
Jose Pereira, Rodolfo Mercado, Andres Melgar, Marco Sobrevilla-Cabezudo, and Arturo Oncevay-Marcos. 2017. Ship-LemmaTagger: building an NLP toolkit for a Peruvian native language. In Text, Speech, and Dialogue: 20th International Conference, TSD 2017. Springer (in press).
Andrés Chirinos Rivera. 2001. Atlas lingüístico del Perú, volume 6. Centro Bartolomé de Las Casas.
Pilar Valenzuela. 2003. Transitivity in Shipibo-konibo grammar. Ph.D. thesis, University of Oregon.
Gerhard B. van Huyssteen, E.R. Eiselen, and M.J. Puttkammer. 2004. Re-evaluating evaluation metrics for spelling checker evaluations. In Proceedings of the First Workshop on International Proofing Tools and Language Technologies, pages 91-99.
Ruwan Asanka Wasala, Ruwan Weerasinghe, Randil Pushpananda, Chamila Liyanage, and Eranga Jayalatharachchi. 2011. An open-source data driven spell checker for Sinhala. ICTer, 3(1).
17,895,526 | Cognate and Misspelling Features for Natural Language Identification | We apply Support Vector Machines to differentiate between 11 native languages in the 2013 Native Language Identification Shared Task. We expand a set of common language identification features to include cognate interference and spelling mistakes. Our best results are obtained with a classifier which includes both the cognate and the misspelling features, as well as word unigrams, word bigrams, character bigrams, and syntax production rules. | [
16050554,
13582931,
17934925
] | Cognate and Misspelling Features for Natural Language Identification
Garrett Nicolai (nicolai@ualberta.ca), Bradley Hauer (bmhauer@ualberta.ca), Mohammad Salameh (msalameh@ualberta.ca), Lei Yao, and Grzegorz Kondrak (gkondrak@ualberta.ca)
Department of Computing Science, University of Alberta, Edmonton, AB, Canada

Cognate and Misspelling Features for Natural Language Identification

Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, Atlanta, Georgia, June 13.
We apply Support Vector Machines to differentiate between 11 native languages in the 2013 Native Language Identification Shared Task. We expand a set of common language identification features to include cognate interference and spelling mistakes. Our best results are obtained with a classifier which includes both the cognate and the misspelling features, as well as word unigrams, word bigrams, character bigrams, and syntax production rules.
Introduction
As the world becomes more inter-connected, an increasing number of people devote effort to learning one of the languages that are dominant in the global community. English, in particular, is studied in many countries across the globe. The goal is often related to increasing one's chances to obtain employment and succeed professionally. The language of work-place communication is often not a speaker's native language (L1) but their second language (L2). Speakers and writers of the same L1 can sometimes be identified by similar L2 errors. The weak Contrastive Analysis Hypothesis (Jarvis and Crossley, 2012) suggests that these errors may be a result of L1 causing linguistic interference; that is, common tendencies of a speaker's L1 are superimposed onto their L2. Native Language Identification, or NLI, is an attempt to exploit these errors in order to identify the L1 of the speaker from texts written in L2.
Our group at the University of Alberta was unfamiliar with the NLI research prior to the announcement of a shared task. However, we saw it as an opportunity to apply our expertise in character-level NLP to a new task. Our goal was to propose novel features, and to combine them with other features that have been previously shown to work well for language identification.
In the end, we managed to define two feature sets that are based on spelling errors made by L2 writers. Cognate features relate a spelling mistake to cognate interference with the writer's L1. Misspelling features identify common mistakes that may be indicative of the writer's L1. Both feature sets are meant to exploit the Contrastive Analysis Hypothesis, and benefit from the writer's L1 influence on their L2 writing.

Related Work

Koppel et al. (2005b) approach the NLI task using Support Vector Machines (SVMs). They experiment with features such as function-word unigrams, rare part-of-speech bigrams, character bigrams, and spelling and syntax errors. They report 80% accuracy across 5 languages. We further investigate the role of word unigrams and spelling errors in native language identification. We consider not only function words, but also content words, as well as word bigrams. We also process spell-checking errors with a text aligner to find common spelling errors among writers with the same L1. Tsur and Rappoport (2007) also use SVMs on the NLI task, but limit their feature set to character bigrams. They report 65% accuracy on 5 languages, and hypothesize that the choice of words when writing in L2 is strongly affected by the phonology of their L1. We also consider character bigrams in our feature set, but combine them with a number of other features. Wong and Dras (2011) opt for a maximum entropy classifier, and focus more on syntax errors than lexical errors. They find that syntax tree production rules help their classifier in a seven-language classification task. They only consider non-lexicalized rules, and rules with function words. In contrast, we consider both lexicalized and non-lexicalized production rules, and we include content words. Bergsma et al. (2012) consider the NLI task as a sub-task of the authorship attribution task. They focus on the following three questions: (1) whether the native language of the writer of a paper is English, (2) what is the gender of the writer, and (3) whether a paper is a conference or workshop paper. The authors conclude that syntax aids the native language classification task, further motivating our decision to use part-of-speech n-grams and production rules as features for our classifier. Furthermore, the authors suggest normalizing text to reduce sparsity, and implement several meta-features that they claim aid the classification.
Classifier
Following Koppel et al. (2005b) and others, we perform classification with SVMs. We chose the SVM-Multiclass package, a version of the SVMlight package (Joachims, 1999) specifically modified for multi-class classification problems. We use a linear kernel, and two hyperparameters that were tuned on the development set: the C soft-margin regularization parameter, which measures the tradeoff between training error and the size of the margin, and epsilon, which is used as a stopping criterion for the SVM. C was tuned to a value of 5000, and epsilon to a value of 0.1.
Features
As features for our SVM, we used a combination of features common in the literature and new features developed specifically for this task. The features are listed in the following section.
Word n-grams
Following previous work, we use word n-grams as the primary feature set. We normalize the text before selecting n-grams using the method of Bergsma et al. (2012). In particular, all digits are replaced with a representative '0' character; for example, '22' and '97' are both represented as '00'. However, unlike Koppel et al. (2005b), we incorporate word bigrams in addition to word unigrams, and utilize both function words and content words.
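The digit normalization and n-gram extraction can be illustrated as follows; this is a hedged sketch rather than the authors' code, and the tokenization is assumed to have been done beforehand.

```python
import re

def normalize(token):
    """Digit normalization described above: every digit is replaced
    with a representative '0', so '22' and '97' both become '00'."""
    return re.sub(r"\d", "0", token)

def word_ngrams(tokens, n):
    """All word n-grams of a (normalized) token sequence."""
    tokens = [normalize(t) for t in tokens]
    return list(zip(*(tokens[i:] for i in range(n))))

print(word_ngrams(["he", "paid", "97", "dollars"], 2))
# [('he', 'paid'), ('paid', '00'), ('00', 'dollars')]
```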
Function Words
Using a list of 295 common function words, we reduce each document to a vector of values representing their presence or absence in a document. All other tokens in the document are ignored. When constructing vectors of bigrams, any word that is not on the list of function words is converted to a placeholder token. Thus, most of our function-word bigrams consist of a single function word preceded or followed by a placeholder token.
Content Words
Other than the normalization mentioned in Section 4.1, all tokens in the documents are allowed as possible word unigrams. No spelling correction is used for reducing the number of word n-grams. Furthermore, we consider all token unigrams that occur in the training data, regardless of their frequency.
An early concern with token bigrams was that they were both large in number, and sparse. In an attempt to reduce the number of bigrams, we conducted experiments on the development set with different numbers of bigrams that exhibited the highest information gain. It was found that using all combinations of word bigrams improved predictive accuracy the most, and did not lead to a significant cost to the SVM. Thus, for experiments on the test set, all token bigrams that were encountered in the training set were used as features.
Character n-grams
Following Tetreault et al. (2012), we utilize all character bigrams that occur in the training data, rather than only the most frequent ones. However, where the literature uses either binary indicators or relative frequency of bigrams as features, we use a modified form of the relative frequency in our classifier.
In a pre-processing step, we calculate the average frequency of each character bigram across all training documents. Then, during feature extraction, we again determine the relative frequency of each character bigram across documents. We then use binary features to indicate if the frequency of a bigram is higher than the average frequency. Experiments conducted on the development set showed that although this modified frequency was out-performed by the original relative frequency on its own, our method performed better when further features were incorporated into the classifier.
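A minimal sketch of this above-average indicator feature is given below; function names are ours, and the feature set of the real system covers all bigrams observed in training rather than only those of a single document.

```python
from collections import Counter

def char_bigrams(text):
    """Character bigram counts of a document string."""
    return Counter(zip(text, text[1:]))

def average_relative_freq(training_docs):
    """Pre-processing step: average relative frequency of each character
    bigram across all training documents."""
    sums = Counter()
    for doc in training_docs:
        counts = char_bigrams(doc)
        total = sum(counts.values()) or 1
        for bg, c in counts.items():
            sums[bg] += c / total
    return {bg: s / len(training_docs) for bg, s in sums.items()}

def above_average_features(doc, average_freq):
    """Binary indicator per bigram: 1 if the document's relative frequency
    exceeds the training average, 0 otherwise."""
    counts = char_bigrams(doc)
    total = sum(counts.values()) or 1
    return {bg: int(counts[bg] / total > average_freq.get(bg, 0.0))
            for bg in counts}
```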
Part-of-speech n-grams
All documents are tagged with POS tags using the Stanford parser (Klein and Manning, 2003). From the documents in the training data, a list of all POS bigrams was generated, and documents were represented by binary indicators of the presence or absence of a bigram in the document. As with character bigrams, we did not simply use the most common bigrams, but rather considered all bigrams that appeared in the training data.
Syntax Production Rules
After generating syntactic parse trees with the Stanford Parser, we extract all possible production rules from each document, including lexicalized rules. The features are binary; if a production rule occurs in an essay, its value is set to 1, and 0 otherwise. For each language, we use information gain for feature selection to select the most informative production rules, as suggested by Wong and Dras (2011). Experiments on the development set indicated that information gain is superior to raw frequency for the purpose of syntax feature selection. Since the accuracy increased as we added more production rules, the feature set for final testing includes all production rules encountered in the training set. The majority of the rules are of the form POS ⇒ terminal. We hypothesized that most of the information contained in these rules may be already captured by the word unigram features. However, experiments on the development set suggested that the lexicalized rules contain information that is not captured by the unigrams, as they led to an increase in predictive accuracy.

Spelling Errors

Koppel et al. (2005a) suggested spelling errors could be helpful, as writers might be affected by the spelling convention in their native languages. Moreover, spelling errors also reflect the pronunciation characteristics of the writers' native languages. They identified 8 types of spelling errors and collected the statistics of each error type as their features. Unlike their approach, we focus on the specific spelling errors made by the writers, because 8 types may be insufficient to distinguish the spelling characteristics of writers from 11 different languages. We extract the spelling error features from character-level alignments between the misspelled word and the intended word. For example, if the word abstract is identified as the intended spelling of the misspelling abustruct, the character alignments are as follows:

a bu s t ru ct
|  | | |  | |
a b  s t ra ct
Only the alignments of the misspelled parts, i.e. (bu,b) and (ru,ra) in this case, are used as features. The spell-checker we use is aspell 1 , and the character-level alignments are generated by m2m-aligner (Jiampojamarn et al., 2007). A simplified sketch of this feature extraction is given below.
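The sketch below extracts (wrong substring, intended substring) features from a misspelling and its intended word. It substitutes Python's difflib for the aspell + m2m-aligner pipeline, so the exact segmentations can differ from the original system's.

```python
import difflib

def misspelling_features(misspelled, intended):
    """Features of the form (wrong substring, intended substring) taken
    from a character alignment of the misspelling with its intended word.
    difflib is used here as a stand-in for m2m-aligner."""
    matcher = difflib.SequenceMatcher(None, misspelled, intended)
    features = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            features.append((misspelled[i1:i2], intended[j1:j2]))
    return features

print(misspelling_features("abustruct", "abstract"))
# [('u', ''), ('u', 'a')] -- close to the (bu,b) and (ru,ra) pairs above
```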
Cognate Interference
Cognates are words that share their linguistic origin. For example, English become and German bekommen have evolved from the same word in a common ancestor language. Other cognates are words that have been transferred between languages; for example, English system comes from the Greek word σύστημα via Latin and French. On average, pairs of cognates exhibit higher orthographic similarity than unrelated translation pairs (Kondrak, 2013).
Cognate interference may cause an L1-speaker to use a cognate word instead of a correct English translation (for example, become instead of get). Another instance of cognate interference is misspelling of an English word under the influence of the L1 spelling (Table 1).
We aim to detect cognate interference by identifying the cases where the cognate word is closer to the misspelling than to the intended word ( Figure 1). We define one feature to represent each language L, for which we could find a downloadable bilingual English-L dictionary. We use the following algorithm:
1. For each misspelled English word m found in a document, identify the most likely intended word e using a spell-checking program.

2. For each language L: (a) look up the translation f of the intended word e in language L; (b) compute the orthographic edit distance D between the words; (c) if D(e, f) < t, then f is assumed to be a cognate of e; (d) if f is a cognate and D(m, f) < D(e, f), then we consider it as a clue that L = L1.

We use a simple method of computing orthographic distance with threshold t = 0.58, defined as the baseline method by Bergsma and Kondrak (2007). However, more accurate methods of cognate identification discussed in that paper could also be used. A sketch of this procedure is given below.
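The following sketch implements step 2 for a single language. Length-normalized Levenshtein distance is used as the orthographic distance, which is an assumption on our part (the paper refers to the baseline measure of Bergsma and Kondrak, 2007); all function names are ours.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def normalized_distance(a, b):
    return edit_distance(a, b) / max(len(a), len(b))

def cognate_interference_clue(misspelling, intended, translation, t=0.58):
    """True if the dictionary translation looks like a cognate of the
    intended word and is closer to the misspelling than to it."""
    if normalized_distance(intended, translation) >= t:
        return False  # translation not considered a cognate
    return (normalized_distance(misspelling, translation)
            < normalized_distance(intended, translation))

# A French-influenced misspelling of "example":
print(cognate_interference_clue("exemple", "example", "exemple"))  # True
```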
Misspellings can betray cognate interference even if the misspelled word has no direct cognate in language L1. For example, a Spanish speaker might spell the word quick as cuick because of the existence of numerous cognates such as question/cuestión. Our misspelling features can detect such phenomena at the character level; in this case, qu:cu corresponds to an individual misspelling feature.
Meta-features
We included a number of document-specific metafeatures as suggested by Bergsma et al. (2012): the average number of words per sentence, the average word length, as well as the total number of characters, words, and sentences in a document. We reasoned that writers from certain linguistic backgrounds may prefer many short sentences, while other writers may prefer fewer but longer sentences. Similarly, a particular linguistic background may influence the preference for shorter or longer words.
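These document-level statistics can be computed as follows; the sentence splitting on terminal punctuation is a simplification of whatever segmentation the original system used, and the dictionary keys are our own names.

```python
import re

def meta_features(document):
    """Document-level meta-features mentioned above: average sentence
    length in words, average word length, and the total numbers of
    characters, words and sentences."""
    sentences = [s for s in re.split(r"[.!?]+", document) if s.strip()]
    words = document.split()
    return {"avg_words_per_sentence": len(words) / max(len(sentences), 1),
            "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
            "n_characters": len(document),
            "n_words": len(words),
            "n_sentences": len(sentences)}
```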
Results
The dataset used for the experiments was the TOEFL11 Non-Native English Corpus (Blanchard et al., 2013). The dataset was split into three smaller datasets: the Training set, consisting of 9900 essays evenly distributed across 11 languages; the Development set, which contained a further 1100 essays; and the Test set, which also contained 1100 essays. As the data had a staggered release, we used the data differently. We further split the Training set, with 80% for training and 10% each for development and testing. We then used the Development set as a held-out test set. For held-out testing, the classifier was trained on all data in the Training set, and for final testing, the classifier was trained on all data in both the Training and Development sets.
We used four different combinations of features for our task submissions. The results are shown in Table 2. We include the following accuracy values:
(1) the results that we obtained on the Development set before the Test data release, (2) the official Test set results provided by the organizers, (3) the actual Test set results, and (4) the mean cross-validation results (for submissions 1 and 3). The difference between the official and the actual Test set results is attributed to two mistakes in our submissions. In submission 1, the feature lists used for training and testing did not match. In submissions 3 and 4, only non-lexicalized syntax production rules were used, whereas our intention was to use all of them. All four submissions used the following base combination of features:
• word unigrams
• word bigrams
• error alignments
• syntax production rules
• word-level cognate interference features

In addition, submission 3 includes character bigrams, while submission 4 includes both character bigrams and meta-features. In submission 2, only function words are used, with the exclusion of content words.
Our best submission, which achieves 81.73% accuracy on the Test set, includes all features discussed in Section 4 except POS bigrams. Early tests indicated that any gains obtained with POS bigrams were absorbed by the production rules, so they were excluded from the final experiments. Character bigrams help on the Test set but not on the Development set. The meta-features decrease accuracy on both sets. Finally, the content words dramatically improve accuracy. The reason we included a submission which did not use content words is that it is a common practice in previous work. In our analysis of the data, we found content words that were highly indicative of the language of the writer. Particularly, words and phrases which contained the speaker's home country were useful in predicting the language. It should be noted that this correspondence may be dependent upon the prompt given to the writer. Furthermore, it may lead to false positives for L1 speakers who live in multi-lingual countries.
Confusion Matrix
We present the confusion matrix for our best submission in Table 3. The highest number of incorrect classifications is between languages that are either linguistically or culturally related (Jarvis and Crossley, 2012). For example, Korean is often misclassified as Japanese or Chinese. The two languages are not linguistically related to Korean, but both have historically had cultural ties with Korean. Likewise, while Hindi and Telugu are not related linguistically, they are both spoken in the same geographic area, and speakers are likely to have contact with each other.

Ablation Study

Table 4 shows the results of an ablation experiment on our best-performing submission. The word bigrams contribute the most to the classification; their removal increases the relative error rate by 27%. The word unigrams contribute much less. This is unsurprising, as much of the information contained in the word unigrams is also contained in the bigrams. The remaining features are also useful. In particular, our cognate interference features, despite applying to only 4 of 11 languages, reduce errors by about 4%.
Conclusions and Future Work
We have described the system that we have developed for the NLI 2013 Shared Task. The system combines features that are prevalent in the literature with our own novel character-level spelling features and word cognate interference features. Most of the features that we experimented with appear to increase the overall accuracy, which contradicts the view that simple bag-of-words usually perform better than more complex feature sets (Sebastiani, 2002). Our cognate features can be expanded by including languages that do not use the Latin script, such as Russian and Greek, as demonstrated by Bergsma and Kondrak (2007). We utilized bilingual dictionaries representing only four of the eleven languages in this task 2 ; yet our cognate interference features still improved classifier accuracy. With more resources and with better methods of cognate identification, the cognate features have the potential to further contribute to native language identification.
Our error-alignment features can likewise be further investigated in the future. Currently, after analyzing texts with a spell-checker, we automatically accept the first suggestion as the correct one. In many cases, this leads to faulty corrections, and misleading alignments. By using context sensitive spellchecking, we can choose better corrections, and obtain information which improves classification.
This shared task was a wonderful introduction to Native Language Identification, and an excellent learning experience for members of our group,
Figure 1: A cognate word influencing the spelling.

Table 1: Examples of cognate interference in the data.
      A   C   F   G   H   I   J   K   S   T   Tu
ARA  83   0   0   0   2   2   2   1   4   5   1
CHI   1  81   2   0   1   0   8   6   1   0   0
FRE   6   0  82   2   1   3   0   0   1   0   5
GER   1   0   0  90   1   1   1   0   2   0   4
HIN   1   2   2   0  76   1   0   0   0  16   2
ITA   1   1   0   1   0  89   1   0   5   1   1
JPN   2   1   1   1   0   1  86   6   0   0   2
KOR   1   8   0   0   0   0  11  78   0   1   1
SPA   2   2   7   0   3   5   0   2  75   0   4
TEL   2   0   0   2  15   0   0   0   1  80   0
TUR   4   3   2   1   0   1   1   5   2   2  79

Table 3: Confusion matrix for our best classifier.

Features                 Test
Full system              81.7
w/o error alignments     81.3
w/o word unigrams        81.1
w/o cognate features     81.0
w/o production rules     80.6
w/o character bigrams    80.4
w/o word bigrams         76.7

Table 4: Accuracy of various feature combinations.
http://aspell.net
2 French, Spanish, German, and Italian
Shane Bergsma and Grzegorz Kondrak. 2007. Alignment-based discriminative string similarity. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 656-663.
Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric analysis of scientific articles. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 327-337, Montréal, Canada.
Daniel Blanchard, Joel Tetreault, Derrick Higgins, Aoife Cahill, and Martin Chodorow. 2013. TOEFL11: A Corpus of Non-Native English. Technical report, Educational Testing Service.
Scott Jarvis and Scott Crossley, editors. 2012. Approaching Language Transfer Through Text Classification: Explorations in the Detection-based Approach, volume 64. Multilingual Matters Limited, Bristol, UK.
Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and HMMs to letter-to-phoneme conversion. In Proceedings of NAACL-HLT, pages 372-379.
Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. In Advances in Kernel Methods, pages 169-184. MIT Press.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 423-430.
Grzegorz Kondrak. 2013. Word similarity, cognation, and translational equivalence. To appear.
Moshe Koppel, Jonathan Schler, and Kfir Zigdon. 2005a. Automatically determining an anonymous author's native language. Intelligence and Security Informatics, pages 41-76.
Moshe Koppel, Jonathan Schler, and Kfir Zigdon. 2005b. Determining an author's native language by mining a text for errors. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, pages 624-628, Chicago, IL. ACM.
Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys (CSUR), 34(1):1-47.
Joel Tetreault, Daniel Blanchard, Aoife Cahill, and Martin Chodorow. 2012. Native tongues, lost and found: Resources and empirical evaluations in native language identification. In Proceedings of COLING 2012, pages 2585-2602, Mumbai, India.
Joel Tetreault, Daniel Blanchard, and Aoife Cahill. 2013. A report on the first native language identification shared task. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, Atlanta, GA, USA.
Oren Tsur and Ari Rappoport. 2007. Using classifier features for studying the effect of native language on the choice of written second language words. In Proceedings of the Workshop on Cognitive Aspects of Computational Language Acquisition, pages 9-16, Prague, Czech Republic.
Sze-Meng Jojo Wong and Mark Dras. 2011. Exploiting parse structures for native language identification. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1600-1610, Edinburgh, Scotland, UK. |
11,966,594 | INTEGRATED TECHNIQUES FOR PHRASE EXTRACTION FROM SPEECH | We present an integrated approach to speech and natural language processing which uses a single parser to create training for a statistical speech recognition component and for interpreting recognized text. On the speech recognition side, our innovation is the use of a statistical model combining N-gram and context-free grammars. On the natural language side, our innovation is the integration of parsing and semantic interpretation to build references for only targeted phrase types. In both components, a semantic grammar and partial parsing facilitate robust processing of the targeted portions of a domain. This integrated approach introduces as much linguistic structure and prior statistical information as is available while maintaining a robust full-coverage statistical language model for recognition. In addition, our approach facilitates both the direct detection of linguistic constituents within the speech recognition algorithms and the creation of semantic interpretations of the recognized phrases. | [
696805,
5222302,
5094703
] | INTEGRATED TECHNIQUES FOR PHRASE EXTRACTION FROM SPEECH
Marie Meteer mmeteer@bbn.com
BBN Systems and Technologies Cambridge
Massachusetts 02138
J Robin Rohlicek rohlicek@bbn.com
BBN Systems and Technologies Cambridge
Massachusetts 02138
INTEGRATED TECHNIQUES FOR PHRASE EXTRACTION FROM SPEECH
We present an integrated approach to speech and natural language processing which uses a single parser to create training for a statistical speech recognition component and for interpreting recognized text. On the speech recognition side, our innovation is the use of a statistical model combining N-gram and context-free grammars. On the natural language side, our innovation is the integration of parsing and semantic interpretation to build references for only targeted phrase types. In both components, a semantic grammar and partial parsing facilitate robust processing of the targeted portions of a domain. This integrated approach introduces as much linguistic structure and prior statistical information as is available while maintaining a robust full-coverage statistical language model for recognition. In addition, our approach facilitates both the direct detection of linguistic constituents within the speech recognition algorithms and the creation of semantic interpretations of the recognized phrases.
INTRODUCTION
Language modeling for speech recognition has focused on robustness, using statistical techniques such as n-grams, whereas work in language understanding and information extraction has relied more on rule based techniques to leverage linguistic and domain information. However, the knowledge needed in these two components of a speech language system is actually very similar. In our work, we take an integrated approach, which uses a single grammar for both language modeling and language understanding for targeted portions of the domain and uses a single parser for both training the language model and extracting information from the output of the recognizer.
The goal of our work is provide speech recognition capabilities that are analogous to those of information extraction systems: given large amounts of (often low quality) speech, selectively interpret particular kinds of information. For example, in the air traffic control domain, we want to determine the flight IDs, headings, and altitudes of the planes, and to ignore other information, such as weather and ground movement.
The following is a summary of the main techniques we use in our approach:
Integration of N-gram and context free grammars for speech recognition: While statistically based Markovchain language models (N-gram models) have been shown to be effective for speech recognition, there is, in general, more structure present in natural language than N-gram models can capture. Linguistically based approaches that use statistics to provide probabilities for word sequences that are accepted by a grammar typically require a full coverage grammar, and therefore are only useful for constrained sublanguages. In the work presented here, we combine linguistic structure in the form of a partial-coverage phrase structure grammar with statistical N-gram techniques. The result is a robust statistical grammar which explicitly incorporates syntactic and semantic structure. A second feature of our approach is that we are able to determine which portions of the text were recognized by the phrase grammars, allowing us to isolate these phrases for more processing, thus reducing the overall time needed for interpretation.
Partial parsing: It is well recognized that full coverage grammars for even subsets of natural language are beyond the state of the art, since text is inevitably errorful and new words frequently occur. There is currently an upsurge in research in partial parsing in the natural language community (e.g., Hindle 1983, Weischedel et al. 1991), where rather than building a single syntactic tree for each sentence, a forest is returned, and phrases outside the coverage of the grammar and unknown words are systematically ignored. We are using the partial parser "Sparser" (McDonald 1992), which was developed for extracting information from open text, such as Wall Street Journal articles.
Figure 1:
Semantic grammar: Central to our approach is the use of a minimal, semantically based grammar. This allows us to build targeted grammars specific to the domain. It also makes the grammar much more closely tied to the lexicon, since the lexical items appear in the rules directly and in general there are many categories, each covering only a small number of lexical items. As Schabes (1992) points out in reference to lexicalized stochastic tree adjoining grammars (SLTAG), an effective linguistic model must capture both lexical and hierarchical information. Context free grammars using only syntactic information fail to capture lexical information. Figure 1 shows a block diagram of the overall approach with the two components which use the parser shaded: the model construction component and the interpretation component.
For both the language modeling and information extraction, we are using the partial parser Sparser (McDonald 1992). Sparser is a bottom-up chart parser which uses a semantic phrase structure grammar (i.e. the nonterminals are semantic categories, such as HEADING or FLIGHT-ID, rather than traditional syntactic categories, such as CLAUSE or NOUN-PHRASE). Sparser makes no assumption that the chart will be complete, i.e. that a top level category will cover all of the input, or even that all terminals will be covered by categories, effectively allowing unknown words to be ignored. Rather it simply builds constituent structure for those phrases that are in its grammar.
In Section Two, we describe language modeling, and in Three, we focus on semantic interpretation. In Section Four, we present the results of our initial tests in the air traffic control domain, and finally we conclude with future directions for the work.
LANGUAGE MODELING
There are two main inputs to the model construction portion of the system: a transcribed speech training set and a phrase-structure grammar. The phrase-structure grammar is used to partially parse the training text. The output of this is: (1) a top-level version of the original text with subsequences of words replaced by the non-terminals that accept those subsequences; and (2) a set of parse trees for the instances of those nonterminals.
Rule Probabilities
Figure 2 below shows a sample of the rules in the ATC grammar, followed by examples of transcribed text and the text modified by the parser. Note that in this case, where the goal is to model aircraft identifiers and a small set of air traffic control commands, other phrases like the identification of the controller, traffic information, etc., are ignored. They will be modelled by the n-gram, rather than as specific phrases.
R1 (def-rule land-action > ("land"))
R2 (def-rule takeoff-action > ("takeoff"))
R3 (def-rule takeoff-action > ("go"))
R4 (def-rule clrd/land > ("cleared" "to" land-action))
R5 (def-rule clrd/takeoff > ("cleared" "to" takeoff-action))
R6 (def-rule clrd/takeoff > ("cleared" "for" takeoff-action))
R7 (def-rule tower-clearance > (runway clrd/land))
R8 (def-rule tower-clearance > (runway clrd/takeoff))

Figure 2: Phrase structure rules for tower clearance

> Nera twenty one zero nine runway two two right cleared for takeoff
> COMMERCIAL-AIRPLANE TOWER-CLEARANCE
> Nera thirty seven twelve Boston tower runway two two right cleared for takeoff
> COMMERCIAL-AIRPLANE Boston tower TOWER-CLEARANCE
> Jet Link thirty eight sixteen Boston tower runway two two right cleared for takeoff traffic on a five mile final landing two two right
> COMMERCIAL-AIRPLANE Boston tower TOWER-CLEARANCE traffic on a five mile final landing RUNWAY
> Jet Link thirty eight zero five runway two two right cleared for takeoff sorry for the delay
> COMMERCIAL-AIRPLANE TOWER-CLEARANCE sorry for the delay

Figure 3: Training text modified by parser
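To make the rewriting in Figure 3 concrete, here is a deliberately naive sketch of bottom-up phrase replacement over token sequences. It is not Sparser's chart parser, and the RUNWAY rule (as well as the absent COMMERCIAL-AIRPLANE rules) is a placeholder, since only the tower-clearance rules are quoted in Figure 2.

    # Rules map a right-hand-side sequence (quoted words or nonterminal names) to a nonterminal.
    RULES = [
        (("cleared", "to", "LAND-ACTION"), "CLRD/LAND"),        # R4
        (("cleared", "to", "TAKEOFF-ACTION"), "CLRD/TAKEOFF"),  # R5
        (("cleared", "for", "TAKEOFF-ACTION"), "CLRD/TAKEOFF"), # R6
        (("land",), "LAND-ACTION"),                             # R1
        (("takeoff",), "TAKEOFF-ACTION"),                       # R2
        (("go",), "TAKEOFF-ACTION"),                            # R3
        (("RUNWAY", "CLRD/LAND"), "TOWER-CLEARANCE"),           # R7
        (("RUNWAY", "CLRD/TAKEOFF"), "TOWER-CLEARANCE"),        # R8
        (("runway", "two", "two", "right"), "RUNWAY"),          # placeholder runway rule (not in Figure 2)
    ]

    def rewrite(tokens, rules=RULES):
        """Repeatedly replace any span matching a right-hand side by its nonterminal,
        until no rule applies; unknown words are simply left in place."""
        tokens = list(tokens)
        changed = True
        while changed:
            changed = False
            for rhs, lhs in rules:
                n = len(rhs)
                for i in range(len(tokens) - n + 1):
                    if tuple(tokens[i:i + n]) == rhs:
                        tokens[i:i + n] = [lhs]
                        changed = True
                        break
                if changed:
                    break
        return tokens

    print(" ".join(rewrite("runway two two right cleared for takeoff sorry for the delay".split())))
    # -> TOWER-CLEARANCE sorry for the delay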
Using the modified training text we construct a probabilistic model for sequences of words and nonterminals. The parse trees are used to obtain statistics for the estimation of production probabilities for the rules in the grammar. Since we assume that the production probabilities depend on their context, a simple count is insufficient. Smoothed maximum likelihood production probabilities are estimated based on context dependent counts. The context is defined as the sequence of rules and positions on the right-hand sides of these rules leading from the root of the parse tree to the non-terminal at the leaf. The probability of a parse therefore takes into account that the expansion of a category may depend on its parents. However, it does not take into consideration the expansion of the sister nonterminals, though we are currently exploring means of doing this (cf. Mark, et al. 1992).
In the above grammar (Figure 2), the expansion of TAKEOFF-ACTION may be different depending on whether it is part of rule 5 or rule 6. Therefore, the "context" of a production is a sequence of rules and positions that have been used up to that point, where the "position" is where in the RHS of the rule the nonterminal is. For example, in the parse shown below (Figure 4), the context of R2 (TAKEOFF-ACTION > "takeoff") is rule 8/position 2, rule 6/position 3. We discuss the probabilities required to evaluate the probability of a parse in the next section. In order to use a phrase-structure grammar directly in a time-synchronous recognition algorithm, it is necessary to construct a finite-state network representation. If there is no recursion in the grammar, then this procedure is straightforward: for each rule, each possible context corresponds to a separate subnetwork. The subnetworks for different rules are nested. We are currently comparing methods of allowing limited recursion (e.g. following Pereira & Wright 1990). Figure 5 shows the expansion of the rules from Figure 2.
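The context bookkeeping can be illustrated with a small sketch. The nested-tuple encoding of parse trees and the rule label for the runway expansion are assumptions made for the example; only the R8/R6/R2 nesting is taken from the text.

    # A parse tree node is (rule_id, [child, ...]); terminals are plain strings.
    parse = ("R8", [
        ("RUNWAY", ["runway", "two", "two", "right"]),   # the rule id for the runway expansion is not shown above
        ("R6", ["cleared", "for",
                ("R2", ["takeoff"])]),
    ])

    def contexts(node, path=()):
        """Yield (context, rule) pairs; a context is the sequence of (rule, position)
        steps leading from the root of the parse tree down to the rule's node."""
        rule, children = node
        yield path, rule
        for pos, child in enumerate(children, start=1):
            if isinstance(child, tuple):
                yield from contexts(child, path + ((rule, pos),))

    for ctx, rule in contexts(parse):
        print(rule, "context:", ctx)
    # R2 is reached via context (('R8', 2), ('R6', 3)), i.e. rule 8/position 2, rule 6/position 3 as above.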
There have been several attempts to use probability estimates with context free grammars. The most common technique is using the Inside-Outside algorithm (e.g. Pereira & Schabes 1992, Mark et al. 1992) to infer a grammar over bracketed texts or to obtain Maximum-Likelihood estimates for a highly ambiguous grammar. However, most require a full coverage grammar, whereas we assume that only a selective portion of the text will be covered by the grammar. A second difference is that they use a syntactic grammar, which results in the parse being highly ambiguous (thus requiring the use of the Inside-Outside algorithm). We use a semantic grammar, with which there are rarely multiple interpretations for a single utterance. 1
Probability Estimation
Both the context-dependent production probabilities of the phrase grammar and the Markov-chain probabilities of the top-level N-gram model must be estimated. We use the same type of "backing-off" approach in both cases. The Maximum-Likelihood (ML) estimate reduces to a simple relative-frequency computation in the N-gram case.
In the phrase grammar case, we assume that the parses are in general unambiguous, which has been the case so far in our domain. Specifically, we only consider a single parse and accumulate relative frequency statistics for the various contexts in order to obtain the ML production probabilities. The approach we use for backing off is described in Placeway et al. (1993), where r is the number of different next symbols/rules seen in the context and n is the number of times the context was observed.
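A rough sketch of such a backed-off estimator is given below. The counts r and n are the quantities named above; the Witten-Bell-style interpolation weight n/(n + r) is an assumption for illustration, not the exact formula of Placeway et al. (1993). Contexts are tuples, e.g. word histories for the N-gram model or (rule, position) sequences for the phrase grammar.

    from collections import Counter, defaultdict

    class BackoffEstimator:
        def __init__(self):
            self.counts = defaultdict(Counter)   # context tuple -> Counter over next symbols/rules

        def observe(self, context, symbol):
            # collect counts for the full context and for every reduced (suffix) context
            for i in range(len(context) + 1):
                self.counts[context[i:]][symbol] += 1

        def prob(self, context, symbol):
            c = self.counts[context]
            n, r = sum(c.values()), len(c)
            if n == 0:
                return self.prob(context[1:], symbol) if context else 0.0
            ml = c[symbol] / n                   # Maximum-Likelihood estimate in this context
            lam = n / (n + r)                    # assumed Witten-Bell-style weight built from n and r
            reduced = self.prob(context[1:], symbol) if context else 1.0 / r
            return lam * ml + (1 - lam) * reduced

    est = BackoffEstimator()
    est.observe(("cleared", "for"), "takeoff")
    est.observe(("cleared", "to"), "land")
    print(est.prob(("cleared", "for"), "takeoff"))   # 0.875
    print(est.prob(("cleared", "for"), "land"))      # 0.125, via the reduced contexts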
INFORMATION EXTRACTION
The final stage of processing is the interpretation of the recognized word sequence. We use the same phrase structure grammar for interpretation as that used to build the recognition model. However, in this last phase, we take advantage of the semantic interpretation facility of the parser.
Most approaches to natural language understanding separate parsing (finding a structural description) from interpretation (finding a semantic analysis). In the work presented here, we use a single component for both. The Sparser system integrates parsing and interpretation to determine "referents" for phrases incrementally as they are recognized, rather than waiting for the entire parse to finish. The referent of a phrase is the object in the domain model that the phrase refers to. For example, the initial domain model consists of objects that have been created for entities which are known to the system in advance, such as airlines. When the name of an airline is recognized, such as "Delta", its referent is the airline object, #<airline delta>. Referents for entities that cannot be anticipated, such as number sequences and individual airplanes, are created incrementally
when the phrase is recognized. Figure 6 shows an example of the edges created by the parser and their referents.
When a referent actually refers to an entity in the world, such as a runway or airplane, then the same referent object is cataloged and reused each time that entity is mentioned. The referent for a number sequence is a number object with the value the sequence represents. The referent for the entire phrase "Delta three five nine" is an object of type airplane. In some cases, the object will also be indexed by various subparts (such as indexing a flight ID by the digit portion of the ID) to aid in disambiguating incomplete subsequent references. For example, in the pilot reply in Figure 6, indexing allows the system to recognize that the number "three five nine" actually refers to the previously mentioned Delta flight.
We extend the notion of referent from simply things in the world to utterance acts as well, such as commands. Each time a command is uttered, a new referent is created. Command referents are templates which are created when some core part is recognized and then added to compositionally as other (generally optional) information is recognized. So following our earlier example of tower clearances, rules 4, 5, and 6 instantiate a takeoff clearance template and fill in the action type, whereas rules 7 and 8 fill in the "runway" field. We show examples of each of these groups and the templates in Figure 7 below:

R6 (def-rule clrd/takeoff ("cleared" "for" takeoff-action) :referent (:function create-tower-clearance third))
R8 (def-rule tower-clearance (runway clrd/takeoff) :referent (:function add-to-tower-clearance second first))

#<tower-clearance Type: TOWER-CLEARANCE ACTION: #<TAKEOFF> RUNWAY: #<Runway 26L>>
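The incremental template filling can be pictured as follows; the Python class and field names are illustrative stand-ins for the Lisp referent objects shown in Figure 7, not the actual implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TowerClearance:
        action: str                      # e.g. "TAKEOFF" or "LAND"
        runway: Optional[str] = None     # filled in later if a runway phrase is present

    def create_tower_clearance(action):
        # referent action of rules such as R4-R6: instantiate the template from the core part
        return TowerClearance(action=action)

    def add_to_tower_clearance(clearance, runway):
        # referent action of rules such as R7/R8: add optional information compositionally
        clearance.runway = runway
        return clearance

    # a minimal re-enactment of the Figure 7 example
    clearance = create_tower_clearance("TAKEOFF")
    clearance = add_to_tower_clearance(clearance, "26L")
    print(clearance)   # TowerClearance(action='TAKEOFF', runway='26L')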
RESULTS
This approach was first applied in the Gisting system (Rohlicek, et al. 1992), where the goal was to extract flight IDs from off-the-air recordings of ATC communications. In this application, the input is extremely noisy and recognition performance is generally quite poor. We report the general word recognition accuracy and flight ID recognition accuracy for both the combined phrase structure and n-gram language models (as described in section 2), and just n-grams.
The training data consists of 2090 transcribed controller transmissions. Testing data consists of 469 transmissions of average length 16. The results are presented for controller transmissions where the start and end times of the transmissions are known.
As shown in Table 1, the overall word accuracy was improved only slightly (70% to 72%), which was expected since we only modeled a small portion of the domain. However, the best result was in the fraction of flight IDs detected, where we halved the miss rate (from 11% down to 5%). The next set of experiments we ran focused on comparing general word accuracy with word accuracy in the targeted portion of the domain (i.e. that portion covered by the grammar). Using a different ATC dataset (still operational data, but recorded in the tower rather than off the air), we compared bi-grams with our combined rule-based and n-gram approach. The grammar covered approximately 68% of the training data. We tested not only the overall word accuracy, but also the word accuracy in those portions of the text that were modeled by the grammar. As shown in Table 2, not only was there an improvement in the overall word score using the integrated vs. the bi-gram language model, we can see that the improvement in accuracy in the targeted portion of the domain was much greater in the integrated approach.
Our third set of experiments focused on the information extraction portion of the system. We evaluated the ability of the parser to extract two kinds of commands from the output of recognition. In these experiments, we took truth to be the performance of the parser on the transcribed text, since we did not have truth annotated for these phrases in our test data. (It has been our experience in working with flight IDs, which were annotated, that in the ATC domain the phrases are regular enough that the parser will extract nearly 100% of the information in the targeted categories. The errors that occur are generally caused by restarts, speech errors, or transcription errors.)
Using the same training and test conditions as the first set of experiments described above 1, we extracted phrases for tower clearances using the grammar partially shown above (Figure 2), and direction orders, which generally consisted of a direction to turn and some heading. The test set consisted of 223 controller utterances and we scored as correct only exact matches, where the same referent object was found and all of the fields matched exactly. Results are shown in Table 3. We observe that the precision and recall for direction orders is drastically better than that for tower clearances, even though the grammars for the two are very similar in size.
One difference, which we would like to explore further, is that the direction orders grammar was part of the language model which was used for recognition, whereas tower clearances were not modelled by the phrase grammar, only the n-gram. To know if this was a factor, we need to compare the actual word recognition accuracy for these two phrase types.
In looking at the results for tower clearances, we found that although the exact match score was very low, there were many partial matches, where for example the runway and/or the action type (takeoff, land, etc.) were found correctly, even though the entire tower clearance was not recognized. In order to take into account these partial matches, we rescored the precision and recall, counting each individual piece of information (runway, action, and clearance), so that an exact match gets a score of 3 and partial matches score a 1 or 2. Using this measure, we got a significantly improved performance: precision 64.4 and recall 63.8. These results highlight one of the main advantages of this approach, that even with errorful input, useful information can be found.
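The partial-credit rescoring can be made explicit with a small sketch; the three field names are illustrative, chosen to mirror the runway, action, and clearance pieces of information mentioned above.

    def field_matches(reference, hypothesis, fields=("clearance", "action", "runway")):
        """Count individually matching pieces of information; an exact match scores 3,
        partial matches score 1 or 2, as in the rescoring described above."""
        return sum(reference.get(f) is not None and reference.get(f) == hypothesis.get(f)
                   for f in fields)

    ref = {"clearance": "TOWER", "action": "TAKEOFF", "runway": "26L"}
    hyp = {"clearance": "TOWER", "action": "TAKEOFF", "runway": "22R"}
    print(field_matches(ref, hyp))   # 2 of 3 pieces of information recovered
    # With several reference/hypothesis pairs, partial-credit precision and recall become
    # total_points / (3 * number_of_hypotheses) and total_points / (3 * number_of_references).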
FUTURE WORK
We have shown that the approach described here both improves overall word accuracy in recognition and provides a means for extracting targeted information even when recognition performance is quite poor. Our next goal is to apply the technique to new domains. As part of this effort we are developing a set of tools for building and evaluating grammars.
We are also applying these techniques in new applications. In particular, we have recently performed experiments in Event Spotting, which is an extension of wordspotting where the goal is to determine the location of phrases, rather than single keywords. We used the parser/extraction portion of the system to find examples of phrase types in the corpus and to evaluate the results, as well as in the language model of the recognizer. In an experiment detecting time and date phrases in the Switchboard corpus (which is conversational telephone quality data), we saw an increase in detection rate over strictly bi-gram or phoneme loop language models (Jeanrenaud et al. 1994).
Figure 4: Parse tree with path highlighted.
For the phrase grammar, we estimate probabilities of the form P(r_{n+1} | (r_1, p_1), (r_2, p_2), ..., (r_n, p_n)), where r_i are the rules and p_i are the positions within the rules. In the N-gram case, we are estimating P(s_{n+1} | s_1, s_2, ..., s_n), where s_1, s_2, ..., s_n is the sequence of words and nonterminals leading up to s_{n+1}. In both cases, the estimate is based on a combination of the Maximum-Likelihood estimate and the estimates in a reduced context: P(r_{n+1} | (r_2, p_2), ..., (r_n, p_n)) and P(s_{n+1} | s_2, ..., s_n).
Figure 5: Finite state network probabilities.
Figure 6: Parse edges and their referents for a controller transmission and pilot reply (tokens: delta three five nine runway two six left cleared for ...).
Figure 7: Rules with referents and completed template.
Table 1: Results for Gisting experiment.

                      Word Recognition                    FID rec.
                      Sub.    Del.    Ins.    Acc.        accuracy
N-gram & phrase       18.6    4.5     5.2     72          57
N-gram                20.4    5.0     4.3     70          53
Table 2: Comparison between bi-grams and integrated approach.
Table 3: Precision and recall in extracting information.

Exact Match             Direction Order    Tower Clearance
Total in reference      38                 118
Total in recog.         35                 117
Precision               91.4%              43.6%
Recall                  81.6%              43.2%
False Positives         1                  11
Misses                  5                  12
Errors                  2                  7

Partial Match
Precision               64.4
Recall                  63.8
One difference was that these tests were done on recognition results after automatic segmentation and classification according to pilot and controller, which generally decreases recognition accuracy.
Don Hindle. 1983. Deterministic Parsing of Syntactic Non-fluencies. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, June 15-17, pages 123-128.
P. Jeanrenaud, M. Siu, R. Rohlicek, M. Meteer, and H. Gish. 1994. Spotting Events in Continuous Speech. To appear in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), April 1994, Adelaide, Australia.
K. Mark, M. Miller, U. Grenander, and S. Abney. 1992. Parameter Estimation for Constrained Context-free Language Models. In Proceedings of the Speech and Natural Language Workshop, February 1992, pages 146-149. Morgan Kaufmann, San Mateo, CA.
David D. McDonald. 1992. An Efficient Chart-based Algorithm for Partial Parsing of Unrestricted Texts. In Proceedings of the 3rd Conference on Applied Natural Language Processing, April 1-3, 1992, Trento, Italy, pages 193-200.
F. Pereira and E. Schabes. 1992. Inside-Outside Reestimation from Partially Bracketed Corpora. In Proceedings of the Speech and Natural Language Workshop, February 1992, pages 122-127. Morgan Kaufmann, San Mateo, CA.
F. Pereira and R. Wright. 1991. Finite-state approximation of phrase structured grammars. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, June 18-21, 1991, Berkeley, California, pages 246-255.
P. Placeway, S. Schwartz, P. Fung, and L. Nguyen. 1993. Estimation of Powerful Language Models from Small and Large Corpora. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
R. Rohlicek, D. Ayuso, M. Bates, R. Bobrow, A. Boulanger, H. Gish, M. Jeanrenaud, M. Meteer, and M. Siu. 1992. Gisting Conversational Speech. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 23-26, 1992, Vol. 2, pages 113-116.
E. Schabes. 1992. Stochastic Tree-Adjoining Grammars. In Proceedings of the Speech and Natural Language Workshop, February 1992, pages 140-145. Morgan Kaufmann, San Mateo, CA.
R. Weischedel, D. Ayuso, R. Bobrow, S. Boisen, R. Ingria, and J. Palmucci. 1991. Partial Parsing: A report on work in progress. In Proceedings of the Speech and Natural Language Workshop, February 1991, pages 204-209. Morgan Kaufmann, Pacific Grove, CA. |
26,571,395 | How Many Stemmata with Root Degree k? | We are investigating parts of the mathematical foundations of stemmatology, the science reconstructing the copying history of manuscripts. After Joseph Bédier in 1928 got suspicious about large amounts of root bifurcations he found in reconstructed stemmata, Paul Maas replied in 1937 using a mathematical argument that the proportion of root bifurcating stemmata among all possible stemmata is so large that one should not become suspicious to find them abundant. While Maas' argument was based on one example with a tradition of three surviving manuscripts, we show in this paper that for the whole class of trees corresponding to Maasian reconstructed stemmata and likewise for the class of trees corresponding to complete historical manuscript genealogies, root bifurcations are apriori the most expectable root degree type. We do this by providing a combinatorial formula for the numbers of possible so-called Greg trees according to their root degree(Flight, 1990). Additionally, for complete historical manuscript trees (regardless of loss), which coincide mathematically with rooted labeled trees, we provide formulas for root degrees and derive the asymptotic degree distribution. We find that root bifurcations are extremely numerous in both kinds of trees. Therefore, while previously other studies have shown that root bifurcations are expectable for true stemmata, we enhance this finding to all three philologically relevant types of trees discussed in breadth until today. | [] | How Many Stemmata with Root Degree k?
Association for Computational Linguistics, July 13-14, 2017.
Armin Hoenen hoenen@em.uni-frankfurt.de
CEDIFOR Goethe University
Frankfurt
Steffen Eger
UKP TU
Darmstadt
Ralf Gehrke gehrke@rz.uni-frankfurt.de
CEDIFOR Goethe University
Frankfurt
How Many Stemmata with Root Degree k?
Proceedings of the 15th Meeting on the Mathematics of Language, London, UK, July 13-14, 2017. Association for Computational Linguistics.
We are investigating parts of the mathematical foundations of stemmatology, the science reconstructing the copying history of manuscripts. After Joseph Bédier in 1928 got suspicious about large amounts of root bifurcations he found in reconstructed stemmata, Paul Maas replied in 1937 using a mathematical argument that the proportion of root bifurcating stemmata among all possible stemmata is so large that one should not become suspicious to find them abundant. While Maas' argument was based on one example with a tradition of three surviving manuscripts, we show in this paper that for the whole class of trees corresponding to Maasian reconstructed stemmata and likewise for the class of trees corresponding to complete historical manuscript genealogies, root bifurcations are apriori the most expectable root degree type. We do this by providing a combinatorial formula for the numbers of possible so-called Greg trees according to their root degree(Flight, 1990). Additionally, for complete historical manuscript trees (regardless of loss), which coincide mathematically with rooted labeled trees, we provide formulas for root degrees and derive the asymptotic degree distribution. We find that root bifurcations are extremely numerous in both kinds of trees. Therefore, while previously other studies have shown that root bifurcations are expectable for true stemmata, we enhance this finding to all three philologically relevant types of trees discussed in breadth until today.
Introduction
Stemmatology is the science trying to reestablish the copy history of a text surviving in a number of versions. One of the editors' objectives in stemmatology can be approaching the original authorial wording, which itself is most probably lost, given the body of extant text variants (Cameron, 1987).
In order to do so, the philologist may reconstruct the copy history of the manuscripts so as to better understand which variants are most likely original. Usually, the visual reconstruction is a graph or more precisely a tree where the nodes symbolize manuscripts and the copy processes are depicted by the edges. Such a visual reconstruction is then called a stemma. For an example of a stemma, see Figure 1.
Maybe the biggest and surely most famous problem in philology is an observation that the French philologist Joseph Bédier made editing the medieval French text "Le lai de l'ombre" in 1890, 1913, and 1928 (Bédier, 1890, 1913, 1928). Bédier observed that 105 out of 110 stemmata, the vast majority, in a collection he had made without controlling for root degree patterns had a bifurcation immediately below their root, an observation repeated multiple times thereafter on different collections, compare Table 1.
This observation was worrisome. If there are exactly two texts (nodes) directly below the assumed authorial original (root), 1 the implications for text reconstruction of the urtext are the following. An editor may choose one of the two texts as his/her preferred base text at will and reconstruct the ancestral text from this base text eliciting only in special cases the second or yet another variant.
Table 1: Percentages of root bifurcative stemmata in four collections, reported in (Haugen, 2015). Note that extending his collection through stemmata which are not yet viewed as conclusive by the composer, Castellani (1957, p.24) reports only 75-76% root bifurcating trees.

Collection                        root bifurcations    root tri- or multifurcations
Bédier (1928)                     95.5%                4.5%
Castellani (1957)                 82.5%                17.5%
Haugen (2015) Bibliotheca A.      85.5%                14.5%
Haugen (2015) Editiones A.        80.5%                19.5%

Bédier was worried about editors consciously or subconsciously choosing a base manuscript for the urtext after their taste and justifying this by postulating root bifurcations in their stemmata. As a second explanation for a large incidence of root bifurcations in reconstructed stemmata he suspected a methodology-inherent tendency for overseparation, since editors always look for the one authorial variant in opposition to all other variants (a fallacy of the stemmatic method). One can easily imagine that the subsequent debate had far-reaching consequences for textual criticism and editing. The community divided into best text editors (or Bédierists), who abandoned stemmatic approaches altogether and based their editions on a good available manuscript, and those who continued and continue to produce stemmata (or Lachmannians). More realistically, any modern editor may choose among one of those approaches depending on his/her material and circumstances. Nevertheless, the argument has ever since stimulated much research repeatedly including mathematical argumentation, see for instance Greg (1931), Maas (1937), Fourquet (1946), Whitehead (1951), Pasquali (1952), Castellani (1957), Hering (1967), Kleinlogel (1968), Weitzman (1982), Weitzman (1987), Grier (1989), Haugen (2002), Timpanaro (2005), Haugen (2010), Haugen (2015), Hoenen (2016). Maas argued that the number of stemmata with a root bifurcation among all possible stemmata which can be reconstructed (thus regarding stemma generation apriori as a random process) would be naturally high. One should thus rather not be too surprised of large proportions in real reconstructed stemmata: those were no good reason to abandon the stemmatic method. Maas numerically based this counter argument on the example of traditions with three surviving manuscripts. 2 Bédierists could have reacted to this and could have tried to seek a generalization of his argument. However, neither Bédierists nor Lachmannians have ever come up with such a generalization. What if Maas' argument would only hold for three surviving manuscripts, but witness completely different proportions for 4, 5, or 60 survivors? Would those numbers reveal justification for being suspicious of the real-world reconstructions?
In fact, Maas himself estimated numbers of possible stemmata for a number of surviving manuscripts of up to 5 according to Flight (1990), who decades later generalized the type of graphs Maas had considered for the modeling of stemmata. Flight (1990) provided a formula to count numbers of these so-called Greg trees, given a certain number of survivors. However, the question of the proportion of root bifurcating stemmata and how this proportion develops, thus ultimately the generalization of Maas' argument, has not yet been answered. In this paper, we fill this gap and provide a formula for the numbers of possible root k-furcating stemmata given m surviving manuscripts and compute the proportion of root bifurcating stemmata among all stemmata given m survivors.

2 Maas distinguishes two kinds of traditions of medieval texts: texts read by many and texts read by few. He assumes that strict stemmatics fails for texts read by many, which should be characterized by a larger number of survivors. Yet, not all philologists follow this distinction. Pasquali and Pieraccioni (1952) distinguish open and closed traditions, where the latter are such which are largely free of flaws complicating stemmatic assessment. Closed traditions are not straightforwardly connected with the number of survivors, compare also West (1973), which is why there is no reason to limit the range of surviving manuscripts to very small numbers and surely not to just one or two examples.
Our work connects to a tradition both in linguistics and biology to count certain subclasses of graphs. In our case these graphs are trees, whereas other works have counted alignments between two or multiple sequences, that is, certain bi-or multi-partite graphs (Griggs et al., 1990;Covington, 2004;Eger, 2015).
Counting Manuscript Trees: Prerequisites
The theoretical entity used to model manuscript genealogy is a tree. A tree, as a concept from graph theory, is a set of nodes V together with a set of (unordered) edges E, with E ⊆ {{u, v} | u, v ∈ V }. The two defining properties of trees are that they must be free of cycles (including self-cycles) and connected. General works on counting different types of trees appear early on (Cayley, 1889), and research on trees is comprehensive, compare Moon (1970). The similarity of the three disciplines of historical linguistics, phylogeny and stemmatology has likewise been noticed early and led to various transfers and adaptations between methods of those fields, compare O'Hara (1996). Especially in the domain of phylogeny the understanding of trees is a central issue and consequently much research has focussed on phylogenetic trees, see for instance Felsenstein (1978); Swofford (1990); Huson (1998); Felsenstein (2004). One characteristic of phylogenetic trees is that they are apriori exclusively bifurcating. Thus, the question for a proportion of root bifurcating trees becomes meaningless. Apart from this, the manual reconstruction of a consistent and complete genome or characterome of ancestors is by no means as central an issue as in stemmatics (Platnick and Cameron, 1977; Cameron, 1987). In the context of manuscript trees, although a number of the above enumerated philological studies count stemmatic trees under certain conditions or elaborate on specific phenomena, Flight (1990) is apparently the first to provide a generalized definition for stemmas. He aims at solving the question, which he attributes to Maas (1958), how many different stemmas may exist for some given number of surviving manuscripts (Flight, 1990, p.122).
To solve this, he counts so-called Greg trees. 3 Based on Flight (1990), we define a rooted directed Greg tree (which Flight names after the textual critic W. W. Greg) as a tree with a distinguished root, m labeled nodes standing for surviving manuscripts and n unlabeled nodes symbolizing hypothetical manuscripts. The latter must have an outdegree of at least two. There can be neither chains of hypothetical manuscripts (unlabeled nodes) with indegree one and outdegree one nor unlabeled leaves. This restriction corresponds to philological practice (Maas, 1937). A rooted Greg tree therefore symbolizes a reconstructed stemma. With this definition, Flight (1990) recovers the numbers of possible trees for three surviving manuscripts as postulated by Maas (1937), see Figure 2. Flight (1990) gives a recursive formula for the enumeration of unrooted and rooted Greg trees, building on all (four) generalized conditions on how to add a new labeled node, and tabulates all possible Greg trees for up to 12 labeled nodes. Thus, he extends values mentioned by Maas as well as corrects Maas' numbers. From the 22 rooted Greg trees for 3 survivors, there are 12 root bifurcating ones, compare again Figure 2. The recursive formula Flight gives for rooted Greg trees g(m, n) on m labeled and n unlabeled nodes is: 4

g(m, n) = (m + n − 2) · g(m − 1, n − 1) + (2m + 2n − 2) · g(m − 1, n) + (n + 1) · g(m − 1, n + 1).
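For readers who want to reproduce the counts, a minimal memoized implementation of this recursion is sketched below (Python); the base cases are our own choice, fixed so that a single labeled root is the only tree with m = 1.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def g(m, n):
        # rooted Greg trees with m labeled and n unlabeled nodes (Flight's recursion above)
        if n < 0:
            return 0
        if m <= 1:
            return 1 if (m == 1 and n == 0) else 0
        return ((m + n - 2) * g(m - 1, n - 1)
                + (2 * m + 2 * n - 2) * g(m - 1, n)
                + (n + 1) * g(m - 1, n + 1))

    print([g(3, n) for n in range(3)])      # [9, 10, 3]
    print(sum(g(3, n) for n in range(3)))   # 22, the number of stemmata for three surviving manuscripts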
If we fix m, the number of unlabeled nodes n can vary in the range of {0, 1, . . . , m−1} and the sum, over n, of all such (m, n)-trees for a fixed m is the number g(m) of possible rooted Greg trees for m survivors (Flight, 1990). This gives the number of possible stemmata one can reconstruct for m surviving manuscripts adhering to philological principles. 5

Figure 2: The unlabeled rooted (root topmost node) topologies of possible stemmata for three surviving manuscripts as thought of by Maas (1937). White nodes symbolize reconstructed lost manuscripts (unlabeled) whereas black nodes are survivors (labeled). The number in brackets refers to the number of possible distinct labeled trees (label permutations) for each topology.
While Flight (1990) does not compute numbers of Greg trees according to their root degree, Hering (1967), referring to a colleague of his, 6 tabulates the numbers of root k-furcating Greg trees (and the numbers of rooted Greg trees being the sum over all k) up to m = 6. The sums over all k at a fixed m coincide exactly with g(m) calculated by Flight (1990). Alas, there is no formula provided by Hering (1967). Furthermore, he states that a calculation for more than 6 survivors would be difficult. This is demoralizing insofar as surely numbers (much) larger than m = 6 are relevant to the philological debate. For instance, according to Weitzman (1987), numbers of survivors in Greek and Latin traditions can range from 1 to "well over 100".
Counting Manuscript Trees: New Formulas
A Meta Formula
First, we present a general formula for counting trees with fixed root degree and two different types of nodes (e.g., black and white), which we use later on to derive our main results. We write T for a class of trees and denote by T = |T| the number of trees in the class.
If the root of a rooted tree has degree k and the tree has µ black nodes and ν white nodes, it means that the tree has k subtrees, which we also perceive as rooted. The root node, r, is either black or white. We connect r to the root of each subtree. Each of these subtrees can have some size s 1 + p 1 , . . . , s k + p k , where s i is the number of black nodes in branch i and p i is the number of white nodes in the same branch. The sum of the s i must equal µ − δ B and the sum of the p i must equal ν −δ W , since there are in total µ black nodes and ν white nodes. Here, δ B is a binary variable indicating whether r is a black node and analogously for δ W , where δ B = 1 iff δ W = 0. If the black nodes are distinguishable, we can choose the subsets of nodes of sizes s 1 , . . . , s k from a total of µ − δ B nodes, and analogously for the white nodes. There are
(µ − δ_B choose s_1, . . . , s_k) possibilities to do so, where (m choose k_1, . . . , k_ℓ) = m!/(k_1! · · · k_ℓ!) are the multinomial coefficients.
Now, we specialize. We assume that the black nodes are distinguishable and the white nodes are indistinguishable. Then, for any class of rooted trees T µ,ν with µ such black nodes and ν such white nodes, the number T µ,ν,k of rooted labeled trees from T µ,ν in which the root has degree k has the form
µ · Σ_{(s,p) ∈ C((µ−1,ν),k)} (µ − 1 choose s) · F(s, p) + Σ_{(s,p) ∈ C((µ,ν−1),k)} (µ choose s) · F(s, p).
Here, C((a, b), ℓ) denotes the set of vector compositions (Eger, 2017) of the 'vector' (a, b) ∈ N^2 with ℓ parts; that is, C((a, b), ℓ) = {((s_1, . . . , s_ℓ), (p_1, . . . , p_ℓ)) | s_1 + · · · + s_ℓ = a, p_1 + · · · + p_ℓ = b}.
Moreover, by s and p, we denote tuples (s 1 , . . . , s k ) and (p 1 , . . . , p k ), respectively. The above sum formula arises because the root node can either be black or white. If it is black, we have the additional factor µ because the black nodes are distinguishable and each of them can be the root. Finally, F is a function of the sizes s 1 , . . . , s k , p 1 , . . . , p k which will be specified in any particular case. Now, we have overcounted T µ,ν,k since we have counted subtrees as if they were ordered, while in reality different orders of the subtrees do not constitute a distinct tree t ∈ T µ,ν,k . Thus, we have to divide by k! to finally arrive at:
T_{µ,ν,k} = (µ/k!) · Σ_{(s,p) ∈ C((µ−1,ν),k)} (µ − 1 choose s) · F(s, p) + (1/k!) · Σ_{(s,p) ∈ C((µ,ν−1),k)} (µ choose s) · F(s, p).    (1)
It is possible that T_{µ,ν,k} can be expressed more simply, e.g., as a linear combination of the terms T_{µ+τ,ν+ρ,k+κ} for integers τ, ρ, κ, for specific choices of F.
Root k-furcating Greg Trees
We are now ready to derive the general formula for the number g k (m, n) of root k-furcating Greg trees for m survivors (labeled nodes) and n hypothetical (unlabeled) nodes. The only question remaining from above is how we have to specify the function F (s, p) on the k subtrees. This is very simple, however. Since all branches i are independent of each other, F (s, p) takes the form of a product of individual factors:
F(s, p) = ∏_{i=1}^{k} g(s_i, p_i)
where g is the function of Flight (1990). The number g k (m, n) of root k-furcating Greg trees for m survivors and n hypothetical nodes is hence given by (1) with this specification of F .
We make three additional remarks. The s i satisfy s i ≥ 1, since the specification of Greg trees disallows to have only unlabeled nodes (i.e., s i = 0) in a branch. In contrast, the p i may take on the value zero and therefore satisfy p i ≥ 0. Moreover, the p i actually satisfy 0 ≤ p i < s i because of the link restrictions on unlabeled nodes in Greg trees. While the constraint on the p i 's is automatically taken care of by the function g of Flight (1990), explicitly accounting for it can speed up computations. 7 Finally, when k = 1, we have to exclude the second term in (1) from consideration because, by definition, the root of a Greg tree cannot have degree one when it is unlabeled.
The number g_k(m) of root k-furcating Greg trees for m survivors and an arbitrary number of hypothetical manuscripts n is the sum over n of root k-furcating (m, n)-trees. In other words, g_k(m) = Σ_{n≥0} g_k(m, n). Table 2 shows the growth of g_k(m) up to m, k = 15.
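A direct, unoptimized implementation of formula (1) with this F, again only a sketch, recovers Maas' numbers for three survivors; it reuses the recursion for g(m, n) quoted in Section 2.

    from functools import lru_cache
    from math import factorial

    @lru_cache(maxsize=None)
    def g(m, n):
        # Flight's (1990) recursion for rooted Greg trees
        if n < 0:
            return 0
        if m <= 1:
            return 1 if (m == 1 and n == 0) else 0
        return ((m + n - 2) * g(m - 1, n - 1)
                + (2 * m + 2 * n - 2) * g(m - 1, n)
                + (n + 1) * g(m - 1, n + 1))

    def compositions(total, parts, minimum):
        """All ordered tuples of `parts` integers >= minimum summing to `total`."""
        if parts == 1:
            if total >= minimum:
                yield (total,)
            return
        for first in range(minimum, total - minimum * (parts - 1) + 1):
            for rest in compositions(total - first, parts - 1, minimum):
                yield (first,) + rest

    def multinomial(total, sizes):
        out = factorial(total)
        for s in sizes:
            out //= factorial(s)
        return out

    def g_k(m, n, k):
        """Root k-furcating rooted Greg trees with m labeled and n unlabeled nodes,
        i.e. formula (1) with F(s, p) = g(s_1, p_1) * ... * g(s_k, p_k)."""
        def term(labeled, unlabeled):
            total = 0
            for s in compositions(labeled, k, 1):        # s_i >= 1
                for p in compositions(unlabeled, k, 0):  # p_i >= 0
                    prod = multinomial(labeled, s)
                    for si, pi in zip(s, p):
                        prod *= g(si, pi)
                    total += prod
            return total
        result = m * term(m - 1, n)                      # root is a labeled node
        if k >= 2 and n >= 1:                            # an unlabeled root needs degree at least two
            result += term(m, n - 1)
        return result // factorial(k)

    def g_k_total(m, k):
        return sum(g_k(m, n, k) for n in range(m))

    counts = [g_k_total(3, k) for k in (1, 2, 3)]
    print(counts)                      # [9, 12, 1], summing to the 22 trees of Figure 2
    print(counts[1] / sum(counts))     # 0.5454..., the ratio for m = 3 discussed below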
We are now interested in the proportions of root bifurcating Greg trees among all Greg trees since this was alluded to in Bédier (1928). That is, we investigate the ratio
R_2(m) = g_2(m) / Σ_{k≥1} g_k(m).
7 In order to more efficiently compute the numbers, we also used further simplified formulas for specific k where possible. Root unifurcating Greg trees (here g_1) are especially easily computed. The root can only be labeled, since an unlabeled node as root must have degree at least two. Then, the number of possible root unifurcating Greg trees corresponds to m · g(m − 1). Root-(m − 1)-furcating rooted Greg trees for all m ≠ 2 coincide with the pentagonal numbers (sequence A000326 in the OEIS), compare Flight (1990). Code for computation is available from https://github.com/ArminHoenen/KFurcatingRootedGregTrees.
The sequence of numbers for k = 2
is now integer sequence A286432 in the OEIS.
At m = 2, the proportion is one third, at m = 3, Maas' famous example, we witness a proportion of R 2 = 0.54545. For m = 10, R 2 is already 0.59958 with the increase slowing down. For m = 20, we have R 2 = 0.60351 and at m = 100 survivors the proportion is R 2 = 0.60599. Growth is further slowing down and at m = 200 the proportion is R 2 = 0.60626. While we are not able to prove it, we think it is a very safe conjecture that R 2 (m) converges to below 0.607, as m tends toward infinity. Figure 3 plots the proportions of trees with root degree k = 1, k = 2 and k > 2, as m becomes larger. Figure 4 plots the root degree distribution for fixed m.
Root bifurcations thus outweigh all other root degree patterns by far. Maas' argument was therefore generally true as regards a large expectability of root bifurcations in reconstructible stemmata. Nevertheless, the observed proportions are considerably lower than Bédier's ones. However, a better fit occurs when we exclude all trees with root degree one from consideration. A root degree of one requires the root to be labeled and thus surviving, a case which is empirically probably quite rare, although not impossible. In Bédier's collection presumably, there simply had not been any root unifurcating stemma with a surviving root and he does not comprehensively discuss this general possibility. In Castellani's (1957) and Haugen's (2010) collections there have been no counts of root unifurcations. At m = 200, the fraction of unifurcating trees is about 21.467%, which means that the fraction of trees with root degree two is R̃_2(m) = 0.60626 / (1 − 0.21467) = 0.7719 at m = 200, when trees with root degree one are discarded. Comparing this number to those in Table 1, we observe that the empirically reported numbers for actual collections of stemmata are just slightly above this reference point. This would indicate that there seems to be a bias for root bifurcations, but that this bias is rather low. While Bédier had looked at R_2(m) or R̃_2(m) (coinciding in his collection), Maas explicitly looked at R_{k>2}(m) = Σ_{k>2} g_k(m) / Σ_{k≥1} g_k(m) for m = 3, and based his counter argument to Bédier's conclusions on that. This has been criticized variously because R_2(3) corresponds to 12 in 22, the complement of which is not Maas' 1 in 22 but 10 in 22, a ratio probably too small to base a counter argument on it. Neither Bédier nor Maas discuss root unifurcating cases extensively, but they could make a crucial difference in the ratios of interest since, including root unifurcating trees, non-root-bifurcating would no longer be equivalent to root multifurcating in meaning. Thus, Maas' shift of focus from root bifurcating to root multifurcating introduces ambiguity. Responding to such ambiguity, we demonstrated a mathematically sound way of looking both at proportions of root degree patterns with (R_2(m)) and without root unifurcations (R̃_2(m)).
Hering (1967), probably aware that root degrees of k = 1 appear to be somewhat unrealistic in actually observed stemmas, stated that instead of following Maas' focus, one should rather look at
R_HE(m) = Σ_{k>2} g_k(m) / g_2(m)
which Hering (1967) investigated up to m = 6 and for which he speculated that it would probably never surpass 0.33 or lie even lower. Looking at the plot of the proportions, see Figure 3, we can see that Hering was right; the asymptote is, however, rather 0.3. The extraordinary role of root unifurcations is immediately visible, since they are the only k witnessing a decline. This naturally follows from their restrictions: for instance, their root can only be labeled, meaning that only the first term in (1) will be relevant, while for all other root degree patterns both terms add up. In order to gain a deeper insight, we are now looking at another type of tree which plays an important role in stemmatology.
The Second Type of Manuscript Tree
While Maas had looked at the possible trees a philologist can reconstruct, other studies looked at true historical trees and their proportionalities. The underlying process reflected in stemmatological trees is the generation of manuscripts and their copying. There is (in many cases) one original, which we can understand as the root node of a rooted tree, which gets copied a certain number of times (children in the first generation). Each manuscript (including the root) can again be copied a certain number of times (always including 0 times), and so forth. We assume each node to represent a unique text, symbolized through a distinct label. In this way, the copy history can be understood and displayed as a rooted labeled tree. Since copying is a process from a vorlage 8 to a copy, the edges can be understood as directed.
Such a tree depicts the complete copy history of a text, and not, as a stemma does, only the reconstructible portion of it. It ignores loss of manuscripts (it does not assume or know any unlabeled node) and extends to the entire copying history of a text. In order to avoid terminological confusion, the class of trees depicting this complete copy history of a tradition has been called an arbre réel in philology, a term coined by Fourquet (1946); for convenience it is referred to as arbre in the rest of the paper. 9 Arbres were usually used as hypothetical units of argumentation for outlining general scenarios of copying and proliferation in philological discourse, see for instance Castellani (1957). Recently, however, they have gained renewed relevance through artificial traditions, that is, complete copied sets with known ground truth (Spencer et al., 2004; Baret et al., 2006; Roos and Heikkilä, 2009; Hoenen, 2015), where arbres are used for evaluation, comparing them to computationally reconstructed stemmata.
In the following, we look at arbres themselves and provide an answer to the question of how prevalent root bifurcation is in arbres. This may be useful for future research on the general effects of loss-induced tree transformations (turning an arbre into a stemma), as has been done for a restricted set of topologies by Trovato and Guidi (2004). Greg (1927) had already hypothesized that the deformations arbres undergo through historical manuscript loss may be a reason for expectable root bifurcations in stemmata. 10 We note that the following is a special case of our already derived results. In other words, we now evaluate g_k(m, 0), in our above notation. However, this special case admits simpler closed-form formulas as well as a derivation of the asymptotic degree distribution.
Rooted Labeled Trees
By Cayley's formula (Cayley, 1889), the number T'_m of labeled trees on m nodes is given by m^{m−2}. The number T_m of rooted labeled trees is then given by m^{m−1}, since each of the m nodes can be the root. Now, let us assume that the root has degree k = 1, . . . , m − 1. How many such trees are there, T_{m,k}?
To answer this, we invoke our meta formula, Formula (1), with the following specification of F(s, p):
F(s, p) = g(s_1, 0) · · · g(s_k, 0), since p = (0, . . . , 0), as we have no unlabeled nodes in this case. We have g(s, 0) = T_s, since g(s, 0) retrieves the number of rooted labeled trees with s nodes.
10 The kind of stemma we are talking about here is not a reconstructed stemma for any number of surviving manuscripts but rather the one single "true" stemma or stemma reale, as termed by Timpanaro (2005).
Thus, combining this insight with the formula of Cayley, we find that there are exactly
\frac{m}{k!} \sum_{s \in C(m-1,k)} \binom{m-1}{s} \, s_1^{s_1-1} \cdots s_k^{s_k-1} \quad (2)
rooted labeled trees on m nodes with root degree k, where we let C(m − 1, k) stand for C((m − 1, 0), k). An alternative, simpler formula for T_{m,k} is given by:
T_{m,k} = m \cdot \binom{m-2}{k-1} \cdot (m-1)^{m-1-k}. \quad (3)
For k = 1 this formula is not difficult to show. For k = 2 it has the following combinatorial interpretation. A rooted labeled tree has a root, for which we may choose any of the m nodes. Then there are (m − 1) vertices left. There are (m − 1)^{m−3} possible labeled trees on them. Since the (m − 1) vertices form a tree, there are (m − 2) edges connecting them. We may take any of these edges and replace it by connecting both of its endpoints to the root. This yields all the rooted labeled trees in which the root node has degree 2. For k > 2 a similar, but more involved argument applies (Moon, 1970, Theorem 3.2). Next, we ask for the probability P_m[k] that a randomly chosen rooted labeled tree from T_m has root degree k = 1, 2, . . .. We find
P_m[k] = \frac{T_{m,k}}{T_m} = \frac{\binom{m-2}{k-1}}{(m-1)^{k-1}} \cdot \left(\frac{m-1}{m}\right)^{m-2}. \quad (4)
The second factor in this product equals (1 − 1/m)^{m−2} and thus converges to exp(−1) as m → ∞. For the first factor A = \binom{m-2}{k-1} / (m-1)^{k-1}, we find
• for k = 1: A = 1 → 1,
• for k = 2: A = (m − 2)/(m − 1) → 1,
• for k = 3: A = ((m − 2)(m − 3)/2) · 1/((m − 1)(m − 1)) → 1/2
as m → ∞. In general, we have for A:
A = \frac{(m-2)(m-3)\cdots(m-k)}{(m-1)(m-1)\cdots(m-1)} \cdot \frac{1}{(k-1)!}
When k is fixed and m → ∞, this converges to 1/(k − 1)!. Hence, the asymptotic distribution P[k] of P_m[k] is
P[k] = \frac{\exp(-1)}{(k-1)!}
which is a Poisson distribution with parameter λ = 1, denoted as Poisson(λ). Figure 5 compares the asymptotic Poisson distribution P[k] to the actual finite distributions P_m[k]. We see that convergence is rapid. For m = 40, P_m[k] is visually already extremely close to Poisson(λ = 1). From P[k], we infer that root bifurcations are asymptotically twice as likely as trifurcations but exactly as likely as unifurcations, and have a probability of roughly 0.37. Moreover, the larger k gets, the smaller the probability of root k-furcating trees, and this probability is rapidly decaying in k. As a side note, we emphasize that the asymptotic probability for bifurcations has a particularly beautiful mathematical form, namely, the inverse of Leonhard Euler's constant e.
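As a quick numerical cross-check (our own sketch, not part of the original derivation; the function names are ours), formula (3) and the convergence of (4) towards Poisson(λ = 1) can be verified in a few lines of Python:

    from math import comb, exp, factorial

    def T(m, k):
        # formula (3): rooted labeled trees on m nodes with root degree k
        return m * comb(m - 2, k - 1) * (m - 1) ** (m - 1 - k)

    def P(m, k):
        # P_m[k] = T_{m,k} / T_m with T_m = m^(m-1), i.e., formula (4)
        return T(m, k) / m ** (m - 1)

    for m in (5, 10, 20, 40):
        # Cayley: the counts over all root degrees must sum to m^(m-1)
        assert sum(T(m, k) for k in range(1, m)) == m ** (m - 1)
        print(m, [round(P(m, k), 4) for k in range(1, 5)],
                 [round(exp(-1) / factorial(k - 1), 4) for k in range(1, 5)])

For m = 40 the finite probabilities already lie very close to the Poisson limit, in line with Figure 5.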
These mathematical derivations, if they are based on a plausible description of reality, suggest that in history many original manuscripts may have been copied only once, the same number twice, half as many three times, a third of that number four times, a fourth of that number five times, and so on. That is, if a random process that, for fixed m, selects each arbre on m nodes with equal likelihood is indeed a good model of true copy history. Any more sophisticated model can then build on this.
If root bifurcations are already very numerous, an immediately related question is what consequences this could have for a stemma when thinking about the transformations an arbre undergoes through historical loss. To this end, Weitzman (1982; 1987) has shown, and Trovato and Guidi (2004) come to a similar conclusion, that historically realistic scenarios of loss would imply a large quantity of bifurcations and root bifurcations in stemmata based on transformed arbres. Those exceed 1/e, and thus a possible effect of historical loss is to increase the percentage of root bifurcations, in which case 1/e would rather operate as a lower bound.
Conclusion
We have counted root k-furcating rooted labeled trees and root k-furcating rooted Greg trees. For the former, the asymptotic root degree distribution has been derived mathematically. For the latter, we have provided exact formulas that allow us to approximate the asymptotic root degree distribution. From this, we (very strongly) conjecture that root bifurcating Greg trees have an asymptotic probability above (and close to) 0.606.
In both cases, relating to a model of representation of arbres (true and complete historical manuscript genealogies) and stemmata (reconstructed genealogies from surviving nodes), the proportion of root bifurcating trees for historically relevant tradition sizes is the largest with respect to the other root degrees. Therefore, while previous studies have shown that root bifurcations are to be expected for true stemmata, we extend this finding to reconstructible stemmata and arbres, so that this statement now covers the three philologically relevant general types of trees discussed until today. Concerning stemmata, we have argued that the proportions of root bifurcating stemmata observed in real collections of genealogies are close to what is mathematically predicted, with a seemingly small bias for root bifurcations.
In the philological debate, where numerical arguments have been pursued since the very beginning, the formulas presented here contribute to clarifying the basic combinatorial nature of the entities involved in the modeling of manuscript evolution. We believe that in an ever more computational stemmatological endeavour, cultivating the mathematical foundations can only have positive effects.
While our findings with respect to root degrees of rooted labeled trees are certainly far from novel to the mathematics community, our formulas for Greg trees, which generalize rooted labeled trees, are, to the best of our knowledge, original.
Figure 1: First modern stemma by Schlyter, 1827, from O'Hara (1996).
Table 2: Numbers of root k-furcations g_k(m) for all possible k for rooted Greg trees with m survivors, up to m = 15. Note that the first numbers until m = 6 occur in Hering (1967). Exact numbers are provided until m = 10 and for (m−1)-trees; otherwise numbers are given in scientific notation. (From the accompanying text: for a root (m−1)-furcation there are only three principal architectures of rooted Greg trees, the individual enumeration formulas of which sum to the pentagonal numbers m + m(m−1) + C(m,2) = (3m^2 − m)/2; for a root m-furcation, there is always only one Greg tree.)
Figure 3: Proportions of root unifurcating and root bifurcating rooted Greg trees among all possible rooted Greg trees for a fixed m, as well as R_{k>2}(m) and R_HE(m). Note that the first three proportions add to 1.
Figure 4: Root degree distribution for trees counted by g_k(m) for fixed m = 3, 5, 15, 25.
Figure 5: Asymptotic distribution Poisson(λ = 1) and finite distributions P_m[k] for m = 5, 10, 20, 40.
More precisely, in most cases, a root of such a tree represents a hypothetical intermediary: the latest common ancestor of all survivors. It corresponds to the oldest objectively reconstructible text and is called archetype.
According to Josuat-Vergès (2015), a similar problem in phylogeny has been described and tackled by Felsenstein (1978), as recognized by Knuth (2005). 4 Flight refers to these as g*, but for brevity, and since we do not deal with unrooted Greg trees, we denote them simply as g. 5 The number sequence g(m) is listed as integer sequence A005264 in the On-Line Encyclopedia of Integer Sequences (OEIS), published electronically at https://oeis.org.
Prof. Dr. Wolfgang Engel, a mathematician from Rostock University.
Vorlage is a term loaned from German and used in philology for the original of a copy, not of a tradition. 9 Although in French terminology the same term is used for so-called R-trees, there is no conceptual overlap whatsoever.
P. Baret, C. Macé, and P. Robinson (eds.). 2006. Testing methods on an artificially created textual tradition. In Linguistica Computationale XXIV-XXV, Instituti Editoriali e Poligrafici Internationali, Pisa-Roma, volume XXIV-XXV, pages 255-281.
J. Bédier. 1928. La tradition manuscrite du 'Lai de l'Ombre': Réflexions sur l'Art d'Éditer les Anciens Textes. Romania 394:161-196, 321-356. (Rpt. Paris: Champion, 1970).
H. D. Cameron. 1987. The upside-down cladogram: problems in manuscript affiliation. In Biological Metaphor and Cladistic Classification: an Interdisciplinary Approach, University of Pennsylvania, pages 227-242.
A. E. Castellani. 1957. Bédier avait-il raison?: La méthode de Lachmann dans les éditions de textes du moyen age: leçon inaugurale donnée à l'Université de Fribourg le 2 juin 1954. Number 20 in Discours universitaires. Éditions universitaires.
A. Cayley. 1889. A theorem on trees. Quarterly Journal of Mathematics 23:376-378.
M. A. Covington. 2004. The number of distinct alignments of two strings. Journal of Quantitative Linguistics 11(3):173-182. https://doi.org/10.1080/0929617042000314921.
S. Eger. 2015. On the number of many-to-many alignments of multiple sequences. Journal of Automata, Languages and Combinatorics 20(1):53-65.
S. Eger. 2017. The combinatorics of weighted vector compositions. ArXiv preprint, https://arxiv.org/abs/1704.04964.
J. Felsenstein. 1978. The number of evolutionary trees. Systematic Zoology 27(1):27-33.
J. Felsenstein. 2004. Inferring phylogenies. Sinauer Associates, Sunderland.
C. Flight. 1990. How many stemmata? Manuscripta 34(2):122-128.
J. Fourquet. 1946. Le paradoxe de Bédier. Mélanges 1945(II):1-46.
W. W. Greg. 1927. The calculus of variants: an essay on textual criticism. Clarendon Press.
W. W. Greg. 1931. Recent theories of textual criticism. Modern Philology 28(4):401-404.
J. Grier. 1989. Lachmann, Bédier and the bipartite stemma: towards a responsible application of the common-error method. Revue d'histoire des textes 18(1988):263-278.
J. R. Griggs, P. Hanlon, A. M. Odlyzko, and M. S. Waterman. 1990. On the number of alignments of k sequences. Graph. Comb. 6(2):133-146. https://doi.org/10.1007/BF01787724.
O. E. Haugen. 2002. The Spirit of Lachmann, the Spirit of Bédier: Old Norse Textual Editing in the Electronic Age. In Annual Meeting of The Viking Society, University College London, volume 8.
O. E. Haugen. 2010. Is stemmatology inherently dichotomous? On the silva portentosa of Old Norse stemmata. Studia Stemmatologica.
O. E. Haugen. 2015. The silva portentosa of stemmatology: Bifurcation in the recension of Old Norse manuscripts. Digital Scholarship in the Humanities 30(2).
W. Hering. 1967. Zweispaltige Stemmata. Philologus - Zeitschrift für antike Literatur und ihre Rezeption 111(1-2):170-185.
A. Hoenen. 2015. Das artifizielle Manuskriptkorpus TASCFE. In DHd 2015 - Von Daten zu Erkenntnissen - Book of abstracts, DHd. http://gams.uni-graz.at/o:dhd2015.abstracts-gesamt.
A. Hoenen. 2016. Silva Portentosissima - Computer-Assisted Reflections on Bifurcativity in Stemmas. In Digital Humanities 2016: Conference Abstracts, Jagiellonian University & Pedagogical University, pages 557-560. http://dh2016.adho.org/abstracts/311.
D. H. Huson. 1998. SplitsTree: analyzing and visualizing evolutionary data. Bioinformatics 14(1):68-73.
M. Josuat-Vergès. 2015. Derivatives of the tree function. The Ramanujan Journal 38(1):1-15.
A. Kleinlogel. 1968. Das Stemmaproblem. Philologus - Zeitschrift für antike Literatur und ihre Rezeption 112(1-2):63-82.
D. E. Knuth. 2005. The art of computer programming, volume 4: Generating all combinations and partitions, fascicle 3.
P. Maas. 1937. Leitfehler und Stemmatische Typen. Byzantinische Zeitschrift 37(2):289-294.
P. Maas. 1958. Textual Criticism. Clarendon Press.
J. W. Moon. 1970. Counting labelled trees. Canadian Mathematical Congress.
R. J. O'Hara. 1996. Trees of history in systematics and philology. Memorie della Società Italiana di Scienze Naturali e del Museo Civico di Storia Naturale di Milano 27(1):81-88.
G. Pasquali and D. Pieraccioni. 1952. Storia della tradizione e critica del testo. Le Monnier.
N. I. Platnick and H. D. Cameron. 1977. Cladistic methods in textual, linguistic, and phylogenetic analysis. Systematic Zoology 26(4):380-385.
T. Roos and T. Heikkilä. 2009. Evaluating methods for computer-assisted stemmatology using artificial benchmark data sets. Literary and Linguistic Computing 24:417-433.
M. Spencer, E. A. Davidson, A. C. Barbrook, and C. J. Howe. 2004. Phylogenetics of artificial manuscripts. Journal of Theoretical Biology 227:503-511.
D. L. Swofford. 1990. PAUP: Phylogenetic Analysis Using Parsimony, Version 3.0, May 1990. Illinois Natural History Survey.
S. Timpanaro. 2005. The Genesis of Lachmann's Method. University of Chicago, Chicago.
P. Trovato and V. Guidi. 2004. Sugli stemmi bipartiti - decimazione, asimmetria e calcolo delle probabilità. Filologia Italiana 1:9-48.
M. P. Weitzman. 1982. Computer simulation of the development of manuscript traditions. ALLC Bulletin. Association for Library and Linguistic Computing Bangor 10(2):55-59.
M. P. Weitzman. 1987. The evolution of manuscript traditions. Journal of the Royal Statistical Society, Series A (General), pages 287-308.
M. L. West. 1973. Textual Criticism and Editorial Technique: Applicable to Greek and Latin texts. Teubner, Stuttgart.
F. Whitehead and C. E. Pickford. 1951. The two-branch stemma. Bulletin Bibliographique de la Société Internationale Arthurienne 3:83-90. |
6,214,377 | Exploiting Structured Data, Negation Detection and SNOMED CT Terms in a Random Indexing Approach to Clinical Coding | The problem of providing effective computer support for clinical coding has been the target of many research efforts. A recently introduced approach, based on statistical data on co-occurrences of words in clinical notes and assigned diagnosis codes, is here developed further and improved upon. The ability of the word space model to detect and appropriately handle the function of negations is demonstrated to be important in accurately correlating words with diagnosis codes, although the data on which the model is trained needs to be sufficiently large. Moreover, weighting can be performed in various ways, for instance by giving additional weight to 'clinically significant' words or by filtering code candidates based on structured patient records data. The results demonstrate the usefulness of both weighting techniques, particularly the latter, yielding 27% exact matches for a general model (across clinic types); 43% and 82% for two domain-specific models (ear-nosethroat and rheumatology clinics). | [
11265322,
5293141
] | Exploiting Structured Data, Negation Detection and SNOMED CT Terms in a Random Indexing Approach to Clinical Coding
15 September 2011
Aron Henriksson
DSV
Stockholm University
Martin Hassel
DSV
Stockholm University
Exploiting Structured Data, Negation Detection and SNOMED CT Terms in a Random Indexing Approach to Clinical Coding
Proceedings of the Workshop on Biomedical Natural Language Processing, Hissar, Bulgaria, 15 September 2011
The problem of providing effective computer support for clinical coding has been the target of many research efforts. A recently introduced approach, based on statistical data on co-occurrences of words in clinical notes and assigned diagnosis codes, is here developed further and improved upon. The ability of the word space model to detect and appropriately handle the function of negations is demonstrated to be important in accurately correlating words with diagnosis codes, although the data on which the model is trained needs to be sufficiently large. Moreover, weighting can be performed in various ways, for instance by giving additional weight to 'clinically significant' words or by filtering code candidates based on structured patient records data. The results demonstrate the usefulness of both weighting techniques, particularly the latter, yielding 27% exact matches for a general model (across clinic types); 43% and 82% for two domain-specific models (ear-nose-throat and rheumatology clinics).
Introduction
Clinicians spend much valuable time and effort in front of a computer, assigning diagnosis codes during or after a patient encounter. Tools that facilitate this task would allow costs to be reduced or clinicians to spend more of their time tending to patients, effectively improving the quality of healthcare. The idea, then, is that clinicians should be able simply to verify automatically assigned codes or to select appropriate codes from a list of recommendations.
Previous Work
There have been numerous attempts to provide clinical coding support, even if such tools are yet to be widely used in clinical practice (Stanfill et al., 2010). The most common approach has been to view it essentially as a text classification problem. The assumption is that there is some overlap between clinical notes and the content of assigned diagnosis codes, making it possible to predict possible diagnosis codes for 'uncoded' documents. For instance, in the 2007 Computational Challenge (Pestian et al., 2007), free-text radiology reports were to be assigned one or two labels from a set of 45 ICD-9-CM codes. Most of the best-performing systems were rule-based, achieving micro-averaged F1-scores of up to 89.1%.
Some have tried to enhance their NLP-based systems by exploiting the structured data available in patient records. Pakhomov et al. (2006) use gender information, as well as frequency data, to filter out improbable classifications. The motivation is that gender has a high predictive value, particularly as some categories make explicit gender distinctions.
Medical terms also have a high predictive value when it comes to classification of clinical notes (see e.g. Jarman and Berndt, 2010). In an attempt to assign ICD-9 codes to discharge summaries, the results improved when extra weight was given to words, phrases and structures that provided the most diagnostic evidence (Larkey and Croft, 1995).
Given the inherent practice of ruling out possible diseases, symptoms and findings, it seems important to handle negations in clinical text. In one study, it was shown that around 9% of automatically detected SNOMED CT findings and disorders were negated. In the attempt of Larkey and Croft (1995), negated medical terms are annotated and handled in various ways; however, none yielded improved results.
Random Indexing of Patient Records
In more recent studies, the word space model, in its Random Indexing mold (Sahlgren, 2001; Sahlgren, 2006), has been investigated as a possible alternative solution to clinical coding support. Statistical data on co-occurrences of words and ICD-10 1 codes is used to build predictive models that can generate recommendations for uncoded documents. In a list of ten recommended codes, general models (trained and evaluated on all clinic types) achieve up to 23% exact matches and 60% partial matches, while domain-specific models (trained and evaluated on a particular type of clinic) achieve up to 59% exact matches and 93% partial matches.
A potential limitation of the above models is that they fail to capture the function of negations, which means that negated terms in the clinical notes will be positively correlated with the assigned diagnosis codes. In the context of information retrieval, Widdows (2003) describes a way to remove unwanted meanings from queries in vector models, using a vector negation operator that not only removes unwanted strings but also synonyms and neighbors of the negated terms. To our knowledge, however, the ability of the word space model to handle negations has not been studied extensively.
Aim
The aim of this paper, then, is to develop the Random Indexing approach to clinical coding support by exploring three potential improvements:
1. Giving extra weight to words used in a list of SNOMED CT terms.
2. Exploiting structured data in patient records to calculate the likelihood of code candidates.
3. Incorporating the use of negation detection.
Method
Random Indexing is applied to patient records to calculate co-occurrences of tokens (words and ICD-10 codes) at the document level. The resulting models contain information about the 'semantic similarity' of individual words and diagnosis codes 2, which is subsequently used to classify uncoded documents.
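To make the setup above concrete, the following is a minimal Random Indexing sketch (our own illustration, not code from the study; the dimensionality, sparsity and seed are arbitrary choices and need not match the settings actually used):

    import numpy as np

    DIM, NONZERO = 1000, 8            # assumed, illustrative parameters
    rng = np.random.default_rng(0)

    def index_vector():
        # sparse ternary random index vector assigned to a context (here: a document)
        v = np.zeros(DIM)
        pos = rng.choice(DIM, size=NONZERO, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
        return v

    def train(documents):
        # documents: lists of tokens, in which ICD-10 codes appear as ordinary tokens
        context = {}
        for doc in documents:
            dvec = index_vector()                      # one index vector per document
            for token in set(doc):
                context.setdefault(token, np.zeros(DIM))
                context[token] += dvec                 # token co-occurred with this document
        return context

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

The 'semantic similarity' of a word and a diagnosis code is then simply the cosine between their accumulated context vectors.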
Stockholm EPR Corpus
The models are trained and evaluated on a Swedish corpus of approximately 270,000 clinically coded patient records, comprising 5.5 million notes from 838 clinical units. This is a subset of the Stockholm EPR corpus (Dalianis et al., 2009). A document contains all free-text entries concerning a single patient made on consecutive days at a single clinical unit. The documents in the partitions of the data sets on which the models are trained (90%) also include one or more associated ICD-10 codes (on average 1.7 and at most 47). In the testing partitions (10%), the associated codes are retained separately for evaluation. In addition to the complete data set, two subsets are created, in which there are documents exclusively from a particular type of clinic: one for ear-nose-throat clinics and one for rheumatology clinics. Variants of the three data sets are created, in which negated clinical entities are automatically annotated using the Swedish version of NegEx (Skeppstedt, 2011). The clinical entities are detected through exact string matching against a list of 112,847 SNOMED CT terms belonging to the semantic categories 'finding' and 'disorder'. It is important to handle ambiguous terms in order to reduce the number of false positives; therefore, the list does not include findings which are equivalent to a common non-clinical unigram or bigram. A negated term is marked in such a way that it will be treated as a single word, although with its proper negated denotation. Multi-word terms are concatenated into unigrams.
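A rough sketch of the term marking described above (our own illustration; the actual NegEx adaptation and the SNOMED CT list are not reproduced here, and the token format is an assumption):

    def mark_terms(tokens, snomed_terms, negated_starts):
        """tokens: lemmatized tokens of a note; snomed_terms: set of tuples of
        lemmatized term tokens; negated_starts: indices flagged by the negation
        detector as starting a negated term occurrence."""
        out, i = [], 0
        max_len = max(map(len, snomed_terms), default=1)
        while i < len(tokens):
            hit = next((n for n in range(max_len, 0, -1)
                        if tuple(tokens[i:i + n]) in snomed_terms), None)
            if hit:
                term = "_".join(tokens[i:i + hit])        # multi-word term -> unigram
                out.append("NEG_" + term if i in negated_starts else term)
                i += hit
            else:
                out.append(tokens[i])
                i += 1
        return out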
The data is finally pre-processed: lemmatization is performed using the Granska Tagger (Knutsson et al., 2003), while punctuation, digits and stop words are removed.
Word Space Models
Random Indexing is performed on the training partitions of the described data sets, resulting in a total of six models (Table 1): two variants of the general model and two variants of each of the two domain-specific models 3.
Election of Diagnosis Codes
The models are then used to produce a ranked list of recommended diagnosis codes for each of the documents in the testing partitions of the corresponding data sets. This list is created by letting each of the words in a document 'vote' for a number of semantically similar codes, thus necessitating the subsequent merging of the individual lists. This ranking procedure can be carried out in a number of ways, some of which are explored in this paper. The starting point, however, is to use the semantic similarity of a word and a diagnosis code (as defined by the cosine similarity score) and the idf 4 value of the word. This is regarded as our baseline model, to which negation handling and additional weighting schemes are added.
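The election step can be read as the following sketch (our own interpretation of the description above; the per-word cut-off and the exact merging are simplified):

    import numpy as np
    from collections import defaultdict

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def elect_codes(doc_words, context, code_tokens, idf, top_per_word=10, top_final=10):
        # context: word/code -> vector from the Random Indexing model; idf: word -> idf value
        scores = defaultdict(float)
        for w in doc_words:
            if w not in context:
                continue
            votes = sorted(((cosine(context[w], context[c]), c) for c in code_tokens),
                           reverse=True)[:top_per_word]
            for sim, code in votes:
                scores[code] += sim * idf.get(w, 1.0)   # weight the vote by the word's idf
        return sorted(scores, key=scores.get, reverse=True)[:top_final]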
Weighting Techniques
For each of the models, we apply two distinct weighting techniques. First, we assume a technocratic approach to the election of diagnosis codes. We do so by giving added weight to words which are 'clinically significant'. That is here achieved by utilizing the same list of SNOMED CT findings and disorders that was used by the negation detection system. However, rather than trying to match the entire term-which would likely result in a fairly limited number of hits-we opted simply to give weight to the individual (non stop) words used in those terms. These words are first lemmatized, as the data on which the matching is performed has also been lemmatized. It will also allow hits independent of morphological variations. We also perform weighting of the correlated ICD-10 codes by exploiting statistics generated from the fixed fields of the patient records, namely gender, age and clinical unit. The idea is to use known information about a to-be-coded document in order to assign weights to code candidates according to plausibility, which in turn is based on past combinations of a particular code and each of the structured data entries. For instance, if the model generates a code that has very rarely been assigned to a patient of a particular sex or age group-and the document is from the record of such a patient-it seems sensible to give it less weight, effectively reducing the chances of that code being recommended. In order for an unseen combination not to be ruled out entirely, additive smoothing is performed. Gender and clinical unit can be used as defined, while age groups are created for each and every year up to the age of 10, after which ten-year intervals are used. This seems reasonable since age distinctions are more sensitive in younger years.
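The fixed-fields re-weighting might look roughly like this (our own sketch; the smoothing constant, the age grouping boundaries and the way the three fields are combined are assumptions, not specifications from the study):

    from collections import Counter

    def age_group(age):
        # one group per year up to the age of 10, then ten-year intervals
        return str(age) if age <= 10 else str((age // 10) * 10) + "s"

    def fixed_fields_weight(code, patient, counts, code_totals, n_values, alpha=1.0):
        """counts[field] is a Counter over (code, value) pairs seen in training;
        code_totals[code] is how often the code was assigned; n_values[field] is
        the number of distinct values of the field (for additive smoothing)."""
        weight = 1.0
        for field in ("gender", "age_group", "clinical_unit"):
            seen = counts[field][(code, patient[field])]
            weight *= (seen + alpha) / (code_totals[code] + alpha * n_values[field])
        return weight

A candidate's election score is then multiplied by this weight before the final ranking, so that codes rarely or never seen with the patient's gender, age group or clinical unit are demoted without being ruled out entirely.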
In order to make it possible for code candidates that are not present in any of the top-ten lists of the individual words to make it into the final top-ten list of a document, all codes associated with a word in the document are included in the final re-ranking phase. This way, codes that are more likely for a given patient are able to take the place of more improbable code candidates. For the general models, however, the initial word-based code lists are restricted to twenty, due to technical efficiency constraints.
Evaluation
The evaluation is carried out by comparing the model-generated recommendations with the clinically assigned codes in the data. This matching is done on all four possible levels of ICD-10 according to specificity (see Figure 1).
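One way to read the level-wise matching (our own simplification, intended only as an illustration: the four levels are approximated here by code prefixes of decreasing length, while the exact level definitions follow the ICD-10 structure shown in Figure 1):

    def match_level(predicted, gold):
        p, g = predicted.replace(".", ""), gold.replace(".", "")
        if p == g:
            return "E"                      # exact match
        for label, length in (("L3", 3), ("L2", 2), ("L1", 1)):
            if p[:length] == g[:length]:
                return label                # partial match at a less specific level
        return "no match"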
Results
The general data set, on which General Model and General NegEx Model are trained and evaluated, comprises approximately 274,000 documents and 12,396 unique labels. The ear-nose-throat data set, on which ENT Model and ENT NegEx Model are trained and evaluated, contains around 23,000 documents and 1,713 unique labels. The rheumatology data set, on which Rheuma Model and Rheuma NegEx Model are trained and evaluated, contains around 9,000 documents and 630 unique labels (Table 2). The proportion of the detected clinical entities that are negated is 13.98% in the complete, general data set and slightly higher in the ENT (14.32%) and rheumatology data sets (16.98%) (Table 3).
    data set        documents   codes
    General         ∼274 k      12,396
    ENT             ∼23 k       1,713
    Rheumatology    ∼9 k        630
General Models
The baseline for the general models finds 23% of the clinically assigned codes (exact matches), when the number of model-generated recommendations is confined to ten (Table 4). Meanwhile, matches on the less specific levels of ICD-10, i.e. partial matches, amount to 25%, 33% and 60% respectively (from specific to general).
The single application of one of the weighting techniques to the baseline model boosts performance somewhat, the fixed fields-based code filtering (26% exact matches) slightly more so than the technocratic word weighting (24% exact matches). The negation variant of the general model, General NegEx Model, performs somewhat better (up two percentage points, to 25% exact matches) than the baseline model. The technocratic approach applied to this model does not yield any observable added value. The fixed fields filtering does, however, result in a further improvement on the three most specific levels (27% exact matches).
A combination of the two weighting schemes does not appear to bring much benefit to either of the general models, compared to solely performing fixed fields filtering.
Ear-Nose-Throat Models
The baseline for the ENT models finds 33% of the clinically assigned codes (exact matches) and 34% (L3), 41% (L2) and 62% (L1) at the less specific levels (Table 5).
Technocratic word weighting yields a modest improvement over the baseline model: one percentage point on each of the levels. Filtering code candidates based on fixed fields statistics, however, leads to a remarkable boost in results, from 33% to 43% exact matches. ENT NegEx Model performs slightly better than the baseline model, although only by a single percentage point (34% exact matches). Performance drops when the technocratic approach is applied to this model. The fixed fields filtering, on the other hand, similarly improves results for the negation variant of the ENT model; however, there is no apparent additional benefit in this case of negation handling. In fact, it somewhat hampers the improvement yielded by this weighting technique.
As with the general models, a combination of the two weighting techniques does not affect the results much for either of the ENT models.
Rheumatology Models
The baseline for the rheumatology models finds 61% of the clinically assigned codes (exact matches) and 61% (L3), 68% (L2) and 92% (L1) at the less specific levels (Table 6).
Compared to the above models, the technocratic approach is here much more successful, resulting in 72% exact matches. Filtering the code candidates based on fixed fields statistics leads to a further improvement of ten percentage points for exact matches (82%). Rheuma NegEx Model achieves only a modest improvement on L2. Moreover, this model does not benefit at all from the technocratic approach; neither is the fixed fields filtering quite as successful in this model (67% exact matches).
A combination of the two weighting schemes adds only a little to the two variants of the rheumatology model. It is interesting to note that the negation variant performs the same as, or even much worse than, the one without any negation handling.
Discussion
The two weighting techniques and the incorporation of negation handling provide varying degrees of benefit, from small to important boosts in performance, depending to some extent on the model to which they are applied.
Table 4: General Models, with and without negation handling. Recall (top 10), measured as the presence of the clinically assigned codes in a list of ten model-generated recommendations. E = exact match, L3→L1 = matches on the other levels, from specific to general. The baseline is for the model without negation handling only.
Table 5: Ear-Nose-Throat Models, with and without negation handling. Recall (top 10), measured as the presence of the clinically assigned codes in a list of ten model-generated recommendations. E = exact match, L3→L1 = matches on the other levels, from specific to general. The baseline is for the model without negation handling only.
Table 6: Rheumatology Models, with and without negation handling. Recall (top 10), measured as the presence of the clinically assigned codes in a list of ten model-generated recommendations. E = exact match, L3→L1 = matches on the other levels, from specific to general. The baseline is for the model without negation handling only.
Technocratic Approach
The technocratic approach, whereby clinically significant words are given extra weight, does result in some improvement when applied to all models that do not incorporate negation handling. The effect this weighting technique has on Rheuma Model is, however, markedly different from when it is applied to the other two corresponding models. It could potentially be the result of a more precise, technical language used in rheumatology documentation, where certain words are highly predictive of the diagnosis. However, the results produced by this model need to be examined with some caution, due to the relatively small size of the data set on which the model is based and evaluated.
Since this approach appears to have a positive impact on all of the models where negation handling is not performed, assigning even more weight to clinical terminology may yield additional benefits. This would, of course, have to be tested empirically and may differ from domain to domain.
Structured Data Filtering
The technique whereby code candidates are given weight according to their likelihood of being accurately assigned to a particular patient record (based on historical co-occurrence statistics of diagnosis codes and, respectively, age, gender and clinical unit) is successful across the board. To a large extent, this is probably due to a set of ICD-10 codes being frequently assigned in any particular clinical unit. In effect, it can partly be seen as a weighting scheme according to code frequency. There are also codes, however, that make gender and age distinctions. It is likewise well known that some diagnoses are more prevalent in certain age groups, while others are exclusive to a particular gender.
It is interesting to note the remarkable improvement observed for the two domain-specific models. Perhaps the aforementioned factor of frequently recurring code assignments is even stronger in these particular types of clinics. By contrast, there are no obvious gender-specific diagnoses in either of the two domains; however, in the rheumatology data, there are in fact 23 codes that have frequently been assigned to men but never to women. In such cases it is especially beneficial to exploit the structured data in patient records. It could also be that the restriction to twenty code candidates for each of the individual words in the general models was not sufficiently large a number to allow more likely code candidates to make it into the final list of recommendations. That said, it seems somewhat unlikely that a code that is not closely associated with any of the words in a document should make it into the final list.
Even if the larger improvements observed for the domain-specific models may, again, in part be due to the smaller amounts of data compared with the general model, the results clearly indicate the general applicability and benefit of such a weighting scheme.
Negation Detection
The incorporation of automatic detection of negated clinical entities improves results for all models, although more so for the general model than the domain-specific models. This could possibly be ascribed to the problem of data sparsity. That is, in the smaller domain-specific models, there are fewer instances of each type of negated clinical entity (11.7 on average in ENT and 9.4 on average in rheumatology) than in the general model (31.6 on average). This is problematic since infrequent words, just as very frequent words, are commonly assumed to hold little or no information about semantics (Jurafsky and Martin, 2009). There simply is little statistical evidence for the rare words, which potentially makes the estimation of their similarity with other words uncertain. For instance, Karlgren and Sahlgren (2001) report that, in their TOEFL test experiments, they achieved the best results when they removed words that appeared in only one or two documents. While we cannot just remove infrequent codes, the precision of these suggestions are likely to be lower.
The prevalence of negated clinical entities (almost 14% in the entire data set) indicates the importance of treating them as such in an NLP-based approach to clinical coding. Due to the extremely low recall (0.13) of the simple method of detecting clinical entities through exact string matching, negation handling could potentially have a more marked impact on the models if more clinical entities were to be detected, as that would likely also entail more negated terms.
There are, of course, various ways in which one may choose to handle negations. An alternative could have been simply to ignore negated terms in the construction of the word space models, thereby not correlating negated terms with affirmed diagnosis codes. Even if doing so may make sense, the approach assumed here is arguably better since a negated clinical entity could have a positive correlation with a diagnosis code. That is, ruling out or disconfirming a particular diagnosis may be indicative of another diagnosis.
Combinations of Techniques
When the technocratic weighting technique is applied to the variants of the models which include annotations of negated clinical entities, there is no positive effect. In fact, results drop somewhat when applied to the two domain-specific models. A possible explanation could perhaps be that clinically significant words that are constituents of negated clinical entities are not detected in the technocratic approach. The reason for this is that the application of the Swedish NegEx system, which is done prior to the construction and evaluation of the models, marks the negated clinical entities in such a way that those words will no longer be recognized by the technocratic word detector. Such words may, of course, be of importance even if they are negated. This could be worked around in various ways; one would be simply to give weight to all negated clinical entities.
Fixed fields filtering applied to the NegEx models has an impact that is more or less comparable to the same technique applied to the models without negation handling. This weighting technique is thus not obviously impeded by the annotations of negated clinical entities, with the exception of the rheumatology models, where an improvement is observed, yet not as substantial as when applied to Rheuma Model.
A combination of the technocratic word weighting and the fixed fields code filtering does not appear to provide any added value over the sole application of the latter weighting technique. Likewise, the same combination applied to the NegEx version does not improve on the results of the fixed fields filtering.
In this study, fine-tuning of weights has not been performed, either internally or externally to each of the weighting techniques. It may, of course, be that, for instance, gender distinctions are more informative than age distinctions, or vice versa, and thus need to be weighted accordingly. By the same token, the more successful weighting schemes should probably take precedence over the less successful variants.
Classification Problem
It should be pointed out that the model-generated recommendations are restricted to a set of properly formatted ICD-10 codes. Given the conditions under which real, clinically generated data is produced, there is bound to be some noise, not least in the form of inaccurately assigned and ill-formatted diagnosis codes. In fact, only 67.9% of the codes in the general data set are in this sense 'valid' (86.5% in the ENT data set and 66.9% in the rheumatology data set). As a result, a large portion of the assigned codes in the testing partition cannot be recommended by the models, possibly having a substantial negative influence on the evaluation scores. For instance, in the ear-nose-throat data, the five most frequent diagnosis codes are not present in the restricted result set. Not all of these are actually 'invalid' codes but rather action codes etc. that were not included in the list of acceptable code recommendations. A fairer evaluation of the models would be either to include such codes in the restricted result set or to base the restricted result set entirely on the codes in the data. Furthermore, there is a large number of unseen codes in the testing partitions, which also cannot be recommended by the models (358 in the general data set, 79 in the ENT data set and 39 in the rheumatology data set). This, on the other hand, reflects the real-life conditions of a classification system and so should not be eschewed; however, it is interesting to highlight when evaluating the successfulness of the models and the method at large.
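A simple well-formedness check of the kind implied above could be sketched as follows (our own illustration; the precise validity criteria and the list of acceptable codes used in the study are not specified here, and the surface pattern is only an approximation of properly formatted ICD-10 codes):

    import re

    ICD10_SHAPE = re.compile(r"^[A-Z][0-9]{2}(\.[0-9]{1,2})?$")

    def looks_like_icd10(code):
        return bool(ICD10_SHAPE.match(code.strip()))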
Conclusion
The Random Indexing approach to clinical coding benefits from the incorporation of negation handling and various weighting schemes. While assigning additional weight to clinically significant words yields a fairly modest improvement, filtering code candidates based on structured patient records data leads to important boosts in performance for general and domain-specific models alike. Negation handling is also important, although the way in which it is here performed seems to require a large amount of training data for marked benefits. Even if combining a number of weighting techniques does not necessarily give rise to additional improvements, tuning of the weighting factors may help to do so.
Figure 1: The structure of ICD-10 allows division into four levels.
Table 1: The six models.

    w/o negations    w/ negations
    General Model    General NegEx Model
    ENT Model        ENT NegEx Model
    Rheuma Model     Rheuma NegEx Model
Table 2: Data set statistics.
Table 3: Negation Statistics. The number of detected clinical entities, the number of negated clinical entities and the percentage of the detected clinical entities that are negated (columns: Model, Clinical Entities, Negations, Negations/Clinical Entities).

Table 4 (data; the caption appears with Tables 5 and 6 in the Discussion section):

    Recall (top 10)               General Model           General NegEx Model
    Weighting                     E     L3    L2    L1    E     L3    L2    L1
    Baseline                      0.23  0.25  0.33  0.60  0.25  0.27  0.35  0.62
    Technocratic                  0.24  0.26  0.34  0.61  0.25  0.27  0.35  0.62
    Fixed Fields                  0.26  0.28  0.36  0.61  0.27  0.29  0.37  0.63
    Technocratic + Fixed Fields   0.26  0.28  0.36  0.62  0.27  0.29  0.37  0.63
The 10th revision of the International Classification of Diseases and Related Health Problems (World Health Organization, 2011).
According to the distributional hypothesis, words that appear in similar contexts tend to have similar properties. If two words repeatedly co-occur, we can assume that they in some way refer to similar concepts (Harris, 1954). Diagnosis codes are here treated as words.
ENT = Ear-Nose-Throat, Rheuma = Rheumatology. 4 Inverse document frequency, denoting a word's discriminatory value.
Acknowledgments
We would like to thank all members of our research group, IT for Health, for their support and input. We would especially like to express our gratitude to Maria Skeppstedt for her important contribution to the negation handling aspects of this work, particularly in adapting her Swedish NegEx system to our specific needs. Thanks also to the reviewers for their helpful comments.
Hercules Dalianis, Martin Hassel and Sumithra Velupillai. 2009. The Stockholm EPR Corpus: Characteristics and Some Initial Findings. In Proceedings of ISHIMR 2009, pp. 243-249.
Zellig S. Harris. 1954. Distributional structure. Word, 10, pp. 146-162.
Aron Henriksson, Martin Hassel and Maria Kvist. 2011. Diagnosis Code Assignment Support Using Random Indexing of Patient Records - A Qualitative Feasibility Study. In Proceedings of AIME, 13th Conference on Artificial Intelligence in Medicine, pp. 348-352.
Aron Henriksson and Martin Hassel. 2011. Election of Diagnosis Codes: Words as Responsible Citizens. In Proceedings of Louhi, 3rd International Workshop on Health Document Text Mining and Information Analysis.
Jay Jarman and Donald J. Berndt. 2010. Throw the Bath Water Out, Keep the Baby: Keeping Medically-Relevant Terms for Text Mining. In Proceedings of AMIA, pp. 336-340.
Daniel Jurafsky and James H. Martin. 2009. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Pearson Education International, NJ, USA, p. 806.
Jussi Karlgren and Magnus Sahlgren. 2001. From Words to Understanding. In Foundations of Real-World Intelligence, pp. 294-308.
Ola Knutsson, Johnny Bigert and Viggo Kann. 2003. A Robust Shallow Parser for Swedish. In Proceedings of Nodalida.
Leah S. Larkey and W. Bruce Croft. 1995. Automatic Assignment of ICD9 Codes to Discharge Summaries. PhD thesis, University of Massachusetts at Amherst, Amherst, MA, USA.
Serguei V. S. Pakhomov, James D. Buntrock and Christopher G. Chute. 2006. Automating the Assignment of Diagnosis Codes to Patient Encounters Using Example-based and Machine Learning Techniques. J Am Med Inform Assoc, 13, pp. 516-525.
John P. Pestian, Christopher Brew, Pawel Matykiewicz, DJ Hovermale, Neil Johnson, K. Bretonnel Cohen and Wlodzislaw Duch. 2007. A Shared Task Involving Multi-label Classification of Clinical Free Text. In Proceedings of BioNLP 2007: Biological, translational, and clinical language processing, pp. 97-104.
Magnus Sahlgren. 2001. Vector-Based Semantic Analysis: Representing Word Meanings Based on Random Labels. In Proceedings of the Semantic Knowledge Acquisition and Categorization Workshop at ESSLLI'01.
Magnus Sahlgren. 2006. The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces. PhD thesis, Stockholm University, Stockholm, Sweden.
Maria Skeppstedt. 2011. Negation detection in Swedish clinical text: An adaption of NegEx to Swedish. Journal of Biomedical Semantics 2, S3.
Maria Skeppstedt, Hercules Dalianis and Gunnar H. Nilsson. 2011. Retrieving disorders and findings: Results using SNOMED CT and NegEx adapted for Swedish. In Proceedings of Louhi, 3rd International Workshop on Health Document Text Mining and Information Analysis.
Mary H. Stanfill, Margaret Williams, Susan H. Fenton, Robert A. Jenders and William R. Hersh. 2010. A systematic literature review of automated clinical coding and classification systems. J Am Med Inform Assoc, 17, pp. 646-651.
Dominic Widdows. 2003. Orthogonal Negation in Vector Spaces for Modelling Word-Meanings and Document Retrieval. In Proceedings of ACL, pp. 136-143.
World Health Organization. 2011. International Classification of Diseases (ICD). In World Health Organization. Retrieved June 19, 2011, from http://www.who.int/classifications/icd/en/. |
11,160,504 | Statistical Machine Translation with Scarce Resources Using Morpho-syntactic Information | RWTH Aachen RWTH AachenIn statistical machine translation, correspondences between the words in the source and the target language are learned from parallel corpora, and often little or no linguistic knowledge is used to structure the underlying models. In particular, existing statistical systems for machine translation often treat different inflected forms of the same lemma as if they were independent of one another. The bilingual training data can be better exploited by explicitly taking into account the interdependencies of related inflected forms. We propose the construction of hierarchical lexicon models on the basis of equivalence classes of words. In addition, we introduce sentence-level restructuring transformations which aim at the assimilation of word order in related sentences. We have systematically investigated the amount of bilingual training data required to maintain an acceptable quality of machine translation. The combination of the suggested methods for improving translation quality in frameworks with scarce resources has been successfully tested: We were able to reduce the amount of bilingual training data to less than 10% of the original corpus, while losing only 1.6% in translation quality. The improvement of the translation results is demonstrated on two German-English corpora taken from the Verbmobil task and the Nespole! task. * Computational Linguistics Volume 30, Number 2in most statistical machine translation systems. Apart from the improved coverage, the proposed lexicon models enable the disambiguation of ambiguous word forms by means of annotation with morpho-syntactic tags.OverviewThe article is organized as follows. After briefly reviewing the basic concepts of the statistical approach to machine translation, we discuss the state of the art and related work as regards the incorporation of morphological and syntactic information into systems for natural language processing. Section 2 describes the information provided by morpho-syntactic analysis and introduces a suitable representation of the analyzed corpus. Section 3 suggests solutions for two specific aspects of structural difference, namely, question inversion and separated verb prefixes. Section 4 is dedicated to hierarchical lexicon models. These models are able to infer translations of word forms from the translations of other word forms of the same lemma. Furthermore, they use morpho-syntactic information to resolve categorial ambiguity. In Section 5, we describe how disambiguation between different readings and their corresponding translations can be performed when no context is available, as is typically the case for conventional electronic dictionaries. Section 6 provides an overview of our procedure for training model parameters for statistical machine translation with scarce resources. Experimental results are reported in Section 7. Section 8 concludes the presentation with a discussion of the achievements of this work. | [
13442531,
14386564,
21393511,
16177159,
9717543,
651085,
13158483,
5216540
] | Statistical Machine Translation with Scarce Resources Using Morpho-syntactic Information
Sonja Nießen
Hermann Ney
Statistical Machine Translation with Scarce Resources Using Morpho-syntactic Information
RWTH Aachen
RWTH Aachen
Introduction
The statistical approach to machine translation has proved successful in various comparative evaluations since its revival by the work of the IBM research group more than a decade ago. The IBM group dispensed with linguistic analysis, at least in its earliest publications. Although the IBM group finally made use of morphological and syntactic information to enhance translation quality (Brown et al. 1992;Berger et al. 1996), most of today's statistical machine translation systems still consider only surface forms and use no linguistic knowledge about the structure of the languages involved.
In many applications only small amounts of bilingual training data are available for the desired domain and language pair, and it is highly desirable to avoid at least parts of the costly data collection process. The main objective of the work reported in this article is to introduce morphological knowledge in order to reduce the amount of bilingual data necessary to sufficiently cover the vocabulary expected in testing. This is achieved by explicitly taking into account the interdependencies of related inflected forms. In this work, a hierarchy of equivalence classes at different levels of abstraction is proposed. Features from those hierarchy levels are combined to form hierarchical lexicon models, which can replace the standard probabilistic lexicon used in most statistical machine translation systems. Apart from the improved coverage, the proposed lexicon models enable the disambiguation of ambiguous word forms by means of annotation with morpho-syntactic tags.
Overview
The article is organized as follows. After briefly reviewing the basic concepts of the statistical approach to machine translation, we discuss the state of the art and related work as regards the incorporation of morphological and syntactic information into systems for natural language processing. Section 2 describes the information provided by morpho-syntactic analysis and introduces a suitable representation of the analyzed corpus. Section 3 suggests solutions for two specific aspects of structural difference, namely, question inversion and separated verb prefixes. Section 4 is dedicated to hierarchical lexicon models. These models are able to infer translations of word forms from the translations of other word forms of the same lemma. Furthermore, they use morpho-syntactic information to resolve categorial ambiguity. In Section 5, we describe how disambiguation between different readings and their corresponding translations can be performed when no context is available, as is typically the case for conventional electronic dictionaries. Section 6 provides an overview of our procedure for training model parameters for statistical machine translation with scarce resources. Experimental results are reported in Section 7. Section 8 concludes the presentation with a discussion of the achievements of this work.
Statistical Machine Translation
In statistical machine translation, every target language string e_1^I = e_1 ... e_I is assigned a probability Pr(e_1^I) of being a valid word sequence in the target language and a probability Pr(e_1^I | f_1^J) of being a translation for the given source language string f_1^J = f_1 ... f_J. According to Bayes' decision rule, the optimal translation for f_1^J is the target string that maximizes the product of the target language model Pr(e_1^I) and the string translation model Pr(f_1^J | e_1^I). Many existing systems for statistical machine translation (García-Varea and Casacuberta 2001; Germann et al. 2001; Nießen et al. 1998; Och, Tillmann, and Ney 1999) implement models presented by Brown, Della Pietra, Della Pietra, and Mercer (1993): The correspondence between the words in the source and the target strings is described by alignments that assign target word positions to each source word position. The probability that a certain target language word will occur in the target string is assumed to depend basically only on the source words aligned with it.
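To make the decision rule concrete, the following sketch (not from the article; the function names and toy probabilities are illustrative assumptions) picks the candidate target string that maximizes Pr(e_1^I) · Pr(f_1^J | e_1^I), working in log space for numerical stability.

```python
import math

def decode(source, candidates, lm_prob, tm_prob):
    """Bayes' decision rule: pick the target string e that maximizes Pr(e) * Pr(f|e).

    lm_prob(e)    -> target language model probability Pr(e)
    tm_prob(f, e) -> string translation probability Pr(f|e)
    The two models are combined in log space for numerical stability.
    """
    return max(candidates,
               key=lambda e: math.log(lm_prob(e)) + math.log(tm_prob(source, e)))

# Toy usage with made-up probabilities (purely illustrative):
lm = {"we want to leave": 0.02, "we wants to leave": 0.001}
tm = {("wir wollen aufbrechen", "we want to leave"): 0.3,
      ("wir wollen aufbrechen", "we wants to leave"): 0.4}
best = decode("wir wollen aufbrechen", list(lm),
              lm_prob=lambda e: lm[e], tm_prob=lambda f, e: tm[(f, e)])
print(best)  # "we want to leave": here the language model outweighs the translation model
```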
1.3 Related Work
1.3.1 Morphology. Some publications have already dealt with the treatment of morphology in the framework of language modeling and speech recognition: Kanevsky, Roukos, and Sedivy (1997) propose a statistical language model for inflected languages. They decompose word forms into stems and affixes. Maltese and Mancini (1992) report that a linear interpolation of word n-grams, part-of-speech n-grams, and lemma n-grams yields lower perplexity than pure word-based models. Larson et al. (2000) apply a data-driven algorithm for decomposing compound words in compounding languages as well as for recombining phrases to enhance the pronunciation lexicon and the language model for large-vocabulary speech recognition systems.
As regards machine translation, the treatment of morphology is part of the analysis and generation step in virtually every symbolic machine translation system. For this purpose, the lexicon should contain base forms of words and the grammatical category, subcategorization features, and semantic information in order to enable the size of the lexicon to be reduced and in order to account for unknown word forms, that is, word forms not present explicitly in the dictionary.
Today's statistical machine translation systems build upon the work of P. F. Brown and his colleagues at IBM. The translation models they presented in various papers between 1988 and 1993 (Brown et al. 1988;Brown et al. 1990;Brown, Della Pietra, Della Pietra, and Mercer 1993) are commonly referred to as IBM models 1-5, based on the numbering in Brown, Della Pietra, Della Pietra, and Mercer (1993). The underlying (probabilistic) lexicon contains only pairs of full forms. On the other hand, Brown et al. (1992) had already suggested word forms be annotated with morpho-syntactic information, but they did not perform any investigation on the effects.
1.3.2 Translation with Scarce Resources. Some recent publications, like Al-Onaizan et al. (2000), have dealt with the problem of translation with scarce resources. Al-Onaizan et al. report on an experiment involving Tetun-to-English translation by different groups, including one using statistical machine translation. Al-Onaizan et al. assume the absence of linguistic knowledge sources such as morphological analyzers and dictionaries. Nevertheless, they found that the human mind is very well capable of deriving dependencies such as morphology, cognates, proper names, and spelling variations and that this capability was finally at the basis of the better results produced by humans compared to corpus-based machine translation. The additional information results from complex reasoning, and it is not directly accessible from the full-word-form representation in the data.
This article takes a different point of view: Even if full bilingual training data are scarce, monolingual knowledge sources like morphological analyzers and data for training the target language model as well as conventional dictionaries (one word and its translation[s] per entry) may be available and of substantial usefulness for improving the performance of statistical translation systems. This is especially the case for more-inflecting major languages like German. The use of dictionaries to augment or replace parallel corpora has already been examined by Brown, Della Pietra, Della Pietra, and Goldsmith (1993) and Koehn and Knight (2001), for instance.
Morpho-syntactic Information
A prerequisite for the methods for improving the quality of statistical machine translation described in this article is the availability of various kinds of morphological and syntactic information. This section describes the output resulting from morphosyntactic analysis and explains which parts of the analysis are used and how the output is represented for further processing.
Description of the Analysis Results
For obtaining the required morpho-syntactic information, the following analyzers for German and English were applied: gertwol and engtwol for lexical analysis and gercg and engcg for morphological and syntactic disambiguation. For a description of the underlying approach, the reader is referred to Karlsson (1990). Tables 1 and 2 give examples of the information provided by these tools.
Treatment of Ambiguity
The examples in Tables 1 and 2 demonstrate the capability of the tools to disambiguate among different readings: For instance, they infer that the word wollen is a verb in the indicative present first-person plural form. Without any context taken into account, wollen has other readings. It can even be interpreted as derived from an adjective with the meaning "made of wool." The inflected word forms on the German part of the Verbmobil (cf. Section 7.1.1) corpus have on average 2.85 readings (1.86 for the English corpus), 58% of which can be eliminated by the syntactic analyzers on the basis of sentence context. Common bilingual corpora normally contain full sentences, which provide enough context information for ruling out all but one reading for an inflected word form. To reduce the remaining uncertainty, preference rules have been implemented. For instance, it is assumed that the corpus is correctly true-case-converted beforehand, and as a consequence, non-noun readings of uppercase words are dropped. Furthermore, indicative verb readings are preferred to subjunctive or imperative. In addition, some simple domain-specific heuristics are applied. The reading "plural of Esse" for the German word form Essen, for instance, is much less likely in the domain of appointment scheduling and travel arrangements than the readings "proper name of the town Essen" or the German equivalent of the English word meal. As can be seen in Table 3, the reduction in the number of readings resulting from these preference rules is fairly small in the case of the Verbmobil corpus.
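A minimal sketch of this kind of preference filtering, under the assumption that readings are given as plain tag lists (the rule set shown is a simplified illustration, not the authors' implementation):

```python
def apply_preference_rules(word, readings):
    """Filter analyzer readings with simple preference heuristics.

    `readings` is a list of tag lists, e.g. [['noun', ...], ['verb', 'infinitive']].
    Assumes the corpus has been correctly true-case-converted beforehand.
    """
    kept = readings
    # Uppercase words keep only their noun readings (German nouns are capitalized).
    if word[:1].isupper():
        nouns = [r for r in kept if "noun" in r]
        kept = nouns or kept
    # Prefer indicative verb readings over subjunctive or imperative ones.
    indicative = [r for r in kept if "verb" in r and "indicative" in r]
    non_verb = [r for r in kept if "verb" not in r]
    if indicative:
        kept = indicative + non_verb
    return kept

print(apply_preference_rules("Essen", [["noun", "name", "sg"], ["verb", "infinitive"]]))
# [['noun', 'name', 'sg']]
```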
The remaining ambiguity often lies in those parts of the information which are not used or which are not relevant to the translation task. For example, the analyzers cannot tell accusative from dative case in German, but the case information is not essential for the translation task (see also Table 4). Section 2.4 describes a method for selecting morpho-syntactic tags considered relevant for the translation task, which results in a further reduction in the number of readings per word form to 1.06 for German and 1.01 for English. In these rare cases of ambiguity it is admissible to resort to the unambiguous parts of the readings, that is, to drop all tags causing mixed interpretations. Table 3 summarizes the gradual resolution of ambiguity. The analysis of conventional dictionaries poses some special problems, because they do not provide enough context to enable effective disambiguation. For handling this special situation, dedicated methods have been implemented; these are presented in Section 5.1.
The Lemma-Tag Representation
A full word form is represented by the information provided by the morpho-syntactic analysis: from the interpretation gehen verb indicative present first singular, that is, the base form plus part of speech plus the other tags, the word form gehe can be restored. It has already been mentioned that the analyzers can disambiguate among different readings on the basis of context information. In this sense, the information inherent in the original word forms is augmented by the disambiguating analyzer. This can be useful for choosing the correct translation of ambiguous words. Of course, these disambiguation clues result in an enlarged vocabulary. The vocabulary of the new representation of the German part of the Verbmobil corpus, for example, in which full word forms are replaced by base form plus morphological and syntactic tags (lemma-tag representation), is one and a half times as large as the vocabulary of the original corpus. On the other hand, the information in the lemma-tag representation can be accessed gradually and ultimately reduced: For example, certain instances of words can be considered equivalent. This fact is used to better exploit the bilingual training data along two directions: detecting and omitting unimportant information (see Section 2.4) and constructing hierarchical translation models (see Section 4). To summarize, the lemma-tag representation of a corpus has the following main advantages: It makes context information locally available, and it allows information to be explicitly accessed at different levels of abstraction.
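One possible in-memory form of such an observation tuple; the class and field names are illustrative assumptions, not part of the article:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class LemmaTag:
    """Observation tuple from morpho-syntactic analysis: surface form, base form, tags."""
    form: str               # original full word form, e.g. "gehe"
    lemma: str              # base form, e.g. "gehen"
    tags: Tuple[str, ...]   # morpho-syntactic tags, most significant first

gehe = LemmaTag(form="gehe", lemma="gehen",
                tags=("verb", "indicative", "present", "first", "singular"))
print(gehe.lemma, "+", "-".join(gehe.tags))  # gehen + verb-indicative-present-first-singular
```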
Equivalence Classes of Words with Similar Translation
Inflected word forms in the input language often contain information that is not relevant for translation. This is especially true for the task of translating from a more inflecting language like German into English, for instance: In parallel German/English corpora, the German part contains many more distinct word forms than the English part (see, for example, Table 5). It is useful for the process of statistical machine translation to define equivalence classes of word forms which tend to be translated by the same target language word: The resulting statistical translation lexicon becomes smoother, and the coverage is considerably improved. Such equivalence classes are constructed by omitting those items of information from morpho-syntactic analysis which are not relevant for translation. The lemma-tag representation of the corpus helps to identify the unimportant information. The definition of relevant and unimportant information, respectively, depends on many factors like the languages involved, the translation direction, and the choice of the models. We detect candidates for equivalence classes of words automatically from the probabilistic lexicon trained for translation from German to English. For this purpose, those inflected forms of the same base form which result in the same translation are inspected. For each set of tags T, the algorithm counts how often an additional tag t_1 can be replaced with a certain other tag t_2 without effect on the translation. As an example, let T = 'blau-adjective', t_1 = 'masculine' and t_2 = 'feminine'. The two entries ('blau-adjective-masculine'|'blue') and ('blau-adjective-feminine'|'blue') are hints for detecting gender as nonrelevant when translating adjectives into English. Table 4 lists some of the most frequently identified candidates to be ignored while translating: The gender of nouns is irrelevant for their translation (which is straightforward, as the gender of a noun is unambiguous), as are the cases nominative, dative, accusative. (For the genitive forms, the translation in English differs.) For verbs the candidates number and person were found: The translation of the first-person singular form of a verb, for example, is often the same as the translation of the third-person plural form. Ignoring (dropping) those tags most often identified as irrelevant for translation results in the building of equivalence classes of words. Doing so results in a smaller vocabulary, one about 65.5% the size of the vocabulary of the full lemma-tag representation of the Verbmobil corpus, for example; it is even smaller than the vocabulary of the original full-form corpus.

Table 4: Candidates for equivalence classes.
Part of speech   Candidates
Noun             Gender (masculine, feminine, neuter) and case (nominative, dative, accusative)
Verb             Number (singular, plural) and person (first, second, third)
Adjective        Gender, case, and number
Number           Case
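The counting of interchangeable tags described above (for each tag set T, how often an additional tag t_1 can be swapped for t_2 without changing the translation) can be sketched as follows; the probabilistic-lexicon format is an illustrative assumption:

```python
from collections import Counter

def count_droppable_tags(lexicon):
    """Count how often swapping one tag t1 for t2, with the rest of the reading fixed,
    leaves the translation unchanged.  `lexicon` maps (lemma, frozenset of tags) to a
    translation, as a simplified stand-in for a trained probabilistic lexicon."""
    votes = Counter()
    entries = list(lexicon.items())
    for (lemma_a, tags_a), trans_a in entries:
        for (lemma_b, tags_b), trans_b in entries:
            if lemma_a != lemma_b or trans_a != trans_b or tags_a == tags_b:
                continue
            diff_a, diff_b = tags_a - tags_b, tags_b - tags_a
            if len(diff_a) == 1 and len(diff_b) == 1:     # exactly one tag differs
                shared = tuple(sorted(tags_a & tags_b))   # the common tag set T
                votes[(shared, next(iter(diff_a)), next(iter(diff_b)))] += 1
    return votes

lexicon = {("blau", frozenset({"adjective", "masculine"})): "blue",
           ("blau", frozenset({"adjective", "feminine"})): "blue"}
print(count_droppable_tags(lexicon).most_common(1))
# e.g. [((('adjective',), 'masculine', 'feminine'), 1)]
```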
The information described in this section is used to improve the quality of statistical machine translation and to better exploit the available bilingual resources.
Treatment of Structural Differences
Difference in sentence structure is one of the main sources of errors in machine translation. It is thus promising to "harmonize" the word order in corresponding sentences. The presentation in this section focuses on the following aspects: question inversion and separated verb prefixes. For a more detailed discussion of restructuring for statistical machine translation the reader is referred to Ney (2000, 2001).
Question Inversion
In many languages, the sentence structure of questions differs from the structure in declarative sentences in that the order of the subject and the corresponding finite verb is inverted. From the perspective of statistical translation, this behavior has some dis-advantages: The algorithm for training the parameters of the target language model Pr(e I 1 ), which is typically a standard n-gram model, cannot deduce the probability of a word sequence in an interrogative sentence from the corresponding declarative form. The same reasoning is valid for the lexical translation probabilities of multiwordphrase pairs. To harmonize the word order of questions with the word order in declarative sentences, the order of the subject (including the appendant articles, adjectives etc.) and the corresponding finite verb is inverted. In English questions supporting dos are removed. The application of the described preprocessing step in the bilingual training corpus implies the necessity of restoring the correct forms of the translations produced by the machine translation algorithm. This procedure was suggested by Brown et al. (1992) for the language pair English and French, but they did not report on experimental results revealing the effect of the restructuring on the translation quality.
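A rough sketch of the inversion transform, assuming the subject span and the position of the finite verb have already been identified by the syntactic analysis (the function and its inputs are illustrative, not the authors' code):

```python
def undo_question_inversion(tokens, subj_span, finite_verb_idx, drop_supporting_do=True):
    """Harmonize question word order with declarative order: place the subject
    (including its articles, adjectives, ...) before the finite verb; for English,
    remove a supporting 'do'."""
    subj = tokens[subj_span[0]:subj_span[1]]
    rest = tokens[:subj_span[0]] + tokens[subj_span[1]:]
    # index of the finite verb after the subject tokens were taken out
    v = finite_verb_idx if finite_verb_idx < subj_span[0] else finite_verb_idx - len(subj)
    if drop_supporting_do and rest[v].lower() in ("do", "does", "did"):
        rest = rest[:v] + rest[v + 1:]
    return rest[:v] + subj + rest[v:]

# "Do we have to reserve rooms ?" -> "we have to reserve rooms ?"
print(undo_question_inversion("Do we have to reserve rooms ?".split(),
                              subj_span=(1, 2), finite_verb_idx=0))
```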
Separated Verb Prefixes
German prefix verbs consist of a main part and a detachable prefix, which can be shifted to the end of the clause. For the automatic alignment process, it is often difficult to associate one English word with more than one word in the corresponding German sentence, namely, the main part of the verb and the separated prefix. To solve the problem of separated prefixes, all separable word forms of verbs are extracted from the training corpus. The resulting list contains entries of the form prefix|main. In all clauses containing a word matching a main part and a word matching the corresponding prefix part occurring at the end of the clause, the prefix is prepended to the beginning of the main part.
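A minimal sketch of the prefix re-attachment step; the clause segmentation and the prefix|main list are assumed to be given:

```python
def reattach_prefix(clause_tokens, separable):
    """If the clause ends in a detached prefix whose main verb occurs earlier,
    prepend the prefix to the main part (e.g. 'breche ... auf' -> 'aufbreche ...').

    `separable` is a set of (prefix, main) pairs extracted from the training corpus.
    """
    if not clause_tokens:
        return clause_tokens
    prefix = clause_tokens[-1].lower()
    for i, tok in enumerate(clause_tokens[:-1]):
        if (prefix, tok.lower()) in separable:
            return clause_tokens[:i] + [prefix + tok] + clause_tokens[i + 1:-1]
    return clause_tokens

separable = {("auf", "breche"), ("auf", "brichst")}
print(reattach_prefix("ich breche morgen früh auf".split(), separable))
# ['ich', 'aufbreche', 'morgen', 'früh']
```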
Hierarchical Lexicon Models
In general, the probabilistic lexicon resulting from training the translation model contains all word forms occurring in the training corpus as separate entries, not taking into account whether or not they are inflected forms of the same lemma. Bearing in mind that typically more than 40% of the word forms are seen only once in training (see, for example, Table 5), it is obvious that for many words, learning the correct translations is difficult. Furthermore, new input sentences are expected to contain unknown word forms, for which no translation can be retrieved from the lexicon. This problem is especially relevant for more-inflecting languages like German: Texts in German contain many more distinct word forms than their English translations. Table 5 also reveals that these words are often generated via inflection from a smaller set of base forms.
A Hierarchy of Equivalence Classes of Inflected Word Forms
As mentioned in Section 2.3, the lemma-tag representation of the information from morpho-syntactic analysis makes it possible to gradually access information with different grades of abstraction. Consider, for example, the German verb form ankomme, which is the indicative present first-person singular form of the lemma ankommen and can be translated into English by arrive. The lemma-tag representation provides an "observation tuple" consisting of
• the original full word form (e.g., ankomme),
• morphological and syntactic tags (part of speech, tense, person, case, . . . ) (e.g., verb, indicative, present tense, 1st person singular), and
• the base form (e.g., ankommen).
In the following, t_0^i = t_0, ..., t_i denotes the representation of a word where the base form t_0 and i additional tags are taken into account. For the example above, t_0 = ankommen, t_1 = verb, and so on. The hierarchy of equivalence classes F_0, ..., F_n is as follows:
F_n     = F(t_0^n)     = ankommen verb indicative present singular 1st
F_{n-1} = F(t_0^{n-1}) = ankommen verb indicative present singular
F_{n-2} = F(t_0^{n-2}) = ankommen verb indicative present
...
F_0     = F(t_0)       = ankommen
where n is the maximum number of morpho-syntactic tags. The mapping from the full lemma-tag representation back to inflected word forms is generally unambiguous; thus F_n contains only one element, namely, ankomme. F_{n-1} contains the forms ankomme, ankommst, and ankommt; in F_{n-2} the number (singular or plural) is ignored, and so on. The largest equivalence class contains all inflected forms of the base form ankommen. [1] Section 4.2 introduces the concept of combining information at different levels of abstraction.
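A small sketch of enumerating the class representatives t_0^n, ..., t_0 for one analyzed word form (data format as in the example above; the function name is an assumption):

```python
def hierarchy_levels(lemma, tags):
    """Return the representations t_0^n, t_0^{n-1}, ..., t_0 indexing the equivalence
    classes F_n, ..., F_0: the base form plus progressively fewer tags."""
    return [(lemma,) + tuple(tags[:i]) for i in range(len(tags), -1, -1)]

for level in hierarchy_levels("ankommen",
                              ["verb", "indicative", "present", "singular", "first"]):
    print(" ".join(level))
# ankommen verb indicative present singular first
# ankommen verb indicative present singular
# ...
# ankommen
```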
Log-Linear Combination
In modeling for statistical machine translation, a hidden variable a_1^J, denoting the hidden alignment between the words in the source and target languages, is usually introduced into the string translation probability:
Pr(f_1^J | e_1^I) = \sum_{a_1^J} Pr(f_1^J, a_1^J | e_1^I) = \sum_{a_1^J} Pr(a_1^J | e_1^I) · Pr(f_1^J | a_1^J, e_1^I)   (1)
In the following, T_j = (t_0^n)_j denotes the lemma-tag representation of the jth word in the input sentence. The sequence T_1^J stands for the sequence of readings for the word sequence f_1^J and can be introduced as a new hidden variable:
Pr(f_1^J | a_1^J, e_1^I) = \sum_{T_1^J} Pr(f_1^J, T_1^J | a_1^J, e_1^I)   (2)
which can be decomposed into
Pr(f_1^J | a_1^J, e_1^I) = \sum_{T_1^J} \prod_{j=1}^{J} Pr(f_j, T_j | f_1^{j-1}, T_1^{j-1}, a_1^J, e_1^I)   (3)
[Footnote 1] The order of omitting tags can be defined in a natural way depending on the part of speech. In principle this decision can also be left to the maximum-entropy training, when features for all possible sets of tags are defined, but this would cause the number of parameters to explode. As the experiments in this work have been carried out only with up to three levels of abstraction as defined in Section 4.2, the set of tags of the intermediate level is fixed, and thus the priority of the tags need not be specified.
[Footnote 2] The relation between this equivalence class hierarchy and the suggestions in Section 2.4 is clear: Choosing candidates for morpho-syntactic tags not relevant for translation amounts to fixing a level in the hierarchy. This is exactly what has been done to define the intermediate level in Section 4.2.
Let T(f_j) be the set of interpretations which are regarded as valid readings of f_j by the morpho-syntactic analyzers on the basis of the whole-sentence context f_1^J. We assume that the probability functions defined above yield zero for all other readings, that is, when T_j ∉ T(f_j). Under the usual independence assumption, which states that the probability of the translation of words depends only on the identity of the words associated with each other by the word alignment, we get
Pr(f_1^J | a_1^J, e_1^I) = \sum_{T_1^J : T_j ∈ T(f_j)} \prod_{j=1}^{J} p(f_j, T_j | e_{a_j})   (4)
As has been argued in Section 2.2, the number of readings |T(f_j)| per word form can be reduced to one for the tasks for which experimental results are reported here. The elements in equation (4) are the joint probabilities p(f, T | e) of f and the readings T of f given the target language word e. The maximum-entropy principle recommends choosing for p the distribution which preserves as much uncertainty as possible in terms of maximizing the entropy, while requiring p to satisfy constraints which represent facts known from the data. These constraints are encoded on the basis of feature functions h_m(x), and the expectation of each feature h_m over the model p is required to be equal to the observed expectation. The maximum-entropy model can be shown to be unique and to have an exponential form involving a weighted sum over the feature functions h_m (Ratnaparkhi 1997). In equation (5), the notation t_0^n is used again for the lemma-tag representation of an input word (this was denoted by T in equations (2)-(4) for notational simplicity):
p(f, T | e) = p_Λ(f, t_0^n | e) = exp(\sum_m λ_m h_m(e, f, t_0^n)) / \sum_{\tilde{f}, \tilde{t}_0^n} exp(\sum_m λ_m h_m(e, \tilde{f}, \tilde{t}_0^n))   (5)
where Λ = {λ_m} is the set of model parameters with one weight λ_m for each feature function h_m. These model parameters can be trained using converging iterative training procedures like the ones described by Darroch and Ratcliff (1972) or Della Pietra, Della Pietra, and Lafferty (1995).
In the experiments presented in this article, the sum over the word forms \tilde{f} and the readings \tilde{t}_0^n in the denominator of equation (5) is restricted to the readings of word forms having the same base form and partial reading as a word form f aligned at least once to e.
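A compact sketch of evaluating equation (5) with binary features, normalizing only over the restricted candidate set just described (all identifiers and the toy weights are illustrative assumptions, not the authors' implementation):

```python
import math

def loglinear_prob(event, candidates, features, weights):
    """Maximum-entropy lexicon model of equation (5): the exponentiated weighted sum of
    firing features, normalized over the restricted candidate set (word forms sharing
    base form and partial reading with a form aligned at least once to e)."""
    def score(x):
        return sum(weights.get(m, 0.0) for m in features(x))
    z = sum(math.exp(score(c)) for c in candidates)
    return math.exp(score(event)) / z

# Toy usage: two candidate (form, reading) pairs for the English target word 'arrive'.
feats = lambda x: [("lemma", "ankommen", "arrive"), ("form", x[0], "arrive")]
w = {("lemma", "ankommen", "arrive"): 1.0, ("form", "ankomme", "arrive"): 0.5}
cands = [("ankomme", "ind-pres-sg-1"), ("ankommst", "ind-pres-sg-2")]
print(round(loglinear_prob(cands[0], cands, feats, w), 3))  # ~0.622
```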
The new lexicon model p_Λ(f, t_0^n | e) can now replace the usual lexicon model p(f | e), over which it has the following main advantages:
• The decomposition of the modeled events into feature functions allows meaningful probabilities to be provided for word forms that have not occurred during training as long as the feature functions involved are well-defined. (See also the argument later in the article and the definition of first-level and second-level feature functions presented in Section 4.2.1.)
• Introducing the hidden variable T = t_0^n and constraining the lexicon probability to be zero for interpretations considered nonvalid readings of f (that is, for t_0^n ∉ T(f)) amounts to making context information from the complete sentence f_1^J locally available: The sentence context was taken into account by the morpho-syntactic analyzer, which chose the valid readings T(f).
Definition of Feature Functions.
There are numerous possibilities for defining feature functions. We do not need to require that they all have the same parametric form or that the components be disjoint and statistically independent. Still, it is necessary to restrict the number of parameters so that optimizing them is practical. We used the following types of feature functions, which have been defined on the basis of the lemma-tag representation (see Section 2.3):

First level: m = {L, ẽ}, where L is the base form:
    h^1_{L,ẽ}(e, f, t_0^n) = 1 if e = ẽ and t_0 = L and f ∈ F(t_0^n)   (*)
                             0 otherwise
Second level: m = {T, L, ẽ}, with subsets T of cardinality ≤ n of morpho-syntactic tags considered relevant (see Section 2.4 for a description of the detection of relevant tags):
    h^2_{T,L,ẽ}(e, f, t_0^n) = 1 if (*) and T ⊆ t_1^n   (**)
                               0 otherwise
Third level: m = {F, T, L, ẽ}, with the fully inflected original word form F:
    h^3_{F,T,L,ẽ}(e, f, t_0^n) = 1 if (**) and F = f
                                 0 otherwise

In terms of the hierarchy introduced in Section 4.1, this means that information at three different levels in the hierarchy is combined. The subsets T of relevant tags mentioned previously fix the intermediate level. [2] This choice of the types of features as well as the choice of the subsets T is reasonable but somewhat arbitrary. Alternatively one can think of defining a much more general set of features and applying some method of feature selection, as has been done, for example, by Foster (2000), who compared different methods for feature selection within the task of translation modeling for statistical machine translation. Note that the log-linear model introduced here uses one parameter per feature. For the Verbmobil task, for example, there are approximately 162,000 parameters: 47,800 for the first-order features, 55,700 for the second-order features, and 58,500 for the third-order features. No feature selection or threshold was applied: All features seen in training were used.
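The three feature levels can be pictured as binary indicator functions over (target word, word form, reading) events; the following sketch uses illustrative names and a hand-picked set of relevant tags:

```python
def firing_features(e, f, lemma, tags, relevant_tags):
    """Binary indicator features at the three hierarchy levels:
    level 1 fires on (target word, base form),
    level 2 additionally on the subset of relevant morpho-syntactic tags,
    level 3 additionally on the fully inflected word form."""
    t_rel = tuple(sorted(set(tags) & set(relevant_tags)))
    return [
        ("h1", e, lemma),            # first level:  m = {L, e~}
        ("h2", e, lemma, t_rel),     # second level: m = {T, L, e~}
        ("h3", e, lemma, t_rel, f),  # third level:  m = {F, T, L, e~}
    ]

print(firing_features("arrive", "ankomme", "ankommen",
                      ["verb", "indicative", "present", "first", "singular"],
                      relevant_tags={"indicative", "present", "singular"}))
```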
Training Procedure. The overall process of training and testing with hierarchical lexicon models is depicted in Figure 1. This figure includes the possibility of using restructuring operations as suggested in Section 3 in order to deal with structural differences between the languages involved. This can be especially advantageous in the case of multiword phrases which jointly fulfill a syntactic function: Not merging them
would raise the question of how to distribute the syntactic tags which have been associated with the whole phrase. In Section 5.2 we describe a method of learning multiword phrases using conventional dictionaries. The alignment on the training corpus is trained using the original source language corpus containing inflected word forms. This alignment is then used to count the co-occurrences of the annotated "words" in the lemma-tag representation of the source language corpus with the words in the target language corpus. These event counts are used for the maximum-entropy training of the model parameters Λ.
The probability mass is distributed over (all readings of) the source language word forms to be supported for test (not necessarily restricted to those occurring during training). The only precondition is that the firing features for these unseen events are known. This "vocabulary supported in test," as it is called in Figure 1, can be a predefined closed vocabulary, as is the case in Verbmobil, in which the output of a speech recognizer with limited output vocabulary is to be translated. In the easiest case it is identical to the vocabulary found in the source language part of the training corpus. The other extreme would be an extended vocabulary containing all automatically generated inflected forms of all base forms occurring in the training corpus. This vocabulary is annotated with morpho-syntactic tags, ideally under consideration of all possible readings of all word forms.
To enable the application of the hierarchical lexicon model, the source language input sentences in test have to be analyzed and annotated with their lemma-tag representation before the actual translation process. So far, the sum over the readings in equation (4) has been ignored, because when the techniques for reducing the amount of ambiguity described in Section 2.2 and the disambiguated conventional dictionaries resulting from the approach presented in Section 5.1 are applied, there remains almost always only one reading per word form.
Conventional Dictionaries
Conventional dictionaries are often used as additional evidence to better train the model parameters in statistical machine translation. The expression conventional dictionary here denotes bilingual collections of word or phrase pairs predominantly collected "by hand," usually by lexicographers, as opposed to the probabilistic lexica, which are learned automatically. Apart from the theoretical problem of how to incorporate external dictionaries in a mathematically sound way into a statistical framework for machine translation (Brown, Della Pietra, Della Pietra, and Goldsmith 1993) there are also some pragmatic difficulties: As discussed in Section 2.2, one of the disadvantages of these conventional dictionaries as compared to full bilingual corpora is that their entries typically contain single words or short phrases on each language side. Consequently, it is not possible to distinguish among the translations for different readings of a word. In normal bilingual corpora, the words can often be disambiguated by taking into account the sentence context in which they occur. For example, from the context in the sentence Ich werde die Zimmer buchen, it is possible to infer that Zimmer in this sentence is plural and has to be translated by rooms in English, whereas the correct translation of Zimmer in the sentence Ich hätte gerne ein Zimmer is the singular form room. The dictionary used by our research group for augmenting the bilingual data contains two entries for Zimmer: ('Zimmer'|'room') and ('Zimmer'|'rooms').
Disambiguation without Context
The approach described in this section is based on the observation that in many of the cases of ambiguous entries in dictionaries, the second part of the entry (that is, the other-language side) contains the information necessary to decide upon the interpretation. In some other cases, the same kind of ambiguity is present in both languages, and it would be possible and desirable to associate the (semantically) corresponding readings with one another. The method proposed here takes advantage of these facts in order to disambiguate dictionary entries. Figure 2 sketches the procedure for the disambiguation of a conventional dictionary D. In addition to D, a bilingual corpus C_1 of the same language pair is required to train the probability model for tag sequence translations. The word forms in C_1 need not match those in D. C_1 is not necessarily the training corpus for the translation task in which the disambiguated version of D will be used. It does not even have to be taken from the same domain.
A word alignment between the sentences in C_1 is trained with some automatic alignment algorithm. Then the words in the bilingual corpus are replaced by a reduced form of their lemma-tag representation, in which only a subset of their morpho-syntactic tags is retained; even the base form is dropped. The remaining subset of tags, in the following denoted by T_f for the source language and T_e for the target language, consists of tags considered relevant for the task of aligning corresponding readings. This is not necessarily the same set of tags considered relevant for the task of translation which was used, for example, to fix the intermediate level for the log-linear lexicon combination in Section 4.2.1. In the case of the Verbmobil corpus, the maximum length of a tag sequence is five.
The alignment is used to count the frequency of a certain tag sequence t_f in the source language to be associated with another tag sequence t_e in the target language and to compute the tag sequence translation probabilities p(t_f | t_e) as relative frequencies. For the time being, these tag sequence translation probabilities associate readings of words in one language with readings of words in the other language: Multiword sequences are not accounted for.
To alleviate this shortcoming it is possible and advisable to automatically detect and merge multiword phrases. As will be described in Section 5.2, the conventional bilingual dictionary itself can be used to learn and validate these phrases. The resulting multiword phrases P_e for the target language and P_f for the source language are afterwards concatenated within D to form entries consisting of pairs of "units."
The next step is to analyze the word forms in D and generate all possible readings of all entries. It is also possible to ignore those readings that are considered unlikely for the task under consideration by applying the domain-specific preference rules proposed in Section 2.2. The process of generating all readings includes replacing word forms with their lemma-tag representation, which is thereafter reduced by dropping all morpho-syntactic tags not contained in the tag sets T_f and T_e.
Using the tag sequence translation probabilities p(t_f | t_e), the readings in one language are aligned with readings in the other language. These alignments are applied to the full lemma-tag representation (not only tags in T_f and T_e) of the expanded dictionary containing one entry per reading of the original word forms. The highest-ranking aligned readings according to p(t_f | t_e) for each lemma are preserved.
The resulting disambiguated dictionary contains two entries for the German word Zimmer: ('Zimmer-noun-sg.'|'room-noun-sg.') and ('Zimmer-noun-pl.'|'room-noun-pl.'). The target language part is then reduced to the surface forms: ('Zimmer-noun-sg.'|'room') and ('Zimmer-noun-pl.'|'rooms'). Note that this augmented dictionary, in the following denoted by D', has more entries than D as a result of the step of generating all readings. The two entries ('beabsichtigt'|'intends') and ('beabsichtigt'|'intended'), for example, produce three new entries: ('beabsichtigt-verb-ind.-pres.-sg.-3rd'|'intends'), ('beabsichtigt-verb-past-part.'|'intended'), and ('beabsichtigt-adjective-pos.'|'intended').
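A sketch of the relative-frequency estimation of the tag sequence translation probabilities p(t_f | t_e) described above (the input format, aligned pairs of reduced tag sequences, is an assumption for illustration):

```python
from collections import Counter, defaultdict

def tag_sequence_probs(aligned_pairs):
    """Estimate p(t_f | t_e) as relative frequencies from aligned word pairs,
    each pair given as (source tag sequence, target tag sequence)."""
    joint = Counter(aligned_pairs)                       # N(t_f, t_e)
    marginal = Counter(t_e for _, t_e in aligned_pairs)  # N(t_e)
    probs = defaultdict(dict)
    for (t_f, t_e), n in joint.items():
        probs[t_e][t_f] = n / marginal[t_e]
    return probs

pairs = [(("noun", "sg"), ("noun", "sg"))] * 8 + [(("noun", "pl"), ("noun", "sg"))] * 2
print(tag_sequence_probs(pairs)[("noun", "sg")])
# {('noun', 'sg'): 0.8, ('noun', 'pl'): 0.2}
```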
Multiword Phrases
Some recent publications deal with the automatic detection of multiword phrases (Och and Weber 1998;Tillmann and Ney 2000). These methods are very useful, but they have one drawback: They rely on sufficiently large training corpora, because they detect the phrases from automatically learned word alignments. In this section a method for detecting multiword phrases is suggested which merely requires monolingual syntactic analyzers and a conventional dictionary. Some multiword phrases which jointly fulfill a syntactic function are provided by the analyzers. The phrase irgend etwas ('anything'), for example, may form either an indefinite determiner or an indefinite pronoun. irgend=etwas is merged by the analyzer in order to form one single vocabulary entry. In the German part of the Verbmobil training corpus 26 different, nonidiomatic multiword phrases are merged, while there are 318 phrases suggested for the English part. In addition, syntactic information like the identification of infinitive markers, determiners, modifying adjectives (for example, single room), premodifying adverbials (more comfortable), and premodifying nouns (account number) are used for detecting multiword phrases. When applied to the English part of the Verbmobil training corpus, these hints suggest 7,225 different phrases.
Altogether, 26 phrases for German and about 7,500 phrases for English are detected in this way. It is quite natural that there are more multiword phrases found for English, as German, unlike English, uses compounding. But the experiments show that it is not advantageous to use all these phrases for English. Electronic dictionaries can be useful for detecting those phrases which are important in a statistical machine translation context: A multiword phrase is considered useful if it is translated into a single word or a distinct multiword phrase (suggested in a similar way by syntactic analysis) in another language. There are 290 phrases chosen in this way for the English language.
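A sketch of the dictionary-based validation of candidate phrases (the dictionary format and candidate lists are illustrative assumptions):

```python
def validate_phrases(candidates, dictionary, other_side_phrases):
    """Keep a candidate multiword phrase only if the dictionary translates it into a
    single word or into a phrase that was itself suggested for the other language.

    `dictionary` maps source-side phrases to sets of target-side translations.
    """
    kept = []
    for phrase in candidates:
        for translation in dictionary.get(phrase, ()):
            if " " not in translation or translation in other_side_phrases:
                kept.append(phrase)
                break
    return kept

dictionary = {"single room": {"Einzelzimmer"}, "kind of": {"irgend wie"}}
print(validate_phrases(["single room", "kind of"], dictionary,
                       other_side_phrases={"irgend=etwas"}))   # ['single room']
```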
Overall Procedure for Training with Scarce Resources
Taking into account the interdependencies of inflected forms of the same base form is especially relevant when inflected languages like German are involved and when training data are sparse. In this situation many of the inflected word forms to account for in test do not occur during training. Sparse bilingual training data also make additional conventional dictionaries especially important. Enriching the dictionaries by aligning corresponding readings is particularly useful when the dictionaries are used in conjunction with a hierarchical lexicon, which can access the information necessary to distinguish readings via morpho-syntactic tags. The restructuring operations described in Section 3 also help in coping with the data sparseness problem, because they make corresponding sentences more similar. This section proposes a procedure for combining all these methods in order to improve the translation quality despite sparseness of data. Figure 3 sketches the proposed procedure.
Figure 3
Training with scarce resources. "Restructuring," "learn phrases," and "annotation" all require morpho-syntactic analysis of the transformed sentences.
Two different bilingual corpora C_1 and C_2, one monolingual target language corpus, and a conventional bilingual dictionary D can contribute in various ways to the overall result. It is important to note here that C_1 and C_2 can, but need not, be distinct, and that the monolingual corpus can be identical to the target language part of C_2. Furthermore, these corpora can be taken from different domains, and C_1 can be (very) small. Only C_2 has to represent the domain and the vocabulary for which the translation system is built, and only the size of C_2 and the monolingual corpus have a substantial effect on the translation quality. It is interesting to note, though, that a basic statistical machine translation system with an accuracy near 50% can be built without any domain-specific bilingual corpus C_2, solely on the basis of a disambiguated dictionary and the hierarchical lexicon models, as Table 9 shows.
• In the first step, multiword phrases are learned and validated on the dictionary D in the way described in Section 5.2. These multiword phrases are concatenated in D. Then an alignment is trained on the first bilingual corpus C_1. On the basis of this alignment, the tag sequence translation probabilities which are needed to align corresponding readings in the dictionary are extracted, as proposed in Section 5.1. The result of this step is an expanded and disambiguated dictionary D'. For this purpose, C_1 does not have to cover the vocabulary of D. Besides, C_1 can be comparatively small, given the limited number of tag sequence pairs (t_f | t_e) for which translation probabilities must be provided: In the Verbmobil training corpus, for example, there are only 261 different German and 110 different English tag sequences.
• In the next step, the second bilingual corpus C_2 and D' are combined, and a word alignment A for both is trained. C_2, D', and A are presented as input to the maximum-entropy training of a hierarchical lexicon model as described in Section 4.2.
• The language model can be trained on a separate monolingual corpus.
As monolingual data are much easier and cheaper to compile, this corpus might be (substantially) larger than the target language part of C_2.
Experimental Results
The Tasks and the Corpora
Tests were carried out on Verbmobil data and on Nespole! data. As usual, the sentences from the test sets were not used for training. The training corpora were used for training the parameters of IBM model 4.
Verbmobil.
Verbmobil was a project for automatic translation of spontaneously spoken dialogues. A detailed description of the statistical translation system within Verbmobil is given by Ney et al. (2000) and by Och (2002). Table 5 summarizes the characteristics of the English and German parallel corpus used for training the parameters of IBM model 4. A conventional dictionary complements the training corpus (see Table 6 for the statistics). The vocabulary in Verbmobil was considered closed: There are official lists of word forms which can be produced by the speech recognizers. Such lists exist for German and English (see Table 7). Table 8 lists the characteristics of the two test sets Test and Develop taken from the end-to-end evaluation in Verbmobil, the development part being meant to tune system parameters on a held-out corpus different from the training as well as the test corpus. As no parameters are optimized on the development set for the methods described in this article, most of the experiments were carried out on a joint set containing both test sets.

Table 7: The official vocabularies in Verbmobil.
                        English   German
Number of word forms      6,871   10,157
Number of base forms      3,268    6,667

7.1.2 Nespole!. Nespole! is a research project that ran from January 2000 to June 2002. It aimed to provide multimodal support for negotiation (Nespole! 2000; Lavie et al. 2001). Table 5 summarizes the corpus statistics of the Nespole! training set. Table 8 provides the corresponding figures for the test set used in this work.
The Translation System
For testing we used the alignment template translation system, described in Och, Tillmann, and Ney (1999). Training the parameters for this system entails training of IBM model 4 parameters in both translation directions and combining the resulting alignments into one symmetrized alignment. From this symmetrized alignment, the lexicon probabilities as well as the so-called alignment templates are extracted. The latter are translation patterns which capture phrase-level translation pairs.
Performance Measures
The following evaluation criteria were used in the experiments:
BLEU (Bilingual Evaluation Understudy): This score, proposed by Papineni et al. (2001), is based on the notion of modified n-gram precision, with n ∈ {1, . . . , 4}: All candidate unigram, bigram, trigram, and four-gram counts are collected and clipped against their corresponding maximum reference counts. The reference n-gram counts are calculated on a corpus of reference translations for each input sentence. The clipped candidate counts are summed and normalized by the total number of candidate n-grams. The geometric mean of the modified precision scores for a test corpus is calculated and multiplied by an exponential brevity penalty factor to penalize too-short translations. BLEU is an accuracy measure, while the others are error measures.
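The following sketch shows the core of this computation for a single reference per sentence; it is a simplification of the full BLEU definition (smoothing and tokenization details are glossed over), and all names are illustrative:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypotheses, references, max_n=4):
    """Geometric mean of clipped n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        matched = total = 0
        for hyp, ref in zip(hypotheses, references):
            hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
            matched += sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
            total += max(sum(hyp_counts.values()), 1)
        precisions.append(max(matched, 1e-9) / total)
    hyp_len = sum(len(h) for h in hypotheses)
    ref_len = sum(len(r) for r in references)
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu([["we", "want", "to", "leave"]], [["we", "want", "to", "leave"]]), 3))  # 1.0
```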
m-WER (multireference word error rate): For each test sentence there is a set of reference translations. For each translation hypothesis, the edit distance (number of substitutions, deletions, and insertions) to the most similar reference is calculated.
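A sketch of this measure for one hypothesis; the normalization by the length of the most similar reference is an assumption about the exact convention:

```python
def edit_distance(hyp, ref):
    """Levenshtein distance (substitutions, deletions, insertions) between token lists."""
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        cur = [i]
        for j, r in enumerate(ref, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (h != r)))  # substitution or match
        prev = cur
    return prev[-1]

def m_wer(hyp, references):
    """Edit distance to the most similar reference, normalized by that reference's length."""
    dist, ref = min((edit_distance(hyp, r), r) for r in references)
    return dist / max(len(ref), 1)

refs = [["we", "want", "to", "leave"], ["we", "would", "like", "to", "leave"]]
print(m_wer(["we", "want", "to", "go"], refs))   # 0.25
```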
SSER (subjective sentence error rate): Each translated sentence is judged by a human examiner according to an error scale from 0.0 (semantically and syntactically correct) to 1.0 (completely wrong).
ISER (information item semantic error rate): The test sentences are segmented into information items; for each of these items, the translation candidates are assigned either "OK" or an error class. If the intended information is conveyed, the translation of an information item is considered correct, even if there are slight syntactic errors which do not seriously deteriorate the intelligibility.
For evaluating the SSER and the ISER, we have used the evaluation tool EvalTrans (Nießen and Leusch 2000), which is designed to facilitate the work of manually judging evaluation quality and to ensure consistency over time and across evaluators.
Impact of the Corpus Size
It is a costly and time-consuming task to compile large texts and have them translated to form bilingual corpora suitable for training the model parameters for statistical machine translation. As a consequence, it is important to investigate the amount of data necessary to sufficiently cover the vocabulary expected in testing. Furthermore, we want to examine to what extent the incorporation of morphological knowledge sources can reduce this amount of necessary data. Figure 4 shows the relation between the size of a typical German corpus and the corresponding number of different full forms. At the size of 520,000 words, the size of the Verbmobil corpus used for training, this curve still has a high growth rate.
To investigate the impact of the size of the bilingual corpus available for training on translation quality, three different setups for training the statistical lexicon on Verbmobil data have been defined:
• using the full training corpus as described in Table 5, comprising 58,000 sentences
• restricting the corpus to 5,000 sentences (approximately every 11th sentence)
• using no bilingual training corpus at all (only a bilingual dictionary; see subsequent discussion)
The language model is always trained on the full English corpus. The argument for this is that monolingual corpora are always easier and less expensive to obtain than bilingual corpora. A conventional dictionary is used in all three setups to complement the bilingual corpus. In the last setup, the lexicon probabilities are trained exclusively on this dictionary.
As Table 9 shows, the quality of translation drops significantly when the amount of bilingual data available during training is reduced: When the training corpus is restricted to 5,000 sentences, the SSER increases by about 7% and the ISER by about 3%. As could be expected, the translations produced by the system trained exclusively on a conventional dictionary are very poor: The SSER jumps over 60%.
7.5 Results for Log-Linear Lexicon Combination
7.5.1 Results on the Verbmobil Task. As was pointed out in Section 4, the hierarchical lexicon is expected to be especially useful in cases in which many of the inflected word forms to be accounted for in test do not occur during training. To systematically investigate the model's generalization capability, it has been applied on the three different setups described in Section 7.4. The training procedure was the one proposed in Section 6, which includes restructuring transformations in training and test. Table 9 summarizes the improvement achieved for all three setups.
Training on 58,000 sentences plus conventional dictionary: Compared to the effect of restructuring, the additional improvement achieved with the hierarchical lexicon is relatively small in this setup. The combination of all methods results in a relative improvement in terms of SSER of almost 13% and in terms of ISER of more than 16% as compared to the baseline.
Training on 5,000 sentences plus conventional dictionary: Restructuring alone can improve the translation quality from 37.3% to 33.6%. The benefit from the hierarchical lexicon is larger in this setup, and the resulting SSER is 31.8%. This is a relative improvement of almost 15%. The relative improvement in terms of ISER is almost 22%. Note that by applying the methods proposed here, the corpus for training can be reduced to less than 10% of the original size while increasing the SSER only from 30.2% to 31.8% compared to the baseline when using the full corpus.
Training only on conventional dictionary: In this setup the impact of the hierarchical lexicon is clearly larger than the effect of the restructuring methods, because here the data sparseness problem is much more important than the word order problem. The overall relative reduction in terms of SSER is 13.7% and in terms of ISER 19.1%. An error rate of about 52% is still very poor, but it is close to what might be acceptable when only the gist of the translated document is needed, as is the case in the framework of document classification or multilingual information retrieval.
Examples taken from the Verbmobil Eval-2000 test set are given in Table 10. Smoothing the lexicon probabilities over the inflected forms of the same lemma enables the translation of sind as would instead of are. The smoothed lexicon contains the translation convenient for any inflected form of bequem. The comparative more convenient would be the completely correct translation. The last two examples in the table demonstrate the effect of the disambiguating analyzer, which on the basis of the sentence context identifies Zimmer as plural (it has been translated into the singular form room by the baseline system) and das as an article to be translated by the instead of a pronoun which would be translated as that. The last example demonstrates that overfitting on domain-specific training can be problematic in some cases: Generally, because is a good translation for the co-ordinating conjunction denn, but in the appointmentscheduling domain, denn is often an adverb, and it often occurs in the same sentence as dann, as in Wie wäre es denn dann?. The translation for this sentence is something like How about then?. Because of the frequency of this domain-specific language use, the word form denn is often aligned to then in the training corpus. The hierarchical lexicon distinguishes the adverb reading and the conjunction reading, and the correct translation because is the highest-ranking one for the conjunction.
Results on the Nespole! Task.
We were provided with a small German-English corpus from the Nespole! project (see Section 7.1 for a description). From Table 5 it is obvious that this task is an example of very scarce training data, and it is thus interesting to test the performance of the methods proposed in this article on this task. The same conventional dictionary as was used for the experiments on Verbmobil data (cf. Table 6) complemented the small bilingual training corpus. Furthermore, the (monolingual) English part of the Verbmobil corpus was used in addition to the English part of the Nespole! corpus for training the language model. Table 11 summarizes the results. Information items have not been defined for this test set. An overall relative improvement of 16.5% in the SSER can be achieved.
Conclusion
In this article we have proposed methods of incorporating morphological and syntactic information into systems for statistical machine translation. The overall goal was to improve translation quality and to reduce the amount of parallel text necessary to train the model parameters. Substantial improvements on the Verbmobil task and the Nespole! task were achieved. Some sentence-level restructuring transformations have been introduced which are motivated by knowledge about the sentence structure in the languages involved. These transformations aim at the assimilation of word orders in related sentences.
A hierarchy of equivalence classes has been defined on the basis of morphological and syntactic information beyond the surface forms. The study of the effect of using information from either degree of abstraction led to the construction of hierarchical lexicon models, which combine different items of information in a log-linear way. The benefit from these combined models is twofold: First, the lexical coverage is improved, because the translation of unseen word forms can be derived by considering information from lower levels in the hierarchy. Second, category ambiguity can be resolved, because syntactical context information is made locally accessible by means of annotation with morpho-syntactic tags. As a side effect of the preparative work for setting up the underlying hierarchy of morpho-syntactic information, those pieces of information inherent in fully inflected word forms that are not relevant for translation are detected.
A method for aligning corresponding readings in conventional dictionaries containing pairs of fully inflected word forms has been proposed. The approach uses information deduced from one language side to resolve category ambiguity in the corresponding entry in the other language. The resulting disambiguated dictionaries have proven to be better suited for improving the quality of machine translation, especially if they are used in combination with the hierarchical lexicon models.
The amount of bilingual training data required to achieve an acceptable quality of machine translation has been systematically investigated. All the methods mentioned previously contribute to a better exploitation of the available bilingual data and thus to improving translation quality in frameworks with scarce resources. Three setups for training the parameters of the statistical lexicon on Verbmobil data have been examined: (1) Using the full 58,000 sentences comprising the bilingual training corpus, (2) restricting the corpus to 5,000 sentences, and (3) using only a conventional dictionary. For each of these setups, a relative improvement in terms of subjective sentence error rate between 13% and 15% as compared to the baseline could be obtained using combinations of the methods described in this article. The amount of bilingual training data could be reduced to less than 10% of the original corpus, while losing only 1.6% in accuracy as measured by the subjective sentence error rate. A relative improvement of 16.5% in terms of subjective sentence error rate could also be achieved on the Nespole! task.
Figure 1 Training and test with hierarchical lexicon. "(Inverse) restructuring," "analyze," and "annotation" all require morpho-syntactic analysis of the transformed sentences.
Figure 2 Disambiguation of conventional dictionaries. "Learn phrases," "analyze," and "annotation" require morpho-syntactic analysis of the transformed sentences.
Figure 4
Impact of corpus size (measured in number of running words in the corpus) on vocabulary size (measured in number of different full-form words found in the corpus) for the German part of the Verbmobil corpus.
Table 1
Sample analysis of a German sentence. Input: Wir wollen nach dem Abendessen nach Essen aufbrechen. (In English: We want to start for Essen after dinner.)

Original      Base form      Tags
Wir           wir            personal-pronoun plural first nominative
wollen        wollen         verb indicative present plural first
nach          nach           preposition dative
dem           das            definite-article singular dative neuter
Abendessen    Abend#essen    noun neuter singular dative
nach          nach           preposition dative
Essen         Essen          noun name neuter singular dative
              Esse           noun feminine plural dative
              Essen          noun neuter plural dative
              Essen          noun neuter singular dative
aufbrechen    auf|brechen    verb separable infinitive
Table 2
Sample analysis of an English sentence. Input: Do we have to reserve rooms?

Original    Base form    Tags
Do          do           verb present not-singular-third finite auxiliary
we          we           personal-pronoun nominative plural first subject
have        have         verb infinitive not-finite main
to          to           infinitive-marker
reserve     reserve      verb infinitive not-finite main
rooms       room         noun nominative plural object
Table 3
Resolution of ambiguity on the Verbmobil corpus (number of readings per word form).

Disambiguation                        German    English
None                                  2.85      1.86
By context                            1.20      1.02
By preference                         1.19      1.02
By selecting relevant tags            1.06      1.01
By resorting to unambiguous part      1.00      1.00
[Figure (diagram): training with scarce resources. The conventional dictionary D is disambiguated (using learned phrases) into dictionary D'; the bilingual corpora C1 and C2 are restructured and annotated; alignment training and hierarchical lexicon training are performed on the combined corpus, yielding the alignment model and the lexicon model; the language model is trained on the monolingual corpus.]
Table 5
Statistics of corpora for training: Verbmobil and Nespole!. Singletons are types occurring only once in training.

                                                       Verbmobil             Nespole!
                                                       English    German     English   German
Number of sentences                                    58,073     58,073     3,182     3,182
Number of distinct sentences                           57,731     57,771     1,758     1,767
Number of running word forms                           549,921    519,523    15,568    14,992
Number of running word forms without punctuation       453,612    418,974    12,461    11,672
Number of word forms                                   4,673      7,940      1,034     1,363
Number of singleton word forms                         1,698      3,453      403       641
Number of base forms                                   3,639      6,063      1,072     870
Number of singleton base forms                         1,236      2,546      461       326
Table 6
Conventional dictionary used to complement the training corpus.

                                  English    German
Number of entries                 10,498     10,498
Number of running word forms      15,305     12,784
Number of word forms              5,161      7,021
Number of base forms              3,666      5,479
Table 8
Statistics for the test sets for German to English translation: Verbmobil Eval-2000 (Test and Develop) and Nespole!.

                                                     Verbmobil           Nespole!
                                                     Test      Develop
Number of sentences                                  251       276       70
Number of running word forms in German part          2,628     3,159     456
Number of word forms in German part                  429       434       180
Trigram LM perplexity of reference translation       30.5      28.1      76.9
7.1.2 Nespole!
Nespole! is a research project that ran from January 2000 to June 2002.
Table 9
Results for hierarchical lexicon models and translation with scarce resources. "Restructuring" entails treatment of question inversion and separated verb prefixes as well as merging of phrases in both languages. A conventional dictionary is available in all three setups. The language model is always trained on the full monolingual English corpus. Task: Verbmobil. Testing on 527 sentences (Test and Develop).

Number of sentences
for training                                                       BLEU     m-WER    SSER     ISER
58,000    Baseline                                                 53.7%    34.1%    30.2%    14.1%
          Restructuring                                            56.3     32.5     26.6     12.8
          + dictionary disambiguated + hierarchical lexicon        57.1     31.8     26.3     11.8
5,000     Baseline                                                 47.4     38.0     37.3     17.4
          Restructuring                                            52.1     34.7     33.6     15.2
          + dictionary disambiguated + hierarchical lexicon        52.9     33.9     31.8     13.7
0         Baseline                                                 23.3     53.6     60.4     29.8
          Restructuring                                            29.1     50.2     57.8     30.0
          + dictionary disambiguated + hierarchical lexicon        32.6     48.0     52.8     24.1
Table 10
Examples of the effect of the hierarchical lexicon.

Input                   ich würde das Hilton vorschlagen denn es ist das beste.
Baseline                I would suggest that Hilton then it is the best.
Hierarchical lexicon    I would suggest the Hilton because it is the best.

Input                   sind Sie mit einem Doppelzimmer einverstanden?
Baseline                are you agree with a double room?
Hierarchical lexicon    would you agree with a double room?

Input                   mit dem Zug ist es bequemer.
Baseline                by train it is UNKNOWN-bequemer.
Hierarchical lexicon    by train it is convenient.

Input                   wir haben zwei Zimmer.
Baseline                we have two room.
Hierarchical lexicon    we have two rooms.
Table 11
Results for the hierarchical lexicon model on the Nespole! task. "Restructuring" entails treatment of question inversion and separated verb prefixes as well as merging of phrases in both languages. The same conventional dictionary was used as in the Verbmobil experiments. The language model was trained on a combination of the English parts of the Nespole! corpus and the Verbmobil corpus.

                           BLEU     m-WER    SSER
Baseline                   31.6%    50.2%    41.1%
Restructuring              33.7     45.9     38.1
+ hierarchical lexicon     36.5     44.1     34.3
Of course, there is not only one set of relevant tags, but at least one per part of speech. In order to keep the notation as simple as possible, this fact is not accounted for in the formulas and the textual descriptions.
Acknowledgments
This work has been partially supported as part of the Verbmobil project (contract number 01 IV 701 T4) by the German Federal Ministry of Education, Science, Research and Technology and as part of the EuTrans project (project number 30268) by the European Union. For the provision of the Nespole! data we thank the Nespole! consortium, listed on the project's home page (Nespole! 2000). Special thanks to Alon Lavie, Lori Levin, Stephan Vogel, and Alex Waibel (in alphabetical order).
Al-Onaizan, Yaser, Ulrich Germann, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Daniel Marcu, and Kenji Yamada. 2000. Translating with scarce resources. In Proceedings of the 17th National Conference on Artificial Intelligence (AAAI), pages 672-678, Austin, TX, August.
Berger, Adam L., Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, J. R. Gillett, and A. S. Kehler. 1996. Language translation apparatus and method of using context-based translation models. United States Patent, Patent Number 5510981, April.
Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and M. J. Goldsmith. 1993. But dictionaries are data too. In Proceedings of the ARPA Human Language Technology Workshop '93, pages 202-205, Princeton, NJ, March.
Brown, Peter F., John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85.
Brown, Peter F., John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Frederick Jelinek, Robert L. Mercer, and Paul S. Roossin. 1988. A statistical approach to language translation. In Proceedings of COLING 1988: The 12th International Conference on Computational Linguistics, pages 71-76, Budapest, August.
Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, John D. Lafferty, and Robert L. Mercer. 1992. Analysis, statistical transfer, and synthesis in machine translation. In Proceedings of TMI 1992: Fourth International Conference on Theoretical and Methodological Issues in MT, pages 83-100, Montreal, Quebec, Canada, June.
Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. Mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
Darroch, J. N. and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43:1470-1480.
Della Pietra, Stephen A., Vincent J. Della Pietra, and John D. Lafferty. 1995. Inducing features of random fields. Technical Report CMU-CS-95-144, Carnegie Mellon University, Pittsburgh, PA.
Foster, George. 2000. A maximum entropy/minimum divergence translation model. In Proceedings of ACL 2000: The 38th Annual Meeting of the Association for Computational Linguistics, pages 37-44, Hong Kong, October.
García-Varea, Ismael and Francisco Casacuberta. 2001. Search algorithms for statistical machine translation based on dynamic programming and pruning techniques. In Proceedings of the MT Summit VIII, pages 115-120, Santiago de Compostela, Spain, September.
Germann, Ulrich, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada. 2001. Fast decoding and optimal decoding for machine translation. In Proceedings of ACL-EACL 2001: The 39th Annual Meeting of the Association for Computational Linguistics (joint with EACL 2001), pages 228-235, Toulouse, France, July.
Kanevsky, Dimitri, Salim Roukos, and Jan Sedivy. 1997. Statistical language model for inflected languages. United States Patent, Patent Number 5835888.
Karlsson, Fred. 1990. Constraint grammar as a framework for parsing running text. In Proceedings of COLING 1990: The 13th International Conference on Computational Linguistics, volume 3, pages 168-173, Helsinki, August.
Koehn, Philipp and Kevin Knight. 2001. Knowledge sources for word-level translation models. In Lillian Lee and Donna Harman, editors, Proceedings of EMNLP 2001: Conference on Empirical Methods in Natural Language Processing, pages 27-35, Pittsburgh, PA, June.
Larson, Martha, Daniel Willett, Joachim Köhler, and Gerhard Rigoll. 2000. Compound splitting and lexical unit recombination for improved performance of a speech recognition system for German parliamentary speeches. In Proceedings of ICSLP 2000: Sixth International Conference on Spoken Language Processing, volume 3, pages 945-948, Beijing, February.
Lavie, Alon, Chad Langley, Alex Waibel, Fabio Pianesi, Gianni Lazzari, Paolo Coletti, Loredana Taddei, and Franco Balducci. 2001. Architecture and design considerations in NESPOLE! A speech translation system for e-commerce applications. In James Allan, editor, Proceedings of HLT 2001: First International Conference on Human Language Technology Research, pages 31-39, San Diego, March.
Maltese, G. and F. Mancini. 1992. An automatic technique to include grammatical and morphological information in a trigram-based statistical language model. In Proceedings of ICASSP 1992: International Conference on Acoustics, Speech and Signal Processing, pages 157-160, San Francisco, March.
NESPOLE! (NEgotiating through SPOken Language in e-commerce). 2000. Project homepage. Available at http://nespole.itc.it/.
Ney, Hermann, Sonja Nießen, Franz Josef Och, Hassan Sawaf, Christoph Tillmann, and Stephan Vogel. 2000. Algorithms for statistical translation of spoken language. IEEE Transactions on Speech and Audio Processing, 8(1):24-36.
Nießen, Sonja and Gregor Leusch. 2000. EvalTrans, a tool for semi-automatic evaluation of machine translation. In Proceedings of LREC 2000, Athens. Tool is available at http://www-i6.Informatik.RWTH-Aachen.DE/~niessen/Evaluation/.
Nießen, Sonja and Hermann Ney. 2000. Improving SMT quality with morpho-syntactic analysis. In Proceedings of COLING 2000: The 18th International Conference on Computational Linguistics, pages 1081-1085, Saarbrücken, Germany, July.
Nießen, Sonja and Hermann Ney. 2001. Morpho-syntactic analysis for reordering in statistical machine translation. In Proceedings of MT Summit VIII, pages 247-252, Santiago de Compostela, Spain, September.
Nießen, Sonja, Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1998. A DP based search algorithm for statistical machine translation. In Proceedings of COLING-ACL 1998: The 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, pages 960-967, Montreal, Quebec, Canada, August.
Och, Franz Josef. 2002. Machine Translation: From Single-Word Models to Alignment Templates. Ph.D. thesis, Computer Science Department, RWTH-University of Technology, Aachen, Germany.
Och, Franz Josef, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proceedings of EMNLP 1999: Conference on Empirical Methods in Natural Language Processing, pages 20-28, University of Maryland, College Park, June.
Och, Franz Josef and Hans Weber. 1998. Improving statistical natural language translation with categories and rules. In Proceedings of COLING-ACL 1998: The 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics, pages 985-989, Montreal, Quebec, Canada, August.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: A method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Yorktown Heights, NY, September.
Ratnaparkhi, Adwait. 1997. A simple introduction to maximum entropy models for natural language processing. Technical Report 97-08, Institute for Research in Cognitive Science, University of Pennsylvania, Philadelphia, May.
Tillmann, Christoph and Hermann Ney. 2000. Word re-ordering and DP-based search in statistical machine translation. In Proceedings of COLING 2000: The 18th International Conference on Computational Linguistics, pages 850-856, Saarbrücken, Germany, August. |
248,779,953 | Is Attention Explanation? An Introduction to the Debate | The performance of deep learning models in NLP and other fields of machine learning has led to a rise in their popularity, and so the need for explanations of these models becomes paramount. Attention has been seen as a solution to increase performance, while providing some explanations. However, a debate has started to cast doubt on the explanatory power of attention in neural networks. Although the debate has created a vast literature thanks to contributions from various areas, the lack of communication is becoming more and more tangible. In this paper, we provide a clear overview of the insights on the debate by critically confronting works from these different areas. This holistic vision can be of great interest for future works in all the communities concerned by this debate. We sum up the main challenges spotted in these areas, and we conclude by discussing the most promising future avenues on attention as an explanation. | [
129945615,
233864728,
182953113,
44062236,
218684573,
155092004,
235254444,
222291117,
215416110,
234777911,
218487351,
173188413,
218973757,
220047423,
202583616,
184486746,
184486755,
235293798,
982761,
208100714,
220046827,
236477936,
11212020,
67855860,
199552244,
222176890
] | Is Attention Explanation? An Introduction to the Debate
Association for Computational Linguistics. Copyright © 2022 Association for Computational Linguistics. May 22-27, 2022.
Adrien Bibal adrien.bibal@uclouvain.be
CENTAL, IL&C
University of Louvain
Belgium
Rémi Cardon remi.cardon@uclouvain.be
CENTAL, IL&C
University of Louvain
Belgium
David Alfter david.alfter@uclouvain.be
CENTAL, IL&C
University of Louvain
Belgium
Rodrigo Wilkens
CENTAL, IL&C
University of Louvain
Belgium
Xiaoou Wang xiaoou.wang@uclouvain.be
CENTAL, IL&C
University of Louvain
Belgium
Thomas François thomas.francois@uclouvain.be
CENTAL, IL&C
University of Louvain
Belgium
Patrick Watrin patrick.watrin@uclouvain.be
CENTAL, IL&C
University of Louvain
Belgium
Is Attention Explanation? An Introduction to the Debate
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
the 60th Annual Meeting of the Association for Computational Linguistics, Volume 1. Association for Computational Linguistics. May 22-27, 2022. © 2022.
The performance of deep learning models in NLP and other fields of machine learning has led to a rise in their popularity, and so the need for explanations of these models becomes paramount. Attention has been seen as a solution to increase performance, while providing some explanations. However, a debate has started to cast doubt on the explanatory power of attention in neural networks. Although the debate has created a vast literature thanks to contributions from various areas, the lack of communication is becoming more and more tangible. In this paper, we provide a clear overview of the insights on the debate by critically confronting works from these different areas. This holistic vision can be of great interest for future works in all the communities concerned by this debate. We sum up the main challenges spotted in these areas, and we conclude by discussing the most promising future avenues on attention as an explanation.
Introduction
Attention mechanisms have been widely used in various tasks of Natural Language Processing (NLP) as well as in other fields of machine learning (e.g., Computer Vision (Mnih et al., 2014;Li et al., 2019)). These mechanisms draw insight from the intuition that humans build the representation of a whole scene by dynamically focusing on relevant parts at different times (Rensink, 2000).
The general form of attention has been named differently according to authors (alignment model (Bahdanau et al., 2015) and attention mechanism (Vaswani et al., 2017)). In essence, the attention function maps a query Q and keys K to scalar scores (Vaswani et al., 2017). These scores are fed to a softmax function, in turn producing a set of attention weights that are then applied to values V. Different kinds of attention are thus possible according to how many keys are attended to
(global vs. local attention, according to Luong et al. (2015)) and where the query is generated (cross vs. self-attention as in the works of Bahdanau et al. (2015) and Vaswani et al. (2017)). In this paper, we focus on attention regardless of these technical differences. There are mainly two ways of computing the attention weights $\alpha$: Bahdanau et al. (2015) introduced additive attention $\alpha = \mathrm{softmax}(w_3^\top \tanh(W_1 K + W_2 Q))$, where $w_3$, $W_1$ and $W_2$ are model parameters to be learned, and Vaswani et al. (2017) introduced scaled dot-product attention $\alpha = \mathrm{softmax}\!\left(\frac{KQ}{\sqrt{m}}\right)$, where $m$ represents the dimension of K. These two forms are theoretically similar (Vaswani et al., 2017) and generally give the same results (Jain and Wallace, 2019), the dot-product form being faster on certain tasks from a practical point of view.
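As an illustration, the following NumPy sketch computes both variants for one query over a handful of keys; the dimensions, the parameter names (W1, W2, w3) and the random toy values are assumptions made for the example, not taken from any of the cited implementations.

```python
# A minimal sketch of the two attention variants discussed above, using NumPy.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
m = 8                         # dimension of keys and query
K = rng.normal(size=(5, m))   # 5 keys (e.g., encoder states)
Q = rng.normal(size=(m,))     # a single query (e.g., decoder state)

# Additive attention (Bahdanau et al., 2015): a small feed-forward scorer.
W1 = rng.normal(size=(m, m))
W2 = rng.normal(size=(m, m))
w3 = rng.normal(size=(m,))
additive_scores = np.tanh(K @ W1.T + Q @ W2.T) @ w3   # one score per key
alpha_additive = softmax(additive_scores)

# Scaled dot-product attention (Vaswani et al., 2017).
dot_scores = K @ Q / np.sqrt(m)
alpha_dot = softmax(dot_scores)

# Either set of weights is then applied to the values V.
V = rng.normal(size=(5, m))
context = alpha_dot @ V
print(alpha_additive.round(2), alpha_dot.round(2))
```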
Since the introduction of attention mechanisms in the literature, many have seen the opportunity to use the weights for explaining neural networks (e.g., Xu et al. (2015); Martins and Astudillo (2016); Choi et al. (2016); Xie et al. (2017); Mullenbach et al. (2018)). Indeed, the attention weights link the input to the rest of the network with the aim of performing a certain task, and are trained to do so through back-propagation. This link between the input and the rest of the network is used to work on explainability, which in machine learning and NLP is defined as the capacity to explain a non-interpretable (Bibal and Frénay, 2016), i.e., black-box, model (Guidotti et al., 2018). The two major ways to explain black-box models are global explanations, providing clues about the behavior of the model as a whole, and local explanations, explaining particular decisions. Using attention to explain neural networks mainly pertains to the latter, even if some authors study attention for global explanation (e.g., Clark et al. (2019)).
Explanations can also be faithful (how close the explanation is to the inner workings of the model) (Rudin, 2019;Jacovi and Goldberg, 2020), or plausible (does the user consider the explanation of the model plausible?) (Riedl, 2019;Jacovi and Goldberg, 2020). It should be noted that explanation presupposes some degree of transparency to the user, whether it is faithful or plausible. Indeed, disregarding this aspect would entail that the most faithful explanation is the black-box model itself.
Recently, a debate fundamentally questioned whether attention can be used as explanation (Jain and Wallace, 2019). An immediate response by Wiegreffe and Pinter (2019) challenged some of the arguments of Jain and Wallace (2019). To this day, the debate about "is attention explanation?" continues and is the source of a rich and diverse literature. Researchers from different areas have mostly contributed to this debate without referring to works outside, and sometimes even inside, their area. These insights include theoretical analyses of attention, the necessity to bring users in the loop, questioning the evaluation methodology for model explanation, and more. This paper brings together the papers from these different areas in order to provide an outline of the quickly growing and vast literature on the subject. Moreover, we discuss the lessons learned and highlight the main issues and perspectives. To accurately reflect the debate, we only focus on papers that are posterior to the works of Jain and Wallace (2019) and Wiegreffe and Pinter (2019), and that explicitly rely on these two papers to contribute to the debate. This paper proposes the first introduction to the debate about "is attention explanation?". The main contributions of this work are as follows:
• a summary and a discussion of the actual state of the debate by identifying convergences and disagreements in the literature;
• an extraction and structuring of the main insights from papers of different areas that generally do not interact; and
• the bases for developing research on attention as explanation, with a more integrated state-ofthe-art built upon a multitude of perspectives.
In order to present the different insights on the debate, we briefly summarize the two seminal papers (Section 2), describing the arguments of the two original papers that represent the source of the ongoing debate. We also present survey papers that mention the debate within a broader context (Section 3). We then investigate the different research perspectives we extracted from the literature (Sections 4 to 9). Finally, we analyze the insights offered by those works and offer foundations to build upon for future research related to attention as explanation (Section 10).
2 Starting Point of the Debate

Jain and Wallace (2019) make a set of observations on attention weights in a battery of experiments: (i) an analysis of the correlations between attention weights and feature importance methods (gradient-based and leave-one-out) and (ii) a study of the impact of counterfactual attention weight distributions on the final prediction by randomly shuffling the attention weights, and by shuffling them adversarially (i.e., by creating distributions that correspond to a focus on a different set of features than the one in the original attention distribution). The experiments are performed on three tasks: binary text classification, question answering and natural language inference. When commenting upon the results of their experiments, the authors' observations are: (i) there are poor correlations between attention weights and gradient-based or leave-one-out methods for explanation and (ii) shuffling the attention weights in a neural model does not affect the final prediction, except for some rare cases where the prediction relies on a few high precision tokens. The conclusion they draw from the poor correlations with other explanation methods and the lack of exclusive explanation is that attention cannot be used as a means of explanation.
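The permutation part of this setup can be pictured with a small sketch: given fixed hidden states and an attention distribution, one shuffles the attention weights and measures how much the prediction moves. The toy model below (a single attention-weighted linear classifier) and all names are illustrative assumptions, not the authors' actual code or models.

```python
# A minimal sketch of an attention-permutation test, assuming a classifier
# whose prediction is computed from an attention-weighted sum of hidden states.
import numpy as np

def predict_from_attention(hidden, alpha, W_out):
    """Probability of the positive class from attention-weighted hidden states."""
    context = alpha @ hidden              # weighted sum of hidden states
    logit = context @ W_out
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
hidden = rng.normal(size=(10, 16))        # 10 tokens, hidden size 16
alpha = np.full(10, 0.05); alpha[0] = 0.55  # attention peaked on token 0
alpha /= alpha.sum()
W_out = rng.normal(size=(16,))

p_original = predict_from_attention(hidden, alpha, W_out)

# Randomly permute the attention weights many times and record how much the
# prediction moves; small changes suggest the attention distribution is not a
# faithful explanation of this particular decision.
deltas = []
for _ in range(100):
    permuted = rng.permutation(alpha)
    deltas.append(abs(predict_from_attention(hidden, permuted, W_out) - p_original))
print(f"median output change under permutation: {np.median(deltas):.3f}")
```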
Wiegreffe and Pinter (2019) agree on the importance of the questions raised by Jain and Wallace (2019) and reply to their claims. They agree with the first observation and the corresponding experimental setup. However, they object to the second claim, stating that only modifying the attention weights in the model does not produce a real attention-based model. Indeed, if the attention weights should be modified for experimental purposes, then the model should be retrained to correspond to a real trained model with those modified attention weights. In addition, they also object to the exclusivity argument, stating that attention is "an explanation, not the explanation" (Wiegreffe and Pinter, 2019, p. 13). Indeed, several plausible explanations can co-exist for a similar degree of faithfulness.
The clash between the initial use of attention as explanation and the 2019 studies debating the validity of considering attention as an explanation started a vast literature on the subject. The following section presents survey papers that mention the debate within a broader perspective.
Survey Papers Mentioning the Debate
Usually, when exploring a question, survey papers are a good starting point, as they have the advantage of covering a broader scope. However, there is no in-depth introduction to the debate, as survey papers only briefly mention the debate and sometimes do not really add something significant for the discussion (e.g., Chaudhari et al. (2019) and Lindsay (2020)). Please note that we only discuss surveys that add significant elements to the discussion. Galassi et al. (2020) propose a survey on attention. They recall the results of Jain and Wallace (2019) on the fact that attention may not be explanation, but also refer to the fact that only faithful explanations (and not plausible ones; see Section 7) are considered. The "explanation" perspective of the survey is focused on the work of , which discusses how well attention captures the importance of abstract features in multilayer neural networks when dealing with images. Galassi et al. (2020) argue that an answer to the question "is attention explanation?" with image data may not generalize to text, and should be verified, as human understanding mechanisms strongly differ between images and texts.
de Santana Correia and Colombini (2021) introduce the debate in broad terms in Section 5.7 of their survey, but point out that, based on the work of Vashishth et al. (2019), the answer to the question "is attention explanation?" can take different shapes based on the NLP task that is studied (see our Section 6 for more details on this point of the debate). Later in their paper, they also mention, like Galassi et al. (2020), that some works show that attention in transformers focuses on syntactical structures (Voita et al., 2018;Vig and Belinkov, 2019;Tenney et al., 2019;Clark et al., 2019). This indicates that global explanations based on attention can be provided, but do not answer the need for the local, decision-based, explanation that is mainly discussed in the debate. Ras et al. (2021) also stress that the debate has been extended to several NLP tasks in the work of Vashishth et al. (2019). They add the information that mixed results have been obtained in the debate (Serrano and Smith, 2019;Baan et al., 2019).
Contrary to the short introductions to the debate in these survey papers, we aim at providing a clear and rather exhaustive view of the different ways the debate is tackled in the literature. The different insights on the debate, which are unfortunately not regrouped and discussed in these surveys (because the debate is not their main focus), are numerous: some papers add arguments about the fact that attention is not explanation (Section 4), provide analyses to explain why attention is not explanation (Section 5), analyze the debate on different NLP tasks (Section 6), discuss the methodological issues at the heart of the debate (Section 7), evaluate the explanatory power of attention with humans (Section 8), or propose solutions to make attention become explanation (based on technical developments or on user-in-the-loop strategies) (Section 9). Table 1 presents an overview of all works discussed in our paper, with the task(s) and architecture(s) they study (when applicable), and the section(s) in which they appear.
Additional Arguments about Attention is not Explanation
Some works may be considered as the direct continuation of the arguments of Jain and Wallace (2019) by adding experiments that corroborate their findings, e.g., by showing that the comparison of attention with other explainable methods different from the gradient-based one leads to similar conclusions. Serrano and Smith (2019) show that removing features considered as important by attention less often leads to a decision flip than removing features considered important by gradient-based methods. This means that the features deemed important by attention for a decision are not so important for the model. This, therefore, adds to the first argument of Jain and Wallace (2019) against the relevance of attention as an indicator of feature importance. Thorne et al. (2019) demonstrate that applying LIME (Ribeiro et al., 2016) on an attention-based neural network can provide good explanations that the attention itself cannot provide. They conclude on this subject that their experimental results are aligned with the ones of Jain and Wallace (2019).
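The erasure-style test used in such comparisons can be sketched as follows: rank the input tokens by some importance score (attention weights or a gradient-based saliency), remove the top-ranked token, and check whether the predicted class flips. The toy bag-of-embeddings classifier and the crude saliency below are illustrative assumptions only, not the setup of the cited works.

```python
# A minimal sketch of an erasure-based decision-flip test.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 8))        # 6 tokens, embedding size 8
W = rng.normal(size=(8, 2))          # 2-class linear classifier

def predict(embeddings):
    logits = embeddings.mean(axis=0) @ W
    return logits.argmax()

def decision_flips(importance):
    """True if erasing the most important token flips the predicted class."""
    keep = np.ones(len(emb), dtype=bool)
    keep[int(np.argmax(importance))] = False
    return predict(emb) != predict(emb[keep])

attention_importance = rng.dirichlet(np.ones(6))              # stand-in attention weights
gradient_importance = np.abs(emb @ W[:, 1] - emb @ W[:, 0])   # crude saliency proxy
print(decision_flips(attention_importance), decision_flips(gradient_importance))
```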
Mohankumar et al. (2020) investigate attention on top of LSTMs (attention-LSTMs). Their study focuses on why attention in such models provides neither plausible nor faithful explanations. They use a variety of NLP tasks (sentiment analysis, natural language inference, question answering and paraphrase detection) and randomly permute attention weights as Jain and Wallace (2019) do. They find that attention-LSTM outputs do not change much after the permutation and conclude that attention weights are not faithful explanations in attention-LSTMs. The authors propose changes to attention-LSTMs to make attention a faithful explanation (see Section 9.1). Moreover, by analyzing the attention given to part-of-speech tags, they find that the model cannot provide a plausible explanation either, since, for several datasets, a significant amount of attention is given to punctuation.

Table 1: Summary of works taking part in the debate by order of appearance in this paper. Note that some architectures contain attention layers by design (e.g., BERT and HANs), while an attention layer is generally added on top of the other ones (e.g., LSTMs and RNNs).
Finally, Ethayarajh and Jurafsky (2021) show that attention weights are not Shapley values (i.e., a method for feature importance) (Lundberg and Lee, 2017). This result is in line with Jain and Wallace (2019) on the fact that the attention weights do not correlate with other explanation techniques (saliency maps or Shapley values). The authors however note that attention flows (i.e., an extension of attention weights obtained after postprocessing) (Abnar and Zuidema, 2020) are Shapley values, which may indicate that using attention in another way could lead to explanation.
Analyses of Why Attention is not Explanation
In addition to the arguments in the literature on the fact that attention is not explanation, another part of the literature focuses on understanding the reasons why it is not explanation. Bai et al. (2021) show that attention can be put on uninteresting tokens because of an effect they call "combinatorial shortcuts". The key idea is that attention is calculated on the basis of a biased input: "the attention mechanism will try to select biased features to adapt the biased estimations to minimize the overall loss functions" (Bai et al., 2021, p. 27). For instance, if one adds random tokens (such as A, B, and C) to all documents in a corpus, one might find that some of these tokens are considered as important for the positive (or negative) class because their representation ends up being similar to the representation of "good" (or "bad"), even if their information content for the task is negligible, as they are present in all documents. Brunner et al. (2020) theoretically show that attention weights in transformers can be decomposed into two parts, of which the "effective attention" part corresponds to the attention that really affects the output. Effective attention focuses on the effective input needed by the model for the task and is not biased by the representation of the input. Kobayashi et al. (2020) extend the work of Brunner et al. (2020), but focus on describing the effective attention part in more detail instead of using it to improve the model. Likewise, Sun and Marasović (2021) also extend the work of Brunner et al. (2020) and delve deeper into the explanation of effective attention and its use for explaining the model. Sun and Lu (2020) study attention through two specific scores: attention and polarization. The attention score corresponds to the absolute value associated with each input token before the transformation into an attention weight. The polarization score is a global score (not instance-specific) for each input token, indicating its importance for predicting the positive or negative class. The authors show through these two scores why attention-based models are stable in their prediction, even when attention weights differ. They also show that the match between attention and polarizing scores strongly depends on the hyperparameter values.
By analyzing the effect of regularization on attention, Tutek and Šnajder (2020) show that one of the reasons why attention cannot be used as a faithful explanation is that all input tokens roughly have the same influence on the prediction. The authors show that regularizing attention-based models so that embedded tokens $e_t$ better correspond to their hidden representation $\mathrm{rnn}(e_t)$ produces explanations that are more faithful to the model. However, Meister et al. (2021) show that regularizing generally decreases the correlation between attention and explanation techniques, if the regularization is directed towards sparse attention weights. The authors conclude that sparsity, which is often viewed as increasing interpretability of models in the literature, in this case reduces the faithfulness of explanations.
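A minimal PyTorch sketch of this kind of tying regularizer is given below: an extra penalty keeps each hidden state close to its input embedding and is added to the task loss. The exact form of the penalty, its weight and all module names are assumptions made for illustration, not the regularization scheme of the cited work.

```python
# A toy attention-LSTM classifier with a "tying" penalty between embeddings
# and hidden states, added to the task loss during training.
import torch
import torch.nn as nn

class TiedAttentionClassifier(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.att = nn.Linear(dim, 1)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, tokens):
        e = self.emb(tokens)                       # (batch, seq, dim)
        h, _ = self.rnn(e)                         # (batch, seq, dim)
        alpha = torch.softmax(self.att(h).squeeze(-1), dim=-1)
        context = (alpha.unsqueeze(-1) * h).sum(dim=1)
        tie_penalty = ((h - e) ** 2).mean()        # keep h_t close to e_t
        return self.out(context), tie_penalty

model = TiedAttentionClassifier()
tokens = torch.randint(0, 1000, (4, 12))
labels = torch.randint(0, 2, (4,))
logits, tie_penalty = model(tokens)
loss = nn.functional.cross_entropy(logits, labels) + 0.1 * tie_penalty
loss.backward()
```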
Another way to analyze the problem is to study the change in the representation of the meaning of a sentence when (i) an attention layer is added, and when (ii) the type of RNN encoding the input is changed. The authors show that, in addition to an increase in accuracy, the use of attention also makes the model more stable in terms of representation of sentence meanings.
Is Attention Explanation on Different Tasks?
In this section, we introduce arguments from the literature that claim that, despite some proofs that attention is not always explanation, attention can be explanation on certain NLP tasks. In general, attention mechanisms seem to provide faithful explanations in syntax-related tasks such as part-of-speech tagging and syntactic annotation. Clark et al. (2019) thus investigate the attention heads in BERT in the context of syntactic dependency tagging and co-reference resolution. They find that attention heads at different layers attend to different kinds of information (e.g., direct objects of verbs, determiners of nouns or referential antecedents), with earlier layers having a broader attention span. Furthermore, attention heads in the same layer tend to show similar distributions, which counters the argument of Li et al. (2018) that encouraging attention heads to learn different distributions within layers can improve performance. Overall, knowledge of syntax seems to be encoded by a variety of attention heads in different layers, and thus attention can be used as a global explanation for the tasks under investigation. Similarly, Vig and Belinkov (2019) investigate attention in GPT-2, in particular for part-of-speech and syntactic tagging. They find that each part-of-speech is attended to by a specific subset of attention heads, and that attention heads in adjacent layers attend to similar part-of-speech tags. In general, attention shows which tokens were attended to for the task at hand and can thus be used as a global explanation. Clark et al. (2019) and Vig and Belinkov (2019) are some of the few works analyzing attention as explanation in a multi-head setting. Additional work is needed to establish the similarities and differences between single and multiple heads in the context of the debate.
In a different vein, Vashishth et al. (2019) investigate the role of attention across a variety of NLP tasks. They show that, when the input consists of a single sequence (e.g., in sentiment classification), the attention mechanism is comparable to a gating unit and, as such, the learned weights cannot be interpreted as attention. Therefore, in this context, attention does not provide an explanation of the model's reasoning. The reduction of attention to gating units, however, does not hold for self-attention networks nor for tasks depending on an additional text sequence, as for example in neural machine translation or natural language inference (pair-wise tasks and text generation tasks). In such cases, altering learned attention weights significantly degrades performance, and attention appears to be an explanation of the model and to correlate with feature importance measures.
Evaluation Methodology for Explanation
This section focuses on critics of the methodology when evaluating explanations via attention. The critics mainly focus on two points in the evaluation setup of Jain and Wallace (2019). First, Jain and Wallace (2019) claim that there should be a consistency between attention weights and other explanation methods -which Wiegreffe and Pinter (2019) agree with -and find none. Second, they state that the fact that attention could offer different explanations (which they show by shuffling the attention weights) is an issue, which is a strong point of disagreement with Wiegreffe and Pinter (2019). Regarding the first point, Neely et al. (2021) compare explanation methods from the literature (LIME, Integrated Gradients, DeepLIFT, Grad-SHAP and Deep-SHAP) with attention-based explanations. The comparison is performed on two types of classification: single-sequence classification (sentiment classification) and pair-sequence classification (language inference and understanding, and question answering). The authors find slight agreement between the different explanation methods, including attention-based explanations.
They conclude that checking for consistency between explanation methods should not be a criterion for evaluation, which goes against the agreement between the two seminal papers.
The second point on shuffling the attention weights is a subject of more discussion. Ju et al. (2021) propose a general discussion about logic traps in evaluating interpretation. Their take on this point of the debate is that a model with manipulated attention weights, as in the work of Jain and Wallace (2019), "cannot even be regarded as a trained model, which makes their manipulation meaningless" (Ju et al., 2021, p. 4), which adds to the point made by Wiegreffe and Pinter (2019). Other authors argue that it is too early for the debate to take place because there is no good definition and evaluation of explanations yet. They propose a Definition Driven Pipeline (DDP) to evaluate explanations based on the definition of faithfulness. They show that following this DDP can produce an evaluation of explanations that is less biased and can even drive the development of new faithful explanations.
Calling for more clearly differentiating between faithfulness and plausibility when evaluating explanation, Jacovi and Goldberg (2020) define five guidelines for evaluating faithfulness, building upon the common pitfalls and sub-optimal practices they observed in the literature. They propose an organization of the literature into three types: model assumption, prediction assumption, and linearity assumption. They state that the distinction between Jain and Wallace (2019) and Wiegreffe and Pinter (2019) is the underlying assumptions they use for evaluating attention heat-maps as explanations. The former attempts to provide different explanations of similar decisions per instance (therefore linked to prediction assumption). The latter critiques the former and is more anchored in the model assumption type of work.
Evaluating Explanations with Humans
The notion of plausibility of attention-based explanations implies asking humans to evaluate whether attention provides a plausible explanation for the model's decisions. A first issue is whether human judges can agree on what plausible explanations of a decision (e.g., a prediction) are. In an experiment involving predictions for sentiment analysis and reading comprehension, Vashishth et al. (2019) ask humans to decide whether the top 3 highest weighted words in 200 samples are relevant for the model's prediction. They report a very high agreement among judges (i.e., Cohen's κ over 0.8), which suggests that words receiving the highest attention can form a plausible explanation.
A second interesting issue is the type of human annotations that should be captured in order to assess a model's plausibility. The most common approach is to ask humans to assess attention heatmaps produced by a model. In Vashishth et al. (2019), users assess the relevance of the top 3 highest weighted words, whereas Mohankumar et al.
(2020) ask evaluators to decide which of two attention heatmaps better explains the model's prediction with regard to three dimensions: overall prediction, completeness (which heatmap highlights all the words required for the prediction) and correctness (which heatmap highlights only the important words and not unnecessary words). Another way to assess the difference between human and machine attention, in Sen et al. (2020), consists in asking humans to highlight important words for a classification task. The authors report an agreement percentage around 70% for this task and show that attention weights on top of bi-RNNs align well with human attention. This finding is especially true for words for which annotators agree on the importance.
A third line of research (Sood et al., 2020) uses eye tracking measures to investigate whether machine attention match human attention. The authors hypothesize that machine attention distributions should correlate with human attention strategies for a given task (e.g., question answering). They found that human and machine attention distributions are more similar on easier tasks, which may mean that, for difficult tasks, humans required more varied strategies. For LSTMs and CNNs, diverging more from human attention leads to a drop in performance, which is not the case for XLNets.
However, the fact that humans could reliably assess model's plausibility does not ensure that the model is faithful (Jacovi and Goldberg, 2020). In fact, Pruthi et al. (2020) cast serious doubts on using attention maps as a way for users to audit explanations in the context of fairness. More precisely, the authors train various architectures of neural network models on datasets that are all gender-biased and whose predictions heavily rely on "impermissible" tokens (e.g., pronouns). An adapted loss function is used to penalize the attention values of these impermissible tokens. The authors conclude that, although the problematic tokens are still used by the models, they do not appear in the attention map, which wrongly leads users to believe that the models are unbiased. In other words, the authors proved that a plausible explanation does not always imply that the explanation is faithful.
Solutions to Make Attention Explanation
This section proposes an overview of the different solutions that have been developed to tackle the various challenges raised by the debate. We identify two types of solutions: the first type, presented in Section 9.1, concerns purely technical solutions that are often based on the theoretical and empirical analyses presented in Section 5. The second type of solutions, presented in Section 9.2, leverages user-in-the-loop strategies to align machine attention with human attention.
Technical Solutions
The technical solutions developed to make attention an explanation differ by whether they use attention values directly or indirectly. Within a recurrent network, the representation of an input element contains a summary of the components of its context. As such, the attention weight computed for that element is imprecise because it indirectly focuses on the context. In order to avoid this dispersion, some researchers seek to reinforce the link between attention weights and input elements. Chrysostomou and Aletras (2021) propose a weighted representation $c$ of input elements $h_i$ using the attention weights $\alpha_i$ and scores $s_i$ that are specific to the elements themselves: $c = \sum_i h_i \alpha_i s_i$. They propose three learning strategies for that score (Linear TaSk, Feature-wise TaSk and Convolutional TaSk) and compare their solutions to three baseline explanation methods (Word Omission, InputXGrad and Integrated Gradients). Their results show that their solutions are an improvement over the baselines. Mohankumar et al. (2020) propose the introduction of more diversity in the hidden states learned by LSTMs, enabling the observation of elements separately from their context. They evaluate two different strategies in their paper: orthogonalization and diversity driven training. The first strategy imposes a constraint of orthogonality on the hidden states, while in the second strategy, the model learns to consider the hidden states separately thanks to an additional term in the objective function. The authors show that the resulting attention values offer explanations that are not only more faithful, but also more plausible.
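The scaled context vector of Chrysostomou and Aletras (2021) described above can be sketched as follows; the way the extra scores s are produced here (a single linear scorer) is an assumption made for illustration and does not correspond to any of the three TaSk variants in particular.

```python
# A minimal sketch of the scaled context vector c = sum_i h_i * alpha_i * s_i,
# where s_i is an extra task-specific score per input element.
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(7, 32))                             # hidden states for 7 tokens
att_scores = rng.normal(size=7)
alpha = np.exp(att_scores) / np.exp(att_scores).sum()    # attention weights

w_s = rng.normal(size=(32,))
s = h @ w_s                                              # task-specific score per element

c_plain = (alpha[:, None] * h).sum(axis=0)                  # standard attention
c_scaled = (alpha[:, None] * s[:, None] * h).sum(axis=0)    # TaSk-style scaling
print(c_plain.shape, c_scaled.shape)
```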
Tutek and Šnajder (2020) explore different hidden state regularization methods in order to preserve a strong link with the corresponding input elements. They propose a regularization scheme that positively impacts the attention weights by reinforcing their link with the model prediction, which, in turn, leads to more faithful explanations.
The above approaches rely on a property of recurrent networks and seek to work on the attention by modifying the representation of the input elements within the network. In parallel, some researchers focus directly on the attention weights. Moradi et al. (2021) modify the loss function by adding a term that penalizes non-faithful attention. In order to quantify faithfulness, they propose a measure that combines three different stress tests: ZeroOutMax, Uniform and RandomPermute. They show that their method optimizes faithfulness, while improving the model's performance. Bai et al. (2021) propose to weight the elements of the input $X$ to counter the effect of combinatorial shortcuts (see Section 5). The weighting scheme is based on the fact that when estimating $\mathbb{E}(Y \mid X \odot M)$ in attention, where $M$ are masks applied ($\odot$) to the elements of the input $X$, the choice of masks $M$ is biased by $X$ and $Y$ because of the key and query elements when computing attention. The authors therefore weight the instances by $w = \frac{P(y)}{P(y \mid m)}$ to disconnect $m$ from $y$, and, in turn, to encourage $m$ to select meaningful elements of $x$ to predict $y$.
Attention can be Explanation When Users are in the Loop
Another way to make attention become explanation is to bring users into the loop. This approach is sometimes called supervised attention, as the user attention is used by the model during training. Strout et al. (2019) show that using human rationale to supervise attention can produce explanations that are better accepted by users, but can also lead to better results in terms of performance. Zhong et al. (2019) modify an attention-based LSTM to make it match user provided attention. In order to do that, they compare the distributions of machine and user attention and use a Kullback-Leibler divergence between the two distributions to penalize the attention of the model.
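A minimal sketch of such a KL-based supervision term is shown below: the training loss combines a standard task loss with the divergence between human-provided and model attention distributions. Function names, the weighting factor and the toy tensors are illustrative assumptions, not the cited implementation.

```python
# Supervised attention via a KL-divergence penalty between human and model
# attention distributions, added to the task loss (PyTorch).
import torch
import torch.nn.functional as F

def supervised_attention_loss(logits, labels, model_attention, human_attention,
                              lam=0.5):
    """Task loss plus lam * KL(human_attention || model_attention)."""
    task_loss = F.cross_entropy(logits, labels)
    kl = F.kl_div(model_attention.log(), human_attention, reduction="batchmean")
    return task_loss + lam * kl

# Toy example with a batch of 2 sentences of 5 tokens each and 3 classes.
logits = torch.randn(2, 3, requires_grad=True)
labels = torch.tensor([0, 2])
model_attention = torch.softmax(torch.randn(2, 5, requires_grad=True), dim=-1)
human_attention = torch.tensor([[0.6, 0.1, 0.1, 0.1, 0.1],
                                [0.2, 0.2, 0.2, 0.2, 0.2]])
loss = supervised_attention_loss(logits, labels, model_attention, human_attention)
loss.backward()
```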
Still in the spirit of supervised attention, Heo et al. (2020) extend the meta-learning technique called neural processes to include attention. Their Neural Attention Processes (NAP) are designed to consider user-provided attention in an active learning fashion through the use of context points. Kanchinadam et al. (2020) also extend the training of attention to obtain a supervised version of attention. Their approach consists in the addition of a term in the objective function of their model to penalize the difference between the machine and the user attention. As in Heo et al. (2020), the authors make use of active learning in their method called Rationale-based Active Learning with Supervised Attention (RALSA) to collect user attention.
Finally, Arous et al. (2021) introduce MApping human Rationales To Attention (MARTA), a Bayesian framework to include human rationale in order to adapt machine attention. As for all other works in this section, the method improves the performance of the model while providing humanunderstandable explanations.
Discussion
As stated earlier in this paper, one of the difficulties in this debate is that the insights are brought from papers of different areas that do not always cite each other. In fact, even inside a particular area, papers do not always refer to each other. In this section, we aim at bridging the gap between the different papers and their area in order to extract the main conclusions and some points of tension.
First of all, like Thorne et al. (2019) who state that LIME can be used for explanation, thus questioning the need for attention, Bastings and Filippova (2020) state that saliency methods can be used for explanation, removing the need for attention. Therefore, according to Bastings and Filippova (2020), if explanation tools already exist, why is the debate about attention useful? Two answers can be provided to this question. First, attention is something that is learned for performance purposes, so it would be useful if it could be used as explanation also, instead of using additional post-hoc tools. Second, the existence of the debate kick-started solutions that are now moving towards explanation.
Solutions for making attention explanation should consider the two sides of explanation: faithfulness and plausibility. This subject is at the heart of the debate, as Wiegreffe and Pinter (2019) already pointed out that Jain and Wallace (2019) focus on faithful explanations only. Indeed, users may not be satisfied by explanations that are only faithful; explanations also need to be plausible for them. The right balance between plausibility and faithfulness may lie in human-based evaluations (Section 8) and supervised attention (Section 9.2).
That being said, faithfulness should also be evaluated in its own right, without any consideration of plausibility, to check whether the explanation matches the model's behavior. However, as explained by Jacovi and Goldberg (2020), faithfulness should not be evaluated in a binary fashion: the level of faithfulness needed for attention to be accepted as an explanation should be measured. Furthermore, the faithfulness of attention is generally evaluated against gradient-based techniques, and other techniques like LIME, used as a ground truth. However, several works show that these techniques can lead to unexpected (and potentially misleading) results (Feng et al., 2018; Slack et al., 2020). As human-based evaluations are used to assess the plausibility of explanations, and cannot be used for assessing faithfulness (Jacovi and Goldberg, 2020), the question of how to evaluate faithfulness is still open.
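For concreteness, one common way of probing faithfulness along these lines, in the spirit of Jain and Wallace (2019), is to measure the rank correlation between attention weights and gradient-based token importance scores. The sketch below is a schematic illustration of that comparison, not any paper's exact protocol; the variable names and toy numbers are ours.

```python
import numpy as np
from scipy.stats import kendalltau

def attention_gradient_agreement(attention, gradient_importance):
    """Kendall's tau between attention weights and gradient-based importances
    for the tokens of a single input; values near 1 mean the two importance
    rankings agree, values near 0 mean they are essentially unrelated."""
    tau, _ = kendalltau(attention, gradient_importance)
    return tau

# Toy example with 6 tokens: attention and gradients disagree substantially.
attn = np.array([0.50, 0.20, 0.10, 0.10, 0.05, 0.05])
grads = np.array([0.05, 0.10, 0.40, 0.25, 0.15, 0.05])
print(attention_gradient_agreement(attn, grads))
```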
Still on the subject of evaluation, we noted that the different contributions to the debate are often based on different setups (as outlined by Table 1). Indeed, except for the analyses of attention on different tasks (Section 6), the contributions often base their claims on one or two tasks of their choice. The same issue can be observed with the use of different input embeddings and different architectures surrounding the attention layer(s). However, some authors stress that the lack of a common ground when discussing faithfulness, plausibility and explanations is not conducive to finding answers to the debate.
On the side of solutions, the common intuition in interpretability and explanation research that regularizing a model to be sparse improves our understanding of it is not well supported in the literature on attention. In fact, some authors like Meister et al. (2021) note that inducing sparsity may in fact reduce the faithfulness of attention.
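To make the sparsity option concrete, the following is a small NumPy implementation of sparsemax (Martins and Astudillo, 2016), the projection onto the probability simplex that underlies much of this line of work. Unlike softmax, it can assign exactly zero weight to some inputs; the toy scores below are our own.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax of a score vector z: Euclidean projection onto the probability
    simplex (Martins and Astudillo, 2016). The result may contain exact zeros."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                      # z_(1) >= z_(2) >= ...
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum              # which sorted entries stay positive
    k_z = k[support][-1]                             # size of the support
    tau = (cumsum[support][-1] - 1) / k_z            # threshold
    return np.maximum(z - tau, 0.0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0])
print("softmax:  ", softmax(scores))    # all entries strictly positive
print("sparsemax:", sparsemax(scores))  # low-scoring entries driven to exactly 0
```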
Another perspective that is better suited for obtaining faithful explanations is effective attention (Brunner et al., 2020;Kobayashi et al., 2020;Sun and Marasović, 2021). Indeed, while attention per se may not be explanation, further studies and uses of effective attention as a sub-part of attention may prove useful to learn a faithful explanation.
If plausible explanations are needed alongside faithful ones, supervised attention is a good perspective. The argument for supervised attention is well-founded: if attention is not explanation and if faithfulness is not enough, then making machine attention match human attention may be a solution. While one can argue that attention was originally introduced for performance purposes and that supervised attention may work against this advantage, several studies show that, in fact, guiding attention increases performance (e.g., Strout et al. (2019)). Supervised attention is therefore a solution that optimizes both performance and explainability. The main cost of this solution is that it requires the participation of users, but some solutions can handle few-shot user annotations (e.g., Heo et al. (2020)).

Grimsley et al. (2020) offer a philosophical perspective on the debate. They show that works studying attention as explanation do so in a causal framework. They argue that this is an issue because the object of study does not fit in that type of framework: the link between the attention layer and the model's output cannot be isolated from the other components of the model. They conclude that "attention weights alone cannot be used as causal explanation for model behavior" (Grimsley et al., 2020, p. 1786). This entails that assuming causality when evaluating the explanatory power of attention is doomed to fail by design. The authors propose non-causal explanation paradigms to explore the issue, such as mathematical, structural modal, and minimal-model explanations.
Conclusion
We have shown that the debate about the question "is attention explanation?" has already produced a vast and diverse literature. Throughout our analysis, we highlighted various insights that can help advance the debate: theoretically refining concepts around the notion of explanation (in particular plausibility and faithfulness), developing a common ground in the evaluation setup (e.g., similar input embeddings and architectures), extending the studies and uses of effective attention, and improving the integration of users for supervised attention. We intend our work to provide a solid ground for further research, calling for more integration to answer the question "is attention explanation?". In particular, combining the findings from the different areas (e.g., to produce a supervised effective attention) seems to be among the most promising avenues.
Acknowledgments
This research benefited from the support of the Walloon region through Win2Wal funding.
References
Samira Abnar and Willem Zuidema. 2020. Quantifying attention flow in transformers. In Proceedings of ACL, pages 4190-4197.
Ines Arous, Ljiljana Dolamic, Jie Yang, Akansha Bhardwaj, Giuseppe Cuccu, and Philippe Cudré-Mauroux. 2021. MARTA: Leveraging human rationales for explainable text classification. In Proceedings of AAAI, pages 5868-5876.
Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, and Maarten de Rijke. 2019. Do transformer attention heads provide transparency in abstractive summarization? In Proceedings of the SIGIR Workshop FACTS-IR.
Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.
Bing Bai, Jian Liang, Guanhua Zhang, Hao Li, Kun Bai, and Fei Wang. 2021. Why attentions may not be interpretable? In Proceedings of the ACM SIGKDD Conference, pages 25-34.
Jasmijn Bastings and Katja Filippova. 2020. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 149-155.
Adrien Bibal and Benoît Frénay. 2016. Interpretability of machine learning models and representations: An introduction. In Proceedings of ESANN, pages 77-82.
Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2020. On identifiability in transformers. In Proceedings of ICLR.
Sneha Chaudhari, Varun Mithal, Gungor Polatkan, and Rohan Ramanath. 2019. An attentive survey of attention models. arXiv:1904.02874.
Edward Choi, Mohammad Taha Bahadori, Joshua A Kulas, Andy Schuetz, Walter F Stewart, and Jimeng Sun. 2016. RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism. In Proceedings of NeurIPS, pages 3512-3520.
George Chrysostomou and Nikolaos Aletras. 2021. Improving the faithfulness of attention-based explanations with task-specific information for text classification. In Proceedings of ACL-IJCNLP.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT look at? An analysis of BERT's attention. In Proceedings of the ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286.
Alana de Santana Correia and Esther Luna Colombini. 2021. Attention, please! A survey of neural attention models in deep learning. arXiv:2103.16775.
Kawin Ethayarajh and Dan Jurafsky. 2021. Attention flows are shapley value explanations. In Proceedings of ACL-IJCNLP.
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of EMNLP, pages 3719-3728.
Andrea Galassi, Marco Lippi, and Paolo Torroni. 2020. Attention in natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(10):4291-4308.
Christopher Grimsley, Elijah Mayfield, and Julia RS Bursten. 2020. Why attention is not explanation: Surgical intervention and causal reasoning about neural models. In Proceedings of LREC, pages 1780-1790.
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM Computing Surveys, 51(5):1-42.
Jay Heo, Junhyeon Park, Hyewon Jeong, Kwang Joon Kim, Juho Lee, Eunho Yang, and Sung Ju Hwang. 2020. Cost-effective interactive attention learning with neural attention processes. In Proceedings of ICML, pages 4228-4238.
Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of ACL, pages 4198-4205.
Sarthak Jain and Byron C Wallace. 2019. Attention is not explanation. In Proceedings of NAACL-HLT, pages 3543-3556.
Yiming Ju, Yuanzhe Zhang, Zhao Yang, Zhongtao Jiang, Kang Liu, and Jun Zhao. 2021. The logic traps in evaluating post-hoc interpretations. arXiv:2109.05463.
Teja Kanchinadam, Keith Westpfahl, Qian You, and Glenn Fung. 2020. Rationale-based human-in-the-loop via supervised attention. In Proceedings of the KDD Workshop DaSH.
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of EMNLP, pages 7057-7075.
Jian Li, Zhaopeng Tu, Baosong Yang, Michael R Lyu, and Tong Zhang. 2018. Multi-head attention with disagreement regularization. In Proceedings of EMNLP, pages 2897-2903.
Xiang Li, Wenhai Wang, Xiaolin Hu, and Jian Yang. 2019. Selective kernel networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 510-519.
Grace W Lindsay. 2020. Attention in psychology, neuroscience, and machine learning. Frontiers in Computational Neuroscience, 14:29.
Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, and Bo Long. 2020. Are interpretations fairly evaluated? A definition driven pipeline for post-hoc interpretability. arXiv:2009.07494.
Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of NeurIPS, pages 4768-4777.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of EMNLP, pages 1412-1421.
Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proceedings of ICML, pages 1614-1623.
Clara Meister, Stefan Lazov, Isabelle Augenstein, and Ryan Cotterell. 2021. Is sparse attention more interpretable? In Proceedings of ACL-IJCNLP, pages 122-129.
Volodymyr Mnih, Nicolas Heess, and Alex Graves. 2014. Recurrent models of visual attention. In Proceedings of NeurIPS, pages 2204-2212.
Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M Khapra, Balaji Vasan Srinivasan, and Balaraman Ravindran. 2020. Towards transparent and explainable attention models. In Proceedings of ACL, pages 4206-4216.
Pooya Moradi, Nishant Kambhatla, and Anoop Sarkar. 2021. Measuring and improving faithfulness of attention in neural machine translation. In Proceedings of EACL, pages 2791-2802.
James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of NAACL-HLT, pages 1101-1111.
Michael Neely, Stefan F Schouten, Maurits JR Bleeker, and Ana Lucic. 2021. Order in the court: Explainable AI methods prone to disagreement. In Proceedings of the ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI.
Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, and Zachary C Lipton. 2020. Learning to deceive with attention-based explanations. In Proceedings of ACL, pages 4782-4793.
Gabrielle Ras, Ning Xie, Marcel van Gerven, and Derek Doran. 2021. Explainable deep learning: A field guide for the uninitiated. arXiv:2004.14545.
Ronald A. Rensink. 2000. The dynamic representation of scenes. Visual Cognition, 7(1):17-42.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the ACM SIGKDD Conference, pages 1135-1144.
Mark O Riedl. 2019. Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1):33-36.
Cynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215.
Cansu Sen, Thomas Hartvigsen, Biao Yin, Xiangnan Kong, and Elke Rundensteiner. 2020. Human attention maps for text classification: Do humans and neural networks focus on the same words? In Proceedings of ACL, pages 4596-4608.
Sofia Serrano and Noah A Smith. 2019. Is attention interpretable? In Proceedings of ACL, pages 2931-2951.
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. 2020. Fooling LIME and SHAP: Adversarial attacks on post hoc explanation methods. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 180-186.
Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling, and Ngoc Thang Vu. 2020. Interpreting attention models with human visual attention in machine reading comprehension. In Proceedings of CoNLL, pages 12-25.
Julia Strout, Ye Zhang, and Raymond Mooney. 2019. Do human rationales improve machine explanations? In Proceedings of the ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 56-62.
Kaiser Sun and Ana Marasović. 2021. Effective attention sheds light on interpretability. In Findings of ACL-IJCNLP, pages 4126-4135.
Xiaobing Sun and Wei Lu. 2020. Understanding attention for text classification. In Proceedings of ACL, pages 3418-3428.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of ACL, pages 4593-4601.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2019. Generating token-level explanations for natural language inference. In Proceedings of NAACL-HLT, pages 963-969.
Martin Tutek and Jan Šnajder. 2020. Staying true to your word: (How) can attention become explanation? In Proceedings of the ACL Workshop on Representation Learning for NLP, pages 131-142.
Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across NLP tasks. arXiv:1909.11218.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS, pages 5998-6008.
Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. In Proceedings of the ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63-76.
Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of ACL, pages 1264-1274.
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of EMNLP-IJCNLP, pages 11-20.
Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of ACL, pages 950-962.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of ICML, pages 2048-2057.
Cheng Zhang, Qiuchi Li, Lingyu Hua, and Dawei Song. 2021. How does attention affect the model? In Findings of ACL-IJCNLP, pages 256-268.
Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. 2019. Self-attention generative adversarial networks. In Proceedings of ICML, pages 7354-7363.
Ruiqi Zhong, Steven Shao, and Kathleen McKeown. 2019. Fine-grained sentiment analysis with faithful attention. arXiv:1908.06870. |
221,692,460 | Production-based Cognitive Models as a Test Suite for Reinforcement Learning Algorithms | We introduce a framework in which production-rule based computational cognitive modeling and Reinforcement Learning can systematically interact and inform each other. We focus on linguistic applications because the sophisticated rule-based cognitive models needed to capture linguistic behavioral data promise to provide a stringent test suite for RL algorithms, connecting RL algorithms to both accuracy and reaction-time experimental data. Thus, we open a path towards assembling an experimentally rigorous and cognitively realistic benchmark for RL algorithms. We extend our previous work on lexical decision tasks and tabular RL algorithms (Brasoveanu and Dotlačil, 2020b) with a discussion of neural-network based approaches, and a discussion of how parsing can be formalized as an RL problem. | [
6628106, 310507 ] | Production-based Cognitive Models as a Test Suite for Reinforcement Learning Algorithms
Adrian Brasoveanu
UC Santa Cruz
Linguistics, 1156 High St, Santa Cruz, CA 95064
Jakub Dotlačil j.dotlacil@gmail.com
Utrecht University Utrecht
The Netherlands
Production-based Cognitive Models as a Test Suite for Reinforcement Learning Algorithms
Online Event, November 19, 2020. © 2020 Association for Computational Linguistics
We introduce a framework in which production-rule based computational cognitive modeling and Reinforcement Learning can systematically interact and inform each other. We focus on linguistic applications because the sophisticated rule-based cognitive models needed to capture linguistic behavioral data promise to provide a stringent test suite for RL algorithms, connecting RL algorithms to both accuracy and reaction-time experimental data. Thus, we open a path towards assembling an experimentally rigorous and cognitively realistic benchmark for RL algorithms. We extend our previous work on lexical decision tasks and tabular RL algorithms (Brasoveanu and Dotlačil, 2020b) with a discussion of neural-network based approaches, and a discussion of how parsing can be formalized as an RL problem.
Reinforcement Learning and Production-based Cognitive Models
We introduce a framework in which we can start exploring how Reinforcement Learning (RL; Sutton and Barto 2018) algorithms scale up against human cognitive performance, as captured by complex, production-based cognitive models. Our ultimate goal is to focus on sophisticated cognitive models of linguistic skills, e.g., the parsers in Lewis and Vasishth (2005); Hale (2011); Engelmann (2016), because cognitive models that use theoretically grounded linguistic representations and processes call for richly structured representations and complex rule systems that pose significant challenges for RL algorithms. These cognitive models, which capture human-participant accuracy and latency data obtained from forced-choice and reaction-time experiments, can provide exacting, experimentally established benchmarks for the performance of artificial RL agents. These benchmarks will enable us to see if and when different RL algorithms fail, and how exactly they fail. In this paper, we report a small pilot study that exemplifies three modes of failure. Neural-network based function-approximation approaches sometimes (i) fail to learn even fairly simple rule systems in a stable manner. Even when they seem to learn, (ii) they fail by learning a lot of noise (incorrect rules), particularly in more complex tasks. Tabular approaches fare better, but (iii) learn complex tasks much more slowly, and still learn a lot of noise in more complex tasks, albeit less so than neural-network approaches.
Bridging the RL-cognitive modeling divide also promises to shed new light on the issue of cognitive model learnability. The learnability problem for production-rule based models can be divided into two parts: (i) rule acquisition, i.e., forming complex rules out of simpler ones, and (ii) rule ordering, i.e., deciding which rule to fire when. We focus here on the easier problem of rule ordering, and show how, on one hand, linguistic cognitive models provide a benchmark for RL algorithms and, on the other hand, RL provides a framework to systematically investigate cognitive model learnability in a formally and computationally explicit way.
We investigate the issue of rule-ordering learning using the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture (see Anderson and Lebiere 1998; Anderson 2007). The advantage of using ACT-R is that this cognitive architecture and RL have very close, albeit largely unexplored, connections (Fu and Anderson 2006; Sutton and Barto 2018, Ch. 14). ACT-R tries to address the rule acquisition and ordering problems, but its proposed solutions, production compilation and rule-utility estimation, respectively, have not been systematically applied to complex models for linguistic skills (apart from Taatgen and Anderson 2002, which investigates the role of production compilation in morphology acquisition).
After a brief overview of ACT-R and the description of a linguistic task for which rule-ordering learning will be studied (Section 2), we show how the linguistic task can be analyzed as an RL problem (Section 3), and discuss the results of our experiments with tabular and neural-network based Q-learning algorithms (Section 4). We then briefly discuss how parsing can be formalized as an RL problem (Section 5), and conclude with a summary and some directions for future work (Section 6).
Learning Goal-conditioned Rules in Lexical Decision: A Simple Test Case
There are two types of memory in ACT-R. On one hand, we have declarative memory ('knowing that'), which encodes our knowledge of facts. Facts are represented as chunks / attribute-value matrices, e.g., the lexical chunk for the word elephant:
(1) A chunk (attribute-value matrix) for the word 'elephant', e.g., with the features FORM: elephant and NUMBER: sg.

On the other hand, we have procedural memory ('knowing how'), which consists of the set of productions that fire in series to generate cognitive behavior / processes. These productions have the form of rewrite rules in formal grammars (e.g., context free / phrase structure grammars), but in ACT-R, they are conditionalized cognitive actions: the ACT-R mind fires a production, i.e., takes the action encoded in it, if the current cognitive state satisfies the preconditions of that production. Procedural memory and its production rules are the focus of our investigation and RL experiments here.
An example production is provided in (2): if the current cognitive state is such that the goal buffer (which drives cognitive processes in ACT-R) encodes a TASK of 'retrieving' the lexical entry for the FORM 'elephant,' then (=⇒), we take the action of placing a Retrieval (buffer) request to search declarative memory for a word with the FORM 'elephant,' and we consequently update the TASK in the goal buffer to one of 'retrieval done.'
(2)  Goal>
         TASK: retrieving
         FORM: elephant
     =⇒
     Goal>
         TASK: retrieval done
     Retrieval>
         ISA: word
         FORM: elephant

Implicit in this example production is that an ACT-R mind is composed of modules, which include declarative and procedural memory, but also visual and motor modules etc. Modules are not directly accessible: they can only be accessed through their associated buffers, e.g., the retrieval buffer is associated with declarative memory. Buffers serve a dual purpose: individually, they provide the input/output interface to specific modules; as a whole, however, buffers represent the current cognitive state of the mind. Crucially, productions fire based on the current cognitive state, i.e., they are conditioned on the contents of various buffers.
The ACT-R architecture constrains cognitive behavior in various ways, two of which are that (i) buffers can hold only one chunk, and (ii) only one production can fire at any given time.
The framework and the range of issues that emerge when we try to systematically bridge RL and ACT-R are best showcased with a simple kind of linguistic task: lexical decision (LD) tasks. We briefly outline in Section 5 how to extend this approach to parsing models implemented in ACT-R. In an LD task, human participants see a string of letters on a screen. If the participants think the string of letters is a word, they press one key (J in our setup). If they think the string is not a word, they press a different key (F in our setup). After pressing the key, the next stimulus is presented. We will investigate the extent to which two kinds of RL agents can be used to learn goal-conditioned rules in an ACT-R based cognitive model of LD tasks.
The main point of proposing and examining an ACT-R model of LD tasks is to construct a simple example of a production-rule based model that enables us to study learnability issues associated with RL algorithms. Our discussion recapitulates the main results in Brasoveanu and Dotlačil (2020b), and extends them with an initial foray into neural-network based RL approaches. The LD model and RL algorithms can be scaled up in future work to more complex and cognitively realistic syntactic and semantic parsing models, since LD is basically a subcomponent of parsing.
The LD model provides the basic scaffolding of production rules needed for LD tasks, which is all that we need for our purposes: fleshing it out to capture major experimental results about LD, or comparing it to previously proposed cognitive models of LD is not our focus here.
We model three LD tasks of increasing length, hence difficulty: (i) a 1-stimulus task consisting only of the word elephant, (ii) a 2-stimuli task consisting of the word elephant and a non-word, and (iii) a 4-stimuli task consisting of the word elephant, a non-word, the word dog, and another non-word.
The model components are split between declarative memory, which stores the lexical knowledge of an English speaker, and procedural memory, which stores rules that enable the model to carry out the LD task. LD tasks can be modeled in ACT-R with a small number of rules (see Dotlačil 2019, 2020a). We will assume 4 rules: Rule 1 (retrieving), which places a declarative-memory retrieval request for the string currently in the visual buffer; Rule 2 (lexeme retrieved), which fires after a successful retrieval and presses the J key; Rule 3 (no lexeme found), which fires after a failed retrieval and presses the F key; and Rule 4 (finished), which ends the task once the text FINISHED is displayed. A schematic encoding of these rules is sketched below, after (3). In the original model, these rules were hand-coded to fire serially by conditioning all their actions on specific goal states. These goal-state preconditions were not provided to the RL agents: the order of the rules was not hand-coded for them. Instead, we want the RL agents to learn the rule ordering. With fully specified, hand-coded rules, the LD task unfolds as follows. Assume the initial goal STATE of the ACT-R model is retrieving, and the word elephant appears on the virtual screen of the model, which is automatically stored in the VALUE slot of the visual buffer. At this initial stage, the preconditions of Rule 1 are satisfied, so the rule fires. This starts an attempt to retrieve a word with the form elephant from declarative memory, and the goal STATE is updated to retrieval done. When the word is successfully retrieved, Rule 2 fires and the J key is pressed. At that point:
i. in the 1-stimulus task, the text FINISHED is displayed, then Rule 4 fires and ends the task;
ii. in the 2-stimuli task, the non-word is displayed, then Rule 1 fires again; the retrieval attempt fails since we cannot retrieve a nonword from declarative memory, so Rule 3 fires and the F key is pressed; at that point, the text FINISHED is displayed, then Rule 4 fires and ends the task;
iii. in the 4-stimuli task, the first non-word is displayed, Rule 1 fires again, then, just as in the 2-stimuli task, Rule 3 fires and the F key is pressed, after which the word dog is displayed, Rule 1 fires for the third time followed by Rule 2, which means that the J key is pressed and the second non-word is displayed; then, Rule 1 fires for the final time, followed by Rule 3, which triggers an F-key press, after which the text FINISHED is displayed, so Rule 4 fires and ends the task.
Thus, the rule sequences for the 3 LD tasks are as shown in (3), assuming fully specified, hand-coded rules. However, as we mentioned, we do not hand-code the goal-state preconditions for the RL agents. We only specify the actions (and preconditions associated with buffers other than the goal buffer) and let the RL agents, which can select any rule at any given time, learn to carry out the LD tasks.
(3)  1-stim rules: [1 - 2] - 4
     2-stim rules: [1 - 2] - [1 - 3] - 4
     4-stim rules: [1 - 2] - [1 - 3] - [1 - 2] - [1 - 3] - 4
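To make the rule system concrete, here is a schematic Python encoding of the four rules as condition-action pairs over a snapshot of the buffer state. This is an illustrative sketch, not the actual pyactr model: the dictionary keys, the action labels ('retrieve', 'press_key', 'end_task'), and the values used to signal retrieval success or failure are our own simplifications.

```python
# A schematic rendering of the 4 LD rules as condition-action pairs.
# `state` is a dict snapshot of the relevant buffers; the returned pairs
# stand in for ACT-R module requests and are purely illustrative.

def rule_1_retrieving(state):                 # Rule 1: request retrieval of what is on screen
    if state["visual_value"] not in (None, "FINISHED") and state["retrieval"] is None:
        return ("retrieve", state["visual_value"])

def rule_2_lexeme_retrieved(state):           # Rule 2: retrieval succeeded -> press J
    if state["retrieval"] == "succeeded":
        return ("press_key", "J")

def rule_3_no_lexeme_found(state):            # Rule 3: retrieval failed -> press F
    if state["retrieval"] == "failed":
        return ("press_key", "F")

def rule_4_finished(state):                   # Rule 4: the text FINISHED is displayed -> end task
    if state["visual_value"] == "FINISHED":
        return ("end_task", None)

RULES = [rule_1_retrieving, rule_2_lexeme_retrieved,
         rule_3_no_lexeme_found, rule_4_finished]

# With hand-coded goal-state preconditions exactly one rule applies at each step;
# the RL agents instead choose freely among the 4 rules and the None (wait) action.
example = {"visual_value": "elephant", "retrieval": None}
print([rule.__name__ for rule in RULES if rule(example) is not None])
```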
We see that proper rule ordering / sequencing is crucial to successfully completing an LD task, which is like searching for a path through a maze:
(i) the position in the maze is the current cognitive state of the ACT-R mind, (ii) the possible moves (up, left etc.) are the production rules we can fire, and (iii) a path through the maze is given by the proper sequence of production rules we need to fire to complete the LD task.
Rule Ordering as an RL Problem
Markov Decision Processes (MDPs) are the stochastic models of sequential decision-making that form the basis of RL approaches to learning. In an MDP, an agent interacts with its environment and needs to make decisions at discrete time steps t = 1, 2, . . . , n. Defining what counts as the agent and what counts as its environment is part of the modeling process. At every step t, all the information from the past relevant for the current action selection is captured in the current state of the process s_t. This is the Markov property: the future is independent of the past given the current state.
Figure 1: Agent-environment interaction in an MDP (the environment sends state s_t and reward r_t to the agent; the agent sends action a_t back to the environment).

As Figure 1 shows, the environment passes to the agent a state s_t and, at the same time, a reward signal r_t. The agent observes the current state s_t and reward r_t and takes an action a_t, which is passed from the agent to the environment. The cycle then continues: at time step t+1, the environment responds to the agent's action with a new state s_{t+1} and a new reward signal r_{t+1}. Based on these, the agent selects a new action a_{t+1}, etc. The definitions of 'state' and 'action' depend on the problem, and are part of the modeling process, just like defining what counts as the agent and its environment.
The agent's policy is a complete specification of what action to take at any time step. Given the Markovian nature of the MDP, the policy π is effectively a mapping from the state space S to the action space A, π : S → A. A deterministic policy is a mapping from any given state s_t to an action a_t = π(s_t), while a stochastic policy is a mapping from any given state s_t to a probability distribution over actions, a_t ∼ π(s_t).
The agent's goal is to maximize some form of cumulative reward over an episode, which is a complete, usually multi-step interaction between the agent and its environment. In our case, an episode would be a full simulation of a 1/2/4-stim LD task.
The agent learns (solves/optimizes the MDP) by updating its policy π to maximize the (per-episode) cumulative reward. The standard cumulative reward for an episodic task is the discounted return G: at time step t < n (n is the final step in the episode), G_t = r_{t+1} + γ·r_{t+2} + γ²·r_{t+3} + · · · + γ^{n−t−1}·r_n, i.e., G_t is the sum of the current reward and the discounted future rewards until the final step n of the episode. Future rewards are discounted in finite / episodic tasks because the agent has a preference for more immediate rewards. The present value of future rewards is determined by the discount factor γ (0 ≤ γ ≤ 1). We define the (state-)action value function Q^π(s, a) to be the expected (discounted) return when starting in state s, performing action a and then following the policy π until the end of the episode.
The agent selects actions with the goal of maximizing its expected discounted return. If we estimate the Q function for a given policy based on the interactions between the agent and its environment, i.e., based on experience, we can improve that policy by 'greedification:' given a state s, we can always select the optimal action in s, i.e., the action with the maximal expected return according to our current Q estimate.
Q-learning algorithms, which are the focus of our investigation here (given their widespread use), come in various flavors. The simplest one is tabular Q-learning (Watkins, 1989; Watkins and Dayan, 1992), which is fairly effective for our LD tasks. We will also investigate approaches that approximate the Q-function with neural networks, specifically Deep Q-networks (DQN; Mnih et al. 2015).
In tabular Q-learning, the Q function S × A → R is represented as a look-up table that stores the estimated values of all possible state-action pairs. The Q table is initialized to an arbitrary fixed value (0). The agent then updates the Q table incrementally at each time step t: the value of the pair (s_t, a_t), where s_t is the state relative to which the agent took action a_t, is updated based on the reward signal r_{t+1} and the new state s_{t+1} that the agent receives back from the environment after taking action a_t.

Q-learning is a form of temporal difference (TD) learning, as shown in (4). The Q_new value estimate for the state-action pair (s_t, a_t) is based on the Q_old value, updated by some proportion α (the learning rate; 0 < α ≤ 1) of the TD error.

(4)  Q_new(s_t, a_t) ← Q_old(s_t, a_t) + α · [r_{t+1} + γ · max_{a_{t+1}} Q_old(s_{t+1}, a_{t+1}) − Q_old(s_t, a_t)]

The bracketed term in (4) is the TD error; the TD target (the updated value estimate) is r_{t+1} + γ · max_{a_{t+1}} Q_old(s_{t+1}, a_{t+1}), in which γ · max_{a_{t+1}} Q_old(s_{t+1}, a_{t+1}) is the next-state value estimate.

The TD error is the difference between the TD target (an updated estimate of the value of the (s_t, a_t) pair) and the Q_old value estimate. The TD target consists of (i) the reward r_{t+1} the agent receives after action a_t, which is part of the new data the agent gets back from the environment after action a_t, plus (ii) the estimate of the value of the next state s_{t+1}, where the next state s_{t+1} is the other part of the new data the agent gets back from the environment after action a_t. The Q-learning optimal estimate for the value of the next state s_{t+1} is discounted by γ, since this state is in the future relative to the state-action pair (s_t, a_t) we're currently updating. This optimal estimate for s_{t+1} is aggressively confident / optimistic (in contrast to Expected Sarsa, for example; see van Seijen et al. 2009): the agent looks at all the possible actions a_{t+1} that can be taken in state s_{t+1} and assumes that the action a_{t+1} with the highest Q_old-value provides an accurate estimate of the s_{t+1} value.
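As a minimal illustration of the update in (4), the tabular case reduces to a few lines over a dictionary-based Q-table. The sketch below uses the hyperparameter values reported in the experiments (γ = 0.95, α = 10⁻³); the function and state names are illustrative, not the authors' implementation, and the terminal-state case is omitted for brevity.

```python
from collections import defaultdict

GAMMA, ALPHA = 0.95, 1e-3
Q = defaultdict(float)            # Q[(state, action)] -> value, initialized to 0

def q_update(s, a, reward, s_next, actions):
    """One tabular Q-learning step for the transition (s, a, reward, s_next)."""
    best_next = max(Q[(s_next, a_next)] for a_next in actions)   # optimistic estimate
    td_target = reward + GAMMA * best_next
    td_error = td_target - Q[(s, a)]
    Q[(s, a)] += ALPHA * td_error

# Toy usage with the LD action set (the 4 rules plus the special None action):
ACTIONS = ["retrieving", "lexeme retrieved", "no lexeme found", "finished", None]
q_update(s="start", a="retrieving", reward=-0.15, s_next="retrieval pending",
         actions=ACTIONS)
print(Q[("start", "retrieving")])
```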
For tabular Q-learning, the agent (in the RL sense) is a Q-value table that assigns values to all possible state-action pairs and that guides the rule selection process at every cognitive step. The environment is the cognitive state of the ACT-R model / mind, which could conceivably consist of (i) all the modules (procedural memory, declarative memory and visual and motor modules) together with (ii) their associated buffers (goal, retrieval, visual-what, visual-where and the manual buffer). This, however, would lead to a very large state space S, which in turn would lead to a large Q-value table. DQN and similar neural-network approaches can help with the large state-space problem, but we will nonetheless take a state s to consist just of: (i) the goal buffer, (ii) the retrieval buffer, (iii) the value in the visual-what buffer, if any, and finally, (iv) the state of the manual buffer (busy or free). For example, the state after the word elephant is retrieved from declarative memory is: goal: {STATE: retrieval done}, retrieval: {FORM: elephant}, visual value: elephant, manual: free.
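For concreteness, such a reduced state can be encoded as a small hashable tuple that directly indexes the Q-value table. The encoding below is an illustrative sketch of our own, not the authors' code.

```python
def encode_state(goal_state, retrieval, visual_value, manual):
    """Collapse the relevant buffer contents into a hashable state.

    goal_state:   e.g., 'retrieving', 'retrieval done', 'done'
    retrieval:    the retrieved form (e.g., 'elephant'), 'failed', or None
    visual_value: the string currently on the virtual screen, or None
    manual:       'free' or 'busy'
    """
    return (goal_state, retrieval, visual_value, manual)

# The example state from the text, after 'elephant' has been retrieved:
print(encode_state("retrieval done", "elephant", "elephant", "free"))
```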
The action space consists of the 4 rules above, namely retrieving, lexeme retrieved, no lexeme found and finished, together with a special action None that the agent selects when it wants to not fire any rule because it prefers to wait for a new cognitive state. The reward structure is as follows: (i) the agent receives a positive reward of 1 at the end of an episode (when the LD task is completed), specifically, when the goal STATE is done; (ii) the agent receives a negative reward of −0.15 for every rule it selects, other than None; (iii) there is no penalty for waiting and selecting no rule, i.e., for selecting the special action None, which is optimal when waiting for retrieval requests from declarative memory to complete, for example; (iv) finally, at every step, the agent receives a negative reward equal to the amount of time that has elapsed between the immediately preceding step and the current step (multiplied by −1 to make it negative).
This reward structure is designed to encourage the agent to finish the task as soon as possible by selecting the smallest number of rules. The negative temporal reward (iv) discourages the agent from just repeatedly selecting an action, e.g., None. This ends up timing out the LD task in a small number of steps and fast-forwards the agent to the maximum waiting time per stimulus the ACT-R environment allows for, which we set to 2 seconds per word for the LD task.
Thus, given a simple reward structure that incorporates fairly minimal cognitive assumptions, RL enables us to induce proper rule sequences to complete the LD tasks: RL enables us to leverage the simple assumptions built into the reward structure to solve the much harder rule-ordering problem by direct experience / trial-and-error interaction with the LD tasks.
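The reward structure just described can be written as a single per-step function. The signature and argument names below are illustrative (the actual implementation wraps the ACT-R simulation), but the numerical values follow the description above.

```python
def reward(action, elapsed_time, task_done):
    """Per-step reward for the LD environment.

    action:       one of the 4 rules, or None (wait)
    elapsed_time: seconds since the previous decision point
    task_done:    True once the goal STATE is 'done'
    """
    r = 0.0
    if action is not None:        # (ii) small penalty for every rule fired
        r -= 0.15
    r -= elapsed_time             # (iv) time penalty; (iii) no extra cost for selecting None
    if task_done:                 # (i) positive reward at the end of the episode
        r += 1.0
    return r

print(reward(action="retrieving", elapsed_time=0.05, task_done=False))  # -0.2
print(reward(action=None, elapsed_time=0.05, task_done=False))          # -0.05
```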
Experiments and Results
We assume the usual ACT-R defaults, e.g., rule firing time is set to 50 ms. The discount factor γ is set to 0.95 and the learning rate α is set to 10⁻³. We use an ε-greedy policy to balance exploration and exploitation, with ε annealed from a maximum of 1 to a minimum of 0.01.
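The exploration schedule can be sketched as follows. The annealing endpoints match the description above, but the linear decay over a fixed number of episodes, and the helper names, are our own assumptions rather than the authors' exact schedule.

```python
import random

EPS_MAX, EPS_MIN, N_EPISODES = 1.0, 0.01, 15_000

def epsilon(episode):
    """Linearly anneal epsilon from EPS_MAX to EPS_MIN over the simulation."""
    frac = min(episode / N_EPISODES, 1.0)
    return EPS_MAX + frac * (EPS_MIN - EPS_MAX)

def select_action(Q, state, actions, eps):
    """Epsilon-greedy selection over the 4 rules plus the None (wait) action."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```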
We investigate two types of agents / algorithms: (i) tabular Q-learning (the main results for tabular agents are from Brasoveanu and Dotlačil 2020b), and (ii) DQN. To a large extent, the agents learn by trial and error to successfully carry out the LD tasks: they learn how to properly order the rules. This is no small feat given that the actual number of steps, i.e., decision points, when the agent needs to select an action, is larger than the high-level sequences of rule firings in (3) above. For example, for a 1-stim task, there are actually 12 steps where the agent needs to decide whether to wait or to fire a specific rule (when the agent does not complete the task perfectly, it might take much more than 12 steps). The 2-stim task requires 18 such steps (if perfectly completed), and the 4-stim task requires 34 steps (again, if perfectly completed).
The reason for the higher number of steps compared to the number of rules is that our LD simulations involve the visual and motor modules (to read strings of characters and to press keys) in addition to the declarative memory module. Visual and motor actions, just as retrievals from declarative memory, take time, and the agent needs to make decisions while waiting for them to complete.
Tabular Q-learning in LD Tasks
The higher the number of steps, i.e., the higher the number of decision points, the harder the task is for the tabular agents to learn. As the plots in Figure 2 show, repeated from Brasoveanu and Dotlačil (2020b), learning is faster and less noisy for shorter tasks (fewer stimuli), but the tabular Q-learning agent manages to learn even the most complex 4-stimuli task moderately well.
We simulate 15,000 episodes, i.e., 15,000 LD decision tasks consisting of 1 stimulus only (the word elephant), from which the tabular Q agent learns, as shown in the leftmost plot in Figure 2. After about 5,000 episodes, the task is completed in ≈ 12 steps, which is the length of the task when completed perfectly. For some episodes, the number of steps is smaller than 12. In these cases, the agent times out the task (e.g., by selecting the None action several times) and receives steep negative temporal rewards leading to low returns.
A close examination of the agent's final Q-value table, which stores the agent's rule-firing preferences for any given state, indicates that the agent has learned goal-conditioned rules perfectly. We only look at states for which at least one action/rule has a non-0 value (recall that all Q-values are initialized to 0). For each such state, we identify the action/rule with the highest value. There are 8 states total with at least one non-0 value action, and the maximum-value action for each of these states makes complete sense. For example, None is the maximum-value rule at the beginning of every episode when the agent waits for some text to be automatically detected and stored in the visual buffer. Similarly, after a retrieval request is placed, the agent waits for the process to complete.
As the middle plot in Figure 2 shows, we also simulate 15,000 2-stim episodes (LD tasks consisting of the word elephant and the non-word not a word). After about 9,000 episodes, the task is completed in ≈ 18 steps, which is the length of this task when the agent completes it perfectly. A close examination of the agent's final Q-value table indicates that the agent has learned goal-conditioned rules almost perfectly. Once again, we only look at states for which at least one action has a non-0 value, a total of 13 states. For each state, we identify the maximum-value action, and for 12 states, this action makes complete sense. However, unlike in the 1-stim task, there is one state-action pair that encodes a questionable rule. We see here how, for more complex tasks, the tabular RL agents learn spurious rules, which are a by-product of the noisy trial-and-error learning process. This lack of robust learning, which can be characterized as overgeneralization, or as vulnerability to 'adversarial' inputs, becomes even more prominent in the 4-stim task, where the tabular Q-learning agent learns even more spurious rules.

As the rightmost plot in Figure 2 shows, we simulate 25,000 4-stim episodes (LD tasks consisting of the word elephant, a non-word, the word dog and another non-word). We need more episodes for this task because it is longer, hence more complex, than the 1/2-stim tasks. It takes about 22,000 episodes for the task to be reliably completed in less than 40 steps. The task takes 34 steps when the agent completes it perfectly, but even after 25,000 episodes, the agent takes more steps than that because it tries incorrect rules or waits for no reason. An examination of the final Q-value table indicates that the agent has learned goal-conditioned rules fairly well, but there is also a good amount of spurious rules. There are 24 states total with at least one action with a non-0 value. Out of these, 6 states have questionable / nonsensical maximum-value actions.
DQN in LD Tasks
The DQN agents use an artificial neural network (ANN) to approximate the Q-function. We use a simple multilayer perceptron with a hidden layer of size 64. A small hyperparameter search indicated that a hidden size of 32 seems to be too small, while 128 or 256, for example, seems to be too large.
The ANNs are trained using 1-step semi-gradient TD (a.k.a. semi-gradient TD(0); Sutton and Barto 2018, Chapters 9-11), with the Adam optimizer (Kingma and Ba, 2015) and a mean squared TD error loss function (see (4) above for the TD error).
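A minimal PyTorch rendering of this setup (hidden size 64, Adam, squared TD-error loss) is sketched below. The state featurization, the feature dimensionality, and the single-transition update are our own simplifications, not the authors' exact network or training loop.

```python
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS, GAMMA = 32, 5, 0.95   # 4 rules + None; feature size is illustrative

q_net = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS)
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td0_step(state, action, reward, next_state, done):
    """One semi-gradient TD(0) update on a single transition (tensor inputs assumed)."""
    q_sa = q_net(state)[action]
    with torch.no_grad():                       # the target is treated as a constant
        target = reward + (0.0 if done else GAMMA * q_net(next_state).max())
    loss = (target - q_sa) ** 2                 # squared TD error for this transition
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random feature vectors standing in for encoded buffer states:
s, s_next = torch.randn(N_FEATURES), torch.randn(N_FEATURES)
print(td0_step(s, action=0, reward=-0.2, next_state=s_next, done=False))
```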
As the leftmost plot in Figure 3 shows, the DQN agent takes longer than the tabular agent to learn the 1-stim task, but it completes it more or less perfectly after about 7,500 episodes. We inspect the Q-function approximation encoded by the ANN at the end of the simulation by identifying the maximum-value rule for each of the 36 possible states. Unlike tabular approaches, function-approximation approaches aggressively generalize over states by design, which is why they are appropriate for large state (and action) spaces. The final Q-function approximation aggressively generalizes by taking the finished rule to be the maximum value action for 31 out of 36 states. This makes sense given that the finished rule is immediately followed by the final positive reward of 1.
The other rules are triggered largely only when they are appropriate. For example, the lexeme retrieved rule is triggered only in one state -immediately after the word 'elephant' is successfully retrieved from declarative memory. The None rule is only triggered in two states: when the agent is waiting for the visual module to autodetect and encode the text on the virtual screen, and when waiting for the retrieval request to declarative memory to complete. But the DQN agent overgeneralizes the retrieving rule. It is appropriately triggered after the text on the virtual screen is stored in the visual buffer, i.e., when visual value is the word 'elephant,' but it is also triggered in one other state when the None rule is appropriate because the agent is waiting for the visual module to auto-detect the text on the virtual screen.
As the middle plot in Figure 3 shows, the DQN agent fails to learn the 2-stim task in a stable manner. We tried several different random seeds, and the DQN agent exhibits unstable learning in most of them, sometimes to an even larger extent than depicted here. An examination of the final Q-function approximation reveals that, once again, the finished rule is the maximum value action for the vast majority of states (37 out of 48). The None rule is triggered only in 4 states. In two of them, the agent is waiting for the visual module to auto-detect the text on the virtual screen and encode it in the visual buffer (whether the manual buffer is free or busy). In another one, the agent is waiting for the retrieval request associated with the non-word to complete. However, the DQN agent has not learned that None should also be triggered when waiting for the retrieval request associated with 'elephant,' and it incorrectly triggers None in a state where retrieving is more appropriate.
The lexeme retrieved rule is triggered in 3 states. One of them is the expected one: immediately after the word 'elephant' is successfully retrieved from declarative memory. Another one is a reasonable overgeneralization to a state that is exactly the same as the first one except that the visual value is the non-word. In the third state, however, the None rule is more appropriate since the retrieval process for the word 'elephant' is still in progress. The agent has clearly not learned when to trigger the no lexeme found rule, which is triggered in only one state for which the retrieving rule is appropriate (since the word 'elephant' has just been read off the virtual screen). Finally, the retrieving rule is triggered in 3 states, one of which is appropriate as it immediately follows the point at which the non-word has been read off the virtual screen. However, the DQN agent also triggers this rule in two other states, for which it does not make much sense.
As the rightmost plot in Figure 3 shows, the DQN agent seems to perform much better than the tabular Q agent on the 4-stim task, learning to complete it efficiently after about 2,000 episodes. But an examination of the final Q-function approximation reveals an unexpected result: the retrieving rule is aggressively overgeneralized to 82 states (out of 108). The finished rule is the maximum-value action for 9 states only, the lexeme retrieved rule for 8 states, the None rule for 7 states, and the no lexeme found rule for 2 states.
The finished rule is mostly triggered in states in which the visual value is FINISHED (5 out of 9 states), but it is also incorrectly triggered in states in which the 4 stimuli are stored in the visual buffer. It is not clear at all that the agent has learned this rule. The lexeme retrieved rule exhibits a similar profile. It is correctly triggered when the retrieval processes for the two words are completed successfully, but there is also a lot of noise: 6 out of 8 states are not states in which this rule should clearly be triggered, and in two of them, the retrieval buffer is empty. Thus, it is far from clear that the agent has learned the lexeme retrieved rule.
The None rule is appropriately triggered when the agent is waiting for the visual module to auto-detect text on the virtual screen, and when waiting for retrieval requests to complete for the two words 'elephant' and 'dog.' However, the agent has not learned to trigger this rule when waiting for retrieval requests associated with the two non-words. In addition, this rule is overgeneralized to several states where the retrieving rule is more appropriate. Finally, the DQN agent has clearly learned the no lexeme found rule: it is triggered in only two states, after failed retrieval requests associated with the two non-words.
In conclusion, we see that the DQN agent fails to learn the 2-stim task in a stable manner, learns the 1-stim task more slowly than the tabular Q agent, but exhibits an interesting behavior on the 4-stim task. This task seems to be learned very quickly (compared to tabular Q), but there is a very significant amount of noise in the final Q-function approximation. It is therefore not clear that the appropriate preconditions for most of the rules have actually been learned.
Parsing as an RL Problem
In this section, we briefly discuss how parsing can be formalized as an RL problem. Just like the LD task, the parsing task can be implemented in ACT-R (cf. Lewis and Vasishth 2005; Brasoveanu and Dotlačil 2018, 2020a). The parser components are split over various ACT-R modules and buffers: (i) lexical knowledge is encoded in declarative memory, (ii) knowledge of grammar and parsing actions are encoded in procedural memory, (iii) expectations about upcoming syntactic categories are encoded in the goal buffer, (iv) information about the current partially-built syntactic parse is encoded in the imaginal buffer (a secondary goal-like buffer), and finally, (v) visual information from the environment is transferred via the visual buffer.
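As a rough illustration of how this buffer-based state could be exposed to an RL agent, the following dataclass sketches one possible observation structure; the field names are ours for illustration, not pyactr's or ACT-R's.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ParserState:
    goal_stack: List[str] = field(default_factory=lambda: ["S"])  # expected categories (goal buffer)
    partial_tree: Optional[object] = None    # current partial parse (imaginal buffer)
    visual_value: Optional[str] = None       # word currently attended (visual buffer)
    retrieved: Optional[dict] = None         # last lexical entry retrieved from declarative memory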
We consider a simple example, which features an eager left-corner parser (Resnik, 1992). Assume we have a simple grammar with four phrase structure rules: (i) S → NP VP, (ii) NP → Det N, (iii) VP → V, (iv) VP → V NP. Also, assume that we are reading the sentence A boy sleeps word by word. As shown in Figure 4, we start with an empty visual buffer, and our goal stack (the stack of expected syntactic categories) consists of just S: our goal is to parse a sentence.
We then shift focus to the first word, the information is transferred to the goal buffer, at which point we retrieve its syntactic category Det(erminer) from declarative memory. We can now take a series of cognitive steps - that is, we fire a series of productions - that lead to a new state. The new goal stack is N S: we now have the subgoal of finding a N(oun) on the way to S. Also, we build a partial syntactic structure of the form shown in the leftmost tree in Figure 4, and store it in the imaginal buffer. The noun boy is then brought into focus, its syntactic category N is retrieved, and we discharge the N goal at the top of the goal stack. At this point, we have the full left corner of the rule S → NP VP, so we trigger it, which eagerly discharges the S goal and replaces it with the goal of finding a VP (verb phrase). At the same time, a richer partial tree, shown in the middle of Figure 4, is stored in the imaginal buffer. Finally, the verb sleeps is in focus, its syntactic category V is retrieved from declarative memory, and we trigger the rule VP → V that discharges the VP goal, resulting in an empty goal stack and the rightmost tree structure in Figure 4.
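The goal-stack transitions in this walkthrough can be reproduced with a small, deliberately simplified sketch of an eager left-corner parser (illustrative Python rather than the ACT-R production system; ambiguity between the two VP rules is resolved naively by taking the first matching rule, and no backtracking is modelled):

RULES = [  # (parent, right-hand side)
    ("S",  ["NP", "VP"]),
    ("NP", ["Det", "N"]),
    ("VP", ["V"]),
    ("VP", ["V", "NP"]),
]
LEXICON = {"a": "Det", "boy": "N", "sleeps": "V"}

def left_corner(cat):
    """First rule whose left corner is `cat` (naive disambiguation)."""
    for parent, rhs in RULES:
        if rhs[0] == cat:
            return parent, rhs[1:]
    raise ValueError("no rule with left corner " + cat)

def read_word(word, goals, pending):
    """Update the goal stack after reading one word (simplified; covers this example only)."""
    cat = LEXICON[word]                      # lexical retrieval
    while True:
        if goals and goals[-1] == cat:       # discharge an expected category
            goals.pop()
            if pending and pending[-1][1] == 1:
                cat = pending.pop()[0]       # that completes the announced rule's parent
                continue
            if pending:
                parent, need = pending.pop()
                pending.append((parent, need - 1))
            return
        parent, rest = left_corner(cat)      # announce a rule from its left corner
        if goals and goals[-1] == parent:    # eager composition: the parent was already expected
            goals.pop()
            goals.extend(reversed(rest))     # only the rest of the rule remains as goals
            return
        goals.extend(reversed(rest))
        pending.append((parent, len(rest)))
        return

goals, pending = ["S"], []
for w in "a boy sleeps".split():
    read_word(w, goals, pending)
    print(w, "-> goal stack:", list(reversed(goals)))
# prints the goal stacks ['N', 'S'], ['VP'] and [], matching the three states in Figure 4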
We see that rule ordering plays two roles in parsing. First, the parser has to correctly sequence actions per word: it has to collect the visual information, move it to the goal buffer, recall lexical information from the declarative memory, carry out a parsing action and move its visual attention to the following word. This sequencing of actions is akin to the one explored in the LD task. In addition, the parser has to find the right path through the sequence of parsing rules, e.g., it has to realize that the Det element at the start of the sentence should trigger the NP → Det N rule, followed by discharging the N goal etc. Incorrect sequencing would eventually lead to a dead end. For example, had the parser triggered the VP → V NP rule when parsing sleeps, it would incorrectly end up with an expectation for a non-existent direct object.
Summary and Future Work
We argued that sophisticated production-based cognitive models used to capture human behavioral data (particularly linguistic behavior) promise to provide a stringent test suite for RL algorithms. An immediate follow-up would be to explore how RL algorithms perform on a variety of production-based cognitive models, whether linguistic, e.g., syntactic or semantic parsing, or non-linguistic. We have conducted pilot experiments with simple parsing models and tasks, and they are much more difficult than the LD tasks explored in this paper.
Another direction for future research is investigating other value-based tabular learning algorithms (Sarsa, Expected Sarsa), as well as extensively studying ANN-based function-approximation approaches to reinforcement learning, both value-based and policy-based.
Similarly, we might want to investigate curriculum learning (see Elman 1993; Rusu et al. 2016, among others) for increasingly complex tasks. A DQN agent that has already learned the 1-stim task might be able to learn the 2/4-stim tasks quickly and well. Curriculum or transfer learning might also enable agents to learn from far fewer interactions, and/or from explicit instructions.
Figure 2: Tabular Q: Steps per episode for the 1-stim (left), 2-stim (middle) and 4-stim (right) tasks
Figure 3: DQN: Steps per episode for the 1-stim (left), 2-stim (middle) and 4-stim (right) tasks
Figure 4: Partial trees built incrementally when reading the sentence A boy sleeps word by word. [Figure content: three snapshots of the parser state (visual focus, goal stack, partial structure): after reading a, the goal stack is N S with a partial NP over Det; after boy, the goal stack is VP with a partial S over the completed NP; after sleeps, the goal stack is empty and the full S tree is built.]
Acknowledgments
We are grateful to three anonymous CMCL 2020 reviewers for their feedback on an earlier version of this paper, the NVIDIA Corporation for a grant of two Titan V GPUs used for this research, and the UCSC OR and THI for a matching grant for additional hardware. The usual disclaimers apply.
John R. Anderson. 2007. How can the human mind occur in the physical universe? Oxford University Press.
John R. Anderson and Christian Lebiere. 1998. The Atomic Components of Thought. Lawrence Erlbaum Associates, Hillsdale, NJ.
Adrian Brasoveanu and Jakub Dotlačil. 2018. An extensible framework for mechanistic processing models: From representational linguistic theories to quantitative model comparison. In Proceedings of the 2018 International Conference on Cognitive Modelling.
Adrian Brasoveanu and Jakub Dotlačil. 2019. Quantitative comparison for generative theories. In Proceedings of the 2018 Berkeley Linguistic Society 44.
Adrian Brasoveanu and Jakub Dotlačil. 2020a. Computational Cognitive Modeling and Linguistic Theory. Language, Cognition, and Mind (LCAM) Series. Springer (Open Access).
Adrian Brasoveanu and Jakub Dotlačil. 2020b. Reinforcement learning for production-based cognitive models. In Proceedings of the 2020 International Conference on Cognitive Modelling.
Jeffrey L. Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48:71-99.
Felix Engelmann. 2016. Toward an integrated model of sentence processing in reading. Ph.D. thesis, University of Potsdam, Potsdam.
Wai-Tat Fu and John R. Anderson. 2006. From recurrent choice to skill learning: A reinforcement-learning model. Journal of Experimental Psychology: General, 135(2):184-206.
John Hale. 2011. What a rational parser would do. Cognitive Science, 35:399-443.
Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Richard Lewis and Shravan Vasishth. 2005. An activation-based model of sentence processing as skilled memory retrieval. Cognitive Science, 29:1-45.
Volodymyr Mnih et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.
Philip Resnik. 1992. Left-corner parsing and psychological plausibility. In Proceedings of the Fourteenth International Conference on Computational Linguistics, Nantes, France.
Andrei A. Rusu et al. 2016. Progressive neural networks.
Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. MIT Press.
Niels A. Taatgen and John R. Anderson. 2002. Why do children learn to say "broke"? A model of learning the past tense without feedback. Cognition, 86(2):123-155.
H. van Seijen, H. van Hasselt, S. Whiteson, and M. Wiering. 2009. A theoretical and empirical analysis of Expected Sarsa. In IEEE Symposium on Adaptive DP and RL, pages 177-184.
Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning, 8(3):279-292.
Christopher John Cornish Hellaby Watkins. 1989. Learning from Delayed Rewards. Ph.D. thesis, King's College, Cambridge, UK. |
233,029,470 | 賽德克語構詞結構之自動解析 Analyzing the Morphological Structures in Seediq Words | [] | 賽德克語構詞結構之自動解析 Analyzing the Morphological Structures in Seediq Words
Chuan-Jie Lin (林川傑), Li-May Sung (宋麗梅)†, Jing-Sheng You (游景勝), Wei Wang (王瑋), Cheng-Hsun Lee (李政勳), Zih-Cyuan Liao (廖子權)
Department of Computer Science and Engineering, National Taiwan Ocean University (國立臺灣海洋大學資訊工程學系) {10857039, 00657120, 00657140
Graduate Institute of Linguistics, National Taiwan University (國立臺灣大學語言學研究所)
† corresponding author (通訊作者)
The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020), Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing
賽德克語構詞結構之自動解析 Analyzing the Morphological Structures in Seediq Words
The morphological structure of a Seediq word consists of its word root, prefixes, infixes, and suffixes. This kind of information cannot be obtained directly from the surface of a Seediq word. Dictionaries only offer the information of word roots. Furthermore, due to the rule of vowel reduction in Seediq, the surface of a Seediq word is not the same as the concatenation of affixes and word root. This paper focuses on automatically analyzing the morphological structure of a Seediq word given its word root.
Moreover, there are also rules of vowel neutralization and final consonant variation. During the research, we found that a word root would return to its original form when combined with suffixes. We define this original form of a root word as a "deep root". Since there is no information about deep roots in the dictionary, this paper also proposes methods to predict the deep roots of Seediq words. The experimental data come from the works of Prof. Li-May Sung: the grammar book "賽德克語語法概論" (An Introduction to Seediq Grammar) and the online dictionary "賽德克語德固達雅方言" (Tgdaya Seediq) from the Council of Indigenous Peoples.
First, several morphological analysis rules were created from the knowledge provided in the grammar book. These rules were used to detect the occurrences of affixes. Deep roots were learned from the sets of different words referencing the same root words. The mapping of root words to their deep roots could be further used to derive deep-root-prediction rules for unknown words. The rule-based system successfully detected the deep root and the existence of affixes with a precision of 98.66% and a recall of 88.29% on the test data.
Because one prefix string can be divided into several different structures, we used machine learning methods to resolve the ambiguity. The best system was based on a bigram model whose grams were atomic prefixes. Zero probabilities in the bigram model were replaced by the unigram probability (weighted by α), where the unigram model was itself smoothed by the Lidstone smoothing method (adding λ to the frequencies). The best prefix analysis system achieved an accuracy of 76.92% on the test data.
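As an illustration of the scoring scheme described in this abstract (and not the authors' actual implementation), the following shows one way the backoff and smoothing could be combined; alpha and lam stand for the α and λ weights mentioned above, and prefix_sequences is a placeholder for training data given as lists of atomic prefixes.

from collections import Counter

def make_scorer(prefix_sequences, alpha=0.5, lam=0.1):
    unigrams, bigrams = Counter(), Counter()
    for seq in prefix_sequences:                      # each seq: a list of atomic prefixes
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    vocab = len(unigrams)
    total = sum(unigrams.values())

    def p_uni(p):                                     # Lidstone-smoothed unigram probability
        return (unigrams[p] + lam) / (total + lam * vocab)

    def p_bi(prev, p):
        if bigrams[(prev, p)] > 0:
            return bigrams[(prev, p)] / unigrams[prev]
        return alpha * p_uni(p)                       # back off to the unigram on unseen bigrams

    def score(seq):                                   # probability of one candidate prefix split
        prob = p_uni(seq[0])
        for prev, p in zip(seq, seq[1:]):
            prob *= p_bi(prev, p)
        return prob

    return score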
關鍵詞:賽德克語,構詞結構自動解析,深層原形,臺灣原住民族語之自然語言處理
Keywords: Seediq, automatic analysis of morphological structures, deep root, natural language processing for Taiwanese indigenous languages
Acknowledgments: This research was supported by the Ministry of Science and Technology research grant MOST 109-2221-E-019-053-.
The issue of preservation and revitalization of the indigenous languages is gaining attention from the public in recent days. Developing NLP techniques related to the indigenous languages will help to preserve and promote these languages. Word inflection or morphological forms in Seediq are plentiful. Major categories of the inflections mainly represent focus or aspect, such as perfective aspect, active voice, patient voice, locative voice, etc. The focus system of the Austronesian languages is quite different from Chinese. It is important to identify the information of focus or aspect in words if we want to study machine translation among Taiwanese indigenous languages and other languages.
摘要 (Abstract)
The preservation and revitalization of the indigenous languages has been receiving growing attention. Developing natural language processing techniques for the indigenous languages would help preserve indigenous language data and promote the languages. Word inflection in Seediq is highly varied, and a large part of it serves to mark verbal focus or aspect, including the perfective aspect, agent focus, patient focus, locative focus, and so on. Because this focus system is specific to the Austronesian language family, identifying this kind of morphological information is essential for studying automatic translation between Taiwanese indigenous languages and Chinese.
The morphological structure of a Seediq word is represented as the root word it references plus a combination of prefixes, infixes, and suffixes. However, this structural information cannot be obtained directly from the word surface, and dictionaries only record the root word that each Seediq word references. More particularly, Seediq morphology has a vowel-reduction rule, so a Seediq word is not obtained by simply concatenating affixes onto the root word. The main goal of this paper is therefore to automatically analyze the morphological structure of Seediq: given a Seediq word and its referenced root word, the system identifies the combination of prefixes, infixes, and suffixes that appear in the word.
In addition, Seediq morphology also has rules such as vowel neutralization and final-consonant alternation. While studying the morphological patterns, we found that when suffixes are attached, the root-word part reverts to its form before these changes; we define this form as the "deep root". Since dictionaries do not provide this information, this paper also investigates how to predict the deep root of a Seediq word. The experimental data mainly come from Prof. Li-May Sung's grammar book 「賽德克語語法概論」 (An Introduction to Seediq Grammar) and the online dictionary 「賽德克語德固達雅方言」 (Tgdaya Seediq) developed for the Council of Indigenous Peoples.
First, rules were written based on the morphological knowledge compiled from the grammar book and used to detect the occurrence of affixes. Deep roots were predicted statistically from the sets of different Seediq words that reference the same root word in the dictionary, and these predictions were further generalized into common alternation rules for guessing the deep roots of new words. The affix and deep-root analysis achieved a precision of 98.66% and a recall of 88.29% on the test data.
As for prefixes, because the same prefix string can be segmented into several different prefix combinations, which creates ambiguity, machine learning methods were used instead. After testing various approaches, the best-performing system was a bigram probability model whose units are atomic prefixes. Zero probabilities were handled by backing off to a unigram model (with weight α), and the unigram model in turn handled zero probabilities best with Lidstone smoothing (adding λ to the frequencies). The best accuracy for prefix-combination analysis was 76.92%.
Abstract |
|
219,304,211 | [] | Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore
August 23 2014
Caoimhín P Ó Donnaíle caoimhin@smo.uhi.ac.uk
Sabhal Mòr Ostaig An t-Eilean Sgitheanach
IV44 8RQ, UK
Tools facilitating better use of online dictionaries: Technical aspects of Multidict, Wordlink and Clilstore
Proceedings of the First Celtic Language Technology Workshop
Dublin, Ireland, August 23 2014
The Internet contains a plethora of openly available dictionaries of many kinds, translating between thousands of language pairs. Three tools are described, Multidict, Wordlink and Clilstore, all openly available at multidict.net, which enable these diverse resources to be harnessed, unified, and utilised in ergonomic fashion. They are of particular benefit to intermediate level language learners, but also to researchers and learners of all kinds. Multidict facilitates finding and using online dictionaries in hundreds of languages, and enables easy switching between different dictionaries and target languages. It enables the utilization of page-image dictionaries in the Web Archive. Wordlink can link most webpages word by word to online dictionaries via Multidict. Clilstore is an open store of language teaching materials utilizing the power of Wordlink and Multidict. The programming and database structures and ideas behind Multidict, Wordlink and Clilstore are described.
Introduction
At multidict.net three tools are to be found, Multidict, Wordlink and Clilstore. Their development was funded by EC projects with the aim of developing and sharing tools for language learning, and thanks to this they are a freely and openly available resource. They support not only the major European languages, but also place a particular emphasis on supporting minority languages including the Celtic languages. They also currently support scores of non-European languages and have the potential to support many more.
The central idea behind them is that one of the best ways of learning a language is to use authentic materials as early as possible -materials which are of interest for their own sake. This is the "CLIL", "Content and Language Integrated Learning", in the name "Clilstore". In the past, this would have meant either the students laboriously looking up word after word in the dictionary, or else the teacher laboriously preparing glossaries of the most difficult words for each piece of reading material. Good authentic content is easy to find via the Internet for most subjects in most languages, but preparing the glossaries was tedious.
For the students, online dictionaries, and there are many of them, sped up the process of looking up words compared to the old paper dictionaries. But it was still tedious typing in words, and then typing or copying them in again to try them in another dictionary. Far better if you could just click on a word in a text to look it up. This is the idea behind Wordlink. It takes any webpage and modifies the html so that every word is linked to online dictionaries while the presentation of the page remains the same. This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http://creativecommons.org/licenses/by/4.0/ Automatic glossing of text as an aid to learners is not an idea unique to this project. It is used by the Rikaichan 1 Firefox add-on for Japanese, by the BBC Vocab 2 facility for Welsh and Gaelic, by the Readlang 3 website, by the PIE 4 Chrome add-on for English, and by many e-books. While these systems have many advantages, they also have severe restrictions compared to Wordlink: restrictions to particular languages, or particular browsers, or particular websites, or particular in-house dictionaries. Wordlink differs in that it attempts to generalize to very many languages and to harness the many freely available online dictionaries.
The earliest versions of Wordlink contained the code and knowledge required to link to a range of online dictionaries translating to various target languages. But the list quickly became ridiculously long and it was realized that the work of selecting and accessing different dictionaries needed to be hived off to a separate facility. So Multidict was created, and is a tremendously useful standalone facility in its own right.
Finally Clilstore was created to make it easy for language teachers to create materials and lessons utilizing the power of Wordlink and Multidict, and to make it easy for students and teachers to find material of interest stored openly in Clilstore. The great thing about Clilstore is that it enables students to access interesting material which would otherwise be a bit too difficult for them to cope with. It has proved to be particularly useful to intermediate level learners, and to learners coming from cognate languages.
We now look at the technical workings behind each of these three tools in turn.
Multidict
The interface
Here is what Multidict looks like in use: The section at the top is the "Multidict navigation frame" which controls dictionary selection and lookup. (Yes, Multidict uses old-fashioned frames 5 .) Below that is the frame containing the output returned by the online dictionary. In this case Multidict is being used to look up the Gàidhlig word 1 http://rikaichan.mozdev.org (this and all web references are as accessed on the date of writing, 2014-06-24) 2 http://www.bbc.co.uk/cymru/vocab/ 3 http://www.learngaelic.net/advanced/lganla/index.jsp?lang=gd 4 https://sites.google.com/site/phoneticallyintuitiveenglish/using-pie/getting-a-word-s-meaning 5 http://www.w3.org/TR/html401/present/frames.html http://www.w3.org/TR/html5/obsolete.html dubh in the Gàidhlig to English dictionary Am Faclar Beag meanbh 6 (the concise version of Am Faclair Beag 7 ). Note (1) the url which can be used to refer to the dictionary output for this word. This can be particularly useful in the case of dictionaries which do not themselves have any way of linking to their output via a url. The "sl" stands for "source language" and "tl" stands for "target language". Note (2) the row of 16x16 pixel favicons for dictionaries. Clicking on one of these switches you to the corresponding dictionary. They give the navigation frame a cluttered appearance, but once you get to know them they are much quicker and more convenient than selecting a dictionary using the dropdown selector. If the dictionary website has its own favicon, as most do, then Multidict uses that. If not, we try to construct a mnemonic favicon for the dictionary using the dictionary's own colours. Both in the row of favicons and in the dictionary dropdown, the dictionaries are placed in some kind of compromise order of preference. Note (3) that some favicons have an underline. This signals that the dictionary is a page-image dictionary where the user will have to scan around by eye on the page to find the word in question. More about page-image dictionaries in section 2.7 below. An overline where present above a favicon signals that the dictionary is a concise version, perhaps designed for mobile phones, which can often be very useful if the dictionary is being used together with Wordlink.
Note (4) the favicon for the current dictionary, and (5) the Esc button which provides a convenient way of escape from Multidict's frames to the dictionary's own homepage. Multidict is in fact a very convenient way of finding dictionaries and we have no desire to keep users on Multidict if they prefer to head off and use the dictionary directly.
Multidict does not itself have any dictionary information, but relies entirely on directing users to online dictionaries. So we need to be fair and maintain good relations with dictionary owners. Multidict makes a point of never "scraping" 8 , never even caching information from dictionary pages. Output is always presented exactly as it comes from the dictionary, complete with any advertising. In fact, whenever possible, Multidict operates by sending a simple HTTP "redirect" to redirect the user's browser to the dictionary page. Multidict advertises to dictionary owners that they can ask for their dictionary to be removed from Multidict's database at any time for any reason, but no dictionary owner has ever requested this. Note (6) the "favicon" symbols for switching to closely related languages. This makes it easy, for example, to switch and look for the word dubh in Irish dictionaries instead of Scottish Gaelic. For most languages we just use language codes for these symbols, but for the Celtic languages we have colourful symbols available. The same is possible for the target language, although in the example above the only symbol shown is the "Gàidhlig" symbol for switching to Gàidhlig-Gàidhlig monolingual dictionaries. To support this system, the Multidict database has two tables holding information on closely related languages. Two tables because "closely related" for the purposes of the target language field may not be the same as closely related for the purposes of the source language field. There would be no point in trying an "sr-Latn" (Serbian in Latin script) word in an "sr" (Serbian in Cyrillic script) dictionary, but someone who understood "sr" could be expected to understand "sr-Latn".
The database behind it
How does Multidict work? For many dictionaries, very very simply. If when you look up the word dubh at friendlydict.org, you notice that the url is http://friendlydict.org/find?facail=dubh then you can be sure that by simply replacing dubh with geal in the url, you would look up the word geal. For such dictionaries, the Multidict database would store the string http://friendlydic.org/find?facail={word} and when the time came to look up a word, Multidict would simply replace {word} with the word in question and redirect the results frame to this address.
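Illustratively (the real Multidict is written in PHP, and the parameter names below are taken from the hypothetical friendlydic.org example rather than any actual dictionary), the template substitution amounts to something like:

from urllib.parse import quote

def build_lookup_url(template: str, word: str, sl: str = "", tl: str = "") -> str:
    """Substitute the search word (and, where present, language codes) into a stored URL template."""
    return (template.replace("{word}", quote(word))
                    .replace("{sl}", sl)
                    .replace("{tl}", tl))

print(build_lookup_url("http://friendlydic.org/find?facail={word}", "geal"))
# -> http://friendlydic.org/find?facail=geal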
However, for many dictionaries, both good ones and less good, things are not so simple. Their html form submission uses POST method instead of GET method and there is no sign of a nice url containing the word to search for. In this case, Multidict has to construct and send an http POST request. It does this using the HTTP_Request2 PEAR 9 class. (PEAR being a repository of software for the PHP language.) Multidict captures the response to the request and despatches it to the results frame.
Multidict, Wordlink and Clilstore are written in PHP, and behind them is a mySQL (or MariaDB 10 to be precise) database. The database has a dict table with a record for each dictionary, storing the long name, the favicon and the dictionary's homepage address.
However, many dictionaries serve several languages, and the main business is done by the table dictParam, which is indexed by (dict, sl, tl). This table stores the url, as described above, any post parameters required, and has many other fields. A field called message can contain a tip to be displayed to users in the navigation frame, such as "Right-click to zoom". A field charextra can specify certain different kinds of extra processing to be applied to the word before lookup to satisfy the peculiarities of particular dictionaries. Some dictionaries require accents to be stripped from the word, some require them to be urlencoded 11 . The Irish Dineen 12 dictionary requires 'h's to be stripped from the word to convert to old spelling and dictionary order, and this is indicated by the string "striph" in the charextra field. A field handling specifies any particular handling required to obtain the output from the dictionary. The best behaved dictionaries get the value "redirect". Some particularly awkward dictionaries which require POST parameters and only accept requests from the user's browser get the value "form". This causes Multidict to construct a form in the results frame, fill in the search word, and cause the user's browser via Javascript to immediately submit it. Thus Multidict has a whole range of clever tricks and tools available to it, which means that it manages to handle between 80% and 90% of all dictionaries we have attempted to link to.
Language codes
Multidict currently tries to use IETF language codes 13 both externally and internally. i.e. It uses a twoletter ISO 639-1 14 language code such as "en", "fr", "de", "ga", "gd" if such is available, or a three letter ISO 639-3 15 language code such as "sco", "sga" when no two-letter code is available, and it sometimes makes use of country code and script code extensions such as "pt-BR" and "sr-Latn". When these are inadequate, such as for historic languages and dialects, it turns to LinguistList 16 codes for inspiration: e.g. "non-swe" (Old Swedish 17 ), and "oci-ara" (Aranese 18 ).
Where ISO 639-3 equates a two-letter language code with a three letter code denoting a macrolanguage 19 , as in the case of Latvian lt=lav which also includes Latgalian, Multidict uses the ISO 639-3 code for the precise language, in this case "lvs" for Standard Latvian. This differs from Google Translate, for example, which continues to use the two-letter code code for the dominant language in the macrolanguage grouping. Other languages where similar questions arise include Estonian et/ekk, Malay ms/zsm, Albanian sq/als, Azari az/azj, Uzbek uz/uzn, Persian fa/pes, Guarani gn/gug, Swahili sw/swh.
Closely related languages
As we increasingly try to cater for minority languages and dialects, the questions of how to deal with closely related languages become ever greater. On the one hand, we want to distinguish European Portuguese, currently coded as "pt", and Brazilian Portuguese, "pt-BR", especially if the dictionary site itself clearly distinguishes them among its language choices. On the other hand, we don't want users to be unable to find dictionaries which might be very useful to them, simply because of a small difference in language code. The "closely related languages" feature in the Multidict interface goes a very small way towards addressing this difficulty, but the problem requires more thought.
A webpage 20 available via the Multidict help system lists all the languages currently handled by Multidict. It lists languages ordered by language family, then sub-family and so on. Closely related languages are therefore located close together, and the webpage can be used to maintain Multidict's tables of closely related languages. To achieve this ordering, the Multidict database links each of its language codes to the corresponding LinguistList code, and holds a copy of the LinguistList Multitree 21 Composite Tree. However, because the Composite Tree provides nothing but a tree structure, albeit a tremendously useful finely-detailed tree structure, it is in itself inadequate for defining the required linearization of the tree. We always prefer to place the most closely related branches (closely related by geography if nothing else) adjacent to one another, rather than the children of each node being listed in some random order (as they currently are in Multitree itself, which places Baltic languages next to Celtic and Armenian, rather than next to Slavic). To do this, in Multidict's copy of the Composite Tree, we maintain, where relevant to Multidict, an ordering of the children of a parent node. This has to be laboriously researched each time a language is added to Multidict. It would be very useful if this ordering information were to be provided as a resource together with the Lin-guistList Composite Tree.
"n×n" dictionaries
Most online dictionaries only handle a limited number of language pair (sl, tl) combinations, and each of these is given a separate record in the dictParam table. However, some online dictionaries can translate between any of n×n language pairs. Most notably in recent years, Glosbe 22 and Global Glossary 23 translate surprisingly successfully between any pair out of hundreds of languages. To harness the tremendous power of these "n×n" dictionaries without cluttering the dictParam table with tens of thousands of records, the Multidict database uses the following tactic. In the sl field in the dictParam table, a "¤" symbol is placed, and this indicates to Multidict to refer to a separate table dictLang to obtain a list of the n languages which this particular n×n dictionary handles. The table can also translate between the language code used by Multidict and a different language code used by the dictionary. In the dictParam table, the url required for linking to the dictionary can (as can also the POST parameters) contain placeholders for sl and tl, such as for example:
http://friendlydic.org/find?from={sl}&to={tl}&facail={word} When Multidict looks up a word, it substitutes the relevant sl and tl. The tl field in the dictParam record for the n×n dictionary also contains a "¤" symbol if this is truly an n×n dictionary, including monolingual pairs such as English-English. If it is actually an "n×(n-1)" dictionary excluding monolingual pairs, this is denoted by placing instead an "x" in the tl field.
Quality ranking
To try to place the "best" dictionaries at the top of the list in the user interface, and also to ensure that the "best" dictionary for the language-pair is used by default, the dictParam table stores a "quality" figure for each dictionary. Of course, this is necessarily a compromise. What is best for one purpose might not be best for another. And things get messy when it comes to n×n dictionaries. Multidict already records and defaults to the previous dictionary which the user used for that language-pair. It might be best, instead of over-relying on a "quality" figure, to extend this recording system to the second and third most recent dictionaries used, or perhaps move to a system based on usage statistics.
Web Archive dictionaries
Online dictionary resources are often very scarce for minority languages. However, many excellent old paper dictionaries are now available in page-image format on the Web Archive at www.archive.org 24 , and also on Google Books 25 . The wonderful thing is that these dictionaries can be addressed by url on an individual page basis. So all we need to do to make the dictionary available via Multidict is to provide Multidict with a table giving it the first word on every page of the dictionary. Or actually, the last word on every page works slightly better because of the technicality that several headwords can have the same spelling. Providing such a table sounds like a daunting task, but in fact, by getting very ergonomically organized the time can be reduced to a few seconds per page, meaning that even a 1000 page dictionary can be dealt with in a few hours. To date, 23 such page-image dictionaries have been made available via Multidict (counting the reverse direction separately in 5 cases), namely 8 for Scottish Gaelic; 2 Irish; 1 Old Irish; 3 Manx; 1 Cornish; 1 Old English; 1 Middle English; 3 Nyanja and 3 Maori. In total, about 55,000 pages have been indexed. The biggest example is that all 4323 columns of the Old Irish eDIL 26 dictionary have been indexed, and in fact eDIL is currently more usable for most purposes via Multidict than using its own native search interface. Although the native search will search the whole dictionary, which can sometimes be wonderfully useful, it will find nothing at all if the search word is not specified exactly as written in the dictionary, including all accents and hyphens. With the vagaries of Old Irish spelling, it can be more useful to take the user to the right spot in alphabetic order as Multidict does, leaving him or her to complete the search by eye.
To enable access to these page-image dictionaries, Multidict uses two tables, dictPage which records the first (or last) word on every page, and dictPageURL which records the url templates required to translate these page numbers into urls. The mechanism can also cope with dictionaries which are split into several volumes, as is Dwelly in the Web Archive . A program dictpage.php does the job of redirecting the browser to the appropriate url.
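A rough sketch of what dictpage.php has to do, shown in Python for illustration: given the table of last headwords per page, pick the page whose last word is the first one not alphabetically earlier than the search word, and fill the page number into the archive URL template. The function below is an assumption about the mechanism, not a transcription of the actual code.

from bisect import bisect_left

def page_image_url(word, last_words, url_template, first_page=1):
    """last_words[i] is the last headword on page first_page + i, in alphabetical order."""
    i = bisect_left(last_words, word)          # first page whose last headword >= word
    i = min(i, len(last_words) - 1)            # clamp if the word sorts after the final page
    return url_template.format(page=first_page + i)

# e.g. page_image_url("dubh", last_words, "https://archive.org/.../page/n{page}")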
Statistics
Multidict currently handles 271 different online dictionaries -there are 271 records in the dict table.
The dictParam table has 2101 records covering 1041 language pairs, but the numbers would be tens of thousands higher if the n×n dictionaries Glosbe and Global Glossary were included. Multidict currently handles 202 languges, or 140 if the n×n dictionaries are excluded.
Wordlink
The interface
In the example shown below, Wordlink is being used to view the Irish Wikipedia homepage. At the top is the Wordlink navigation frame which is used for control. Below that is a frame with what looks exactly like the Wikipedia page, but it is in fact a doctored version, with the html modifed by Wordlink to link every word to online dictionaries via Multidict, as shown on the right. Note (1) the url: http://multidict.net/wordlink/?sl=ga&url=http://ga.wikipedia.org/ which can be used to refer to the wordlinked page. An additional paramater navsize=1 can be used to reduce the navigation frame away to 1 pixel size if it is not required. If the url is specified in the form url=referer, the url is taken from the referer information in the http request. This means that by adding a link of this form to every page of a website, each page is linked to a Wordlinked version of itself for the benefit of language learners. This can be seen in use on the Fòram na Gàidhlig 27 website.
Note (2) the choice of mode, "Splitscreen" which causes Multidict and the dictionary results to be shown in a frame on the right. Wordlink has three other choices of mode available "New tab", "Same tab" and "Popup". Although Splitscreen is the default and is overwhelmingly the most used, the other modes could actually be very useful on smaller screens. Note (3) the option to "Remove existing links". By default, Wordlink does not actually link every word to a dictionary lookup. If you click on the word Dóitean, it will take you instead to a Wordlinked version of the Dóiteán Mór Londan Wikipedia page. "Remove existing links" does what it says and will instead ensure you are taken to a dictionary lookup of Dóiteán.
Note (4) the Esc button. Wordlink like Multidict makes it easy for you to escape from its frames to the webpage itself.
Note (5) that the word ndeachaigh has been clicked on to find it in the dictionary, and it is therefore highlighted and remains highlighted until another word is clicked. This small point is of major importance. Very often the user will need to scroll the dictionary information (as indeed in this example), and it is essential that the word be highlighted to make it easy to look back and continue reading.
Note (6) that although Multidict has been handed the wordform ndeachaigh by Wordlink, it has chosen instead to look up téigh, which it thinks is probably the appropriate "lemma", the dictionary headword to look up, and it has also lined up a row of other lemma suggestions to be tried in turn if the user reclicks "ndeachaigh" or clicks "Go" in Multidict. This new lemmatization feature built into Multidict has resulted in a big improvement in the user experience when using Wordlink and Clilstore. Some online dictionaries can do their own lemmatization, but many good dictionaries do not. And even when the dictionary itself offers excellent lemmatization suggestions, as does Ó Dónaill 28 in the example above, the new "click to retry" feature is so slick to use that it can be much quicker to just reclick and let Multidict do the work. The feature is described more fully in section 3.4 below.
The Wordlink program
The Wordlink program, like all the facilities at multidict.net is written in PHP 29 . It first sends off an HTTP request to fetch the webpage to be processed. It then converts it to UTF-8 character encoding 30 if it is not already in UTF-8, because all the facilities work internally entirely in UTF-8. It then processes the page to (1) convert existing links into links to Wordlinked pages (if this has not been switched off by "Remove existing links"), and (2) convert each word in runs of text into a link to make Multidict look up that word. We will not go into the details, but suffice it to say that it is not an easy task, and it is essential to ensure that relative links to images, stylesheets and Javascript libraries are all appropriately converted. It currently works by processing the html serially, but it would probably be better to convert it to use an html parser and then traverse the resulting DOM tree.
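The word-linking step itself can be pictured with the following simplified sketch (illustrative Python, whereas Wordlink is PHP; the Multidict query parameters and the target frame name used here are placeholders, and the rewriting of existing links, attributes and relative URLs is ignored):

import re
from urllib.parse import quote

def link_words(text: str, sl: str) -> str:
    """Wrap every word (run of letters) in a link that asks Multidict to look it up."""
    def repl(m):
        w = m.group(0)
        return ('<a href="http://multidict.net/multidict/?word=' + quote(w) +
                '&sl=' + sl + '" target="multidict">' + w + '</a>')
    return re.sub(r"[^\W\d_]+", repl, text, flags=re.UNICODE)

print(link_words("Chaidh mi dhan bhaile", "gd"))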
Wordlink does not work well with all webpages, particularly flashy games pages or TV company websites and suchlike. But it produces good to excellent results with a good 90% of the more textual webpages likely to be of interest to language learners. With well-behaved pages such as Wikipedia it works perfectly. It does not work at all with webpages requiring a login, such as Facebook or pages in virtual-learning environments. To do this would require it to store and forward user-credentials and would get us into the very iffy field of trust relationships. Nor does it work with the https (secure http) protocol.
Word segmentation
Wordlink links "words" to dictionaries, and for most languages it identifies words by the whitespace or punctuation characters surrounding them. This means that it does not deal with collocations or phrases or even hyphenated words such as "trade-union". In such cases, the user can always type additional text into the Multidict search box. But it would be nice if some sort of Javascript or browser extension could be devised to allow the user to select phrases with the mouse and look them up.
Breton and Catalan presented Wordlink with a slight problem, because "c'h" in Breton is regarded as a letter, as is "l·l" in Catalan, and at first Wordlink was splitting the word at what it thought was a punctuation character. This was easily cured by a small change to the program.
Japanese, Chinese, Korean and Thai webpages present it with the much bigger problem that these languages are normally written without any space between "words". However, we have newly built into it an interface with the Japanese word segmenter Mecab 31 . This seems to be successful, and gives the spinoff benefit that hovering over a Japanese word now displays its pronunciation in Hiragana. Japanese learners have such a hard task to face with unknown Kanji that even partial success could be of tremendous benefit. For Chinese, we managed to do the same with the Urheen 32 word segmenter and the results seem to be good, but at the time of writing this is performing far too slowly to be useful and has been switched off. The bother seems to be that Urheen does a lot of inefficient initialization every time it is called, but we might manage to find ways round this.
The "lemmatization" facility in Multidict
Although this belongs to Multidict as regards programming, it is described here because it is when Multidict is used together with Wordlink that all sorts of inflected wordforms are thrown at it. We put "lemmatization" in inverted commas, because the facility is only semi-trying to produce grammatical lemmas. Because it is only going to present the user with a string of possibilities, it does not need to go for grammatical purity and "headword suggestions" might be a better term than lemmas.
The basis of this facility in Multidict for most source languages is the Hunspell 33 spellchecker, which is the opensource spellchecker used by LibreOffice, OpenOffice, Firefox, etc. Old-fashioned spellcheckers just had a long list of wordforms in a .dic file. Hunspell, on the other hand, was originally developed for Hungarian which is a highly inflected language and works in a much more intelligent way using also a .aff file (aff<affix). The words in the .dic file can be labelled for grammatical category, and the .aff file contains the rules to produce a range of inflected wordforms relevant to that grammatical category. The great thing is that we do not need to attempt to understand or reverse engineer these rules. Hunspell itself has built into it a function to return the possible lemmas corresponding to any given wordform. All we need to do is to pull in from the Internet the Hunspell .dic and .aff files for lots of languages, and this we have done.
How successful Hunspell is at lemmatizing depends on the language and how Hunspell has been implemented for it. It is possible for an implementer to just throw lots of wordforms into the .dic file and put very few rules in the .aff file. Hunspell lemmatizes Basque very well, for example, but the current implementation does very little for German. For Scottish Gaelic it was not great and for Irish not much better, and so we turned to another solution, the use of a lemmatization table.
We were very fortunate and very grateful to be donated huge lemmatization tables for both Scottish Gaelic and Irish. And a huge public domain table for Italian, Morph-it 34 (Zanchetta and Baroni, 2005), was found on the Internet. Smaller batches added to this include the Old Irish verbforms from In Dúil Bélrai 35 ; tables from the Internet converting between en-US and en-GB English spelling; and tables converting between pre-Caighdeán and post-Caighdeán Irish spelling. These form the basis of an alternative method of lemmatization which Multidict has at its disposal, namely the lemmas table in the Multidict database which currently has 1.4 million wordforms. These can be labelled with the "batch" field, which can be used for example to denote those to be given priority, or those to be applied only for certain dictionaries.
Algorithmic "lemmatization" provides yet another tool in Multidict's lemmatization armoury. Again this is divided into a "priority" algorithm to be used first, and a non-priority algorithm. The priority algorithm includes the removal of initial mutations from Irish and Scottish Gaelic words, because this is nearly always something sensible to do. The non-priority algorithm includes throwing out any final 's' from English words, because this is normally a last resort when the word has not been recognized by Hunspell. The non-priority algorithm includes crude attempts to lemmatize words in the p-celtic languages, Welsh, Cornish and Breton, by naively changing the initial letter.
It turns out to be rather crucial, especially for Irish and Scottish Gaelic, to have priority records in the the lemmas table for the lemmatization of irregular verbs, otherwise many of them would not be recognised after initial mutation was removed. This has been done, and all the prepositional pronouns have been added too. This is something we really ought to do for every language: namely feed into the lemmatization table all the irregular verbs, irregular nouns, etc, because Hunspell deals with these rather poorly. Hunspell's priorities and ours are different. Its priority is to reduce the size of the .dic file by placing rules for regular verbs and nouns in the .aff file. Irregular verbforms take up relatively little space in the .dic file, so it just throws them in there and doesn't help us at all to lemmatize them. Multidict now has in place a very sophisticated, flexible mechanism for lemmatization, pulling in as required the different tools at its disposal. It would be good if experts for individual languages could co-operate to help implement and tailor these tools for each particular language.
The default "wfrule" string which Multidict uses to generate headword suggestions for a particular wordform is "lemtable~pri|prialg|self|lemtable|hun|lemalg". What this means in plain English is: concatenate the lists of headword suggestions produced by (1) those labelled "pri" in the lemmas table, (2) those produced by the priority algorithm, (3) the wordform itself, (4) those with no batch label in lemmas, (5) those provided by Hunspell, and (6) those produced by the non-priority algorithm. The | operator not only concatenates but causes duplicates to be removed from the list. However, different "wfrule" strings can be applied for different languages and dictionaries. As well as the | operator, there is another operator > which causes the array of suggestions generated by the previous rule to be used as input to a following rule. And brackets ( ) can also be used in this "algebra".
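A possible reading of how such a wfrule string could be evaluated is sketched below; only the "|" operator (concatenate and de-duplicate) is modelled, the ">" piping and bracket grouping are left out, and the generator names and functions are placeholders rather than Multidict's actual internals:

def suggest(wordform, wfrule, generators):
    """Run the named headword generators in order, concatenating and de-duplicating suggestions."""
    seen, out = set(), []
    for name in wfrule.split("|"):
        for lemma in generators[name](wordform):
            if lemma not in seen:
                seen.add(lemma)
                out.append(lemma)
    return out

# e.g. generators = {"self": lambda w: [w], "prialg": strip_initial_mutation,
#                    "hun": hunspell_stems, "lemtable": lookup_lemmas_table, ...}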
Beware of robots
In any publicly available facility such as Wordlink which can take any webpage and process it to produce another, it is essential to be very careful about robots.txt 36 and robots meta tags in the html header. At one point the server hosting multidict.net was running very slowly and on investigation it was found that Google was attempting to spider and index the entire Internet via Wordlink! The links on one Wordlinked webpage were leading it to other Wordlinked webpages. It took months before it completely stopped.
Clilstore
in the title or text, and for searching or ordering by language, CEFR, media length, number of words, number of views, etc. A wysiwyg editor, TinyMCE 38 , provides a facility for authors to produce rich colourful units without getting involved in html, although an html editor is also available.
To date (2014-06-24), Clilstore has 1072 units (excluding test units) in 49 different languages. The biggest number (416) are in English, but there are 116 in Arabic, 101 in Scottish Gaelic, 65 in Slovenian,51 in Irish,40 in Portuguese,38 in Spanish,34 in Italian,27 in Lithuanian,26 in German,22 in Danish. There is even one, complete with soundfile in Old Irish. Clilstore and Wordlink work fine with right-to-left languages such as Arabic, although good online dictionaries are still rather lacking for Arabic. Statistics show that the units have had so far over 203,000 views in total. Perhaps more interestingly and reliably, in the 3 months since we started collecting such statistics, there have been 6773 clicks (dictionary lookups) on words in Clilstore units.
Experience from workshops for Gaelic language summer courses 39 at various levels at Sabhal Mòr Ostaig shows that the Clilstore facility is most useful to intermediate level learners. Advanced users find it very useful too, as a store of videos and transcripts, but tend to click fairly seldom because they can understand well enough from context anyway. Learners coming from cognate languages with somewhat different spelling rules such as Irish learners of Scottish Gaelic find it particularly useful, as was seen on the summer courses on Scottish Gaelic for Irish speakers at Sabhal Mòr Ostaig.
Conclusion
The facilities described here work, have proved their worth 40 , and are freely and openly available. Much more could be done to develop them, of course. The interface is entirely through English at present, which is not good when trying to provide an immersion environment for Gaelic students, for example. Nor is good for Italian students at a Portuguese university, to have to go through an English interface to access Portuguese units. It would be good to internationalize the programs and provide localized interfaces.
Multidict and Wordlink use old-fashioned html frames 41 , which have no support in modern standards 42 , although they work well for the job in hand. It would be good to investigate switching to iframes 43 , although this would require increasing use of Javascript libraries for resizing.
Users can and do recommend new dictionaries for Multidict, but it would be good to develop this into more of a community facility.
Figure 1. The Multidict interface
Figure 2. The Wordlink interface
http://www.faclair.com/m/
7 http://www.faclair.com
8 The practice of extracting partial information from webpages on another site: http://en.wikipedia.org/wiki/Web_scraping
http://multidict.net/multidict/languages.php
21 http://multitree.linguistlist.org
22 http://glosbe.com
23 http://www.globalglossary.org
24 https://archive.org/details/texts
http://books.google.com
26 http://edil.qub.ac.uk/dictionary/search.php
http://www.foramnagaidhlig.net/foram/
28 http://breis.focloir.ie/ga/fgb/
29 http://www.php.net
30 http://en.wikipedia.org/wiki/UTF-8
http://mecab.googlecode.com
32 http://www.openpr.org.cn/index.php/NLP-Toolkit-For-Natural-Language-Processing/68-Urheen-A-Chinese/English-Lexical-Analysis-Toolkit/View-details.html
33 http://hunspell.sourceforge.net
34 http://sslmitdev-online.sslmit.unibo.it/linguistics/morph-it.php
35 http://www.smo.uhi.ac.uk/sengoidelc/duil-belrai/
AcknowledgementsMultidict and Wordlink were first developed under the EC financed 44 POOLS-T 45 project. Clilstore was developed, and Multidict and Wordlink further developed under TOOLS 46 project financed by the EC's Lifelong Learning Programme. Much of the credit for their development goes to the suggestions, user testing and feedback by the project teams from 9 different European countries, and in particular to the project leader Kent Andersen. Wordlink was inspired by Kent's Textblender program. We are grateful to Kevin Scannell for the Irish lemmatization table used by Multidict, and to Mìcheal Bauer and Will Robertson for the Scottish Gaelic lemmatization table.
36 http://en.wikipedia.org/wiki/Robots_exclusion_standard
37 http://en.wikipedia.org/wiki/Common_European_Framework_of_Reference_for_Languages http://www.coe.int/t/dg4/linguistic/Cadre1_en.asp
Eros Zanchetta and Marco Baroni. 2005. Morph-it! A free corpus-based morphological resource for the Italian language. In Proceedings of Corpus Linguistics 2005, University of Birmingham, Birmingham, UK.
38 http://www.tinymce.com
39 http://www.smo.uhi.ac.uk/gd/cursaichean/cursaichean-goirid
40 There are now 1072 Clilstore units, and new ones are created almost daily, both by people inside the project and people completely unconnected with it. Wordlink has clocked up over 315,000 dictionary lookups in the past six years.
44 Standard disclaimer applies: This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.
45 http://languages.dk/pools-t
226,283,983 | The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation | Does neural machine translation yield translations that are congenial with common sense? In this paper, we present a test suite to evaluate the commonsense reasoning capability of neural machine translation. The test suite consists of three test sets, covering lexical and contextless/contextual syntactic ambiguity that requires commonsense knowledge to resolve. We manually create 1,200 triples, each of which contain a source sentence and two contrastive translations, involving 7 different common sense types. Language models pretrained on large-scale corpora, such as BERT, GPT-2, achieve a commonsense reasoning accuracy of lower than 72% on target translations of this test suite. We conduct extensive experiments on the test suite to evaluate commonsense reasoning in neural machine translation and investigate factors that have impact on this capability. Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types in terms of both reasoning accuracy ( 60.1%) and reasoning consistency ( 31%). We will release our test suite as a machine translation commonsense reasoning testbed to promote future work in this direction. | [
29162884,
202234053,
128296356,
52967399,
173188167,
2818281,
6628106,
53296520,
199558284,
173990423,
202540590,
202558451,
11212020,
202541043,
52019251,
52921687,
13751870
] | The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation
November 16 -20, 2020
Jie He jieh@tju.edu.cn
College of Intelligence and Computing
Tianjin University
TianjinChina
Tao Wang
School of Computer Science and Technology
Soochow University
SuzhouChina
Deyi Xiong dyxiong@tju.edu.cn
College of Intelligence and Computing
Tianjin University
TianjinChina
Qun Liu qun.liu@huawei.com
Huawei Noah's Ark Lab
Hong KongChina
The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings
the 2020 Conference on Empirical Methods in Natural Language Processing: FindingsNovember 16 -20, 20203662
Does neural machine translation yield translations that are congenial with common sense? In this paper, we present a test suite to evaluate the commonsense reasoning capability of neural machine translation. The test suite consists of three test sets, covering lexical and contextless/contextual syntactic ambiguity that requires commonsense knowledge to resolve. We manually create 1,200 triples, each of which contains a source sentence and two contrastive translations, involving 7 different common sense types. Language models pretrained on large-scale corpora, such as BERT, GPT-2, achieve a commonsense reasoning accuracy of lower than 72% on target translations of this test suite. We conduct extensive experiments on the test suite to evaluate commonsense reasoning in neural machine translation and investigate factors that have impact on this capability. Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types in terms of both reasoning accuracy (60.1%) and reasoning consistency (31%). We will release our test suite as a machine translation commonsense reasoning testbed to promote future work in this direction.
Introduction
Sixty years ago, the pioneering machine translation researcher and linguist Bar-Hillel published his well-known argument on the non-feasibility of general-purpose fully automatic high-quality machine translation (FAHQT) due to the inevitable requirement of world knowledge to help machine translation to infer correct translations for ambiguous words or linguistic structures (Bar-Hillel, 1960a). The example that Bar-Hillel uses as evidence for the need of commonsense knowledge in machine translation is "The box is in the pen", where machine translation is expected to perform reasoning on the relative sizes of "box" and "pen". Bar-Hillel also doubts that a machine, even equipped with extra-linguistic knowledge, would be able to reason with such knowledge spontaneously as human translators do (Bar-Hillel, 1960a; Macklovitch, 1995).
Modern natural language processing (NLP) has made tremendous progress, not only in building abundant resources to develop linguistic insights, but also in plenty of methodological practices. On the one hand, machine translation has been substantially advanced with large-scale parallel data and statistical models. Recent results even suggest that the quality of machine-generated translations is approaching professional human translators (Wu et al., 2016; Hassan et al., 2018). On the other hand, a wide variety of efforts have been conducted to either examine the commonsense reasoning capability of neural models in natural language understanding, establish commonsense reasoning challenges or enhance neural models in commonsense reasoning (Talmor et al., 2018; Huang et al., 2019; Sap et al., 2019b).
Comparing with Bar-Hillel's doubts and recent progress on machine translation and commonsense reasoning, it is natural for us to ask questions: do we solve the machine translation impasse related to commonsense reasoning? Or particularly, are current neural machine translation models able to learn common sense? And if so, how much do they learn? Does neural machine translation acquire sufficient commonsense knowledge and have strong ability in commonsense reasoning to generate human-level high-quality translations? Methodological discussion on the feasibility of FAHQT given the recent progress is far beyond the scope of this work. Instead, we focus on empirically analyzing the capability of state-of-the-art neural machine translation models in using extra-linguistic commonsense knowledge to resolve ambiguity at different linguistic levels and select correct translations after disambiguation.
Figure 1: Examples of the lexical ambiguity (1), contextless syntactic ambiguity (2) and contextual syntactic ambiguity (3). English translations in bold are correct while underlined translations are incorrect.
(1) 这个 人 戴 的 表 走 了 3 分钟 。 The watch worn by this person went/walked for 3 minutes.
(2) 吃 了 游客 的 鳄鱼 。 The crocodile who ate the tourist / Ate the tourist's crocodile.
(3) 当 地震 袭击 中国 时 , 援助 的 是 中国 。 When the earthquake hit China, China received aid / China provided aid.
In order to achieve this goal, we manually build a machine translation commonsense reasoning test suite on Chinese-to-English translation with three types of commonsense-related ambiguities: lexical ambiguity, contextless and contextual syntactic ambiguity (see Section 3.1 for more details). Examples are shown in Figure 1. With this test suite, we thoroughly evaluate the commonsense reasoning ability of state-of-the-art neural machine translation models, e.g., LSTM- and Transformer-based NMT (Bahdanau et al., 2015; Vaswani et al., 2017). We also conduct analyses on the commonsense reasoning capability according to commonsense knowledge types, sentence length, reasoning consistency and the size of training data.
To the best of our knowledge, this is the first work to understand and measure the commonsense reasoning capability in neural machine translation. The contributions of this paper can be summarized as follows:
• We build a test suite 1 to examine the ability of neural machine translation in commonsense reasoning, which provides a benchmark testbed for tracking progress in this direction. • Based on our experiments and analyses on evaluating commonsense reasoning in NMT, we find that: 1) commonsense reasoning related to lexical ambiguity and contextual syntactic ambiguity is more difficult than contextless syntactic ambiguity; 2) although the commonsense reasoning accuracy is higher than 50%, the reasoning consistency rate is far lower than 50% (random guess).
Related work
We briefly review recent efforts related to commonsense reasoning in NLP. We refer readers to Storks et al. (2019)'s article for a thorough survey in this area.
Commonsense Datasets
According to Gunning (2018), commonsense knowledge normally consists of a general theory of how the physical world works and a basic understanding of human motives and behaviors. In recent years, a wide variety of datasets on the two kinds of commonsense knowledge have been proposed. Sap et al. (2019b) (Speer et al., 2016)) in machine reading comprehension.
Commonsense Reasoning Evaluation
With pre-trained language models like BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) being widely used in various NLP tasks, studies have been performed to examine the commonsense reasoning capability in pre-trained neural language models. Zhou et al. (2020), among others, propose to measure the success rate of pretrained language models in commonsense inference by calculating LM probabilities, where the two sentences used to test commonsense inference differ only in commonsense concepts. Feldman et al. (2019) further explore unsupervised methods to generate commonsense knowledge using the world knowledge of pre-trained language models. Our commonsense reasoning evaluation resonates with these evaluation efforts.
Commonsense Knowledge and Reasoning in Machine Translation
Commonsense knowledge has long been acknowledged as an indispensable knowledge source for disambiguation in machine translation (Bar-Hillel, 1960b; Davis and Marcus, 2015). Knowledge-based machine translation (KBMT), one of the popular machine translation paradigms in the 1980s, lays much stress on extra-linguistic world knowledge in machine translation (Nirenburg, 1989). A large ontology that is constructed either manually or automatically to provide world knowledge is one of the essential components in KBMT (Knight and Luk, 1994). As data-driven machine translation, such as statistical machine translation (SMT) and neural machine translation, becomes the de facto standard in machine translation, world knowledge has been less explicitly explored. Only a few studies have indirectly and partially exploited world knowledge in SMT or NMT, by incorporating linked open data resources such as DBpedia and BabelNet into SMT with modest improvements (Du et al., 2016; Srivastava et al., 2017; Moussallem et al., 2018).
Commonsense Reasoning Test Suite for Machine Translation
In this section, we discuss the design and construction of the test suite, including the rules and steps for building this test suite.
Test Suite Design
Different from commonsense reasoning in the Winograd Schema Challenge (Levesque et al., 2012) or sentence reasonability judgment (i.e., "He put a turkey into the fridge" vs. "He put an elephant into the fridge"), where commonsense reasoning normally happens in one language, commonsense reasoning in NMT can be done either in the encoding of the source language (i.e., encoding reasonable source representations) or in the decoding of the target language (i.e., producing reasonable target outputs). As it is difficult to detect whether reasonable senses are identified and encoded in the encoder, we check target outputs from the decoder to test the commonsense reasoning capability of NMT. This is the first rule that we follow to design the test suite.
In the second rule for building the test suite, we manually create source sentences with ambiguity that requires commonsense reasoning. Inspired by Schwartz and Gomez (2009) and Ovchinnikova (2012), we ground the commonsense reasoning test on two types of ambiguity: lexical and syntactic ambiguity (LA and SA), which are common in machine translation. An example in LA is the "batter" in "she put the batter in the refrigerator" (food material vs. baseball player). SA relates to structures, for instance, "I saw a man swimming on the bridge" (I was standing on the bridge vs. The man was swimming on the bridge). We further refine SA into contextless (e.g., Example (2) in Figure 1) and contextual SA (e.g., Example (3) in Figure 1). The former can be correctly interpreted by resorting to commonsense knowledge while the latter cannot be interpreted uniquely if no more context is given.
The third rule that we conform to is to 1) create two contrastive source sentences for each lexical or syntactic ambiguity point, where each source sentence corresponds to one reasonable interpretation of the ambiguity point, and 2) to provide two contrastive translations for each created source sentence. This is similar to other linguistic evaluation by contrastive examples in the MT literature (Avramidis et al., 2019;Bawden et al., 2018;Müller et al., 2018;Sennrich, 2017). These two contrastive translations have similar wordings: one is correct and the other is not correct in that it translates the ambiguity part into the corresponding translation of the contrastive source sentence. This translation makes sense in the contrastive sentence but not in the sentence in question. Examples of contrastive source sentences and contrastive translations for each source sentence are shown in Figure 2, 3 and 4.
Finally, we have hired two linguistic experts to construct ambiguous source sentences and two professional human translators to provide contrastive translations for each source sentence. We ask them to create and translate with words as diverse as possible, and hire an extra linguistic expert and translator to review and double-check source sentences and target translations after the two experts and translators cross-check with each other.
Lexical Ambiguity Test Set
To construct this test set, we select words from a Chinese polysemous dictionary 2 so that the selected words have multiple interpretations. We avoid selecting words that are semantically close to one another in order to maintain diversity of the test set. We do not select words that are polysemous in Chinese but translated into the same words in English. Words that are translated into very different English words in different context and require commonsense knowledge to disambiguate are preferred.
This test set contains 200 example blocks. Each block is composed of two contrastive triples (z_1, e^r_1, e^c_1) and (z_2, e^r_2, e^c_2). As shown in Figure 2, z_1 and z_2 are contrastive with each other as they contain the same ambiguous word with different meanings. e^r and e^c are contrastive translations, where the former is correct while the latter is not. e^c_1 and e^c_2 are wrong translations in that they incorrectly interpret the ambiguous word in the way of e^r_2 and e^r_1 respectively. A selected polysemous word is used in only one example block.
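For readers who want to work with the released data, an example block can be represented along the following lines. This is only a sketch with field names of our own choosing, not the released file format; it is filled with the example from Figure 2.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    source: str       # z: Chinese sentence containing the ambiguity point
    reference: str    # e^r: correct (commonsense-consistent) translation
    contrastive: str  # e^c: incorrect translation borrowed from the contrastive triple

@dataclass
class Block:
    first: Triple     # (z_1, e^r_1, e^c_1)
    second: Triple    # (z_2, e^r_2, e^c_2)

block = Block(
    Triple("主力 部队 已经 对 敌人的 建筑 展开 了 攻关 。",
           "The main force has already launched an attack on the enemy's building.",
           "The main force has already launched a research on the enemy's building."),
    Triple("经过 两年 的 攻关 , 终于 解决 了 这道 技术 难题。",
           "After two years of research, this technical problem has finally been solved.",
           "After two years of attack, this technical problem has finally been solved."),
)
```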
Syntactic Ambiguity Test Sets
As mentioned before, we have two types of test sets for syntactic ambiguity: contextless and contextual SA. Before we construct the two test sets, we select Chinese structures that are typically ambiguous, just like PP attachment in English (e.g., "He ate the apple in the refrigerator" from Schwartz and Gomez (2009)). Feng (1995) has deeply investigated syntactic ambiguity in Chinese and has found 26 structures that tend to generate sentences with different interpretations, such as "noun phrase + de (a Chinese particle) + shi (is) + noun phrase". From them, we use 12 structures to construct contrastive examples, where the subtle differences in Chinese can be clearly detected in English after translation.
With these 12 structure templates with potential syntactic ambiguity, we manually create 225 example blocks for the contextless SA test set and 175 blocks for the contextual SA test set. Examples of these two test sets are listed in Figures 3 and 4. Similar to the LA test set, each block is composed of two contrastive triples, where the two translations for each source sentence are also contrastive with each other in the way that we translate sentences in the LA test set. For the blocks in the contextless test set, we make sure that each ambiguous source sentence can be correctly interpreted with commonsense knowledge; we do not need extra context information for disambiguation. In contrast, we have to resort to additional context to interpret sentences in the contextual SA test set.
Test Suite Analysis
We provide statistical analyses on the built test suite, which cover its size, the distribution of knowledge types and the reasoning accuracy of pretrained language models on target translations of this test suite.
General Statistics
Statistics on the built test suite are displayed in Table 1. We show the number of triples, the number of unique tokens, and the average number of tokens per sentence in each test set. Although sentences in the test suite are not very long, they are very challenging to be correctly translated as commonsense reasoning is involved, which will be verified in our experiments.
Commonsense Knowledge Type
Evaluation of Pretrained Language Models on the Test Suite
In our test suite, we find that the target translations of 93.7% of instances (1,124 of 1,200 test instances) can be judged as correct or incorrect from the translations themselves (i.e., by performing commonsense reasoning), without reference to the corresponding source sentences. This is exactly what we want the test suite to be like, as the purpose of this test suite is to evaluate commonsense reasoning rather than the ability of NMT in exploring source context for translation disambiguation not related to common sense. This is also consistent with the first rule for building the test suite: evaluating commonsense reasoning from the target side. Since the reasonability of these translations can be determined only from themselves, we want to know how challenging they are for pretrained language models in terms of commonsense reasoning. Hence, we evaluate state-of-the-art language models pretrained on large-scale data, including BERT (Devlin et al., 2019), GPT (Radford, 2018), and GPT-2 (Radford et al., 2019), on these 1,124 translation pairs (pairs of reference and contrastive translations). For notational convenience, we still use the test suite to refer to these instances, as only 76 cases are excluded for this evaluation. Following previous work (Zhou et al., 2020), for each pair (e^r, e^c), we use a pretrained language model to compute the language model score of the two translations. The translation with a higher score is labelled as the correct one by the language model. By comparing these labels with ground-truth labels, we can obtain the commonsense reasoning accuracy of the corresponding language model on these instances.
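As an illustration of this scoring procedure, the following sketch scores a contrastive pair with GPT-2 through the HuggingFace transformers library. The exact checkpoints and implementation details used in our evaluation may differ, and scoring with BERT requires a masked-LM pseudo-likelihood rather than this left-to-right score.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_score(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item()                     # higher means more probable

def reasoning_correct(reference: str, contrastive: str) -> bool:
    # The LM is credited with a correct prediction if it scores the
    # reference translation above the contrastive one.
    return lm_score(reference) > lm_score(contrastive)

print(reasoning_correct(
    "The main force has already launched an attack on the enemy's building.",
    "The main force has already launched a research on the enemy's building."))
```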
Results are shown in Table 3. All language models are better than random guess, validating their commonsense reasoning ability. They perform worse on the contextual SA test set than on the other two test sets, demonstrating the difficulty of cross-sentence commonsense reasoning. BERT-large achieves the highest accuracy, 0.712. The number of parameters of BERT-large is equal to that of GPT2-medium, almost 3 times as large as that of GPT-2 base and BERT-base (340M vs. 117M). We conjecture that the superiority of BERT models over GPT/GPT-2 models is due to the bidirectional context in BERT, which resonates with the findings of Zhou et al. (2020). The accuracies of all pretrained language models are lower than 72%. This suggests that our test suite is very challenging in commonsense reasoning even for language models trained on a large amount of data.
Experiments
In this section, we conducted extensive experiments to evaluate the commonsense reasoning capability of state-of-the-art neural machine translation on the built test suite.
Experimental setup
We adopted the CWMT Chinese-English corpus (available at http://nlp.nju.edu.cn/cwmt-wmt) of the news domain as training data for NMT systems. This corpus contains 9M parallel sentences. We used the byte pair encoding compression algorithm (BPE) (Sennrich et al., 2016) to process all these data and restricted merge operations to a maximum of 30k.
We trained two neural machine translation models on the training data: RNNSearch (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017). We used the Transformer base model with 6 layers and 8 self-attention heads per layer. As for RNNSearch, we employed a neural architecture with 4 layers of LSTM and 512-dimension hidden states. We used Adam (Kingma and Ba, 2015) to train both NMT models. β1 and β2 of Adam were set to 0.9 and 0.999, the learning rate was set to 0.0005, and the gradient norm was set to 5. To take full advantage of GPUs, we batched sentences of similar lengths. We trained both models on a single machine with 8 1080Ti cards. Each mini-batch contained 32,000 tokens. During decoding, we employed the beam search algorithm and set the beam size to 5.
Evaluation Metrics
For translation performance evaluation, we used sacrebleu (Post, 2018) to calculate case-sensitive BLEU-4 (Papineni et al., 2001).
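For reference, this is the kind of call we mean; sacrebleu's corpus_bleu takes a list of hypotheses and a list of reference streams, and the strings below are placeholders rather than our actual system outputs.

```python
import sacrebleu

hypotheses = ["The hammer that repairs the table ."]
references = [["The hammer that repairs the table ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))  # case-sensitive BLEU-4; 100.0 for an exact match
```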
To evaluate the commonsense reasoning accuracy of NMT on the test suite, we applied NMT models to score each pair (s, t) as follows:
\mathrm{Score}(t|s) = \frac{1}{|t|} \sum_{i=0}^{|t|} \log p(t_i \mid t_{<i}, s) \qquad (1)
where p(t_i | t_{<i}, s) is the probability of the target word t_i given the target history and the source sentence. Given a triple (z, e^r, e^c), if an NMT model scores the reference translation higher than the contrastive translation (i.e., Score(e^r|z) > Score(e^c|z)), the NMT model is believed to make a correct commonsense reasoning prediction. This is reasonable as e^r and e^c differ only in words or structures related to the lexical or syntactic commonsense ambiguity point, as described in Section 3.1. By scoring each triple with an NMT model, we can measure the commonsense reasoning accuracy of the model on our test suite.
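A sketch of Equation (1) and the decision rule is given below; nmt_log_prob is a stand-in for whatever toolkit exposes log p(t_i | t_{<i}, s), not a real library call.

```python
def score(nmt_log_prob, source_tokens, target_tokens):
    # Length-normalised log-probability of a candidate translation,
    # assuming target_tokens is non-empty.
    total = 0.0
    for i, tok in enumerate(target_tokens):
        total += nmt_log_prob(tok, target_tokens[:i], source_tokens)
    return total / len(target_tokens)

def predict(nmt_log_prob, z, e_ref, e_con):
    # Correct commonsense prediction: the reference translation is scored
    # higher than the contrastive translation.
    return score(nmt_log_prob, z, e_ref) > score(nmt_log_prob, z, e_con)
```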
Results
BLEU scores for the two NMT models are given in Table 4. Commonsense reasoning results on the test suite are provided in Table 5. From these tables, we can observe that:
• Both BLEU and commonsense reasoning accuracy clearly show that Transformer is better than RNNSearch.
• Both RNNSearch and Transformer perform better on the contextless SA than on the contextual SA according to the commonsense reasoning accuracy. This is consistent with the results of pretrained language models shown in Table 3, suggesting that cross-sentence commonsense reasoning is also challenging for NMT. Notice that the commonsense reasoning accuracy of pretrained language models cannot be directly compared to that of NMT models due to different evaluation procedures, mechanisms for commonsense reasoning and different test data. The BLEU scores on the contextless SA test set are lower than those on the contextual SA. We conjecture that this is because the contextless SA test set consists of very short sentences. Wrongly translated words therefore have a very big impact on BLEU scores.
• The performance gap between Transformer and RNNSearch on the CL-SA test set is larger than that on the other two test sets. The reason might be that the self-attention mechanism allows Transformer to more easily detect collocations (e.g., "leg" and "table" in Figure 3) for disambiguation on the CL-SA test set. Many CL-SA cases can be disambiguated by collocations according to our observation on this test set.
• Compared with the relative BLEU improvement of Transformer over RNNSearch, the relative improvement in terms of commonsense reasoning accuracy is smaller (8.2% vs. 18.91% in BLEU), indicating that more efforts are expected to not only improve translation quality in terms of BLEU but also to enhance commonsense reasoning ability in NMT.
Effect of the Size of Training Data
We conducted experiments to investigate the impact of the amount of training data on the commonsense reasoning performance of the state-of-the-art NMT model Transformer. Results are displayed in Figure 5. Generally, the commonsense reasoning ability of NMT systems rises as the amount of training data increases. Although we used all CWMT Chinese-English training data to train NMT, we did not observe the commonsense reasoning accuracy levelling off; we conjecture that the growth has the potential to continue. We leave using more data to measure the growth momentum of NMT commonsense reasoning to our future work. Yet another finding from Figure 5 is that the commonsense reasoning performance on the contextless SA test set is always higher than that on the other two test sets. As shown in the last subsection, the reasons for this may be the shorter sentences and collocations in this test set.
Effect of Sentence Length
We carried out an analysis of the impact of the length of source sentences on commonsense reasoning. We divided the test suite into 5 groups according to the length of source sentences. The results are shown in Figure 6. Generally, Transformer is better than RNNSearch in almost all length groups. As the length of source sentences increases, the commonsense reasoning performance tends to go down. This may suggest that long-distance or cross-sentence commonsense reasoning is more challenging for NMT than short-distance reasoning, which is consistent with our finding on the CL-SA test set.
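The grouping itself is straightforward; a sketch is shown below, where the bucket boundaries are illustrative rather than the exact intervals used in Figure 6, and predict is the per-triple decision from Section 5.2.

```python
def accuracy_by_length(triples, predict, boundaries=(6, 9, 12, 15)):
    # triples: iterable of (z, e_ref, e_con); source sentences are space-segmented.
    buckets = {b: [0, 0] for b in range(len(boundaries) + 1)}  # correct, total per bucket
    for z, e_ref, e_con in triples:
        length = len(z.split())
        b = sum(length > x for x in boundaries)   # index of the length interval
        buckets[b][1] += 1
        buckets[b][0] += int(predict(z, e_ref, e_con))
    return {b: (c / t if t else 0.0) for b, (c, t) in buckets.items()}
```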
Effect of Commonsense Knowledge Types
Finally, we analyzed the commonsense reasoning capability of Transformer on different commonsense knowledge types. Studying different types of common sense can help us understand what kind of commonsense knowledge is more needed to solve commonsense reasoning problems in NMT. The results are shown in Figure 7. Transformer-based NMT obtains relatively good results on commonsense reasoning on properties, structures, actions, but performs badly on reasoning on behaviors and emotions.
6 Further Analysis
Analysis on Reasoning Consistency
Our test suite contains 600 example blocks, each of which focuses on only one LA/SA ambiguity point. For the two reasonable interpretations (z_1, z_2) of a given ambiguity point, NMT models need to make two translation predictions: one for (e^r_1, e^c_1) and the other for (e^r_2, e^c_2). If they choose e^r_1 and e^r_2 (both right reasoning predictions) or e^c_1 and e^c_2 (both wrong reasoning predictions), we treat this as consistent reasoning, otherwise inconsistent. Partially inspired by Zhou et al. (2020), we conducted an analysis on reasoning consistency.
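Concretely, the consistency rate reported below can be computed as in the following sketch, where predict returns True when the model scores the reference translation of a triple above its contrastive translation.

```python
def consistency_rate(blocks, predict):
    # blocks: iterable of (triple1, triple2), each triple being (z, e_ref, e_con).
    consistent = 0
    for triple1, triple2 in blocks:
        p1 = predict(*triple1)   # True if the reference is scored higher
        p2 = predict(*triple2)
        if p1 == p2:             # both right or both wrong counts as consistent
            consistent += 1
    return consistent / len(blocks)
```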
We counted the times that a tested NMT model made consistent reasoning predictions and calculated the consistency rate on the three test sets. Results are shown in Table 6. Disappointingly, the reasoning consistency rates for both RNNSearch and Transformer are lower than random guess (0.5).
On the contextless SA test set where both NMT models have higher reasoning accuracies, the rates of reasoning consistency are also higher than those of the other two test sets.
Analysis on Translation Errors
We have already automatically evaluated commonsense reasoning in NMT with both reasoning accuracy and reasoning consistency rate. We further manually analyzed the translation errors of Transformer on the entire test suite. We roughly grouped translation errors into three categories: common sense errors (translations that are not consistent with common sense), ordinary meaning errors (wrong translations of source words that are not commonsense ambiguity points) and other errors (e.g., missing words). These errors were manually detected and labeled by two annotators, who checked all examples in the test suite. The inter-annotator agreement, measured as the proportion of labels on which the two annotators agree out of all labels from the two annotators, is 92%.
Results are reported in Table 7. The majority of translation errors are indeed related to common sense (71.6%). This suggests that our test suite is a suitable and challenging testbed for evaluating commonsense reasoning in NMT.
Conclusion
In this paper, we have presented a test suite, including a lexical ambiguity test set and two syntactic ambiguity test sets, to evaluate the commonsense reasoning capability of state-of-the-art neural machine translation models. We elaborate the rules of building this test suite and conduct statistical analyses on it. Our evaluation experiments and analyses on this test suite suggest that commonsense reasoning in modern machine translation models is still in its infant stage and that more efforts are to be expected to advance NMT in this direction.
Figure 2: An example block in the LA test set.
z_1 主力 部队 已经 对 敌人的 建筑 展开 了 攻关 。 e^r_1 The main force has already launched an attack on the enemy's building. e^c_1 The main force has already launched a research on the enemy's building.
z_2 经过 两年 的 攻关 , 终于 解决 了 这道 技术 难题。 e^r_2 After two years of research, this technical problem has finally been solved. e^c_2 After two years of attack, this technical problem has finally been solved.
Figure 3: An example block in the contextless SA test set.
z_1 维修 桌子 的 桌脚 。 e^r_1 Repair the legs of the table. e^c_1 The leg that repairs the table.
z_2 维修 桌子 的 锤子 。 e^r_2 The hammer that repairs the table. e^c_2 Repair the hammer of the table.
Figure 4: An example block in the contextual SA test set.
z_1 办公室 里 有 两个 党 的 议员 , 他们 互相 攻击 对方 党派 的 观点 。 e^r_1 There are members of two parties in the office who are attacking each other's party views. e^c_1 There are two members of the party in the office who are attacking each other's party views.
z_2 办公室 里 有 两个 党 的 议员 , 他们 在 竞选 党 主席 。 e^r_2 There are two members of the party in the office who are running for the party chairman. e^c_2 There are members of two parties in the office who are running for the party chairman.
Figure 6: Commonsense reasoning accuracy against the length of source sentences. The percentage of each group is shown under the corresponding length interval.
Table 2: Commonsense knowledge categories and their percentages in the test sets.
Behaviors: behaviors that objects will take in a particular situation. Example: 鸡/chicken 不/not 吃了/eat 因为/because 这只鸡/the chicken 已经/had already 吃了/eat 太多了/too much. (25.2%)
Taxonomy: systematic classification of objects and concepts. Example: 今年/this year 风调雨顺/weather is good 农民的秋景/the harvest of the farmers' autumn 一定/must be 很好/very good. (21.1%)
Action: some actions an object may be involved in. Example: 健康的/healthy 医生/doctor 正在/is doing 手术/surgery.
Structures: object A is part of object B. Example: 削/Cut 西瓜的/the watermelon 皮/skin.
Emotions: description of people's psychological activities and emotions. Example: 她/she 留下/leave 眼泪/tears 倾倒/pour out 她的/her 委屈/grievances. (2.6%)
Procedural: the type of common sense exercised in the performance of a task. Example: 学生/students 被调查/were investigated 因为/because 这些学生/these students 是/were 这个事件的/the incident 目击者/witnesses. (1.3%)
Table 3: Commonsense reasoning accuracy of pretrained language models on the 1,124 instances of the test suite.
Table 4: BLEU scores on the test sets.
Table 5: Commonsense reasoning accuracy on the test sets.
              LA      CL-SA   CT-SA   Total
RNNSearch     0.543   0.569   0.551   0.555
Transformer   0.565   0.656   0.571   0.601
Table 6: Rates of reasoning consistency on the three test sets.
Table 7: Translation error types (error type / example). Words related to translation errors are underlined. Example: There are children of three kindergartens in the park, and a total of six children are playing games. Transformer: There are three kindergarten children in the park, a total of 6 children are playing games.
The built commonsense test suite will be publicly available at https://github.com/tjunlp-lab/CommonMT.
AcknowledgmentsThe present research was supported by the National Natural Science Foundation of China (Grant No. 61861130364), Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\R1\180122). We would like to thank the anonymous reviewers for their insightful comments. The corresponding author is Deyi Xiong (dyxiong@tju.edu.cn).
Linguistic evaluation of German-English machine translation using a test suite. Eleftherios Avramidis, Vivien Macketanz, Ursula Strohriegel, Hans Uszkoreit, 10.18653/v1/W19-5351Proceedings of the Fourth Conference on Machine Translation. the Fourth Conference on Machine TranslationFlorence, ItalyAssociation for Computational Linguistics2Shared Task Papers, Day 1)Eleftherios Avramidis, Vivien Macketanz, Ursula Strohriegel, and Hans Uszkoreit. 2019. Linguistic evaluation of German-English machine translation using a test suite. In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 445-454, Florence, Italy. Association for Computational Linguistics.
Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, 3rd International Conference on Learning Representations. San Diego, CA, USAConference Track ProceedingsDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
A demonstration of the nonfeasibility of fully automatic high quality machine translation. Appendix III of 'The present status of automatic translation of languages. Yehoshua Bar-Hillel, Advances in Computers. 1Yehoshua Bar-Hillel. 1960a. A demonstration of the nonfeasibility of fully automatic high quality ma- chine translation. Appendix III of 'The present status of automatic translation of languages', Advances in Computers, 1:158-163.
The present status of automatic translation of languages. Yehoshua Bar-Hillel, 10.1016/S0065-2458(08)60607-5Advances in Computers. 1Yehoshua Bar-Hillel. 1960b. The present status of au- tomatic translation of languages. Advances in Com- puters, 1:91-163.
Evaluating discourse phenomena in neural machine translation. Rachel Bawden, Rico Sennrich, Alexandra Birch, Barry Haddow, 10.18653/v1/N18-1118Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLong Papers; New Orleans, LouisianaAssociation for Computational Linguistics1Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenom- ena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1304-1313, New Orleans, Louisiana. Association for Computational Linguistics.
. Chandra Bhagavatula, Chaitanya Ronan Le Bras, Keisuke Malaviya, Ari Sakaguchi, Hannah Holtzman, Doug Rashkin, Scott Downey, Yejin Yih, Choi, Abductive commonsense reasoning. ArXiv, abs/1908.05739Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han- nah Rashkin, Doug Downey, Scott Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. ArXiv, abs/1908.05739.
Incorporating external knowledge into machine reading for generative question answering. Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, Chenliang Li, 10.18653/v1/D19-1255Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsBin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, and Chenliang Li. 2019. Incorporating ex- ternal knowledge into machine reading for gener- ative question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2521-2530, Hong Kong, China. Association for Computational Linguistics.
Piqa: Reasoning about physical commonsense in natural language. Yonatan Bisk, Rowan Zellers, Jianfeng Ronan Le Bras, Yejin Gao, Choi, AAAI. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jian- feng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In AAAI.
Commonsense reasoning and commonsense knowledge in artificial intelligence. Ernest Davis, Gary Marcus, 10.1145/2701413Commun. ACM. 589Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM, 58(9):92-103.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Event representation learning enhanced with external commonsense knowledge. Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, Junwen Duan, 10.18653/v1/D19-1495Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsXiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, and Junwen Duan. 2019. Event representation learning enhanced with external commonsense knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4896- 4905, Hong Kong, China. Association for Computa- tional Linguistics.
Using BabelNet to improve OOV coverage in SMT. Jinhua Du, Andy Way, Andrzej Zydron, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). the Tenth International Conference on Language Resources and Evaluation (LREC'16)Portorož, SloveniaEuropean Language Resources Association (ELRAJinhua Du, Andy Way, and Andrzej Zydron. 2016. Us- ing BabelNet to improve OOV coverage in SMT. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 9-15, Portorož, Slovenia. European Language Resources Association (ELRA).
Commonsense knowledge mining from pretrained models. Joshua Feldman, Joe Davison, Alexander M Rush, abs/1909.00505IJCNLP. Joshua Feldman, Joe Davison, and Alexander M. Rush. 2019. Commonsense knowledge mining from pre- trained models. IJCNLP, abs/1909.00505.
On the potential nature of chinese ambiguous constructions. Zhiwei Feng, Chinese. 9Zhiwei Feng. 1995. On the potential nature of chi- nese ambiguous constructions. In Chinese. Journal of Chinese Information Processing, 9(4):14-24.
Machine common sense concept paper. CoRR. David Gunning, abs/1810.07528David Gunning. 2018. Machine common sense con- cept paper. CoRR, abs/1810.07528.
Achieving human parity on automatic chinese to english news translation. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, Ming Zhou, abs/1803.05567CoRRHany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Feder- mann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on auto- matic chinese to english news translation. CoRR, abs/1803.05567.
A hybrid neural network model for commonsense reasoning. Pengcheng He, Xiaodong Liu, Weizhu Chen, Jianfeng Gao, 10.18653/v1/D19-6002Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing. the First Workshop on Commonsense Inference in Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsPengcheng He, Xiaodong Liu, Weizhu Chen, and Jian- feng Gao. 2019. A hybrid neural network model for commonsense reasoning. In Proceedings of the First Workshop on Commonsense Inference in Natu- ral Language Processing, pages 13-21, Hong Kong, China. Association for Computational Linguistics.
Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. Lifu Huang, Le Ronan, Chandra Bras, Yejin Bhagavatula, Choi, 10.18653/v1/D19-1243Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsLifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2391-2401, Hong Kong, China. Association for Computational Linguistics.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, 3rd International Conference on Learning Representations. San Diego, CA, USAConference Track ProceedingsDiederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Attention is (not) all you need for commonsense reasoning. Tassilo Klein, Moin Nabi, 10.18653/v1/P19-1477Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsTassilo Klein and Moin Nabi. 2019. Attention is (not) all you need for commonsense reasoning. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4831- 4836, Florence, Italy. Association for Computational Linguistics.
Building a largescale knowledge base for machine translation. Kevin Knight, K Steve, Luk, AAAI. 94Kevin Knight and Steve K Luk. 1994. Building a large- scale knowledge base for machine translation. In AAAI, volume 94, pages 773-778.
The winograd schema challenge. Hector J Levesque, Ernest Davis, Leora Morgenstern, Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'12. the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'12AAAI PressHector J. Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Proceedings of the Thirteenth International Con- ference on Principles of Knowledge Representa- tion and Reasoning, KR'12, pages 552-561. AAAI Press.
The future of mt is now and bar-hillel was (almost entirely) right. Elliott Macklovitch, The Fourth Bar-Ilan Symposium on Foundations of Artificial Intelligence. Elliott Macklovitch. 1995. The future of mt is now and bar-hillel was (almost entirely) right. In The Fourth Bar-Ilan Symposium on Foundations of Artificial In- telligence.
Machine translation using semantic web technologies: A survey. Diego Moussallem, Matthias Wauer, Axel-Cyrille Ngonga Ngomo, abs/1711.09476ArXiv. Diego Moussallem, Matthias Wauer, and Axel- Cyrille Ngonga Ngomo. 2018. Machine translation using semantic web technologies: A survey. ArXiv, abs/1711.09476.
A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. Mathias Müller, Annette Rios, Elena Voita, Rico Sennrich, 10.18653/v1/W18-6307Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersBrussels, BelgiumAssociation for Computational LinguisticsMathias Müller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the eval- uation of context-aware pronoun translation in neu- ral machine translation. In Proceedings of the Third Conference on Machine Translation: Research Pa- pers, pages 61-72, Brussels, Belgium. Association for Computational Linguistics.
Knowledge-based machine translation. Sergei Nirenburg, 10.1007/BF00367750Machine Translation. 4Sergei Nirenburg. 1989. Knowledge-based machine translation. Machine Translation, 4(1):5-24.
Integration of world knowledge for natural language understanding. Ekaterina Ovchinnikova, Atlantis Thinking Machines. Ekaterina Ovchinnikova. 2012. Integration of world knowledge for natural language understanding. In Atlantis Thinking Machines.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, ACL. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. Bleu: a method for automatic eval- uation of machine translation. In ACL.
A call for clarity in reporting BLEU scores. Matt Post, 10.18653/v1/W18-6319Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersBelgium, BrusselsAssociation for Computational LinguisticsMatt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.
Improving language understanding by generative pre-training. Alec Radford, Alec Radford. 2018. Improving language understand- ing by generative pre-training.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Event2Mind: Commonsense inference on events, intents, and reactions. Maarten Hannah Rashkin, Emily Sap, Noah A Allaway, Yejin Smith, Choi, 10.18653/v1/P18-1043Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaLong Papers). Association for Computational LinguisticsHannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2Mind: Commonsense inference on events, intents, and reac- tions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 463-473, Melbourne, Australia. Association for Computational Linguis- tics.
9,835,029 | Towards Personalized Synthesized Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction | When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families. | [] | Towards Personalized Synthesized Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction
21-22 August, 2013.
Christophe Veaux
Centre for Speech Technology Research (CSTR)
University of Edinburgh
UK
Junichi Yamagishi jyamagis@inf.ed.ac.uk
Centre for Speech Technology Research (CSTR)
University of Edinburgh
UK
Simon King simon.king@ed.ac.uk
Centre for Speech Technology Research (CSTR)
University of Edinburgh
UK
Towards Personalized Synthesized Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction
SLPAT 2013, 4th Workshop on Speech and Language Processing for Assistive Technologies
21-22 August, 2013. Index Terms: HTS, Speech Synthesis, Voice Banking, Voice Reconstruction, Voice Output Communication Aids, MND
When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides with or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have received a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.
Introduction
Degenerative speech disorders have a variety of causes that include Multiple Sclerosis, Parkinson's, and Motor Neurone Disease (MND) also known in the USA as Amyotrophic Lateral Sclerosis (ALS). MND primarily affects the motor neurones in the brain and spinal cord. This causes a worsening muscle weakness that leads to a loss of mobility and difficulties with swallowing, breathing and speech production. Initial symptoms may be limited to a reduction in speaking rate, an increase of the voice's hoarseness, or an imprecise articulation. However, at some point in the disease progression, 80 to 95% of patients are unable to meet their daily communication needs using their speech; and most are unable to speak by the time of their death [1]. As speech becomes difficult to understand, these individuals may use a voice output communication aid (VOCA). These devices consist of a text entry interface such as a keyboard, a touch screen or an eye-tracker, and a text-to-speech synthesizer that generates the corresponding speech. However, when individuals lose the ability to produce their own speech, they lose not only a functional means of communication but also a display of their individual and social identity through their vocal characteristics.
Current VOCAs are not ideal as they are often restricted to a limited set of impersonal voices that are not matched to the age or accent of each individual. Feedback from patients, carers and patient societies has indicated that there is a great unmet need for personalized VOCAs, as the provision of a personalized voice is associated with greater dignity and improved self-identity for the individual and their family [2]. In order to build personalized VOCAs, several attempts have been made to capture the voice before it is lost, using a process known as voice banking. One example of this approach is ModelTalker [3], a free voice building service that can be used from any home computer in order to build a synthetic voice based on diphone concatenation, a technology developed in the 1980s. The user of this service has to record around 1800 utterances in order to fully cover the set of diphones, and the naturalness of the synthetic speech is rather low. Cereproc [4] has provided a voice building service for individuals, at a relatively high cost, which uses unit selection synthesis and is able to generate synthetic speech of increased naturalness. Wants Inc. in Japan also provides a commercial voice building service for individuals called "Polluxstar". This is based on a hybrid speech synthesis system [5] using both unit selection and statistical parametric speech synthesis [6] to achieve a natural speech quality. However, all these speech synthesis techniques require a large amount of recorded speech in order to build a good quality voice. Moreover, the recorded speech data must be as intelligible as possible, since the recorded data is used, in whole or in part, as the voice output. This requirement makes such techniques more problematic for those patients whose voices have started to deteriorate. Therefore, there is a strong motivation to reduce the complexity and to increase the flexibility of the voice building process so that patients can have their own synthetic voices built from limited recordings and even deteriorating speech. Recently, a new voice building process using the hidden Markov model (HMM)-based speech synthesis technique has been investigated to create personalized VOCAs [7][8]. This approach has been shown to produce high quality output and offers two major advantages over existing methods for voice banking and voice building. First, it is possible to use existing speaker-independent voice models pre-trained over a number of speakers and to adapt them towards a target speaker. This process, known as speaker adaptation [9], requires only a very small amount of speech data. The second advantage of this approach is that we can control and modify various components of the adapted voice model in order to compensate for the disorders found in the patient's speech. We call this process "voice reconstruction". Based on this new approach, the University of Edinburgh, the Euan MacDonald Center for MND and the Anne Rowling Regenerative Neurology Clinic have started a collaborative project for voice banking and voice reconstruction [10][11]. At the current stage of the project, more than 15 patients with MND have already been recorded and 5 of them have received a reconstructed voice. We present here the technical concepts behind this project as well as a subjective assessment of the reconstructed voices.
HMM-Based Speech Synthesis
Our voice building process is based on the state-of-the-art HMM-based speech synthesizer, known as HTS [6]. As opposed to diphone or unit-selection synthesis, the HMM-based speech synthesizer does not use the recorded speech data directly as the voice output. Instead, it is based on a vocoder model of the speech, and the acoustic parameters required to drive this vocoder are represented by a set of statistical models. The vocoder used in HTS is STRAIGHT and the statistical models are context-dependent hidden semi-Markov models (HSMMs), which are HMMs with explicit state duration distributions. The state output distributions of the HSMMs represent three separate streams of acoustic parameters that correspond respectively to the fundamental frequency (logF0), the band aperiodicities and the mel-cepstrum, including their dynamics. For each stream, additional information is added to further describe the temporal trajectories of the acoustic parameters, such as their global variances over the learning data. Finally, separate decision trees are used to cluster the state duration probabilities and the state output probabilities using symbolic context information at the phoneme, syllable, word, and utterance level. In order to synthesize a sentence, a linguistic analyser is used to convert the sequence of words into a sequence of symbolic contexts and the trained HSMMs are invoked for each context. A parameter-generation algorithm is then used to estimate the most likely trajectory of each acoustic parameter given the sequence of models. Finally, the speech is generated by the STRAIGHT vocoder driven by the estimated acoustic parameters.
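To make the flow above concrete, here is a deliberately simplified Python sketch of the same stages (linguistic analysis, model lookup, parameter generation, vocoding). The model values and helper functions are invented for illustration only; they are not HTS or STRAIGHT APIs, and real parameter generation uses dynamic features and global variances rather than simply repeating state means.

```python
import numpy as np

# Toy stand-ins for clustered HSMM state statistics: one entry per context,
# with a state duration, a log-F0 mean and a (very short) mel-cepstrum mean.
toy_models = {
    "sil": {"dur": 5, "lf0": 0.0, "mcep": np.zeros(3)},
    "a":   {"dur": 8, "lf0": 5.2, "mcep": np.array([1.0, 0.3, -0.2])},
    "t":   {"dur": 3, "lf0": 0.0, "mcep": np.array([0.2, -0.5, 0.1])},
}

def text_to_contexts(word):
    # Stand-in for the linguistic analyser: one symbolic context per letter,
    # padded with silence.  A real analyser would also produce syllable-,
    # word- and utterance-level context features.
    return ["sil"] + [c for c in word if c in toy_models] + ["sil"]

def generate_trajectories(contexts, models):
    lf0, mcep = [], []
    for ctx in contexts:
        m = models[ctx]
        lf0.extend([m["lf0"]] * m["dur"])    # repeat the state mean over its duration
        mcep.extend([m["mcep"]] * m["dur"])
    return np.array(lf0), np.vstack(mcep)

# A vocoder such as STRAIGHT would then render these parameter tracks as a waveform.
lf0_track, mcep_track = generate_trajectories(text_to_contexts("at"), toy_models)
```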
Speaker Adaptation
One advantage of the HMM-based speech synthesis for voice building is that the statistical models can be estimated from a very limited amount of speech data thanks to speaker adaptation. This method [9] starts with a speaker-independent model, or "average voice model", learned over multiple speakers, and uses model adaptation techniques drawn from speech recognition, such as maximum likelihood linear regression (MLLR), to adapt the speaker-independent model to a new speaker. It has been shown that using 100 sentences or approximately 6-7 minutes of speech data is sufficient to generate a speaker-adapted voice that sounds similar to the target speech [7]. This provides a much more practical way to build personalized voices for patients. For instance, it is now possible to construct a synthetic voice for a patient prior to a laryngectomy operation, by quickly recording samples of their speech [8]. A similar approach can also be used for patients with degenerative diseases before the diseases affect their speech. The speaker adaptation process is most successful when the average voice model is already close to the voice characteristics of the target speaker. Therefore, one goal of the voice-bank project is to record a large catalogue of healthy voices from which we can derive a set of average voice models corresponding to different age, gender and regional accent combinations. This will be presented in Section 5.
Voice Reconstruction
Some individuals with neurodegenerative disease may already have speech symptoms at the time of the recording. In that case, the speaker adaptation process will also replicate these symptoms in the speaker-adapted voice. Therefore, we need to remove speech disorders from the synthetic voice, so that it sounds more natural and more intelligible. However, since HTS is based on a vocoder model of speech, we can now exploit the acoustic models learned during the training and the adaptation processes in order to control and modify various speech features. This is the second major advantage of using HMM-based speech synthesis. In particular, HTS has statistically independent models for duration, log-F0, band aperiodicity and mel-cepstrum. This allows the substitution of some models in the patient's speaker-adapted voice by those of a well-matched healthy voice or an average of multiple healthy voices, as illustrated in Figure 1. Although disordered speech perceptually deviates considerably from normal speech in many ways, it is known that its articulatory errors are consistent [12] and hence relatively predictable [13]. Therefore, we can predefine a substitution strategy for a given condition, to some extent. For example, patients with MND often have a disordered speaking rate, contributing to a loss of speech intelligibility. The substitution of the state duration models enables the timing disruptions to be regulated at the phoneme, word, and utterance levels. Furthermore, MND speakers often have breathy or hoarse speech, in which excessive breath through the glottis produces unwanted turbulent noise. In such cases, we can substitute the band aperiodicity models to produce a less breathy or hoarse output. In the following part of this section, we present different levels of model substitution. All these levels are combined in the final voice reconstruction process.
Baseline model substitution
In a first approach [7], the following models and information are substituted:
• Duration and aperiodicity models
• Global variances of log-F0, aperiodicity and mel-cepstrum

These parameters are the least correlated with speaker identity, and their substitution can fix some disorders such as slow speaking rate and excessive hoarseness. However, this substitution strategy cannot correct articulation disorders.
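A minimal sketch of the baseline substitution, assuming (purely for illustration) that an adapted voice is stored as a dictionary of per-stream statistics rather than the full decision-tree-clustered HSMM sets used in practice:

```python
import copy
import numpy as np

# Toy stand-in for an adapted HTS voice; the shapes and values are arbitrary.
def toy_voice(seed):
    rng = np.random.default_rng(seed)
    return {
        "duration": rng.random(5),             # state duration statistics
        "aperiodicity": rng.random((5, 25)),   # band aperiodicity statistics
        "lf0": rng.random(5),                  # log-F0 statistics
        "mcep": rng.random((5, 40)),           # mel-cepstrum statistics
        "global_variance": {s: rng.random() for s in ("lf0", "aperiodicity", "mcep")},
    }

def baseline_substitution(patient, donor):
    """Baseline repair: swap in the donor's duration and aperiodicity models
    and the global variances, i.e. the parts least tied to vocal identity."""
    repaired = copy.deepcopy(patient)
    repaired["duration"] = donor["duration"].copy()
    repaired["aperiodicity"] = donor["aperiodicity"].copy()
    repaired["global_variance"] = dict(donor["global_variance"])
    return repaired

patient_voice, donor_voice = toy_voice(0), toy_voice(1)
repaired_voice = baseline_substitution(patient_voice, donor_voice)
```

Everything tied to the patient's vocal identity (the mel-cepstrum and log-F0 models themselves) is left untouched in this strategy.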
Component-wise model substitution
This is an extension of the baseline model substitution. Since the state output distributions have diagonal covariance matrices, we can substitute a component independently of the others. This component-wise substitution strategy allows us to substitute the parts of the mel-cepstrum and log-F0 streams that are the least correlated with speaker identity. In this way, we can further reduce some disorders without altering the voice identity. In particular, we substitute the mean and variance for the following components:
• 1st coefficient of the mel-cepstrum (energy)
• High-order coefficients of the mel-cepstrum
• Dynamics coefficients of the mel-cepstrum and log-F0
• Voiced/Unvoiced weights

The substitution of the high-order static coefficients and the dynamics coefficients of the mel-cepstrum will help to reduce the articulation disorders without altering the timbre. In our implementation, we replace all static coefficients of order N>40. The substitution of the dynamics coefficients of the log-F0 will help to regulate the prosodic disorders such as monotonic F0. Finally, the replacement of the voiced/unvoiced weights will fix the breathiness disorders. The duration models, aperiodicity models, and global variances are also substituted as in the baseline strategy. We will refer to this method as the component-wise strategy.
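The component-wise strategy touches only selected components of the state statistics. The sketch below operates on toy per-state means (in the real system the corresponding variances are replaced as well); the mel-cepstrum order and the number of states are arbitrary choices for illustration.

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

def toy_state_stats():
    # Per-state means for one stream: static and delta mel-cepstrum (order 60
    # here, purely illustrative), log-F0 and its dynamics, and a voicing weight.
    return {
        "mcep_static": rng.normal(size=60), "mcep_delta": rng.normal(size=60),
        "lf0_static": rng.normal(), "lf0_delta": rng.normal(),
        "vuv_weight": rng.random(),
    }

def toy_voice(n_states=5):
    return [toy_state_stats() for _ in range(n_states)]

def componentwise_substitution(patient, donor, high_order_from=40):
    """Replace only the components least tied to vocal identity: the energy
    (0th) and high-order static mel-cepstrum coefficients, all dynamic
    coefficients, and the voiced/unvoiced weights."""
    repaired = copy.deepcopy(patient)
    for rep, don in zip(repaired, donor):
        rep["mcep_static"][0] = don["mcep_static"][0]                        # energy
        rep["mcep_static"][high_order_from:] = don["mcep_static"][high_order_from:]
        rep["mcep_delta"] = don["mcep_delta"].copy()                         # mcep dynamics
        rep["lf0_delta"] = don["lf0_delta"]                                  # log-F0 dynamics
        rep["vuv_weight"] = don["vuv_weight"]                                # voicing
    return repaired

repaired = componentwise_substitution(toy_voice(), toy_voice())
```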
Context-dependent model substitution
In the two previous strategies, the model substitutions are independent of the context. However, in HTS, the acoustic models are clustered according to their contexts by separate decision trees. We can use this contextual information to further refine the model substitution. For example, some MND patients cannot correctly pronounce plosives, approximants and diphthongs. In these contexts, it is preferable to substitute all the mel-cepstrum coefficients in order to enhance the intelligibility of the speech. Therefore, we have defined a context-dependent strategy, in which the mel-cepstrum models are entirely substituted for some specific contexts. Since these contexts may vary from one patient to the other, we have designed a screening procedure in which the patients have to read out a set of 50 sentences covering most of the phonetic contexts. Their speech is then assessed by a speech therapist in order to define the contexts for which the models are to be substituted. Finally, the context-dependent and the component-wise model substitutions are combined in order to get the final version of the repaired voice. Ideally, the voice donors used for the voice reconstruction should share the gender, age range and regional accent of the patient, since these factors are likely to contribute to the characteristics of the voice. This is why we need to record a large number of healthy voice donors with a variety of ages and regional accents, as presented in the next section.
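Context-dependent substitution differs only in that entire mel-cepstrum models are swapped, and only for the phonetic contexts flagged during the screening procedure. A toy sketch, with invented context labels and values:

```python
import copy

# Toy voices: a mapping from phonetic context label to that context's
# mel-cepstrum model (plain lists here, for illustration only).
patient = {"plosive": [0.1, 0.2], "vowel": [0.3, 0.4], "approximant": [0.5, 0.6]}
donor   = {"plosive": [0.9, 0.8], "vowel": [0.7, 0.6], "approximant": [0.5, 0.4]}

# Contexts flagged by the speech therapist's screening as poorly articulated.
impaired_contexts = {"plosive", "approximant"}

def context_dependent_substitution(patient, donor, contexts):
    """Swap the donor's entire mel-cepstrum model into the patient's voice,
    but only for the flagged phonetic contexts."""
    repaired = copy.deepcopy(patient)
    for ctx in contexts:
        repaired[ctx] = copy.deepcopy(donor[ctx])
    return repaired

repaired = context_dependent_substitution(patient, donor, impaired_contexts)
```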
Database of Voice Donors
One of the key elements of the voice-banking project is the creation of a catalogue of healthy voices with a wide variety of accents and voice identities. This voice catalogue is used to create the average voice models for the speaker adaptation and to select the voice donors for the voice reconstruction. So far, we have recorded about 500 healthy voice donors with various accents (Scottish, Irish, Other UK). This database is already the largest UK speech research database. An illustration of the geographical distribution of the speakers' birthplaces is shown in Figure 2. Each speaker has been recorded in a semi-anechoic chamber for about one hour, using a different script each time in order to get the best phonetic coverage on average. The database of healthy voices is first used to create the average voice models used for speaker adaptation. Ideally, the average voice model should be close to the vocal identity of the patient, and it has been shown that gender and regional accent are the most influential factors in speaker similarity perception [14]. Therefore, the speakers are clustered according to their gender and their regional accent in order to train specific average voice models. A minimum of 10 speakers is required in order to get robust average voice models. The healthy voice database is also used to select the voice donors for the model substitution process described in Section 4. The voice donors are chosen among the speakers used to build the average voice model matched to the patient's gender and accent.
We first build a speaker-adapted voice for each of these speakers using the same average voice model. The acoustic models used in HTS represent each stream of parameters separately. Therefore, a set of acoustic distances between speaker-adapted voices can be defined for each of these streams (duration, log-F0, band aperiodicity, mel-cepstrum). These distances are defined as the average Karhunen-Loeve (KL) distances [15] between the acoustic models associated with the same stream of parameters. Finally, a voice donor is selected for each stream separately, as the one that minimizes the average acoustic distance for this stream.
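The per-stream donor selection could look roughly like the sketch below. It uses a symmetrized Kullback-Leibler divergence between diagonal Gaussians as a stand-in for the KL distance mentioned above, and selects the candidate closest to the patient's adapted models; the exact distance and averaging used in the project may differ, and all data here are toy values.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence between two diagonal Gaussians, KL(p || q)."""
    return 0.5 * np.sum(np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def stream_distance(voice_a, voice_b):
    """Average symmetrized KL over the matched models of one stream."""
    ds = [0.5 * (gaussian_kl(a["mean"], a["var"], b["mean"], b["var"]) +
                 gaussian_kl(b["mean"], b["var"], a["mean"], a["var"]))
          for a, b in zip(voice_a, voice_b)]
    return float(np.mean(ds))

def select_donor(patient_stream, candidate_streams):
    """Pick the candidate whose stream models are closest to the patient's."""
    return min(candidate_streams,
               key=lambda name: stream_distance(patient_stream, candidate_streams[name]))

# Toy data: each stream is a list of per-model diagonal Gaussians.
rng = np.random.default_rng(0)
def toy_stream(n_models=4, dim=10):
    return [{"mean": rng.normal(size=dim), "var": rng.random(dim) + 0.1}
            for _ in range(n_models)]

patient_mcep = toy_stream()
candidates = {"donor_A": toy_stream(), "donor_B": toy_stream()}
best_donor = select_donor(patient_mcep, candidates)
```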
Clinical Trial
As part of the voice-banking project, we are conducting a clinical trial in order to assess and further refine the voice building process for patients with degenerative speech disorders. So far, more than 15 patients with MND have already been recorded and 5 of them have received a reconstructed voice. We present in the following sections a subjective assessment of the voice repair as well as the feedback from patients and their families.
Subjective evaluation of the voice repair
The substitution strategy presented in Section 4 was evaluated for the case of an MND patient. This patient was a 45-year-old Scottish male whom we recorded twice. A first recording of one hour (500 sentences) was made just after diagnosis, when he was at the very onset of the disease. At that time, his voice did not show any disorders and could still be considered as "healthy". A second recording of 15 minutes (50 sentences) was made 10 months later. By then, he had acquired some speech disorders typically associated with MND, such as excessive hoarseness and breathiness, disruption of speech fluency, reduced articulation and monotonic prosody. The synthetic voices used in this experiment are shown in Table 1. The same male-Scottish average voice model, denoted as AV, was used to create all the synthetic voices. This average voice was trained on 17 male Scottish speakers using 400 sentences each, giving a total of 6800 sentences. The synthetic voice created from the first recording of the patient ("healthy" speech) was used as the reference voice for the subjective evaluations. This reference voice is referred to as HC. This choice of a synthetic voice as reference instead of the natural recordings was done to avoid any bias due to the loss of quality inherent to the synthesis. The reconstructed voice IR was obtained by applying the combination of the component-wise and context-dependent substitution strategies to the speaker-adapted voice IC built from the second recording of the patient ("impaired" speech).
Table 1: Voices compared in the evaluation tests

Voice  Description
AV     Average voice used for speaker adaptation
HC     Speaker-adapted voice of the "healthy" speech
IC     Speaker-adapted voice of the "impaired" speech
IR     Reconstructed voice using the component-wise and context-dependent model substitutions
In order to evaluate the effectiveness of the voice reconstruction, two subjective tests were conducted. The first one assesses the intelligibility of the synthesized voice and the second, the speaker similarity. The same 40 semantically unpredictable sentences [16] were synthesized for each of the 3 voices created from the patient's recordings (see Table 1). The resulting synthesized samples were divided into 4 groups such that each voice is represented by 10 [...]. The resulting average WERs for the intelligibility test are shown in Table 2. We are not interested here in the absolute values of the WER but in their relative values compared to the reference voice HC. As expected, the synthetic voice IC created from the "impaired" speech has a high WER, which means that the articulation disorders from the patient's speech have degraded the intelligibility. The important result here is that the model substitution improves the speech intelligibility of the reconstructed voice IR. The results of the similarity test are shown in Table 3. A first interesting result is that the voice clone IC created by speaker adaptation from the "impaired" speech is more similar to the healthy clone HC than the average voice AV.
In the case of this patient, this validates an implicit assumption of the voice reconstruction process: some valuable information about the original vocal identity should remain in the impaired speech. The other important result is the improvement of the average similarity scores when the model substitution strategies are applied. Between IR and AV, there is a mean improvement of 1 MOS (with a p-value << 1.e-5) and more surprisingly, there is also a significant improvement of 0.5 MOS (p-value << 1.e-3) between IC and IR. One explanation of this last result could be that the similarity of vocal identity is better perceived once the disorders have been regulated.
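For reference, the WER figures in Table 2 follow the standard definition of word error rate, which can be computed with a Levenshtein alignment over word sequences. The implementation below is a generic sketch, not the scoring script actually used in the study.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed with a standard Levenshtein alignment over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: minimum edit operations to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. word_error_rate("people look but no one ever finds it",
#                      "people look but someone finds it") == 0.375
```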
Feedback from patients and families
The results presented in the previous section are relative to the only patient whose 'healthy' voice was available to establish a reference. However, it remains to be demonstrated that similar results could be achieved with different patients. It is also important to assess the usability of the reconstructed voice in real conditions of use. Therefore, we have conducted an experimental trial with 5 patients whose voices have been reconstructed and made available through an on-line server. These patients can use their reconstructed voices from any computer, tablet or mobile phone as long as an Internet connection is available. A simple web interface allows them to enter text, and a synthesis request is sent to a remote server. Once the synthesis is done on the server, the synthesized speech is sent to the device and played through its loudspeakers. The patients and their families were asked to give their feedback on the quality of the reconstructed voice after a few weeks of use. In particular, they were asked to assess the intelligibility of the voice and its similarity to the user's voice before the start of the disease. We received 15 responses in total, corresponding to the 5 patients and their husbands/wives or parents. Table 4 shows the mean opinion scores on a 5-point scale (1 being the worst and 5 the best). These results are consistent with the subjective test presented in the previous section. They show that the voice reconstruction process manages to remove most of the speech artifacts while retaining some of the voice characteristics of the patient. Most importantly, all the patients said they would rather choose their reconstructed voices over any commercially available synthesized voice.
Table 4: Feedback from patients and families (mean, standard deviation)

Question         Mean Opinion Score  std
Similarity       3.5                 0.7
Intelligibility  4                   1.1
Conclusions
For VOCA users, speech synthesis is not an optional extra for reading out text, but a critical function for social communication and identity display. Therefore, there is a great need for personalized VOCAs, as the provision of a personalized voice is associated with greater dignity and improved self-identity for the individual and their family. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, but for some patients, the speech deterioration frequently coincides with or quickly follows diagnosis. In such cases, HMM-based speech synthesis has two clear advantages: speaker adaptation and improved control. Speaker adaptation allows the creation of a synthetic voice with a limited amount of data. Then the structure of the acoustic models can be modified to repair the synthetic speech. In this paper, we have presented the results of an on-going clinical trial based on this new approach. The subjective evaluations and the feedback from the patients show that it is possible to build a synthesized voice that retains the vocal identity of the patient while removing most of the speech disorders. Although these results are presented for MND patients, the principle of the voice building and reconstruction process could be easily generalized to any other degenerative or acquired speech disorder.
Figure 1: The structure of the acoustic models in HTS means that there can be a substitution of state output or state duration models between a healthy voice model and the patient voice model in order to compensate for any deterioration in the patient's speech.

Figure 2: UK-wide speech database.
Table 2: Word Error Rate (mean, standard deviation)
The same test sentence "People look, but no one ever finds it." was synthesized for each of the 4 voices in Table 1. Participants were asked to listen alternately to the reference voice HC and to the same sentence synthesized with the reconstructed voice IR and the average voice model AV. The presentation order of the voices being tested was randomized. Participants had to rate the similarity between the tested voice and the reference HC on a 5-point scale (1: Very dissimilar, 2: Dissimilar, 3: Quite Similar, 4: Very similar; and 5: Identical). However, the participants were not given further instruction in order to avoid biasing towards rating any specific form of similarity. A total of 40 native English speakers performed the test using headphones.
Table 3: Similarity to the reference voice HC on a MOS-scale (mean, standard deviation)

Voice  Mean Opinion Score  std
AV     2.05                1.05
IC     2.61                1.21
IR     3.09                1.34
Doyle, M. and Phillips, B. (2001), "Trends in augmentative and alternative communication use by individuals with amyotrophic lateral sclerosis," Augmentative and Alternative Communication, 17 (3): pp. 167-178.

Murphy, J. (2004), "'I prefer this close': Perceptions of AAC by people with motor neurone disease and their communication partners," Augmentative and Alternative Communication, 20, 259-271.

Yarrington, D., Pennington, C., Gray, J., and Bunnell, H. T. (2005), "A system for creating personalized synthetic voices," Proc. of ASSETS.

Kawai, H., Toda, T., Yamagishi, J., Hirai, T., Ni, J., Nishizawa, N., Tsuzaki, M., and Tokuda, K. (2006), "XIMERA: a concatenative speech synthesis system with large-scale corpora," IEICE Trans. Information and Systems, J89-D-II (12), pp. 2688-2698.

Zen, H., Tokuda, K., and Black, A. (2009), "Statistical parametric speech synthesis," Speech Communication, 51, pp. 1039-1064.

Creer, S., Green, P., Cunningham, S., and Yamagishi, J. (2010), "Building personalized synthesized voices for individuals with dysarthria using the HTS toolkit," IGI Global Press, Jan. 2010.

Khan, Z. A., Green, P., Creer, S., and Cunningham, S. (2011), "Reconstructing the Voice of an Individual Following Laryngectomy," Augmentative and Alternative Communication.

Yamagishi, J., Kobayashi, T., Nakano, Y., Ogata, K., and Isogai, J. (2009), "Analysis of speaker adaptation algorithms for HMM-based speech synthesis and a constrained SMAPLR adaptation algorithm," IEEE Trans. on ASL, 17, 66-83.

Veaux, C., Yamagishi, J., and King, S. (2011), "Voice Banking and Voice Reconstruction for MND patients," Proceedings of ASSETS.

Veaux, C., Yamagishi, J., and King, S. (2012), "Using HMM-based Speech Synthesis to Reconstruct the Voice of Individuals with Degenerative Speech Disorders," Interspeech, Portland, USA.

Yorkston, K. M., Beukelman, D. R., and Bell, K. R. (1998), "Clinical management of dysarthric speakers," College-Hill Press.

Mengistu, K. T. and Rudzicz, F. (2011), "Adapting acoustic and lexical models to dysarthric speech," Proc. ICASSP 2011.

Dall, R., Veaux, C., Yamagishi, J., and King, S. (2012), "Analysis of speaker clustering strategies for HMM-based speech synthesis," Proc. Interspeech, Portland, USA.

Nguyen, T. H., Li, H., and Chng, E. S. (2009), "Cluster criterion functions in spectral subspace and their application in speaker clustering," Proceedings of ICASSP.

Benoît, C., Grice, M., and Hazan, V. (1996), "The SUS test: A method for the assessment of text-to-speech synthesis intelligibility using Semantically Unpredictable Sentences," Speech Communication.
2,163,836 | Acquisition of Desires before Beliefs: A Computational Investigation | The acquisition of Belief verbs lags behind the acquisition of Desire verbs in children. Some psycholinguistic theories attribute this lag to conceptual differences between the two classes, while others suggest that syntactic differences are responsible. Through computational experiments, we show that a probabilistic verb learning model exhibits the pattern of acquisition, even though there is no difference in the model in the difficulty of the semantic or syntactic properties of Belief vs. Desire verbs. Our results point to the distributional properties of various verb classes as a potentially important, and heretofore unexplored, factor in the observed developmental lag of Belief verbs. | [] | Acquisition of Desires before Beliefs: A Computational Investigation
Association for Computational Linguistics. Copyright Association for Computational Linguistics, August 8-9, 2013.
Libby Barak libbyb@cs.toronto.edu
Department of Computer Science
University of Toronto, Toronto
Canada
Afsaneh Fazly afsaneh@cs.toronto.edu
Department of Computer Science
University of Toronto, Toronto
Canada
Suzanne Stevenson suzanne@cs.toronto.edu
Department of Computer Science
University of Toronto, Toronto
Canada
Acquisition of Desires before Beliefs: A Computational Investigation
Proceedings of the Seventeenth Conference on Computational Natural Language Learning
The Seventeenth Conference on Computational Natural Language Learning, Sofia, Bulgaria. Association for Computational Linguistics, August 8-9, 2013.
The acquisition of Belief verbs lags behind the acquisition of Desire verbs in children. Some psycholinguistic theories attribute this lag to conceptual differences between the two classes, while others suggest that syntactic differences are responsible. Through computational experiments, we show that a probabilistic verb learning model exhibits the pattern of acquisition, even though there is no difference in the model in the difficulty of the semantic or syntactic properties of Belief vs. Desire verbs. Our results point to the distributional properties of various verb classes as a potentially important, and heretofore unexplored, factor in the observed developmental lag of Belief verbs.
Introduction
Psycholinguistic studies have shown great interest in the learning of Mental State Verbs (MSVs), such as think and want, given the various cognitive and linguistic challenges in their acquisition. MSVs refer to an entity's inner states, such as thoughts and wishes, which the language learner must be able to perceive and conceptualize appropriately. Moreover, such verbs often appear in a Sentential Complement (SC) construction, which is complex for children because of the embedded clause.
Despite some shared properties, MSVs are a heterogeneous group, with different types of verbs exhibiting different developmental patterns. Specifically, a wealth of research shows that children produce Desire verbs, such as want and wish, earlier than Belief verbs, such as think and know (Shatz et al., 1983; Bartsch and Wellman, 1995; Asplin, 2002; Perner et al., 2003; de Villiers, 2005; Papafragou et al., 2007; Pascual et al., 2008). Some explanations for this pattern posit that differences in the syntactic usages of Desire and Belief verbs underlie the observed developmental lag of the latter (de Villiers, 2005; Pascual et al., 2008). In particular, Desire verbs occur mostly with an infinitival SC (as in I want (her) to leave), while Belief verbs occur mostly with a finite SC (a full tensed embedded clause, as in I think (that) she left). Notably, infinitivals appear earlier than finite SCs in the speech of young children (Bloom et al., 1984, 1989). Others suggest that Desire verbs are conceptually simpler (Bartsch and Wellman, 1995) or pragmatically/communicatively more salient (Perner, 1988; Fodor, 1992; Perner et al., 2003). Proponents of the conceptual and pragmatic accounts argue that syntax alone cannot explain the delay in the acquisition of Belief verbs, because children use finite SCs with verbs of Communication (e.g., say) and Perception (e.g., see) long before they use them with Belief verbs (Bartsch and Wellman, 1995).
We use a computational model of verb argument structure acquisition to shed light on the factors that might be responsible for the developmental gap between Desire and Belief verbs. Importantly, our model exhibits the observed pattern of learning Desire before Belief verbs, without having to encode any differences in difficulty between the two classes in terms of their syntactic or conceptual/pragmatic requirements. The behaviour of the model can thus be attributed to its probabilistic learning mechanisms in conjunction with the distributional properties of the input. In particular, we investigate how the model's learning mechanism interacts with the distributions of several classes of verbs - including Belief, Desire, Perception, Communication, and Action - in the finite and infinitival SC syntax to produce the observed pattern of acquisition of Desire and Belief verbs. Using a computational model can reveal the potential effects of interactions of verb classes in human language acquisition which would be difficult to investigate experimentally. Our results suggest that the distributional properties of relevant verb classes are a potentially important, and heretofore unexplored, factor in experimental studies of the developmental lag of Belief verbs.
The Computational Model
We require an incremental model in which we can examine developmental patterns as it gradually learns relevant aspects of argument structures. This task calls for an ability to represent the semantic and syntactic properties of verb usages, including those containing MSVs and other kinds of verbs taking sentential complements (SCs). Most computational models of verb argument structure acquisition have largely focused on physical action verbs (Alishahi and Stevenson, 2008;Chang, 2009;Perfors et al., 2010;Parisien and Stevenson, 2011). Recently, Barak et al. (2012) extended the incremental Bayesian model of Alishahi and Stevenson (2008) to include the syntactic and semantic features required for the processing of MSVs and other verbs that take SCs. While Barak et al. (2012) modeled some developmental patterns of MSVs overall, their work did not account for the difference between Desire and Belief verbs. In this section, we present their model, which we adopt for our experiments. In Section 3, we describe how we modify the representation of the input in Barak et al. (2012) to enable our investigation of the differences among the MSV classes.
Overview of the Model
The input to the Barak et al. (2012) model is a sequence of frames, where each frame is a collection of syntactic and semantic features representing what the learner might extract from an utterance s/he has heard paired with a scene s/he has perceived. In particular, we consider syntactic properties, including syntactic pattern, argument count, and complement type, as well as semantic properties, including event primitives and event participants. Table 1 presents a sample frame illustrating possible values for these features.
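Since Table 1 itself is not reproduced here, the following hypothetical frame, written as a Python dictionary, illustrates the kind of information a frame carries for an utterance such as He thinks Mom made pancakes paired with a belief scene. The feature names follow the text; the specific values are illustrative and not the exact inventory of the model.

```python
# A hypothetical input frame; values are invented for illustration only.
frame = {
    # syntactic features
    "syntactic_pattern": "arg1 verb arg2 verb arg3",   # matrix verb plus embedded clause
    "argument_count": 3,
    "complement_type": "SC-fin",                        # finite sentential complement
    # semantic features
    "event_primitives": {"state", "consider"},
    "event_participants": {"arg1": {"experiencer"},
                           "arg2": {"agent"},
                           "arg3": {"theme"}},
    # head predicate
    "head": "think",
}
```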
The model incrementally groups the input frames into clusters that reflect probabilistic associations of the syntactic and semantic features across similar verb usages. Each learned cluster is a probabilistic (and possibly noisy) representation of an argument structure construction: e.g., a cluster containing frames corresponding to usages such as I eat apples, She took the ball, and He got a book, etc., represents a Transitive Action construction. 1 Note that a cluster operates as more than simply a set of similar frames: The model can use the probabilistic associations among the various features of the frames in a cluster to generalize over the individual verb usages that it has seen. For example, if the model is presented with a frame corresponding to a transitive utterance using a verb it has not observed before, such as She gorped the ball, the example cluster above would lead the model to predict that gorp has semantic event primitives in common with other Action verbs like eat, take, and get. Such probabilistic reasoning is especially powerful because clusters involve complex interactions of features, and the model reasons across all such clusters to make suitable generalizations over its learned knowledge.
Algorithm for Learning Clusters
The model groups input frames into clusters on the basis of the overall similarity in the values of their syntactic and semantic features. Importantly, the model learns these clusters incrementally; the number and type of clusters is not predetermined. The model considers the creation of a new cluster for a given frame if the frame is not sufficiently similar to any of the existing clusters. Formally, the model finds the best cluster for a given input frame F as in:
BestCluster(F) = argmax_{k ∈ Clusters} P(k|F)    (1)
where k ranges over all existing clusters and a new one. Using Bayes rule:
P(k|F) = P(k)P(F|k) / P(F) ∝ P(k)P(F|k)    (2)
The prior probability of a cluster P (k) is estimated as the proportion of frames that are in k out of all observed input frames, thus assigning a higher prior to larger clusters, representing more frequent constructions. The likelihood P (F |k) is estimated based on the match of feature values in F and in the frames of k (assuming independence of the features):
P(F|k) = ∏_{i ∈ Features} P_i(j|k)    (3)

where i refers to the i-th feature of F and j refers to its value, and P_i(j|k) is calculated using a smoothed version of:

P_i(j|k) = count_i(j, k) / n_k    (4)

where count_i(j, k) is the number of times feature i has the value j in cluster k, and n_k is the number of frames in k.
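A compact sketch of the clustering step defined by Eqns (1)-(4) is given below. The feature inventory, the toy frames, the smoothing constants, and the way the prior of a new cluster is set are all simplifications introduced here for illustration, not the exact choices of the original model.

```python
import math
from collections import Counter

FEATURES = ["head", "pattern", "arg_count", "complement", "event", "participants"]

class Cluster:
    def __init__(self):
        self.n = 0
        self.counts = {f: Counter() for f in FEATURES}

    def add(self, frame):
        self.n += 1
        for f in FEATURES:
            self.counts[f][frame[f]] += 1

    def feature_prob(self, feat, value, smoothing=0.05, n_values=50):
        # Smoothed version of Eqn (4): count_i(j, k) / n_k
        return (self.counts[feat][value] + smoothing) / (self.n + smoothing * n_values)

    def likelihood(self, frame):
        # Eqn (3): product over features, assuming feature independence
        return math.prod(self.feature_prob(f, frame[f]) for f in FEATURES)

def best_cluster(frame, clusters, frames_seen, new_cluster_prior=0.1):
    # Eqns (1)-(2): argmax over existing clusters plus a potential new one
    candidates = clusters + [Cluster()]
    def score(k):
        prior = k.n / (frames_seen + 1) if k.n else new_cluster_prior
        return prior * k.likelihood(frame)
    return max(candidates, key=score)

def learn(frames):
    clusters, seen = [], 0
    for frame in frames:
        k = best_cluster(frame, clusters, seen)
        if k.n == 0:                      # a brand-new cluster was preferred
            clusters.append(k)
        k.add(frame)
        seen += 1
    return clusters

toy_frames = [
    {"head": "eat", "pattern": "arg1 verb arg2", "arg_count": 2,
     "complement": "none", "event": "act", "participants": "agent-theme"},
    {"head": "take", "pattern": "arg1 verb arg2", "arg_count": 2,
     "complement": "none", "event": "act", "participants": "agent-theme"},
]
clusters = learn(toy_frames)   # both frames end up in one Transitive Action cluster
```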
Attention to Mental Content
One factor proposed to play an important role in the acquisition of MSVs is the difficulty children have in being aware of (or perceiving the salience of) the mental content of a scene that an utterance may be describing (Papafragou et al., 2007). This difficulty arises because the aspects of a scene associated with an MSV -the "believing" or the "wanting" -are not directly observable, as they involve the inner states of an event participant. Instead, younger children tend to focus on the physical (observable) parts of the scene, which generally correspond to the event described in the embedded clause of an MSV utterance. For instance, young children may focus on the "making" action in He thinks Mom made pancakes, rather than on the "thinking".
A key component of the model of Barak et al. (2012) is a mechanism that simulates the gradually-developing ability in children to attend to the mental content rather than solely to the (embedded) physical action. This mechanism basically entails that the model may "misinterpret" an input frame containing an MSV as focusing on the semantics of the action in the sentential complement. Specifically, when receiving an input frame with an MSV, as in Table 1, there is a probability p that the frame is perceived with attention to the semantics corresponding to the physical action verb (here, make). In this case, the model correctly includes the syntactic features as in Table 1, on the assumption that the child can accurately note the number and pattern of arguments. However, the model replaces the semantic features with those that correspond to the physical action event and its participants. At very early stages, p is very high (close to 1), simulating the much greater saliency of physical actions compared to mental events for younger children. As the model "ages" (i.e., receives more input), p decreases, giving more and more attention to the mental content, gradually approaching adult-like abilities.
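The misinterpretation mechanism can be sketched as follows: with probability p an MSV frame keeps its syntactic features but takes on the semantics of the embedded physical action. The exponential decay schedule and the rate constant below are stand-ins; the original model's exact schedule for p is not reproduced here.

```python
import math
import random

rng = random.Random(0)

def p_misinterpret(frames_seen, rate=0.0005):
    """Probability of perceiving an MSV frame as describing the embedded physical
    action rather than the mental event; close to 1 early on, decaying with exposure
    (the decay schedule here is just one plausible form)."""
    return math.exp(-rate * frames_seen)

def perceive(frame, frames_seen):
    """With probability p, keep the syntax but swap in the semantics of the
    embedded action event, simulating a child attending to the physical scene."""
    if frame.get("verb_class") == "mental" and rng.random() < p_misinterpret(frames_seen):
        perceived = dict(frame)
        perceived["event"] = frame["embedded_event"]              # e.g. "make" semantics
        perceived["participants"] = frame["embedded_participants"]
        return perceived
    return frame

msv_frame = {"verb_class": "mental", "event": "state", "participants": "experiencer-theme",
             "embedded_event": "act", "embedded_participants": "agent-theme"}
early = perceive(msv_frame, frames_seen=10)       # almost certainly misinterpreted
late = perceive(msv_frame, frames_seen=20000)     # almost certainly interpreted correctly
```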
Experimental Setup
Generation of the Input Corpora
Because there are no readily available large corpora of actual child-directed speech (CDS) associated with appropriate semantic representations, we generate artificial corpora for our simulations that mimic the relevant syntactic properties of CDS along with automatically-produced semantic properties. Importantly, these artificial corpora have the distributional properties of the argument structures for the verbs under investigation based on an analysis of verb usages in CDS. To accomplish this, we adopt and extend the input-generation lexicon of Barak et al. (2012), which is used to automatically generate the syntactic and semantic features of the frames that serve as input to the model. Using this lexicon, each simulation corpus is created through a probabilistic generation of argument structure frames according to their relative frequencies of occurrence in CDS. Since the corpora are probabilistically generated, all experimental results are averaged over simulations on 100 different input corpora, to ensure the results are not dependent on idiosyncratic properties of a single generated corpus.
Our input-generation lexicon contains 31 verbs from various semantic classes and different frequency ranges; these verbs appear in a variety of syntactic patterns including the sentential complement (SC) construction. Our focus here is on learning the Belief and Desire classes; however, we include verbs from other classes to have a realistic context of MSV acquisition in the presence of other types of verbs. In particular, we include (physical) Action verbs because of their frequent usage in CDS, and we include Communication and Perception groups because of their suggested role in the acquisition of MSVs (Bloom et al., 1989;de Villiers, 2005). Table 2 lists the verbs of each semantic class, along with their overall frequency and their relative frequency with the finite (SC-fin) and infinitival SC (SC-inf) in our data.
For each of these 31 verbs, the distributional information about its argument structure was manually extracted from a random sample of 100 CDS usages (or all usages if fewer than 100) from eight corpora from CHILDES (MacWhinney, 2000). 2 The input-generation lexicon then contains the overall frequency of each verb, as well as the relative frequency with which it appears with each of its argument structures. Each argument structure entry for a verb also contains the values for all the syntactic and semantic features in a frame (see Table 1 for an example), which are determined from the manual inspection of the usages.
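Corpus generation then amounts to weighted sampling from the input-generation lexicon: pick a verb in proportion to its overall frequency, then pick one of its argument structures in proportion to its relative frequency. The miniature lexicon below is invented for illustration and does not reproduce the counts in Table 2.

```python
import random

rng = random.Random(0)

# Miniature input-generation lexicon: overall verb frequency and the relative
# frequency of each argument-structure frame.  All entries are invented.
lexicon = {
    "want":  {"freq": 700, "frames": {"SC-inf": 0.5, "transitive": 0.5}},
    "think": {"freq": 600, "frames": {"SC-fin": 0.85, "other": 0.15}},
    "eat":   {"freq": 300, "frames": {"transitive": 0.7, "intransitive": 0.3}},
}

def generate_corpus(lexicon, n_frames):
    """Sample a verb in proportion to its frequency, then one of its argument
    structures in proportion to its relative frequency."""
    verbs = list(lexicon)
    verb_weights = [lexicon[v]["freq"] for v in verbs]
    corpus = []
    for _ in range(n_frames):
        verb = rng.choices(verbs, weights=verb_weights)[0]
        frames = lexicon[verb]["frames"]
        frame = rng.choices(list(frames), weights=list(frames.values()))[0]
        corpus.append((verb, frame))
    return corpus

corpus = generate_corpus(lexicon, 500)
```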
The values for syntactic features are based on simple observation of the order and number of verbs and arguments in the usage, and, if an argument is an SC, whether it is finite or infinitival. We add this latter feature (the type of the SC) to the syntactic representation used by Barak et al. (2012) to allow distinguishing the syntactic properties associated with Desire and Belief verbs. Note that this feature does not incorporate any potential level of difficulty in processing an infinitival vs. finite SC; the feature simply records that there are three different types of embedded arguments: SC-inf, SC-fin, or none. Thus, while Desire and Belief verbs that typically occur with an SC-inf or SC-fin have a distinguishing feature, there is nothing in this representation that makes Desire verbs inherently easier to process. This syntactic representation reflects our assumptions that a learner: (i) understands basic syntactic properties of an utterance, such as syntactic categories (e.g., noun and verb) and word order; and (ii) distinguishes between a finite complement, as in He thinks that Mom left, and an infinitival, as in He wants Mom to leave.
The values for the semantic features of a verb and its arguments are based on a simple taxonomy of event and participant role properties adapted from several resources, including Alishahi and Stevenson (2008), Kipper et al. (2008), and Dowty (1991). In particular, we assume that the learner is able to perceive and conceptualize the general semantic properties of different kinds of events (e.g., state and action), as well as those of the event participants (e.g., agent, experiencer, and theme). In an adaptation of the lexicon of Barak et al., we make minimal assumptions about shared semantics across verb classes. Specifically, to encode suitable semantic distinctions among MSVs, and between MSVs and other verbs, we aimed for a representation that would capture reasonable assumptions about high-level similarities and differences among the verb classes. As with the syntactic features, we ensured that we did not simply encode the result we are investigating (that children have facility with Desire verbs before Belief verbs) by making the representation for Desire verbs easier to learn.
In the results presented in Section 4, "our model" refers to the computational model of Barak et al. (2012) together with our modifications to the input representation.
Simulations and Verb Prediction
Psycholinguistic studies have used variations of a novel verb prediction task to examine how strongly children (or adults) have learned to associate the various syntactic and semantic properties of a typical MSV usage. In particular, the typical Desire verb usage combines desire semantics with an infinitival SC syntax, while the typical Belief verb usage combines belief semantics with a finite SC syntax. In investigating the salience of these associations in human experiments, participants are presented with an utterance containing a nonce verb with an SC (e.g., He gorped that his grandmother was in the bed), sometimes paired with a corresponding scene representing a mental event (e.g., a picture or a silent video depicting a thinking event with heightened saliency). An experimenter then asks each participant what the nonce verb (gorp) "means" - i.e., what existing English verb does it correspond to (see, e.g., Asplin, 2002; Papafragou et al., 2007). The expectation is that, e.g., if a participant has a well-entrenched Belief construction, then they should have a strong association between the finite-SC syntax and belief semantics, and hence should produce more Belief verbs as the meaning of a novel verb in a finite-SC utterance (and analogously for infinitival SCs and Desire verbs).
We perform simulations that are based on such psycholinguistic experiments. After training the model on some number of input frames, we then present it with a test frame in which the main verb (head predicate) is replaced by a nonce verb like gorp (a verb that doesn't occur in our lexicon). Analogously to the human experiments, in order to study the differences in the strength of association between the syntax and semantics of Desire and Belief verbs, we present the model with two types of test frames: (i) a typical desire test frame, with syntactic features corresponding to the infinitival SC syntax, optionally paired (depending on the experiment) with semantic features associated with a Desire verb in our lexicon; and (ii) a typical belief test frame, with syntactic features corresponding to the finite SC syntax, optionally paired with semantic features from a Belief verb. 3 Given a test frame F test , we use the clusters learned by the model to calculate the likelihood of each of the 31 verbs v as the response of the model indicating the meaning of the novel verb, as in:
$$P(v \mid F_{test}) = \sum_{k \in Clusters} P_{head}(v \mid k)\, P(k \mid F_{test}) \propto \sum_{k \in Clusters} P_{head}(v \mid k)\, P(F_{test} \mid k)\, P(k) \qquad (5)$$
where $P_{head}(v \mid k)$ is the probability of the head feature having the value $v$ in cluster $k$, calculated as in Eqn. (4); $P(F_{test} \mid k)$ is the probability of the test frame $F_{test}$ given cluster $k$, calculated as in Eqn. (3); and $P(k)$ is the prior probability of cluster $k$, calculated as explained in Section 2.2. What we really want to know is the likelihood of the model producing a verb from each of the semantic classes, rather than the likelihood of any particular verb. For each test frame, we calculate the likelihood of each semantic class by summing the likelihoods of the verbs in that class:
$$P(Class \mid F_{test}) = \sum_{v_c \in Class} P(v_c \mid F_{test})$$
where $v_c$ is one of the verbs in $Class$, and $Class$ ranges over the 5 classes in Table 2. We average the verb class likelihoods across the 100 simulations.
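To make the prediction step concrete, the computation in Eqn. (5) and the class-level sum above can be sketched as follows. This is only an illustrative reimplementation, not the authors' code; it assumes the learned cluster parameters are available as plain dictionaries, and all identifiers are our own.

```python
from collections import defaultdict

def verb_likelihoods(p_head, p_frame_given_k, p_k):
    """P(v | F_test), proportional to sum_k P_head(v|k) * P(F_test|k) * P(k).

    p_head:          dict cluster -> {verb: P_head(verb | cluster)}
    p_frame_given_k: dict cluster -> P(F_test | cluster) for the current test frame
    p_k:             dict cluster -> prior P(cluster)
    """
    scores = defaultdict(float)
    for k in p_k:
        weight = p_frame_given_k[k] * p_k[k]
        for verb, p in p_head[k].items():
            scores[verb] += p * weight
    total = sum(scores.values()) or 1.0
    return {v: s / total for v, s in scores.items()}  # normalised over the verbs

def class_likelihoods(verb_probs, verb_class):
    """Sum the verb likelihoods within each semantic class (Desire, Belief, ...)."""
    totals = defaultdict(float)
    for verb, p in verb_probs.items():
        totals[verb_class[verb]] += p
    return dict(totals)
```

Averaging the class likelihoods over the 100 simulations then simply amounts to repeating this computation with each run's learned clusters and taking the mean per class.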
Experimental Results
The novel verb prediction experiments described above have found differences in the performance of children across the two MSV classes (e.g., Asplin, 2002;Papafragou et al., 2007). For example, children performed better at predicting that a novel verb is a Desire verb in a typical desire context (infinitival-SC utterance paired with a desire scene), compared to their performance at identifying a novel verb as a Belief verb in a typical belief context (finite-SC utterance accompanied by a belief scene). In Section 4.1, we examine whether the model exhibits this behaviour in our verb class prediction task, thereby mimicking children's lag in facility with Belief verbs compared to Desire verbs.
Recall that some researchers attribute the above-mentioned developmental gap to the conceptual and pragmatic differences between the two MSV classes, whereas others suggest it is due to a difference in the syntactic requirements of the two classes. As noted in Section 3.1, we have tailored our representation of Desire and Belief verbs to not build in any differences in the ease or difficulty of acquiring their syntactic or semantic properties. Moreover, the possibility in the model for "misinterpretation" of mental content as action semantics (see Section 2.3) also applies equally to both types of verbs. Thus, any observed performance gap in the model reflects an interaction between its processing approach and the distributional properties of CDS. To better understand the role of the input, in Section 4.2 we examine how the distributional pattern of appearances of various semantic classes of verbs (including Belief, Desire, Communication, Perception and Action verbs) with the finite and infinitival SC constructions affects the learning of the two types of MSVs.
Verb Prediction Simulations
Here we compare the verb prediction responses of the participants in the experiments of Papafragou et al. (2007) (PCG), with those of the model when presented with a novel verb in a typical desire or belief test frame. (See Section 3.2 for how we construct these frames.) PCG report verb responses for the novel verb meaning as desire, belief, or action, where the latter category contains all other verb responses. Looking closely at the latter category in PCG, we find that most verbs are what we have termed (physical) Action verbs. We thus report the verb class likelihoods of the model for the Belief, Desire, and Action verbs in our lexicon. To compare the model's responses with those of the children and adults in PCG, we report the responses of the model to the test frames at two test points: after training the model with 500 input frames, resembling the "Child stage", and after presenting the model with 10,000 input frames, representing the "Adult stage". Figure 1(a) gives the percent verb types from PCG; Figure 1(b) presents the results of the model. Similarly to the children in PCG, the model at earlier stages of learning ("Child stage") is better at predicting Desire verbs for a desire test frame (.56) than it is at predicting Belief verbs for a belief test frame (.42) - cf. 59% Desire vs. 41% Belief prediction for PCG. In addition, as for both the children and adult participants of PCG, the model produces more Action verbs in a desire context than in a belief context at both stages. We note that although the adult participants of PCG perform well at identifying both Desire and Belief verbs, the model does not identify Belief verbs with the same accuracy as it does Desire verbs, even after processing 10,000 input frames (i.e., the "Adult stage"). In Section 4.2, we will see that this is due to the model forming strong associations between the Communication and Perception verbs and the SC-fin usage (the typical syntax of Belief verbs). These associations might be overly strong in our model because of the limited number of verbs and verb classes - an issue we will need to address in the future. We also note that, unlike the results of PCG, the model only rarely produces Desire verbs in a Belief context. This also may be due to our choice of Desire verbs, which have extremely few SC-fin usages overall.
To summarize, similarly to children (Asplin, 2002; Papafragou et al., 2007), the model performs better at identifying Desire verbs compared to Belief verbs. Moreover, we replicate the experimental results of PCG without encoding any conceptual or syntactic differences in difficulty between the two types of verbs. Specifically, because the representation of Desire and Belief classes in our experiments does not build in a bias due to the ease of processing Desire verbs, the differential results in the model must be due to the interaction of the different distributional patterns in CDS (see Table 2) and the processing approach of the model. Although this finding does not rule out the role of conceptual or syntactic differences between Desire and Belief verbs in delayed acquisition of the latter, it points to the distributional patterns as a potentially important and relevant factor worth further study in human experiments. We further investigate this hypothesis in the following section.
A Closer Look at the Role of Syntax
The goal of the experiments presented here is to understand how an interaction among the 5 different semantic classes of verbs, in terms of their distribution of appearance with the two types of SC constructions, coupled with the probabilistic "misinterpretation" of MSVs in the model, might play a role in the acquisition of Desire before Belief verbs. Because our focus is on the syntactic properties of the verbs, we present the model with partial test frames containing a novel verb and syntactic features that correspond to either a finite SC usage (the typical use of a Belief verb) or an infinitival SC usage (the typical use of a Desire verb). 5 We refer to the partial test frames as SC-fin or SC-inf test frames. We test the model periodically, over the course of 10,000 input frames, in order to examine the progression of the verb class likelihoods over time.

First, we examine the verb class prediction likelihoods, given an SC-inf test frame; see Figure 2(a). We can see that all through training, the likelihoods are mainly divided between Desire and Action verbs, with the Desire likelihood improving over time. Looking at Table 2, we note that the Desire and Action verbs have the highest frequency of occurrence with SC-inf (taking into account both the overall frequency of verbs, and their relative frequency with SC-inf), contributing to their strength of association with the infinitival-SC syntax. Note that the very high likelihood of Action verbs given an SC-inf test frame, especially at the earlier stages of training, cannot be solely due to their occurrence with SC-inf, since these verbs mostly occur with other syntactic patterns. Recall that the model incorporates a mechanism that simulates a higher probability of erroneously attending to the physical action (as opposed to the mental event) at earlier stages, simulating what has been observed in young children (see Section 2.3 for details). We believe that this mechanism is responsible for some of the Action verb responses of the model for an SC-inf test frame.
Next, we look at the pattern of verb class likelihoods given an SC-fin test frame; see Figure 2(b). We can see that the likelihoods here are divided across a larger number of classes - namely, Action, Communication, and Perception - compared with Figure 2(a) for the SC-inf test frame. Since Action verbs do not occur in our data with SC-fin (see Table 2), their likelihood here comes from the misinterpretation of mental events (accompanied with SC-fin) as action. The initially high likelihoods of Communication and Perception verbs result from their high frequency of occurrence with SC-fin. Because at this stage Belief verbs are not always correctly associated with SC-fin due to the high probability of misinterpreting them as action, we see a lower likelihood of predicting Belief verbs. Eventually, the model produces more Belief responses than any other verb class, since Beliefs have the highest frequency of occurrence with the finite-SC syntax.
To summarize, our results here confirm our hypothesis that the distributional properties of the verb classes with the finite and infinitival SC patterns, coupled with the learning mechanisms of the model, account for the observed developmental pattern of MSV acquisition in our model.
Discussion
We use a computational model of verb argument structure learning to shed light on the factors that might underlie the earlier acquisition of Desire verbs (e.g., wish and want) compared to Belief verbs (e.g., think and know). Although this developmental gap has been noted by many researchers, there are at least two competing theories as to what might be the important factors: differences in the conceptual/pragmatic requirements (e.g., Fodor, 1992; Bartsch and Wellman, 1995; Perner et al., 2003), or differences in the syntactic properties (e.g., de Villiers, 2005; Pascual et al., 2008). Using a computational model, we suggest other factors that may play a role in an explanation of the observed gap, and should be taken into account in experimental studies on human subjects.
First, we show that the model exhibits a similar pattern to children, in that it performs better at predicting Desire verbs compared to Belief verbs, given a novel verb paired with typical Desire or Belief syntax and semantics, respectively. This difference in performance suggests that the model forms a strong association between the desire semantics and the infinitival-SC syntax -one that is formed earlier and is stronger than the association it forms between the belief semantics and the finite-SC syntax. Importantly, the replication of this behaviour in the model does not require an explicit encoding of conceptual/pragmatic differences between Desire and Belief verbs, nor of a difference between the two types of SC syntax (finite and infinitival) with respect to their ease of acquisition. Instead, we find that what is responsible for the model's behaviour is the distribution of the semantic verb classes (Desire, Belief, Perception, Communication, and Action) with the finite and infinitival SC syntactic patterns in the input.
Children are also found to produce semantically-concrete verbs, such as Communication (e.g., say) and Perception verbs (e.g., see), with the finite SC before they produce (more abstract) Belief verbs with the same syntax. Psycholinguistic theories have different views on what this observation tells us about the delay in the acquisition of Belief verbs. For example, Bartsch and Wellman (1995) suggest that the earlier production of Communication verbs shows that even when children have learned the finite-SC syntax (and use it with more concrete verbs), they lack the required conceptual development to talk about the beliefs of others. Our results suggest a different take on these same findings: because Communication (and Perception) verbs also frequently appear with the finite-SC syntax in the input, the model learns a relatively strong association between each of these semantic classes and the finite SC. This in turn causes a delay in the formation of a sufficiently-strong association between the Belief verbs and that same syntax, compared with the association between the Desire verbs and the infinitival SC.
de Villiers (2005) suggests that associating Communication verbs with the finite-SC syntax has a facilitating effect on the acquisition of Belief verbs. In our model, we observe a competition between Communication and Belief verbs, in terms of their association with the finite-SC syntax. Further exploring the hypothesis of de Villiers (2005) will require expanding our model with enriched semantic representations that enable us to investigate the bootstrapping role of Communication verbs in the acquisition of Beliefs.
Figure 1: (a) Percent verb types produced by adult and child participants (human participants in Papafragou et al.) given a desire or belief utterance and scene. (b) The model's verb class likelihoods given a desire or belief test frame. Child stage is represented by 500 input frames, compared to the 10,000 input frames for Adult stage.
Figure 2: The model's verb class likelihoods for the individual semantic classes. (a) Model's likelihoods given an SC-inf test frame. (b) Model's likelihoods given an SC-fin test frame.
Table 1: An example input frame. The Syntactic features reflect an utterance such as He thinks Mom made pancakes: i.e., syntactic pattern 'arg1 verb arg2 verb arg3', 3 arguments, and finite SC. The Semantic features reflect a corresponding conceptualized belief event with a physical action described in the SC ({state, consider, cogitate, action}) whose 'arg1' participant ({experiencer, perceiver, considerer}) perceives the 'arg2' ({agent, animate}) acting on the 'arg3' ({theme, changed}).
Table 2: The list of our 31 verbs from the five semantic classes, along with their overall frequency, and their relative frequency with the finite SC (SC-fin) or the infinitival SC (SC-inf).
Note that, because the associations are probabilistic, a construction may be represented by more than one cluster.
Brown (1973); Suppes (1974); Kuczaj (1977); Bloom et al. (1974); Sachs (1983); Lieven et al. (2009).
Table 2 shows that, in our data, Belief verbs occur exclusively with finite clauses in an SC usage. Although Desire verbs occur in both SC-inf and SC-fin usages, the former outnumber the latter by almost 30 to 1 over all Desire verbs.
Based on results presented in Table 4, Page 149 in Papafragou et al. (2007), for the utterance and scene condition.
Verb prediction given an isolated utterance has been performed with adult participants (e.g., Gleitman et al., 2005; Papafragou et al., 2007). Here we simulate the settings of such experiments, but do not compare our results with the experimental data, since they have not included children.
Afra Alishahi and Suzanne Stevenson. 2008. A computational model of early argument structure acquisition. Cognitive Science, 32(5):789-834.
Kristen N. Asplin. 2002. Can complement frames help children learn the meaning of abstract verbs? Ph.D. thesis, UMass Amherst.
Libby Barak, Afsaneh Fazly, and Suzanne Stevenson. 2012. Modeling the acquisition of mental state verbs. NAACL-HLT 2012.
Karen Bartsch and Henry M. Wellman. 1995. Children talk about the mind. New York: Oxford Univ. Press.
Lois Bloom, Lois Hood, and Patsy Lightbown. 1974. Imitation in language development: If, when, and why. Cognitive Psychology, 6(3):380-420.
Lois Bloom, Matthew Rispoli, Barbara Gartner, and Jeremie Hafitz. 1989. Acquisition of complementation. Journal of Child Language, 16(01):101-120.
Lois Bloom, Jo Tackeff, and Margaret Lahey. 1984. Learning to in complement constructions. Journal of Child Language, 11(02):391-406.
Roger Brown. 1973. A first language: The early stages. Harvard Univ. Press.
Nancy Chih-Lin Chang. 2009. Constructing grammar: A computational model of the emergence of early constructions. Ph.D. thesis, University of California, Berkeley.
Jill G. de Villiers. 2005. Can language acquisition give children a point of view. In Why Language Matters for Theory of Mind, pages 199-232. Oxford Univ. Press.
David Dowty. 1991. Thematic Proto-Roles and Argument Selection. Language, 67(3):547-619.
Jerry A. Fodor. 1992. A theory of the child's theory of mind. Cognition, 44(3):283-296.
Lila R. Gleitman, Kimberly Cassidy, Rebecca Nappa, Anna Papafragou, and John C. Trueswell. 2005. Hard words. Language Learning and Development, 1(1):23-64.
Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of English verbs. Language Resources and Evaluation, 42(1):21-40.
Stan A. Kuczaj. 1977. The acquisition of regular and irregular past tense forms. Journal of Verbal Learning and Verbal Behavior, 16(5):589-600.
Elena Lieven, Dorothé Salomo, and Michael Tomasello. 2009. Two-year-old children's production of multiword utterances: A usage-based analysis. Cognitive Linguistics, 20(3):481-507.
B. MacWhinney. 2000. The CHILDES project: Tools for analyzing talk, volume 2. Psychology Press.
Anna Papafragou, Kimberly Cassidy, and Lila Gleitman. 2007. When we think about thinking: The acquisition of belief verbs. Cognition, 105(1):125-165.
Christopher Parisien and Suzanne Stevenson. 2011. Generalizing between form and meaning using learned verb classes. In Proceedings of the 33rd Annual Meeting of the Cognitive Science Society.
Belén Pascual, Gerardo Aguado, María Sotillo, and Jose C. Masdeu. 2008. Acquisition of mental state language in Spanish children: a longitudinal study of the relationship between the production of mental verbs and linguistic development. Developmental Science, 11(4):454-466.
Amy Perfors, Joshua B. Tenenbaum, and Elizabeth Wonnacott. 2010. Variability, negative evidence, and the acquisition of verb argument constructions. Journal of Child Language, 37(03):607-642.
Josef Perner. 1988. Developing semantics for theories of mind: From propositional attitudes to mental representation. Developing theories of mind, pages 141-172.
Josef Perner, Manuel Sprung, Petra Zauner, and Hubert Haider. 2003. Want That is understood well before Say That, Think That, and False Belief: A test of de Villiers's linguistic determinism on German-speaking children. Child Development, 74(1):179-188.
Jacqueline Sachs. 1983. Talking about the There and Then: The emergence of displaced reference in parent-child discourse. Children's Language, 4.
Marilyn Shatz, Henry M. Wellman, and Sharon Silber. 1983. The acquisition of mental verbs: A systematic investigation of the first reference to mental state. Cognition, 14(3):301-321.
Patrick Suppes. 1974. The semantics of children's language. American Psychologist, 29(2):103. |
5,957,384 | Evaluating a Morphological Analyser of Inuktitut | We evaluate the performance of an morphological analyser for Inuktitut across a mediumsized corpus, where it produces a useful analysis for two out of every three types. We then compare its segmentation to that of simpler approaches to morphology, and use these as a pre-processing step to a word alignment task. Our observations show that the richer approaches provide little as compared to simply finding the head, which is more in line with the particularities of the task. | [
11976514,
12227075,
14109636,
5133576,
10549264,
5284722,
15166874,
9882011,
7541406
] | Evaluating a Morphological Analyser of Inuktitut
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 3-8, 2012. 2012
Jeremy Nicholson jeremymn@csse.unimelb.edu.au
Trevor Cohn tcohn@dcs.shef.ac.uk
Timothy Baldwin
†Department of Computing and Information Systems
‡Department of Computer Science
The University of Melbourne
Australia
The University of Sheffield
UK
Evaluating a Morphological Analyser of Inuktitut
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Montréal, CanadaAssociation for Computational LinguisticsJune 3-8, 2012. 20122012
We evaluate the performance of a morphological analyser for Inuktitut across a medium-sized corpus, where it produces a useful analysis for two out of every three types. We then compare its segmentation to that of simpler approaches to morphology, and use these as a pre-processing step to a word alignment task. Our observations show that the richer approaches provide little as compared to simply finding the head, which is more in line with the particularities of the task.
Introduction
In this work, we evaluate a morphological analyser of Inuktitut, whose polysynthetic morphosyntax can cause particular problems for natural language processing; but our observations are also relevant to other languages with rich morphological systems. The existing NLP task for Inuktitut is that of word alignment (Martin et al., 2005), where Inuktitut tokens align to entire English clauses. While Langlais et al. (2005) theorises that a morphological analyser could aid in this task, we observed little to no improvement over a baseline model by making use of its segmentation. Nonetheless, morphological analysis does provide a great deal of information, but the task structure tends to disprefer its contribution.
Background
Inuktitut
Inuktitut is a macrolanguage of many more-or-less mutually intelligible dialects (Gordon, 2005). The morphosyntax of Inuktitut is particularly marked by a rich polysynthetic suffixing morphology, including incorporation of arguments into verbal tokens, as in natsiviniqtulauqsimavilli in (1). This phenomenon causes an individual token in Inuktitut to be approximately equivalent to an entire clause in English.
(1) natsiq- viniq- tuq- lauq- sima- vit -li
    seal- meat- eat- before- ever- INT-2s -but
    "But have you ever eaten seal meat before?"

Lowe (1996) analyses the morphology as a four-place relationship: one head morpheme, zero or more lexical morphemes, one or more grammatical morphemes, and an optional enclitic. The morphotactics causes, amongst other phenomena, the final consonant of a morpheme to assimilate the manner of the initial consonant of the following morpheme (as in -villi), or to be dropped (as in natsiviniq-). Consequently, morphemes are not readily accessible from the realised surface form, thereby motivating the use of a morphological analyser.
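As a rough illustration of why the morphemes cannot simply be read off the surface string, the following toy sketch concatenates the morphemes of example (1) while applying only the deletion process mentioned above (dropping a morpheme-final consonant before a consonant-initial morpheme). Everything here is a simplification of our own; the real morphotactics also involves assimilation (as in -villi), so the toy output does not match the attested form, which is precisely what motivates a full analyser.

```python
VOWELS = set("aiu")  # the three Inuktitut vowel qualities in romanised spelling

def join(morphemes):
    """Naive concatenation that only models the deletion case: a morpheme-final
    consonant is dropped before a consonant-initial morpheme. Assimilation
    (e.g. vit + li -> villi) is deliberately not modelled here."""
    out = ""
    for m in morphemes:
        if out and out[-1] not in VOWELS and m[0] not in VOWELS:
            out = out[:-1]  # e.g. natsiq + viniq -> natsiviniq
        out += m
    return out

print(join(["natsiq", "viniq", "tuq", "lauq", "sima", "vit", "li"]))
# -> natsivinitulausimavili, which already diverges from the attested
#    natsiviniqtulauqsimavilli: the real rules are context-sensitive,
#    so a hand-built analyser is needed to recover the morphemes.
```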
Morphological analysis
For many languages with a less rich morphology than Inuktitut, an inflectional lexicon is often adequate for morphological analysis (for example, CELEX for English (Burnage, 1990), Lefff for French (Sagot et al., 2006) or Adolphs (2008) for German). Another typical approach is to perform morphological analysis at the same time as POS tagging (as in Hajič and Hladká (1998) for the fusional morphology in Czech), as it is often the case that determining the part-of-speech and choosing the appropriate inflectional paradigm are closely linked.
For highly inflecting languages more generally, morphological analysis is often treated as a segmentand-normalise problem, amenable to analysis by weighted finite state transducer (wFST), for example, Creutz and Lagus (2002) for Finnish.
Resources
A morphological analyser for Inuktitut
The main resource that we are evaluating in this work is a morphological analyser of Inuktitut called Uqa·Ila·Ut. 1 It is a rule-based system based on regular morphological variations of about 3200 head, 350 lexical, and 1500 grammatical morphemes, with heuristics for ranking the various readings. The head and lexical morphemes are collated with glosses in both English and French.
Word alignment
The training corpus we use in our experiments is a sentence-aligned segment of the Nunavut Hansards (Martin et al., 2003). The corpus consists of about 340K sentences, which comprise about 4.0M English tokens, and 2.2M Inuktitut. The challenge of the morphology becomes apparent when we contrast these figures with the types: about 416K for Inuktitut, but only 27K for English. On average, there are only 5 token instances per Inuktitut type; some 338K types (81%) are singletons.
Inuktitut formed part of one of the shared tasks in the ACL 2005 workshop on building and using parallel texts (Martin et al., 2005); for this, the above corpus was simplistically tokenised, and used as unsupervised training data. 100 sentences from this corpus were phrasally aligned by Inuit annotators. These were then extended into word alignments, where phrasal alignments of one token in both the source and target were (generally) called sure alignments, and one-to-many or many-to-many mappings were extended to their cartesian product, and called probable. The test set was composed of 75 of these sentences (about 2K English tokens, 800 Inuktitut tokens, 293 gold-standard sure alignments, and 1679 probable), which we use to evaluate word alignments.
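Our reading of how the phrasal annotations were extended to word alignments can be sketched as follows; this is a reconstruction for illustration, not the actual annotation tooling, and it assumes the standard convention that the sure links form a subset of the probable links.

```python
from itertools import product

def expand_phrasal(phrasal_alignments):
    """phrasal_alignments: iterable of (english_positions, inuktitut_positions),
    each a list of token indices covered by one phrasal alignment.
    Returns (sure, probable) sets of (e, i) word-level links."""
    sure, probable = set(), set()
    for eng, inu in phrasal_alignments:
        links = set(product(eng, inu))
        if len(eng) == 1 and len(inu) == 1:
            sure |= links       # one token on each side -> sure
        else:
            probable |= links   # cartesian product of larger phrases -> probable
    probable |= sure            # standard convention: S is a subset of P
    return sure, probable
```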
Our treatment of the alignment problem is most similar to Schafer and Drábek (2005) who examine four systems: GIZA++ models (Och and Ney, 2000) for each source-target direction, another where the Inuktitut input has been syllabised, and a wFST model. They observe that aggregating these results through voting can create a very competitive system for Inuktitut word alignment.
Experimental approach
We used an out-of-the-box implementation of the Berkeley Aligner (DeNero and Klein, 2007), a competitive word alignment system, to construct an unsupervised alignment over the 75 test sentences, based on the larger training corpus. The default implementation of the system involves two jointly trained HMMs (one for each source-target direction) over five iterations, 2 with so-called competitive thresholding in the decoding step; these are more fully described in DeNero and Klein (2007) and Liang et al. (2006).
Our approach examines morphological preprocessing of the Inuktitut training and test sets, with the idea of leveraging the morphological information into a corpus which is more amenable to alignment. The raw corpus appears to be undersegmented, where data sparseness from the many singletons would prevent reliable alignments. Segmentation might aid in this process by making sublexical units with semantic overlap transparent to the alignment system, so that types appear to have a greater frequency through the data. Through this, we attempt to examine the hypothesis that one-to-one alignments between English and Inuktitut would hold with the right segmentation. On the other hand, oversegmentation (for example, down to the character level) can leave the resulting sub-lexical items semantically meaningless and cause spurious matches.
We consider five different ways of tackling Inuktitut morphology:
1. None: simply treat each Inuktitut token as a monolithic entity. This is our baseline approach.
2. Head: attempt to separate the head morpheme from the non-head periphery. Our hypothesis is that we will be able to align the clausal head more reliably, as it tends to correspond to a single English token more reliably than the other morphemes, which may not be realised in the same manner in English. Head morphs in Inuktitut correspond to the first one or two syllables of a token; we treated them uniformly as two syllables, as other values caused a substantial degradation in performance.
3. Syllabification: treat the text as if Inuktitut had isolating morphology, and transform each token into a series of single-syllable pseudomorphs. This effectively turns the task on its head, from a primarily one Inuktitut-to-many English token problem to that of one English-to-many Inuktitut. Despite the overzealousness of this approach (as most Inuktitut morphemes are polysyllabic, and consequently there will be many plausible but spurious matches between tokens that share a syllable but no semantics), Schafer and Drábek (2005) observed it to be quite competitive.
4. Morphs: segment each word into morphs, thereby treating the morphology problem as pure segmentation. This uses the top output of the morphological analyser as the oracle segmentation of each Inuktitut token.

5. Morphemes: as previous, except include the normalisation of each morph to a morpheme, as provided by the morphological analyser, as a sort of "lemmatisation" step. The major advantage over the morph approach is due to the regular morphophonemic effects in Inuktitut, which cause equivalent morphemes to have different surface realisations.
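The two simplest schemes above, Head and Syllabification, can be approximated with a few lines of code. The (C)V(V)(C) syllabifier below is an assumption of our own (it ignores consonant digraphs and assumes lowercased romanised input), and the head is taken to be the first two syllables, as in our experiments.

```python
import re

# (C)V(V)(C): optional onset consonant, one or two vowels, and a coda consonant
# only when it is not the onset of the following syllable.
SYLLABLE = re.compile(r"[^aiu]?[aiu]{1,2}(?:[^aiu](?![aiu]))?")

def syllabify(token):
    """Rough syllabification of a lowercased, romanised Inuktitut token."""
    return SYLLABLE.findall(token)

def head_split(token, head_syllables=2):
    """'Head' scheme: the first two syllables vs. the remaining periphery."""
    syls = syllabify(token)
    head, rest = "".join(syls[:head_syllables]), "".join(syls[head_syllables:])
    return [part for part in (head, rest) if part]

token = "natsiviniqtulauqsimavilli"
print(syllabify(token))   # ['nat', 'si', 'vi', 'niq', 'tu', 'lauq', 'si', 'ma', 'vil', 'li']
print(head_split(token))  # ['natsi', 'viniqtulauqsimavilli']
```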
Results
Analyser
In our analysis, the morphological analyser finds at least one reading for about 218K (= about 65%) of the Inuktitut types. Of the 120K types without readings, resource constraints account for about 11K. 3 Another 6K types caused difficulties due to punctuation, numerical characters or encoding issues, all of which could be handled through more sophisticated tokenisation. A more interesting cause of gaps for the analyser was typographical errors (e.g. *kiinaujaqtaaruasirnirmut for kiinaujaqtaarusiarnirmut "requests for proposals"). This was often due to consonant gemination, where it was either missing (e.g. nunavummut "in Nunavut" appeared in the corpus as *nunavumut) or added (e.g. *tamakkununnga instead of tamakkununga "at these ones here"). While one might expect these kinds of error to be rare, because Inuktitut has an orthography that closely reflects pronunciation, they instead are common, which means that the morphological analyser should probably accept incorrect gemination with a lower weighting.
More difficult to analyse directly is the impact of foreign words (particularly names) -these are typically subjectively transliterated based on Inuktitut morphophonology. Schafer and Drábek (2005) use these as motivation for an approach based on a wFST, but found few instances to analyse its accuracy. Finally, there are certainly missing roots, and possibly some missing affixes as well, for example pirru-"accident" (cf. pirruaqi-"to have an accident"). Finding these automatically remains as future work.
As for tokens, we briefly analysed the 768 tokens in the test set, of which 228 (30%) were not given a reading. Punctuation (typically commas and periods) account for 117 of these, and numbers another 7. Consonant gemination and foreign words cause gaps for at least 16 and 6 tokens, respectively (that we could readily identify).
Word Alignment
Table 1: Precision, recall, and alignment error rate for various approaches to morphology, with Schafer and Drábek (2005) for comparison.

Following Och and Ney (2000), we assess using alignment error rate (AER) and define precision with respect to the probable set, and recall with respect to the sure set. We present word alignment results of the various methods - contrasted with Schafer and Drábek (2005) - in Table 1. The striking result is in terms of statistical significance: according to χ², most of the various approaches to morphology fail to give a significantly (P < 0.05) different result to the baseline system of using entire tokens. For comparison, whereas our baseline system is significantly better than the baseline system of Schafer and Drábek (2005) - which demonstrates the value that the Berkeley Aligner provides by training in both source-target directions - their syllabised model is significantly superior in precision (P < 0.001), while their recall is still worse than our model (P < 0.05). Intuitively, this seems to indicate that their model is making fewer judgments, but actually the opposite is true. It seems that their model achieves better performance than ours because it leverages many candidate probable alignments into high quality aggregates using a most-likely heuristic on the mapping of Inuktitut syllables to English words, whereas the Berkeley Aligner culls the candidate set in joint training.
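For reference, the evaluation measures can be computed as in the sketch below, following the usual definitions from Och and Ney (2000): precision is measured against the probable set P, recall against the sure set S, and AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|). The function name and input format are illustrative only.

```python
def alignment_scores(predicted, sure, probable):
    """predicted, sure, probable: sets of (english_index, inuktitut_index) links,
    with sure assumed to be a subset of probable."""
    a, s, p = set(predicted), set(sure), set(probable)
    precision = len(a & p) / len(a) if a else 0.0
    recall = len(a & s) / len(s) if s else 0.0
    aer = 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s)) if (a or s) else 0.0
    return precision, recall, aer
```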
Of the approaches toward morphology that we consider, only the recall of the head-based system improves upon the baseline (P < 0.025). This squares with our intuitions, where segmenting the root morpheme from the larger token allows for more effective alignment of the semantically straightforward sure alignments.
The three systems that involve a finer segmentation over the tokens are equivalent in performance to the baseline system. The oversegmentation seemed to cause the alignment system to abandon an implicit preference for monotonicity of the order of tokens between the source and target (which holds pretty well for the baseline system over the test data, thanks partly to the fidelity-focused structure of a Hansard corpus): presumably because the aligner perceives lexical similarity between disparate tokens due to them sharing a sublexical unit. This relaxing of monotonicity is most apparent for punctuation, where a comma with a correct alignment in the baseline becomes incorrectly aligned to a different comma in the sentence for the segmented system.
Conclusion
The only improvement toward the task that we observed using morphological approaches is that of head segmentation, where using two syllables as a head-surrogate allowed us to capture more of the sure (one-to-one) alignments in the test set. One possible extension would be to take the head morpheme as given the analyser, rather than the somewhat arbitrary syllabic approach. For other languages with rich morphology, it may be similarly valuable to target substantives for segmentation to improve alignment. All in all, it appears that the lexical encoding of morphology of Inuktitut is so strikingly different than English, that the assumption of Inuktitut morphemes aligning to English words is untrue or at least unfindable within the current framework. Numerous common morphemes have no English equivalent, for example, -liaq-"to go to" which seems to act as a light verb, or -niq-, a (re-)nominaliser for abstract nominals. While the output of the morphological analyser could probably be used more effectively in other tasks, there are still important impacts in word alignment and machine translation, including leveraging a dictionary (which is based on morphemes, not tokens, and as such requires segmentation and normalisation) or considering grammatical forms for syntactic approaches.
http://inuktitutcomputing.ca/Uqailaut/ en/IMA.html
Better performance was observed with three iterations, but we preferred to maintain the default parameters of the system.
We only attempted to parse tokens of 30 characters or shorter; longer tokens tended to cause exceptions -this could presumably be improved with a more efficient analyser. While the number of analyses will continue to grow with the token length, which has implications in agglutinative languages, here there are only about 300 tokens of length greater than 40.
Peter Adolphs. 2008. Acquiring a poor man's inflectional lexicon for German. In Proc. of the 6th LREC, Marrakech, Morocco.
Gavin Burnage. 1990. CELEX: A guide for users. Technical report, University of Nijmegen.
Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proc. of the 6th Workshop of ACL SIGPHON, pages 21-30, Philadelphia, USA.
John DeNero and Dan Klein. 2007. Tailoring word alignments to syntactic machine translation. In Proc. of the 45th Annual Meeting of the ACL, pages 17-24, Prague, Czech Republic.
Raymund G. Gordon, Jr, editor. 2005. Ethnologue: Languages of the World, Fifteenth Edition. SIL International.
Jan Hajič and Barbora Hladká. 1998. Tagging inflective languages: Prediction of morphological categories for a rich, structured tagset. In Proc. of the 36th Annual Meeting of the ACL and 17th International Conference on COLING, pages 483-490, Montréal, Canada.
Philippe Langlais, Fabrizio Gotti, and Guihong Cao. 2005. NUKTI: English-Inuktitut word alignment system description. In Proc. of the ACL Workshop on Building and Using Parallel Texts, pages 75-78, Ann Arbor, USA.
Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proc. of the HLT Conference of the NAACL, pages 104-111, New York City, USA.
Ronald Lowe. 1996. Grammatical sketches: Inuktitut. In Jacques Maurais, editor, Quebec's Aboriginal Languages: History, Planning and Development, pages 204-232. Multilingual Matters.
Joel Martin, Howard Johnson, Benoit Farley, and Anna Maclachlan. 2003. Aligning and using an English-Inuktitut parallel corpus. In Proc. of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts, pages 115-118, Edmonton, Canada.
Joel Martin, Rada Mihalcea, and Ted Pedersen. 2005. Word alignment for languages with scarce resources. In Proc. of the ACL Workshop on Building and Using Parallel Texts, pages 65-74, Ann Arbor, USA.
Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proc. of the 38th Annual Meeting of the ACL, pages 440-447, Saarbrücken, Germany.
Benoît Sagot, Lionel Clément, Eric Villemonte de La Clergerie, and Pierre Boullier. 2006. The Lefff syntactic lexicon for French: Architecture, acquisition, use. In Proc. of the 5th LREC, pages 1348-1351, Genoa, Italy.
Charles Schafer and Elliott Drábek. 2005. Models for Inuktitut-English word alignment. In Proc. of the ACL Workshop on Building and Using Parallel Texts, pages 79-82, Ann Arbor, USA. |
7,709,942 | The Interplay Between Lexical and Syntactic Resources in Incremental Parsebanking | Automatic syntactic analysis of a corpus requires detailed lexical and morphological information that cannot always be harvested from traditional dictionaries. In building the INESS Norwegian treebank, it is often the case that necessary lexical information is missing in the morphology or lexicon. The approach used to build the treebank is incremental parsebanking; a corpus is parsed with an existing grammar, and the analyses are efficiently disambiguated by annotators. When the intended analysis is unavailable after parsing, the reason is often that necessary information is not available in the lexicon. INESS has therefore implemented a text preprocessing interface where annotators can enter unrecognized words before parsing. This may concern words that are unknown to the morphology and/or lexicon, and also words that are known, but for which important information is missing. When this information is added, either during text preprocessing or during disambiguation, the result is that after reparsing the intended analysis can be chosen and stored in the treebank. The lexical information added to the lexicon in this way may be of great interest both to lexicographers and to other language technology efforts, and the enriched lexical resource being developed will be made available at the end of the project. | [
6943618,
8563463
] | The Interplay Between Lexical and Syntactic Resources in Incremental Parsebanking
Victoria Rosén
University of Bergen
Petter Haugereid petter.haugereid@lle.uib.no
University of Bergen
Martha Thunes martha.thunes@lle.uib.no
University of Bergen
Gyri Smørdal Losnegaard gyri.losnegaard@lle.uib.no
University of Bergen
Helge Dyvik helge.dyvik@lle.uib.no
University of Bergen
Paul Meurer paul.meurer@uni.no
University of Bergen
The Interplay Between Lexical and Syntactic Resources in Incremental Parsebanking
Uni Research Computing
Bergen, Norway victoria@uib.no
treebanking, INESS, lexicon
Automatic syntactic analysis of a corpus requires detailed lexical and morphological information that cannot always be harvested from traditional dictionaries. In building the INESS Norwegian treebank, it is often the case that necessary lexical information is missing in the morphology or lexicon. The approach used to build the treebank is incremental parsebanking; a corpus is parsed with an existing grammar, and the analyses are efficiently disambiguated by annotators. When the intended analysis is unavailable after parsing, the reason is often that necessary information is not available in the lexicon. INESS has therefore implemented a text preprocessing interface where annotators can enter unrecognized words before parsing. This may concern words that are unknown to the morphology and/or lexicon, and also words that are known, but for which important information is missing. When this information is added, either during text preprocessing or during disambiguation, the result is that after reparsing the intended analysis can be chosen and stored in the treebank. The lexical information added to the lexicon in this way may be of great interest both to lexicographers and to other language technology efforts, and the enriched lexical resource being developed will be made available at the end of the project.
Introduction
Incremental parsebanking presents a unique opportunity for enrichment of the lexicon. It provides a useful context for supplementing the information provided in lexical resources derived from traditional dictionaries, thus helping to overcome their limitations. The INESS project (Infrastructure for the Exploration of Syntax and Semantics) is developing a large parsebank for Norwegian. 1 In the process, an existing grammar and lexicon for Norwegian are further developed in tandem. Since the grammar requires quite detailed morphosyntactic information in order to provide an analysis, the lexicon must be syntactically well informed; feedback from the parsebanking process results in a considerable enrichment of the original lexical resource. In the following, we will first discuss how the syntax and lexicon mutually inform each other in our approach. In section 3. the interface for preprocessing texts will be described. The treatment of unknown words will be illustrated in section 4. In section 5. the incremental parsebanking approach in INESS is briefly described. Finally, in section 6., we will present various kinds of missing or incorrect information in lexical resources and show how this may be remedied.
The interplay between syntax and lexicon
NorGram is a hand-written computational grammar for Norwegian (Dyvik, 2000;Butt et al., 2002). It is written in the Lexical Functional Grammar (LFG) framework (Bresnan, 2001;Dalrymple, 2001). The Xerox Linguistics Environment (XLE) is used for grammar development and parsing (Maxwell and Kaplan, 1993). NorGram has been used in several language technology projects, and its main lexicon has been the NorKompLeks electronic lexicon (Nordgård, 2000). This lexicon is an adapted version of Bokmålsordboka, a dictionary of Norwegian Bokmål (Landrø and Wangensteen, 1993), and Nynorskordboka, a dictionary of Norwegian Nynorsk (Hovdenak et al., 1986). NorGram provides deep syntactic analysis on two levels: constituent structure (c-structure) and functional structure (f-structure). The c-structure is a phrase structure tree showing the linear and hierarchical organization of the phrasal constituents in the sentence. The f-structure is an attributevalue matrix showing grammatical functions and features. In LFG, the syntax and the lexicon have an important interaction with each other especially in the treatment of predicate-argument structure. The lexical entry for each verb must specify which arguments a verb requires. If the sentence lacks syntactic arguments which the verb specifies, or if the sentence contains syntactic arguments which the verb does not specify, no grammatical analysis will be produced. For example, in a transitive sentence, the lexical entry for the verb must specify that the verb can take an object.
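The dependency of parsing on subcategorization information can be illustrated with a small sketch. This is not NorGram's actual XLE/LFG lexicon format; the verbs, frames, and function labels below are hypothetical, and the point is only that an analysis is licensed when the argument functions found in the sentence match a frame listed for the verb.

```python
# Hypothetical miniature verb lexicon: each verb lists its possible
# subcategorization frames as sets of grammatical functions.
VERB_FRAMES = {
    "sove": [{"SUBJ"}],                    # 'sleep': intransitive
    "lese": [{"SUBJ"}, {"SUBJ", "OBJ"}],   # 'read': optionally transitive
    "gi":   [{"SUBJ", "OBJ", "OBJ-TH"}],   # 'give': ditransitive
}

def licensed(verb, functions_in_sentence):
    """True if the argument functions found in the sentence exactly match
    one of the verb's subcategorization frames."""
    return set(functions_in_sentence) in VERB_FRAMES.get(verb, [])

print(licensed("lese", ["SUBJ", "OBJ"]))   # True: a transitive frame is available
print(licensed("sove", ["SUBJ", "OBJ"]))   # False: no transitive frame -> no parse
```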
Text preprocessing
An important source of texts for the INESS Norwegian treebank is a large repository of OCR-read fiction texts supplied by the National Library of Norway. Because OCR software makes certain errors, such as misinterpreting characters, omitting text, or inserting unwanted material, the documents must be preprocessed before syntactic parsing. Moreover, when a corpus is parsed, there will always be words that are unknown to the morphology and/or the lexicon. INESS has developed an intelligent preprocessing interface which facilitates efficient text cleanup and the treatment of unknown word forms (Rosén et al., 2012b). Text cleanup involves for example removing superfluous material that does not belong to the text, joining parts of sentences that have erroneously been split, and adding punctuation where it is missing. The interface offers practical editing functions for these operations. After text cleanup, the annotators process word forms that have not been automatically recognized. The preprocessing interface presents a list of unknown words. Some of these are the result of OCR errors, and some are simply typos. Other frequent types of unrecognized word forms are productive compounds, multiword expressions, named entities, foreign words, neologisms, interjections, dialect words, and systematic, or intended, misspellings. It is important that the annotator observes the difference between typos, or unintentional misspellings, which must be corrected along with OCR errors, and nonstandard word forms, which are not to be changed. We distinguish between three main classes of nonstandard word forms. These are systematic misspellings, archaic word forms, and nonstandard forms that can be ascribed to a particular dialect, technolect, sociolect, or other language variety. Systematic misspellings are, typically, not just incidental typos, but forms produced regularly by an author. During preprocessing unrecognized forms of these types are left unchanged because correcting them would be to interfere with actual language use. The important common denominator of all types of unrecognized words which are not to be corrected is that while these forms fall outside standard dictionaries, it is a prerequisite for successful parsing that they are included in our lexicon. Since one unknown word may result in the parser not returning an analysis, it is important to recognize and properly treat as many such words as possible.
The recognition of unknown words during preprocessing
The preprocessing interface allows the annotators to add unrecognized words to the lexicon so that they will not cause parsing failures. Noninflecting words like named entities and interjections are entered as simple paradigms, with a given category assigned to each entry. Inflecting words belonging to the open lexical classes are entered as complex paradigms, and the annotator must specify an inflectional pattern for each new entry. Verbs must also be assigned subcategorization frames necessary for parsing. When a word is unrecognized because of nonstandard spelling, the annotator must consider whether the spelling deviation concerns the stem or an inflection. Variant stems are entered as paradigms associated with an existing standard paradigm, and variant inflectional forms are registered as deviations of individual, standard inflectional forms.

In order to add unrecognized words to the lexicon in an efficient way, the annotator makes use of a set of predefined options in the preprocessing interface. Each option corresponds to a certain type of entry. Most of these types can be entered by a single mouse click, while the recording of paradigms and variant inflectional forms requires a few more steps. Words belonging to the open lexical classes (nouns, verbs, adjectives, and adverbs) are the most frequent type of words added through preprocessing (39.2% of all entries). These are given as the category paradigm in table 1, and within the class of paradigms, more than half of them were compounds (6,595 entries). Table 2 lists some of the most frequent compound types added through preprocessing in this study.

Prior to preprocessing, an automatic compound analyzer is run on the text in order to identify compounds that are not already in the lexicon. The analyzer checks for a certain set of patterns, and compounds that are not recognized are presented to the annotator as unrecognized words. The screenshot in figure 1 illustrates how the unknown compound gulblank 'yellow-shiny' is added to the lexicon by the annotator. As the base form is entered, the annotator marks the internal structure of the compound by separating the first and second element by the character +. Moreover, if the lexical class of the first element is a category other than noun, this category is entered in parentheses (in this case adjective). When the base form has been typed in, the annotator must specify an inflectional paradigm for the new lemma, either by typing in the base form of an existing lemma with matching inflection (in this case the adjective blank), or by selecting one from a set of potentially matching lemmas proposed by the interface. An inflectional paradigm must be specified for all paradigm entries, whether they are compounds or not.

The motivation for analyzing unrecognized compounds in this way (by registering the part of speech also of the first part) is to be able to discover frequent compound elements and compound types that are not already accounted for by the compound analyzer. The noun+noun type is very frequent, and is normally handled by the analyzer. The example of this type in table 2, appelsin-te, was not recognized because the second element is a two-letter word. Allowing compound constituents of three letters or less is generally considered a risk in automatic compound analysis; if such short constituents are allowed in general, practically any typo or misspelled word could be erroneously analyzed as a compound.
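The behaviour of the compound analyzer, including the length restriction just mentioned and the pattern constraints discussed in the next paragraph, can be approximated by a sketch like the following. The pattern table, the minimum constituent length, and the toy lexicon are simplifications of our own, not the actual INESS implementation.

```python
ALLOWED_PATTERNS = {("noun", "noun"), ("adj", "noun"), ("verb", "noun"),
                    ("noun", "adj"), ("adj", "verb")}   # simplified
MIN_PART_LEN = 4   # constituents of three letters or less are considered too risky

def analyze_compound(word, lexicon_pos):
    """Try to split an unknown word into two known constituents.

    lexicon_pos: dict mapping known word forms to a set of parts of speech
    (a stand-in for the real lexicon lookup). Returns candidate
    (first, second, pattern) analyses."""
    candidates = []
    for i in range(MIN_PART_LEN, len(word) - MIN_PART_LEN + 1):
        first, second = word[:i], word[i:]
        for pos1 in lexicon_pos.get(first, set()):
            for pos2 in lexicon_pos.get(second, set()):
                if (pos1, pos2) in ALLOWED_PATTERNS:
                    candidates.append((first, second, f"{pos1}+{pos2}"))
    return candidates

toy_lexicon = {"vann": {"noun"}, "flaske": {"noun"}, "appelsin": {"noun"}, "te": {"noun"}}
print(analyze_compound("vannflaske", toy_lexicon))
# -> [('vann', 'flaske', 'noun+noun')]
print(analyze_compound("appelsinte", toy_lexicon))
# -> [] : the split appelsin|te is never tried because 'te' is shorter than the
#         minimum constituent length, mirroring why it went unrecognized
```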
Of the types included in table 2, the following are currently handled by the compound analyzer: noun+noun, noun+adj, adj+noun, adj+verb and verb+noun. Some of these combinations have certain constraints imposed on them. For noun+adj compounds, only a few nouns that occur frequently as the first element in compounds are allowed; examples are døds 'death', kjempe 'giant', drit 'shit', and rekord 'record'. For adj+verb compounds, the verb may only be a past participle. These constraints explain why the compounds avisgrå 'newspaper-gray' and blekpudre 'pale-powder' were not recognized. The types adj+adj, prep+noun, and prep+verb are currently not allowed at all. Studying the individual examples in the different categories will help to determine if new types should be added to the compound analyzer, or whether some particularly frequent elements should be allowed. Table 1 also shows that the second most frequent type of unrecognized words in this study is named entities. Among these, last names, place names, and organization or brand names are very common. The next category listed in table 1 is unclassified. This is a residual class used for unrecognized words that fall outside the set of predefined options available in the preprocessing interface, typically because they have some kind of morphological or morphosyntactic property which does not match any of the available categories. This is the case for certain word forms involving clitics, like n'Oscar, which is a contraction of the pronoun han 'he' and the name Oscar. Such forms are entered as unclassified, pending further treatment by the grammar developer. Another type of word that must be entered as unclassified is compounds which can be regarded as products of syntactic processes, such as gamlebilen 'the old car'. The first element, gamle, is an adjective inflected in the singular definite form, and the second element, bilen, is a noun, also with a singular definite inflection. This compound does not have a normal inflectional paradigm; it will not occur in the plural, or in the indefinite, because it is a contracted form of a syntactic phrase den gamle bilen 'the old car'. Foreign words are often used in Norwegian sentences.
Sometimes they are spontaneous uses of a word from another language, most often English. Other times they are well-established in Norwegian, but have not yet made their way into standard dictionaries. An example of a spontaneously used English word is shown in (1).
Example (2) contains both the English loan air conditioning and the named entity American Bar. Missing lexical entries like this are easily added to the lexicon when they are identified in the preprocessing step. In this case, American Bar was entered as a named entity of the category organization name, and alien and air conditioning were entered as loans. A particularly productive part of speech is interjections; especially writers of fiction are very creative in the way in which they write interjections. Bokmålsordboka has an entry for the interjection hysj 'hush' which also includes the alternative spelling hyss. There are several occurrences of this interjection in the fiction texts of the INESS treebank, and many of them do not have either of the two standard spellings. The following eight variants of hysj/hyss have been registered until now: hysjjj, hyssj, hysssj, hyssssjjj, hysssssj, hysssssjjj, hysst, hyyyysssjjj. This shows that the spelling of this interjection is unpredictable and to a large extent determined by the way in which an author chooses to express it in a given context. For parsebanking purposes, the challenge is that each time a new spelling is encountered, it is displayed in the preprocessing interface as an unknown word. The INESS interface makes it possible for annotators to add new variant spellings to a single lemma in the lexicon. In this way each extracted variant can eventually be recognized during parsing. It can often be justified to add misspellings to the lexicon and/or morphological analyzer. An author can for instance use a creative spelling to imitate a certain dialect or pronunciation. An example from the INESS parsebank is mordern 'the murderer', instead of the standard form morderen. The elided vowel is imitative of a certain accent. The annotator enters the form as a new inflectional variant by indicating in the preprocessing interface that it shares the same lemma and the same inflectional features as morderen. Thus, as the annotator processes the unrecognized words in a document, new lexicon information is compiled, and before the text is syntactically parsed, this new information is added to the lexical resources exploited by the parser.
Incremental parsebanking in INESS
In our parsebanking approach, the output of the parser is semi-automatically disambiguated by annotators. Thus annotators never manually edit an analysis, but they verify whether the analysis produced by the parser is correct, or they choose the correct analysis if several possible analyses are produced. The parsebanking system automatically detects discriminants which help the annotators to efficiently distinguish between possibly many proposed analyses (Rosén et al., 2012a; Rosén et al., 2009; Rosén et al., 2007). The advantage of our parsebanking approach is that the grammar will always be fully compatible with the treebank. Thus, treebanks constructed in this way achieve a very high level of consistency. Also, the approach is scalable by using stochastic disambiguation to parse new texts fully automatically. However, only sentences that are grammatical according to the current grammar will be fully analyzed, while others may receive a fragment parse or may fail to parse. To the extent that coverage of the grammar needs to be improved, the approach is therefore an incremental one. Annotators signal shortcomings which are followed up by extensions or other changes in the grammar and lexicon, after which the treebank can be reparsed (with cached discriminants to speed up the process). In INESS we have carried out a detailed study of a small subcorpus in order to find out what the main causes of failed analyses are. We found that 29% of the failed analyses were caused by syntactic problems, while 71% were caused by lexical problems. Of the lexical problems, 41% were caused by missing multiword expressions, whereas 31% were caused by incorrect lexical categories (Losnegaard et al., 2012). This shows that correct lexical information is essential for successful syntactic analysis.
Known words with missing or incorrect information
Even though the NorKompLeks lexicon is a rich resource, in parsing we still often find that it lacks lexical information that we need in order to analyze even quite common words. We need, inter alia, lexical category, inflection, subcategorization, countability, compound structure, and multiword expressions. Table 3 gives an overview of the types of lexicon updates made by three annotators while doing parsebanking over a period of about five months.

The NorKompLeks lexicon added subcategorization frames for the verbs in Bokmålsordboka. There are, however, many quite common frames that are not included. As shown by table 3, the most frequent type of lexicon update in this study concerns subcategorization frames for verbs, and new verb frames involving multiword expressions (MWEs) account for almost two thirds of these cases. The effect of updating a verb entry with a new subcategorization frame can be illustrated by example (3) from the INESS treebank, which involves the particle verb flate ut 'flatten out'. The Norwegian word form flate is categorially ambiguous: it can be either a verb or a noun. Initially, the lexicon entry for the verb flate 'flatten' contained no subcategorization frame covering the MWE flate ut; the only verbal frame available required a reflexive element, which does not occur in this sentence. Therefore, the only analysis found by the parser for (3) was that of a noun phrase, where the word form flater was analyzed as the plural indefinite of the noun flate 'surface' functioning as an apposition to the noun fjellet 'the mountain'. Figure 2 shows the c- and f-structures for this analysis of (3). After the annotator added the missing subcategorization frame to the lexicon, the sentence was reparsed. As the c-structure in figure 3 shows, flater is now analyzed as a present tense verb (with the lexical category Vfin), and ut as a particle (PRT).

Adding this argument frame involves making an addition to a lexical entry, in which an existing template specifying the features of an intransitive verb with a selected particle is called. Figure 4 shows the lexical entry of flate with this addition in the second line. The notation {...|...} is a disjunction specifying alternative readings. The template V-SUBJ-PRT is defined as in figure 5.

Figure 4: Lexical entry after the addition of the intransitive frame with particle

Figure 5: The template for intransitive particle verbs

The first line of the template builds the predicate name 'flate*ut' by concatenation. The following disjunction in the template specifies three alternatives: regular active (Fjellet flater ut), impersonal passive (Det flates ut 'There is flattening out'), and active presentative (Det flater ut en fugleflokk 'There is a flock of birds flattening out').

As table 3 shows, several other types of lexicon updates for verbs are also relatively frequent in this study. We found that new intransitive verb readings were needed in 46 cases, whereas 18 verb frame updates involved adding transitive readings. This is interesting because it may indicate that with respect to verb subcategorization frames, the information available from standard dictionaries does not capture the extent to which verbs with variable argument frames are used intransitively. The sentence in (4) was initially returned by the parser with no analysis, and parsing had failed because the lexicon contained no intransitive reading for avslå 'decline'. After this reading was added, the sentence was successfully reparsed.
(4) Men bestefar avslo.
    but grandfather declined
    'But grandfather declined.'
Adding an inquit reading is another frequent type of update in lexical entries for verbs (30 instances). Inquit verbs are verbs of saying and related verbs that may occur in this function, and in the analyzed texts a large variety of verbs are used in inquit clauses. This is not surprising, since the text material is fiction, containing numerous passages of dialogue, as well as inner monologue. The addition of an inquit reading in the lexical entry for a verb involves adding a subcategorization frame specifying that the verb takes a sentence complement as one of its arguments as well as a feature allowing it to occur in the syntactic position typical of inquit verbs. The sentence in example (5) was initially given a partial analysis by the parser. That is, the word sequences Hva mener du med det and stotret hun were respectively identified as sentence units, but no complete analysis was found, because the lexicon entry for the verb stotre 'stammer' contained only an intransitive reading. An inquit reading was added to the entry, and after reparse the sentence Hva mener du med det? was successfully analyzed as a sentential complement to the inquit verb.

Table 3 also shows that lexicon updates involving new readings of adverbs constitute another frequent type in this study (47 occurrences). This illustrates that the lexical category of a given word must often be more fine-grained than what is provided by the lexicon. In the case of adverbs, there is only one large class with the part of speech ADV in the original lexicon. However, different types of adverbs differ considerably in their syntactic distribution, and it is therefore necessary to classify them into subcategories in order to account for this distribution. Our parsing lexicon distinguishes between 22 categories of adverbs based on syntactic position, usually named according to their typical semantic contribution. Thus, between the finite verb and the object there are positions for ADVatt ('attitude adverbs' like dessverre 'unfortunately'), ADVprt ('particle adverbs' like vel 'I suppose'), ADVcmt ('commitment adverbs' like egentlig 'actually'), ADVneg ('negation adverbs' like ikke 'not'), and others, where there are ordering constraints. Thus, particle adverbs occur before commitment adverbs, which occur before negation adverbs, cf. example (6). Different classes of degree adverbs are also distinguished, for example ADVdeg ('degree adverbs' like ganske 'quite', which modify adjectives) and ADVdegloc ('locational degree adverbs' like langt 'far', modifying locative adjuncts), cf. example (7). The category ADV is used for the large class of adverbs which only occur in the VP domain, mostly manner adverbs. When annotators find that the analysis provided by the parser is inadequate, the situation can often be remedied by changing the part of speech from the default category ADV used in NorKompLeks to one of the more fine-grained adverb categories.

With respect to lexicon updates concerning nouns and adjectives, table 3 shows that the most frequent type in this study involves correcting morphological properties of nouns concerning the distinction between mass terms and countables. Further, the data indicate that also for nouns and adjectives there is a considerable need for adding subcategorization frames involving MWEs.
Multiword expressions (MWEs) present a great challenge for parsing because they exceed word boundaries, have unpredictable morphosyntactic properties and are sometimes discontiguous (i.e., other words and constituents may come between their component words in a sentence). Treating them as simplex words will thus often result in incorrect or missing analyses. The most immediate problem with MWEs simply concerns knowing about them (Losnegaard et al., 2012), and although there are a considerable number of MWE entries in NorKompLeks (more than 2500 prepositional verbs, 1800 particle verbs and almost 400 fixed expressions), these are not sufficient to account for all of the MWEs in our corpus. For instance, both verbs, nouns and adjectives may take prepositional arguments, while NorKompLeks only provides this kind of subcategorization frame for verbs. Such frames are added to the lexicon by augmenting the relevant predicates with a preposition or an adverb. Examples of such additions are legge ut 'pay', mening med 'point of', and opptatt med 'concerned with ' (examples 8, 9 and 10), where new predicates have been added to the entries for the verb legge, the noun mening, and the adjective opptatt, respectively. Other types of MWE frames that have been added to the lexicon during parsebanking are fixed expressions and verbal idioms. Fixed MWEs are invariable expressions that do not have a normal syntactic buildup. It is thus the expression as a whole, and not the individual words, that must be assigned a lexical category. An example of a fixed MWE is på tå hev 'on one's toes'. Since på tå hev is a completely invariable prepositional phrase, it is added to the lexicon as a word-with-spaces entry, i.e. it appears in the c-structure as one node, as if it were a single word. Because of their syntactic properties, such prepositional phrases with predicative function are classified as adjectives in NorGram.
The addition of words-with-spaces to the lexicon during parsebanking results in a coherently classified inventory of fixed MWEs. In the present study, there were twelve fixed MWEs added; ten updates involved new adverbs, and the other two produced a new adjective entry and a new preposition entry. While adding lexical entries for hitherto unanalyzed MWEs is an important factor for increasing parsing coverage, there are other and perhaps more general problems associated with the automatic analysis of multiword units. Conventional dictionaries usually provide limited information about MWEs, and their treatment is sometimes incomplete or incoherent. One problem is that the expressions are often not given lexical entries, but only used as examples in the definitions of single-word entries. This information is difficult to extract when constructing an electronic lexicon. For example, in Bokmålsordboka, på tå hev occurs as an example both under the entry for tå 'toe' and under the entry for heve '(to) raise', but it does not occur as an entry of its own. Similarly, the prepositional verb tenke på 'think about' is listed as one of two senses under the verb tenke, but is not explicitly marked as an idiomatic construction. The same MWE is also found under the entries for andakt 'piety', annen 'other', fordel 'advantage', the lexicalized compound giftetanker 'marriage plans', and several other semantically unrelated entries. Perhaps the biggest challenge in parsing MWEs is posed by MWEs with internal syntactic structure, such as verbal idioms. These are variable in the sense that they may undergo inflection and syntactic transformations, but idiosyncratic because they may undergo some, but not all kinds of transformations. Although we have identified them as MWEs, it is therefore not straightforward how they should be treated in the lexicon. An example of a verbal idiom with metaphorical meaning and a regular syntax is feie under teppet 'sweep under the carpet'; the verb feie requires both a subject and an object in addition to the obligatory PP under teppet 'under carpet.the'. However, many MWEs are irregular: the idiom komme (noen) i møte 'come (someone) in meeting' ('approach') is idiosyncratic because the verb komme 'come' is normally intransitive. Other MWEs form 'families' of apparently similar surface structures.
(11) ta/ha/få … tak/grep (i/på)
     take/have/get … hold/grip (in/on)
     'take a hold of/get a hold of/get a (good) grip on, etc.'

Although these constructions seem similar, a closer investigation shows that they have several different (although often related) meanings and that they also differ in their possible syntactic variations. We cannot expect to find this level of detailed linguistic description of MWEs in a regular dictionary, and the treatment of MWEs in computational dictionaries varies greatly depending on the type of dictionary, the language in question, and the theoretical framework used. In this respect, parsebanking provides a unique method for detecting problematic constructions such as MWEs, and for acquiring more knowledge about them.
Conclusion
Correct lexical information is essential for successful syntactic analysis, but lexical resources derived from dictionaries lack much necessary information, because they are typically not tested in parsing. In our experience, parsebanking is therefore a useful and necessary context not only for grammar development, but also for lexicon development. The INESS project is building up a richer lexical resource for Norwegian and will continue to do so during the remainder of the project. The resulting reusable lexical resource will be made available upon completion of the INESS project in 2016.
Acknowledgments
The work reported on here has been supported by the Research Council of Norway and the University of Bergen.
Figure 1: Interface for adding unknown words during preprocessing

Figure 2: Analysis offered before lexical update

Figure 3: Analysis offered after lexical update

(3) Fjellet flater ut.
    mountain.the flatten out
    'The mountain flattens out.'
(6) Jeg har vel egentlig ikke noe å legge til.
    I have I-suppose actually not something to lay to
    'I actually have nothing to add, I suppose.'

(7) ganske langt fra vannet
    quite far from lake.the
    'quite far from the lake'
Table 2: Overview of some of the most common compound types added through preprocessing.
(1) «Jeg skulle ikke vaere noen alien for deg,» sa Auguste.
    I should not be some alien for you said Auguste
    '"I'm not really an alien for you," said Auguste.'

(2) Han gikk inn på American Bar, som reklamerte med air conditioning.
    he went in on American Bar which advertised with air conditioning
    'He entered the American Bar, which boasted air conditioning.'
Table 3: Overview of lexicon updates made by annotators.
(8) Jeg måtte legge ut for deg, for jeg regnet med at du ville gi ham tips, ikke sant?
    I must.pret lay out for you, since I counted with that you would give him tip.pl, not true?
    'I had to pay for you since I reckoned you wanted to tip him, right?'

(9) Hva var da meningen med å sette meg i slik forlegenhet?
    What was then meaning.def with to put me in such embarrassment?
    'What was the point of embarrassing me like that?'

(10) Hun ble veldig opptatt med å børste kakesmuler av kåpa si.
     she became very busy with to brush cake crumbs off coat.the refl
     'She became very concerned with brushing cake crumbs off of her coat.'
http://clarino.uib.no/iness
Lexical-Functional Syntax. Joan Bresnan, Blackwell. Bresnan, Joan. (2001). Lexical-Functional Syntax. Black- well, Malden, MA.
The Parallel Grammar project. Miriam Butt, Dyvik, Helge, Tracy King, Holloway, Hiroshi Masuichi, Christian Rohrer, Proceedings of the Workshop on Grammar Engineering and Evaluation at the 19th International Conference on Computational Linguistics (COLING). Carroll, John, Oostdijk, Nelleke, and Sutcliffe, Richardthe Workshop on Grammar Engineering and Evaluation at the 19th International Conference on Computational Linguistics (COLING)Taipei, Taiwan; Stroudsburg, PA, USAAssociation for Computational LinguisticsButt, Miriam, Dyvik, Helge, King, Tracy Holloway, Ma- suichi, Hiroshi, and Rohrer, Christian. (2002). The Parallel Grammar project. In Carroll, John, Oostdijk, Nelleke, and Sutcliffe, Richard, editors, Proceedings of the Workshop on Grammar Engineering and Evalua- tion at the 19th International Conference on Computa- tional Linguistics (COLING), Taipei, Taiwan, pages 1- 7, Stroudsburg, PA, USA. Association for Computational Linguistics.
Lexical Functional Grammar. Mary Dalrymple, Syntax and Semantics. 34Academic PressDalrymple, Mary. (2001). Lexical Functional Grammar, volume 34 of Syntax and Semantics. Academic Press, San Diego, CA.
Nødvendige noder i norsk: Grunntrekk i en leksikalsk-funksjonell beskrivelse av norsk syntaks [Necessary nodes in Norwegian: Basic properties of a lexical-functional description of Norwegian syntax. Helge Dyvik, Dyvik, Helge. (2000). Nødvendige noder i norsk: Grun- ntrekk i en leksikalsk-funksjonell beskrivelse av norsk syntaks [Necessary nodes in Norwegian: Basic properties of a lexical-functional description of Norwegian syntax].
In Andersen, Øivin, Kjersti Fløttum, Torodd Kinn, Menneske, språk og felleskap. Novus forlag. In Andersen, Øivin, Fløttum, Kjersti, and Kinn, Torodd, editors, Menneske, språk og felleskap. Novus forlag.
Marit Hovdenak, Killingbergtrø, Laurits, Arne Lauvhjell, Nordlie, Sigurd, Magne Rommetveit, Dagfinn Worren, Nynorskordboka : definisjonsog rettskrivingsordbok. Det norske samlaget. OsloHovdenak, Marit, Killingbergtrø, Laurits, Lauvhjell, Arne, Nordlie, Sigurd, Rommetveit, Magne, and Worren, Dagfinn, editors. (1986). Nynorskordboka : definisjons- og rettskrivingsordbok. Det norske samlaget, Oslo.
Bokmålsordboka: definisjons-og rettskrivningsordbok. Marit Landrø, Ingebjørg, Boye Wangensteen, Universitetsforlaget, OsloLandrø, Marit Ingebjørg and Wangensteen, Boye, editors. (1993). Bokmålsordboka: definisjons-og rettskrivnings- ordbok. Universitetsforlaget, Oslo.
What we have learned from Sofie: Extending lexical and grammatical coverage in an LFG parsebank. Gyri Losnegaard, Smørdal, Gunn Lyse, Inger, Thunes, Martha, Rosén, De Victoria, Smedt, Koenraad, Helge Dyvik, Paul Meurer, META-RESEARCH Workshop on Advanced Treebanking at LREC2012. Hajič, Jan, De Smedt, Koenraad, Tadić, Marko, and Branco, AntónioIstanbul, TurkeyLosnegaard, Gyri Smørdal, Lyse, Gunn Inger, Thunes, Martha, Rosén, Victoria, De Smedt, Koenraad, Dyvik, Helge, and Meurer, Paul. (2012). What we have learned from Sofie: Extending lexical and grammatical cover- age in an LFG parsebank. In Hajič, Jan, De Smedt, Koenraad, Tadić, Marko, and Branco, António, editors, META-RESEARCH Workshop on Advanced Treebanking at LREC2012, pages 69-76, Istanbul, Turkey.
The interface between phrasal and functional constraints. John Maxwell, Ronald M Kaplan, Computational Linguistics. 194Maxwell, John and Kaplan, Ronald M. (1993). The inter- face between phrasal and functional constraints. Compu- tational Linguistics, 19(4):571-589.
Nordkompleks -A Norwegian computational lexicon. Torbjørn Nordgård, COMLEX 2000 Workshop on Computational Lexicography and Multimedia Dictionaries. Patras, Greece. University of PatrasNordgård, Torbjørn. (2000). Nordkompleks -A Norwe- gian computational lexicon. In COMLEX 2000 Work- shop on Computational Lexicography and Multimedia Dictionaries, pages 89-92, Patras, Greece. University of Patras.
Designing and implementing discriminants for LFG grammars. Victoria Rosén, Paul Meurer, De Smedt, Koenraad, The Proceedings of the LFG '07 Conference. King, Tracy Holloway and Butt, MiriamStanfordCSLI PublicationsRosén, Victoria, Meurer, Paul, and De Smedt, Koenraad. (2007). Designing and implementing discriminants for LFG grammars. In King, Tracy Holloway and Butt, Miriam, editors, The Proceedings of the LFG '07 Con- ference, pages 397-417. CSLI Publications, Stanford.
LFG Parsebanker: A toolkit for building and searching a treebank as a parsed corpus. Victoria Rosén, Paul Meurer, De Smedt, Koenraad, Proceedings of the Seventh International Workshop on Treebanks and Linguistic Theories (TLT7). the Seventh International Workshop on Treebanks and Linguistic Theories (TLT7)Van Eynde, Frank, Frank, Anette, van Noord, Gertjan, and De Smedt, Koenraad; Utrecht. LOTRosén, Victoria, Meurer, Paul, and De Smedt, Koenraad. (2009). LFG Parsebanker: A toolkit for building and searching a treebank as a parsed corpus. In Van Eynde, Frank, Frank, Anette, van Noord, Gertjan, and De Smedt, Koenraad, editors, Proceedings of the Seventh Interna- tional Workshop on Treebanks and Linguistic Theories (TLT7), pages 127-133, Utrecht. LOT.
An open infrastructure for advanced treebanking. Victoria Rosén, De Smedt, Koenraad, Paul Meurer, Helge Dyvik, META-RESEARCH Workshop on Advanced Treebanking at LREC2012. Hajič, Jan, De Smedt, Koenraad, Tadić, Marko, and Branco, AntónioIstanbul, TurkeyRosén, Victoria, De Smedt, Koenraad, Meurer, Paul, and Dyvik, Helge. (2012a). An open infrastructure for ad- vanced treebanking. In Hajič, Jan, De Smedt, Koen- raad, Tadić, Marko, and Branco, António, editors, META- RESEARCH Workshop on Advanced Treebanking at LREC2012, pages 22-29, Istanbul, Turkey.
An integrated web-based treebank annotation system. Victoria Rosén, Meurer, Paul, Gyri Losnegaard, Smørdal, Gunn Lyse, De Inger, Smedt, Koenraad, Martha Thunes, Helge Dyvik, Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories (TLT11). Hendrickx, Iris, Kübler, Sandra, and Simov, Kirilthe Eleventh International Workshop on Treebanks and Linguistic Theories (TLT11)LisbonPortugal. Edições ColibriRosén, Victoria, Meurer, Paul, Losnegaard, Gyri Smørdal, Lyse, Gunn Inger, De Smedt, Koenraad, Thunes, Martha, and Dyvik, Helge. (2012b). An integrated web-based treebank annotation system. In Hendrickx, Iris, Kübler, Sandra, and Simov, Kiril, editors, Proceedings of the Eleventh International Workshop on Treebanks and Lin- guistic Theories (TLT11), pages 157-167, Lisbon, Portu- gal. Edições Colibri. |
227,231,405 | [] | TWEETSUM: Event-oriented Social Summarization Dataset
OnlineCopyright OnlineDecember 8-13, 2020
Ruifang He rfhe@tju.edu.cn
School of Computer Science and Technology
Tianjin University Tianjin
China
Liangliang Zhao liangliangzhao@tju.edu.cn
School of Computer Science and Technology
Tianjin University Tianjin
China
Huanyu Liu huanyuliu@tju.edu.cn
School of Computer Science and Technology
Tianjin University Tianjin
China
TWEETSUM: Event-oriented Social Summarization Dataset
Proceedings of the 28th International Conference on Computational Linguistics
the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020, page 5731
With social media becoming popular, a vast number of short and noisy messages are produced by millions of users when a hot event happens. Developing social summarization systems becomes more and more critical for people to quickly grasp core and essential information. However, publicly available, high-quality, large-scale datasets for the social media setting are rare. Constructing such a corpus is not easy and is very expensive, since short texts have very complex social characteristics. Though there exist some datasets, they only consider the text on social media and ignore potential user-relation signals on the social network. In this paper, we construct TWEETSUM, a new event-oriented dataset for social summarization. The original data is collected from Twitter and contains 12 real-world hot events with a total of 44,034 tweets and 11,240 users. We create expert summaries for each event, and we also evaluate annotation quality. In addition, we collect additional social signals (i.e. user relations, hashtags and user profiles) and further establish a user relation network for each event. To our knowledge, it is the first event-oriented social summarization dataset that contains social relationships. Besides the detailed dataset description, we show the performance of several typical extractive summarization methods on TWEETSUM to establish baselines. For further research, we will release this dataset to the public.

This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
Introduction
Social media has become an important real-time information source, especially during emergencies, natural disasters and other hot events. According to a new Pew Research Center survey, social media has surpassed traditional news platforms (such as TV and radio) as a news source for Americans: about two-thirds of American adults (68%) get news via social media. Among all major social media sites, Twitter is still the site Americans most commonly use for news, with 71% of Twitter's users getting their news from Twitter. However, it can often be daunting to catch up with the most recent content due to the high volume and velocity of tweets. Hence, social summarization, which aims to acquire the most representative and concise information from massive numbers of tweets when a hot event happens, is particularly urgent.
In recent years, many large-scale summarization datasets have been proposed, such as New York Times (Sandhaus, 2008), Gigaword (Napoles et al., 2012), NEWSROOM (Grusky et al., 2018) and CNN/DAILYMAIL (Nallapati et al., 2016). However, most of these datasets focus on formal document summarization. Social media text has many characteristics that differ from formal documents: 1) Short. The length of a tweet is limited to 140 characters, which is much shorter than a formal document. 2) Informal. Tweets usually contain informal expressions such as abbreviations, typos and special symbols, which makes tweets more difficult to deal with. 3) Social signals. There are different kinds of social signals on social media, such as hashtags, URLs and emojis. 4) Potential relations. Tweets are generated by users and hence have potential connections through user relationships. Because of these characteristics, traditional summarization methods often do not perform well on social media.

Figure 1: Diagram of the process for creating the TWEETSUM dataset.
Though there exist some social media summarization datasets (Hu et al., 2015; Li et al., 2016; P.V.S. et al., 2018; Duan et al., 2012; Cao et al., 2017; Nguyen et al., 2018), these datasets only consider the text on social media and ignore the potential social signals on the social network. In a social context, the interactions between friends are obviously different from those between strangers. This phenomenon demonstrates that social relationships can affect user behavior patterns and consequently the content of the tweets users post. This inspires us to consider integrating signals relevant to social relations when analyzing social information.

In this paper, we construct an event-oriented large-scale dataset with user relations for social summarization, called TWEETSUM. It contains 12 real-world hot events with a total of 44,034 tweets and 11,240 users. In summary, this paper provides the following contributions: (1) We construct an event-oriented social media summarization dataset, TWEETSUM, which contains social signals. To our knowledge, it is the first summarization dataset that contains social signals relevant to user relationships, such as hashtags and user profiles; (2) We create expert summaries for each social event and verify the existence of sociological theory in the real data, including social consistency and contagion; (3) We evaluate the performance of typical extractive summarization models on our TWEETSUM dataset to provide benchmarks and validate the effectiveness of the dataset.
TWEETSUM DATASET
Task and Data Collection
Tweet summarization aims to find a group of representative tweets for a specific topic. Given a collection of tweets about an event, T = {t_1, t_2, ..., t_m}, our goal is to extract a set of tweets S = {s_1, s_2, ..., s_n} (n << m) which contains as much important information and as little redundant information as possible (Rudrapal et al., 2018).
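The trade-off between coverage and redundancy in this formulation can be made concrete with a simple greedy selection sketch. This is only an illustration of the objective, not the algorithm of any of the baselines evaluated later in the paper; the similarity measure and the weighting constant are arbitrary choices.

```python
# Minimal sketch of the extractive objective: pick n tweets that are similar to the
# whole collection (coverage) but dissimilar to tweets already selected (redundancy).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def greedy_summary(tweets, n=3, lam=0.7):
    """Select n tweets balancing coverage (similarity to the centroid) and redundancy."""
    tfidf = TfidfVectorizer().fit_transform(tweets)
    centroid = np.asarray(tfidf.mean(axis=0))            # 1 x vocab "event profile"
    coverage = cosine_similarity(tfidf, centroid).ravel()
    pairwise = cosine_similarity(tfidf)
    selected = []
    while len(selected) < min(n, len(tweets)):
        best, best_score = None, -np.inf
        for i in range(len(tweets)):
            if i in selected:
                continue
            redundancy = max((pairwise[i][j] for j in selected), default=0.0)
            score = lam * coverage[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [tweets[i] for i in selected]
```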
The dataset is created using the public Twitter data collected by the University of Illinois 1 as the raw data, and the overall creation process is shown in Figure 1. The data collection process is summarized as follows: (1) We first select twelve hot events that happened in May, June and July 2011, covering sports, technology and science, natural disasters, politics, terrorist attacks and so on. The selected events should satisfy the following conditions: (i) they are widely spread on the Internet and cause heated discussion on social media; (ii) they last longer than 30 days; (iii) they are impressive to news providers.
(2) Since each hot event can have multiple hashtags, such as "#nba" and "#nbafinals", we then search for the tweets which contain any of these hashtags or any of the keywords obtained by removing "#" from the hashtags. (3) After obtaining the event-oriented data, we carefully preprocess the data as follows: (i) merge identical tweets; (ii) remove tweets whose length is shorter than 3 words, not counting hashtags, keywords, mentions, URLs and stop words; (iii) delete tweets whose author has no connection with others. (4) For each event, we further collect user profiles and user relationships. We filter out users whose degree is smaller than 1 and obtain 11,240 users with their relations. Finally, we collect user profiles including User ID, historical tweet records, Tweet timestamp and Retweet Count.

Expert Summaries Creation

To verify summarization performance, we create expert summaries for each event. Specifically, for each of the 12 events, we ask annotators to select the 25 most representative tweets as an expert summary. Since different annotators have different understandings of the same event, we ask 4 annotators to create expert summaries individually for each event in order to eliminate the subjective factors of users. To evaluate the quality of all expert summaries, we further ask 3 other annotators to score all summaries in the range [1, 5] based on coverage, diversity and readability. If only 0-6 tweets are satisfactory, the summary is scored 1; 6-12 tweets, 2; 12-18 tweets, 3; 18-24 tweets, 4; if all tweets are good, the score is 5. We retain the summaries with scores greater than or equal to 3 and require modifications to the low-quality summaries until they meet the criteria. To ensure the agreement of the multiple expert summaries of each event, we conduct a mutual evaluation among them, and the results are shown in Figure 2.

Data Properties and Analysis

In this section, we introduce each part of the TWEETSUM dataset in detail. The dataset consists of 12 hot events, each of which contains four parts: tweet text, user relations, user profiles, and manually created expert summaries. The detailed statistics of each part are shown in Table 1. Due to the limited space, we only show the statistics of four events.

Tweet text is the textual content of tweets, whose average length over all 12 events is 15.22 words. The number of tweets and the average length per tweet in each event are shown in the first two rows of Table 1. In addition, hashtags in tweets contain important clues that can help understand the semantics of the tweets. Therefore, we also analyzed the distribution of hashtags in tweets, as shown in the third and fourth rows of Table 1.

User relations are the unique property of our dataset compared with other summarization datasets. We collect users and their corresponding relations in each event to construct social networks and further analyze the statistics of the generated networks, which are shown in the second part of Table 1. As indicated by social theories, i.e. consistency (Abelson, 1983) and homophily (Mcpherson et al., 2001), social relations affect user behaviors and consequently influence the content. We visualize the structure of one social network as shown in Figure 3. Users and their relationships constitute an undirected graph G(V, E), where V is the user set and E is the relation set. We observe some homophily groups, which may indicate that users who are friends tend to share similar opinions or topics. We further analyze the word overlap ratio between friends and between strangers.
Figure 4 shows the 1-gram and 2-gram overlap ratios under all 12 events. The average 1-gram and 2-gram overlap ratios between friends (26.92% and 4.01%) are consistently higher than those between strangers (25.40% and 3.45%), which demonstrates the impact of social relations on user behavior. We further conduct a two-sample t-test, where the null hypothesis H_0 states that there is no difference between tweets posted by friends and randomly selected tweets, while the alternative hypothesis H_1 states that the distance between tweets posted by friends is smaller than that between randomly selected tweets. We define the distance between two tweets as D_ij = ||t_i - t_j||_2, where t_i is the TF-IDF representation of the i-th tweet. The p-value shown in Table 1 suggests rejecting H_0, which confirms the influence of social relations on tweet content.
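For concreteness, the comparison just described can be reproduced in a few lines: compute TF-IDF vectors, take Euclidean distances within friend pairs and within randomly sampled pairs, and run a one-sided two-sample t-test. The tweet texts and pair lists below are placeholders, not TWEETSUM data.

```python
# Sketch of the friend-vs-random comparison: D_ij = ||t_i - t_j||_2 over TF-IDF vectors,
# compared with a one-sided Welch t-test (H1: friend distances are smaller).
import numpy as np
from scipy import stats
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["nba finals tonight", "watching the nba finals tonight",
          "earthquake hits the coast", "relief efforts after the earthquake"]
friend_pairs = [(0, 1), (2, 3)]                    # pairs posted by connected users
random_pairs = [(0, 2), (0, 3), (1, 2), (1, 3)]    # randomly sampled pairs

tfidf = TfidfVectorizer().fit_transform(tweets).toarray()
dist = lambda i, j: np.linalg.norm(tfidf[i] - tfidf[j])

friend_d = [dist(i, j) for i, j in friend_pairs]
random_d = [dist(i, j) for i, j in random_pairs]

t, p = stats.ttest_ind(friend_d, random_d, equal_var=False, alternative="less")
print(t, p)
```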
User profiles include the following information: 1) User ID. This is the unique identity of each user. 2) Historical tweet records. Tweets posted by users contain lots of user information, from which we can obtain abundant information, such as user interests and preference. 3) Tweet timestamp records the time of the creation of tweets. 4) Retweet Count is used to reflect the popularity of each tweet. Expert summary has been described in section 2.2.
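Putting these pieces together, one record per user and per tweet might be laid out as follows; the field names here are purely illustrative and need not match those of the released TWEETSUM files.

```python
# Hypothetical layout of one user record and one tweet record; the field names are
# illustrative only and are not necessarily those used in the released dataset.
user_record = {
    "user_id": "u_001",
    "friends": ["u_007", "u_042"],   # edges of the undirected relation graph
    "history": ["t_010", "t_233"],   # ids of the user's historical tweets
}
tweet_record = {
    "tweet_id": "t_010",
    "user_id": "u_001",
    "text": "watching the nba finals tonight #nbafinals",
    "hashtags": ["#nbafinals"],
    "timestamp": "2011-06-12T20:15:00Z",
    "retweet_count": 17,
}
```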
Experiments
Compared methods
To verify the effectiveness of our TWEETSUM dataset, we choose some typical extractive summarization methods as baseline methods.
(1) Expert: denotes the average mutual assessment of expert summaries.
(2) Random: selects tweets randomly from each hot event's tweet set to form summaries. (3) LexRank: adopts a PageRank-like algorithm to rank and select tweets (Erkan and Radev, 2004). (4) LSA: exploits SVD (Gong and Liu, 2001) and selects the highest-ranked tweets from each singular vector. (5) MDSS: uses a two-level sparse representation model (Liu et al., 2015). (6) DSDR: uses a data reconstruction method (He et al., 2012). (7) SNSR: integrates social relationships into a unified sparse coding optimization framework (He and Duan, 2018). (8) Fine-Tuning BERT: we fine-tune the pre-trained BERT model (Devlin et al., 2019) and learn representations for tweets. Summaries are chosen according to cosine similarity.
Evaluation Methodologies
ROUGE is the most commonly used evaluation metric in summarization tasks; it counts the number of overlapping units such as n-grams, word sequences and word pairs between the machine-generated summary and the reference summaries. (Lin and Hovy, 2003) proposed different ROUGE metrics. Here, we use the F-measures of ROUGE-1, ROUGE-2, ROUGE-L and ROUGE-SU* as our evaluation metrics.

Results and Discussions

Table 2 shows the performance of the different baselines on our dataset. As we can see, all of these models improve over the Random baseline, especially the SNSR model, which achieves the best performance and outperforms the Random baseline with an absolute gain of 3.31% R-1, 4.45% R-2, 3.57% R-L and 3.39% R-SU*. The main reason is that SNSR captures social relations among tweets. However, the improvements of the other models are not as significant as SNSR's. The reason is that most of these models are designed for formal documents such as news articles, and are thus not well suited to tweets. The neural network-based BERT model has a strong ability in feature extraction; however, it still lags behind the best model. There are mainly three reasons: 1) Learning an efficient tweet representation still remains a big challenge since tweets are short and noisy.
2) It only considers text content and ignores relations among tweets.
3) The summary selection strategy is relatively simple. To further prove the effectiveness of social relations, we remove the relation component of SNSR (indicated by -social), which leads to a performance deterioration. As we discussed above, there are multiple types of social signals on social media which can provide various kinds of additional information. These heterogeneous signals contain a large amount of information, which is conducive to generating summaries. This inspires us to further explore integrating these additional signals to improve social summarization.
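For reference, the ROUGE-N F-measures reported in Table 2 can be computed from n-gram overlap counts. The sketch below is a minimal, self-contained version covering only ROUGE-1/2 with a single reference; the official ROUGE toolkit additionally provides ROUGE-L, ROUGE-SU*, stemming and multi-reference aggregation.

```python
# Minimal ROUGE-N F-measure: overlap of candidate and reference n-gram multisets.
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_n("the nba finals start tonight", "nba finals tonight", n=1))  # 0.75
```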
Conclusion and Future Work
In this paper, we construct an event-oriented social media summarization dataset, called TWEETSUM. To better explore how social signals help social summarization, we filter out some outliers, keeping the social network reasonably dense, and conduct experiments to verify the influence of social signals on user-generated content. We further analyze the characteristics of this dataset in detail and validate the influence of social relations on tweet content selection. Both traditional summarization methods and neural network-based methods are tested on our dataset.
In the future, the dataset can be further expanded to include more events as well as more varied social signals. In addition, manually annotating data is an expensive and labor-intensive task; therefore, we will explore approaches to construct social summarization datasets automatically. This dataset also opens up further research directions, and we hope that TWEETSUM can foster the development of social summarization.
Figure 2: ROUGE scores of expert summaries.

Figure 3: Visualization of one user social network.

Figure 4: 1-gram and 2-gram overlap between friends and strangers under 12 events.

Table 1: The detailed statistics for the TWEETSUM dataset.

Table 2: The ROUGE scores of different baselines.
https://wiki.illinois.edu/wiki/display/forward/Dataset-UDI-TwitterCrawl-Aug20
Whatever became of consistency theory?. Robert P Abelson, Personality & Social Psychology Bulletin. 91Robert P. Abelson. 1983. Whatever became of consistency theory? Personality & Social Psychology Bulletin, 9(1):37-64.
Ziqiang Cao, Chengyao Chen, Wenjie Li, Sujian Li, Furu Wei, Ming Zhou, TGSum: Build Tweet Guided Multi-Document Summarization Dataset: Natural Language Processing and Beyond. 11Ziqiang Cao, Chengyao Chen, Wenjie Li, Sujian Li, Furu Wei, and Ming Zhou, 2017. TGSum: Build Tweet Guided Multi-Document Summarization Dataset: Natural Language Processing and Beyond, pages 401-417. 11.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.
Twitter topic summarization by ranking tweets using social influence and content quality. Yajuan Duan, Zhumin Chen, Furu Wei, Ming Zhou, Heung-Yeung Shum, 24th International Conference on Computational Linguistics -Proceedings of COLING 2012: Technical Papers. 12Yajuan Duan, Zhumin Chen, Furu Wei, Ming Zhou, and Heung-Yeung Shum. 2012. Twitter topic summarization by ranking tweets using social influence and content quality. In 24th International Conference on Computa- tional Linguistics -Proceedings of COLING 2012: Technical Papers, pages 763-780, 12.
Lexrank: Graph-based lexical centrality as salience in text summarization. G Erkan, D R Radev, Journal of Artificial Intelligence Research. 22G. Erkan and D. R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Generic text summarization using relevance measure and latent semantic analysis. Yihong Gong, Xin Liu, Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01. the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01New York, NY, USAAssociation for Computing MachineryYihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analy- sis. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01, page 1925, New York, NY, USA. Association for Computing Machinery.
Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. Max Grusky, Mor Naaman, Yoav Artzi, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719, New Orleans, Louisiana, June. Association for Computational Linguistics.
Twitter summarization based on social network and sparse reconstruction. Ruifang He, Xingyi Duan, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18). Sheila A. McIlraith and Kilian Q. Weinbergerthe Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18)AAAI PressRuifang He and Xingyi Duan. 2018. Twitter summarization based on social network and sparse reconstruction. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), pages 5787-5794. AAAI Press.
Document summarization based on data reconstruction. Z He, C Chen, J Bu, C Wang, L Zhang, D Cai, X He, Proceedings of the National Conference on Artificial Intelligence. the National Conference on Artificial Intelligence11Z. He, C. Chen, J. Bu, C. Wang, L. Zhang, D. Cai, and X. He. 2012. Document summarization based on data reconstruction. Proceedings of the National Conference on Artificial Intelligence, 1:620-626, 01.
LCSTS: A large scale Chinese short text summarization dataset. Baotian Hu, Qingcai Chen, Fangze Zhu, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsBaotian Hu, Qingcai Chen, and Fangze Zhu. 2015. LCSTS: A large scale Chinese short text summarization dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1967-1972, Lisbon, Portugal, September. Association for Computational Linguistics.
Using relevant public posts to enhance news article summarization. Chen Li, Zhongyu Wei, Yang Liu, Yang Jin, Fei Huang, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanThe COLING 2016 Organizing CommitteeChen Li, Zhongyu Wei, Yang Liu, Yang Jin, and Fei Huang. 2016. Using relevant public posts to enhance news article summarization. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 557-566, Osaka, Japan, December. The COLING 2016 Organizing Com- mittee.
Automatic evaluation of summaries using n-gram co-occurrence statistics. Chin-Yew Lin, Eduard Hovy, Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyUSA17178NAACL '03. Association for Computational LinguisticsChin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03, page 7178, USA. Association for Com- putational Linguistics.
Multi-document summarization based on two-level sparse representation model. He Liu, Hongliang Yu, Zhi-Hong Deng, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. Blai Bonet and Sven Koenigthe Twenty-Ninth AAAI Conference on Artificial IntelligenceAustin, Texas, USAAAAI PressHe Liu, Hongliang Yu, and Zhi-Hong Deng. 2015. Multi-document summarization based on two-level sparse rep- resentation model. In Blai Bonet and Sven Koenig, editors, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 196-202. AAAI Press.
Birds of a feather: Homophily in social networks. Miller Mcpherson, Lynn Smithlovin, James M Cook, Review of Sociology. 127Miller Mcpherson, Lynn Smithlovin, and James M Cook. 2001. Birds of a feather: Homophily in social networks. Review of Sociology, 27(1).
Abstractive text summarization using sequence-to-sequence RNNs and beyond. Ramesh Nallapati, Bowen Zhou, Cicero Dos Santos, Bing Aglar Gulçehre, Xiang, Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. The 20th SIGNLL Conference on Computational Natural Language LearningBerlin, GermanyAssociation for Computational LinguisticsRamesh Nallapati, Bowen Zhou, Cicero dos Santos, Ç aglar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Confer- ence on Computational Natural Language Learning, pages 280-290, Berlin, Germany, August. Association for Computational Linguistics.
Annotated gigaword. Courtney Napoles, Matthew Gormley, Benjamin Van Durme, Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge ExtractionCourtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95-100.
TSix: A human-involvedcreation dataset for tweet summarization. Minh-Tien Nguyen, Dac Viet Lai, Huy-Tien Nguyen, Le-Minh Nguyen, Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)Miyazaki, Japan, MayEuropean Language Resources Association (ELRAMinh-Tien Nguyen, Dac Viet Lai, Huy-Tien Nguyen, and Le-Minh Nguyen. 2018. TSix: A human-involved- creation dataset for tweet summarization. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).
Live blog corpus for summarization. P V S Avinesh, Maxime Peyrard, Christian M Meyer, Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)Miyazaki, Japan, MayEuropean Language Resources Association (ELRAAvinesh P.V.S., Maxime Peyrard, and Christian M. Meyer. 2018. Live blog corpus for summarization. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).
A survey on automatic twitter event summarization. Dwijen Rudrapal, Amitava Das, Baby Bhattacharya, Journal of Information Processing Systems. 143Dwijen Rudrapal, Amitava Das, and Baby Bhattacharya. 2018. A survey on automatic twitter event summariza- tion. Journal of Information Processing Systems, 14, 03.
The new york times annotated corpus. Linguistic Data Consortium. Evan Sandhaus, 6PhiladelphiaEvan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12). |
||
6,206,064 | Building trainable taggers in a web-based, UIMA-supported NLP workbench | Argo is a web-based NLP and text mining workbench with a convenient graphical user interface for designing and executing processing workflows of various complexity. The workbench is intended for specialists and nontechnical audiences alike, and provides the ever expanding library of analytics compliant with the Unstructured Information Management Architecture, a widely adopted interoperability framework. We explore the flexibility of this framework by demonstrating workflows involving three processing components capable of performing self-contained machine learning-based tagging. The three components are responsible for the three distinct tasks of 1) generating observations or features, 2) training a statistical model based on the generated features, and 3) tagging unlabelled data with the model. The learning and tagging components are based on an implementation of conditional random fields (CRF); whereas the feature generation component is an analytic capable of extending basic token information to a comprehensive set of features. Users define the features of their choice directly from Argo's graphical interface, without resorting to programming (a commonly used approach to feature engineering). The experimental results performed on two tagging tasks, chunking and named entity recognition, showed that a tagger with a generic set of features built in Argo is capable of competing with taskspecific solutions. | [
8940645,
3446853,
7985741,
5992375
] | Building trainable taggers in a web-based, UIMA-supported NLP workbench
Association for Computational Linguistics. Copyright Association for Computational Linguistics, July 2012.
Rafal Rak rafal.rak@manchester.ac.uk
School of Computer Science
National Centre for Text Mining
University of Manchester Manchester Interdisciplinary Biocentre
131 Princess St, M1 7DN, Manchester, UK
Balakrishna Kolluru balakrishna.kolluru@manchester.ac.uk
School of Computer Science
National Centre for Text Mining
University of Manchester Manchester Interdisciplinary Biocentre
131 Princess St, M1 7DN, Manchester, UK
Sophia Ananiadou sophia.ananiadou@manchester.ac.uk
School of Computer Science
National Centre for Text Mining
University of Manchester Manchester Interdisciplinary Biocentre
131 Princess St, M1 7DN, Manchester, UK
Building trainable taggers in a web-based, UIMA-supported NLP workbench
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics
the 50th Annual Meeting of the Association for Computational Linguistics, Jeju, Republic of Korea. Association for Computational Linguistics, July 2012.
Argo is a web-based NLP and text mining workbench with a convenient graphical user interface for designing and executing processing workflows of various complexity. The workbench is intended for specialists and nontechnical audiences alike, and provides an ever-expanding library of analytics compliant with the Unstructured Information Management Architecture, a widely adopted interoperability framework. We explore the flexibility of this framework by demonstrating workflows involving three processing components capable of performing self-contained machine learning-based tagging. The three components are responsible for the three distinct tasks of 1) generating observations or features, 2) training a statistical model based on the generated features, and 3) tagging unlabelled data with the model. The learning and tagging components are based on an implementation of conditional random fields (CRF), whereas the feature generation component is an analytic capable of extending basic token information to a comprehensive set of features. Users define the features of their choice directly from Argo's graphical interface, without resorting to programming (a commonly used approach to feature engineering). The experimental results performed on two tagging tasks, chunking and named entity recognition, showed that a tagger with a generic set of features built in Argo is capable of competing with task-specific solutions.
Introduction
The applications of automatic recognition of categories, or tagging, in natural language processing (NLP) range from part-of-speech tagging to chunking to named entity recognition and complex scientific discourse analyses. Currently, there is a variety of tools capable of performing these tasks. A commonly used approach involves the use of machine learning to first build a statistical model based on manually or semi-automatically tagged sample data and then to tag new data using this model. Since the machine learning algorithms for building models are well established, the challenge has shifted to feature engineering, i.e., developing task-specific features that form the basis of these statistical models. This task is usually accomplished programmatically, which poses an obstacle to a non-technically inclined audience. We alleviate this problem by demonstrating Argo 1 , a web-based platform that allows the user to build NLP and other text analysis workflows via a graphical user interface (GUI) available in a web browser. The system is equipped with an ever-growing library of text processing components ranging from low-level syntactic analysers to semantic annotators. It also allows for including user-interactive components, such as an annotation editor, into otherwise fully automatic workflows. The interoperability of processing components is ensured in Argo by adopting the Unstructured Information Management Architecture (UIMA) (Ferrucci and Lally, 2004) as the system's framework. In this work we explore the capabilities of this framework to support machine learning components for tagging textual content.
In the following section we present related work. Section 3 provides background information on Argo and its relationship to UIMA. The details of the three machine learning components are discussed in Section 4. Section 5 provides evaluation, whereas Section 6 concludes the paper.
Related work
Language processing tools with machine learning capabilities for tagging textual content have been distributed by various groups in form of either standalone applications or application programming interfaces (API). Packages such as Lingpipe 2 , Mallet 3 , Stanford NLP tools 4 and OpenNLP 5 have been extensively used by the NLP and text mining communities (Kolluru et al., 2011;Corbett and Murray-Rust, 2006). However, such tools inherently impose inconveniences on users, such as a lack of GUI, often arduous manual installation procedures, proficiency in programming or familiarity with the details of machine learning algorithms. These limitations are overcome by GUI-equipped, workflow-supporting platforms that often directly use the solutions provided by the former tools. The notable examples of such platforms designed specifically for NLP and text mining tasks are GATE (Cunningham et al., 2002), a suite of text processing and annotation tools, and U-Compare (Kano et al., 2010), a standalone application supporting the UIMA framework that formed the inspiration for Argo.
Although the GUI platforms provide machine learning solutions, these are usually limited to using pre-trained models, and providing a rich set of features for training requires resorting to programming. Argo, on the other hand, allows users to train their own models with either a generic set of features or customisable features without having to write a single line of code. This capability is provided in Argo entirely through its GUI.
Argo and UIMA
Argo's main user interface consists of three panels as shown in Figure 1. The left-hand panel includes user-owned or shared storable objects; the middle panel is a drawing space for constructing workflows and the right-hand panel displays context-dependent information. The storable objects are categorised into workflows, represented as block diagrams of interconnected processing components, documents that represent the user's space intended for uploading resources and saving processing results, and executions that provide past and live workflow execution details and access points to user-interactive components should such be present in a workflow.
Component interoperability in Argo is ensured by UIMA which defines common structures and interfaces. A typical UIMA processing pipeline consists of a collection reader, a set of analysis engines and a consumer. The role of a collection reader is to fetch a resource (e.g., a text document) and deposit it in a common annotation structure, or CAS, as the subject of annotation. Analysis engines then process the subject of annotation stored in the CAS and populate the CAS with their respective annotations. The consumer's role is to transform some or all of the annotations and/or the subject of annotation from the CAS and serialise it into some storable format.
Readers, analysers and consumers are represented graphically in Argo as blocks with incoming only, incoming and outgoing, and outgoing only ports, respectively, visible in the middle of Figure 1.
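To make the division of labour concrete, the following is a minimal, purely illustrative sketch of the reader -> analysis engine -> consumer flow around a shared CAS-like object. It is written in Python for brevity and is not the actual UIMA (Java) API; all class and function names are our own assumptions.

```python
class CAS:
    """Minimal stand-in for UIMA's Common Analysis Structure (CAS)."""
    def __init__(self, sofa):
        self.sofa = sofa          # the subject of annotation, e.g. document text
        self.annotations = []     # annotations deposited by analysis engines

def collection_reader(paths):
    """Fetches resources and deposits each one in a CAS as the subject of annotation."""
    for path in paths:
        with open(path, encoding="utf-8") as f:
            yield CAS(f.read())

def sentence_annotator(cas):
    """Toy analysis engine: adds sentence annotations to the CAS."""
    start = 0
    for i, ch in enumerate(cas.sofa):
        if ch == ".":
            cas.annotations.append(("Sentence", start, i + 1))
            start = i + 1

def tsv_consumer(cas, out):
    """Toy consumer: serialises annotations into a storable format."""
    for label, begin, end in cas.annotations:
        out.write(f"{label}\t{begin}\t{end}\n")

# A workflow is then a chain: reader -> one or more analysis engines -> consumer.
```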
Machine learning components in Argo
In order to ensure flexibility in building workflows, we split the machine learning capability into three distinct processing components, namely feature generator, model trainer and tagger. The trainer and the tagger are intrinsic machine learning components, whereas the feature generator is a convenient and customisable processing component capable of building a feature space for a user-defined domain.
From UIMA's perspective, the feature generator and the tagger are both analysis engines whose purpose is to analyse the incoming CASes and enrich them with additional annotations; whereas the trainer is a consumer that transforms the information stored in CASes into a statistical model.
A typical use of the three components is shown in Figure 2. The three components are represented as the Feature Generator, CRF++ Trainer and CRF++ Tagger blocks. Figure 2a shows a process of building a statistical model supported by a document reader, common, well-established preprocessing components (in this case, to establish boundaries of sentences and tokens), and the previously mentioned editor for manually creating annotations 6 . The manual annotations serve to generate tags/labels which are used in the training process together with the features produced by Feature Generator. The trained model is then used in the workflow shown in Figure 2b to tag new resources. Although the tagging workflow automatically recognises the labels of interest (based on the model supplied in CRF++ Tagger), in practice, the labels need further correction, hence the use of Annotation Editor after the tagger.
Training and tagging
At present, our implementation of the training and tagging components is based on the conditional random fields (CRF) (Lafferty et al., 2001). Our choice is dictated by the fact that CRF models are currently one of the best models for tagging and efficient algorithms to compute marginal probabilities and n-best sequences are freely available.
We used the CRF++ implementation 7 and wrapped it into two UIMA-compatible components, CRF++ Trainer and CRF++ Tagger. The trainer deals with the optimisation of feature parameters, whereas word observations are produced by Feature Generator, as described in the following section.
From annotations to features
The Feature Generator component is an intermediary between annotations stored in CASes and the training component. This component is customisable via the component's settings panel, parts of which are shown in Figure 3. The panel allows the user to 1) identify the stream of tokens 8 (Figure 3a), 2) identify the stream of token sequences (usually sentences), and 3) define features or token observations (Figure 3b).
Each feature definition consists of a name, a token field, an optional list of token field transformations, and an optional set of context windows. The name is only for the user's convenience of identifying individual feature definitions. The token field is the primary subject of transformations (if any) and it is one of the data fields of the selected token annotation type. For instance, the token annotation type may define data fields such as part of speech, chunk, or lemma. By default, the system selects "covered text", i.e., the span of text covered by an annotation, since this data field is available for any annotation.
If no transformation is declared, the string representation of the token field's value ultimately becomes the value of the generated feature. If the user declares one or more transformations then these are applied on the token field's value in sequence, i.e., an outcome of the preceding transformation becomes an input of the following one. Figure 4 shows the various transformations currently available in the system. Context windows allow for enriching the current token's feature set by introducing observations from surrounding tokens as n-grams. For example, the selected feature definition in Figure 3b, "surface has symbols", declares the covered text as the feature's basis and defines two transformations and two context windows. The two transformations will first transform the covered text to a collapsed shape (e.g., "NF-kappa" will become "A#a") and then produce "Y" or "N" depending on whether the collapsed shape matches the simple regular expression "#" (e.g., "A#a" will become "Y"). The two context windows define six unigrams and four bigrams, which will ultimately result in this single feature definition's producing ten observations for training.
Figure 4: UML diagram of transformation types.
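As an illustration of how such a feature definition expands into observations, the sketch below reimplements the "surface has symbols" example in Python. The transformation names and window sizes follow the description above, but the code is only an approximation and is not Argo's actual implementation.

```python
import re

def collapsed_shape(s):
    """Collapse a surface form into a character-class shape, e.g. 'NF-kappa' -> 'A#a'."""
    out, prev = [], None
    for ch in s:
        cls = "A" if ch.isupper() else "a" if ch.islower() else "0" if ch.isdigit() else "#"
        if cls != prev:
            out.append(cls)
        prev = cls
    return "".join(out)

def has_symbols(s):
    """'Y'/'N' depending on whether the collapsed shape contains '#'."""
    return "Y" if re.search("#", collapsed_shape(s)) else "N"

def windowed_observations(values, i, unigram_offsets, bigram_offsets):
    """Expand the feature value at position i with context n-gram observations."""
    obs = [values[i + o] for o in unigram_offsets if 0 <= i + o < len(values)]
    obs += ["|".join((values[i + o], values[i + o + 1]))
            for o in bigram_offsets if 0 <= i + o and i + o + 1 < len(values)]
    return obs

tokens = ["the", "NF-kappa", "B", "pathway", "is", "activated"]
values = [has_symbols(t) for t in tokens]
# six unigrams and four bigrams around the current token, as in the example
# (fewer observations are produced near sequence boundaries)
print(windowed_observations(values, 1, unigram_offsets=range(-3, 3), bigram_offsets=range(-2, 2)))
```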
Evaluation
We show the performance of taggers trained with two distinct sets of features, basic and extended. The basic set of features uses token fields such as the covered text and the part of speech without any transformations or context n-grams. The extended set makes the full use of Feature Generator's settings and enriches the basic set with various transformations and context n-grams. The transformations include surface shape, length, prefixes, suffixes, and the presence of various combinations of letters, digits and symbols. The context n-grams include unigrams for all feature definitions and bigrams for selected ones. Figure 3b shows a sample of the actual extended set.
Table 2: Comparison of setups with basic and extended features for the chunking and NER tasks.
We use two datasets, one prepared for the CoNLL 2000 shared task (Tjong et al., 2000) and another prepared for the BioNLP/NLPBA 2004 shared task (Kim et al., 2004). They represent two different tagging tasks, chunking and named entity recognition, respectively. The CoNLL 2000 chunking dataset involves 10 labels and comes pre-tokenised with 211,727 tokens in the training set and 47,377 tokens in the test set. The dataset also provides partof-speech tags for each token. The BioNLP/NLPBA 2004 named entity recognition dataset involves five biology-related labels and consists of 472,006 and 96,780 tokens in the training and testing sets, respectively. Contrary to the former dataset, there is no other information supporting the tokens in the BioNLP/NLPBA dataset. To compensate for it we automatically generated part of speech and chunk labels for each token.
The chosen datasets/tasks are by no means an exhaustive set of representative comparative-setup datasets available. Our goal is not to claim the superiority of our approach over the solutions reported in the respective shared tasks. Instead, we aim to show that our generic setup is comparable to those task-tuned solutions.
We further explore the options of both Feature Generator and CRF++ Trainer by manipulating labelling formats (IOB vs IOBES (Kudo and Matsumoto, 2001)) for the former and parameter estimation algorithms (L2- vs L1-norm regularisation) for the latter. Ultimately, there are 32 setups as the result of the combinations of the two feature sets, the two datasets, the two labelling formats and the two estimation algorithms. Table 1 shows the precision, recall and f-scores of our extended-feature setups against each other as well as with reference to the best and baseline solutions as reported in the respective shared tasks. The gap to the best performing solution for the chunking task is about 1.3% points in F-score, ahead of the baseline by 15.7% points. For the NER task, our best setup stands behind the best reported solution by about 7% points, ahead of the baseline by about 18% points. In both instances our solution would be placed in the middle of the reported rankings, which is a promising result, especially given that our setups are based solely on the tokens' surface form, part of speech, and (in the case of the NER task) chunk. In contrast, the best solutions for the NER task involve the use of dictionaries and advanced analyses such as acronym resolution.
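For reference, the two labelling formats compared here differ only in how chunk boundaries are encoded. The short sketch below converts IOB2-style tags to IOBES using the standard definitions; it is an assumption, not taken from Argo, that this is the exact conversion used in the experiments.

```python
def iob_to_iobes(tags):
    """Convert an IOB2-tagged sequence to IOBES (S- for singletons, E- for chunk ends)."""
    out = []
    for i, tag in enumerate(tags):
        if tag == "O":
            out.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        ends_here = not nxt.startswith("I-") or nxt[2:] != label
        if prefix == "B":
            out.append(("S-" if ends_here else "B-") + label)
        else:  # prefix == "I"
            out.append(("E-" if ends_here else "I-") + label)
    return out

print(iob_to_iobes(["B-NP", "I-NP", "O", "B-VP", "B-NP", "I-NP", "I-NP"]))
# ['B-NP', 'E-NP', 'O', 'S-VP', 'B-NP', 'I-NP', 'E-NP']
```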
Results
The tested combinations of the labelling formats and parameter estimation algorithms proved inconclusive, with a difference between the best and worst setups of only 0.35% points for both tasks.
The advantage of using the extended set of features over the basic set is clearly illustrated in Table 2. The performance of the basic set on the chunking dataset is only at the level of the baseline, whereas for the NER task it falls nearly 6% points behind the baseline (which comes as no surprise given that the baseline system is a string match of entities found in the training set). Table 3 shows the number of iterations 9 needed for the optimisation algorithm of the trainer to converge. The advantage of the L1 regularisation is apparent, with nearly two to five times fewer iterations needed when compared to the L2 regularisation. Given the close F-scores achieved by the two families of setups, the L1 regularisation becomes a clear winner in our experimentation setup.
Conclusions
Argo's strength is manifested by its online availability, an intuitive graphical user interface available from a web browser, convenience in building even most complex text processing workflows, and the availability of trainable machine learning components. The Feature Generator component, customisable entirely through a GUI, provides the flexibility needed to extend the basic set of features without resorting to programming. The experiment results showed that an extended, yet generic, set of features can be taken to competitive levels in terms of effectiveness.
Acknowledgements
This work was partially supported by the Biotechnology and Biological Sciences Research Council (BBSRC BB/G53025X/1 From Text to Pathways) and the Korea Institute of Science and Technology Information (KISTI Text Mining and Pathways).
Figure 1: Screen capture of Argo's web-based interface.
Figure 2: Two generic workflows demonstrating the use of the Feature Generator component for (a) training and (b) tagging.
Figure 3: Feature Generator settings panel allows the user to (a) select labels for machine learning and (b) define features.
Table 3: Number of iterations needed for the optimisation algorithm to converge.
1 http://nactem.ac.uk/Argo
2 http://alias-i.com/lingpipe
3 http://mallet.cs.umass.edu
4 http://nlp.stanford.edu/software/index.shtml
5 http://opennlp.apache.org
6 The preprocessing and manual annotation components could be replaced with CAS Reader, a component capable of supplying the workflow with a previously annotated set of documents.
7 http://code.google.com/p/crfpp/
8 The definition of token depends on the selected UIMA annotation type. It may range from a simple span of text to a complex lexical or semantic structure.
9 We do not report detailed CPU times due to experimenting on resource-shared machines. Such a setup makes direct side-by-side comparisons largely skewed. As a reference we note that the workflows completed in 15 minutes to about 11 hours depending on feature space size and machine load.
P. Corbett and P. Murray-Rust. 2006. High-throughput identification of chemistry in life science texts. Comp Life, pages 107-118. LNBI 4216.
H. Cunningham, D. Maynard, K. Bontcheva, and V. Tablan. 2002. GATE: A framework and graphical development environment for robust NLP tools and applications. In Proc. of the 40th Anniversary Meeting of the Association for Computational Linguistics.
D. Ferrucci and A. Lally. 2004. UIMA: An Architectural Approach to Unstructured Information Processing in the Corporate Research Environment. Natural Language Engineering, 10(3-4):327-348.
Y. Kano, R. Dorado, L. McCrochon, S. Ananiadou, and J. Tsujii. 2010. U-Compare: An integrated language resource evaluation platform including a comprehensive UIMA resource library. In Proc. of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), pages 428-434.
J.-D. Kim, T. Ohta, Y. Tsuruoka, Y. Tateisi, and N. Collier. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proc. of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, JNLPBA '04, pages 70-75, Geneva, Switzerland. Association for Computational Linguistics.
B. Kolluru, S. Nakjang, R. P. Hirt, A. Wipat, and S. Ananiadou. 2011. Automatic extraction of microorganisms and their habitats from free text using text mining workflows. Journal of Integrative Bioinformatics, 8(2):184.
T. Kudo and Y. Matsumoto. 2001. Chunking with support vector machines. In Proc. of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.
J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proc. 18th International Conf. on Machine Learning, pages 282-289. Morgan Kaufmann, San Francisco, CA.
K. S. Tjong, F. Erik, and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: chunking. In Proc. of the 2nd Workshop on Learning Language in Logic and the 4th Conference on Computational Natural Language Learning, pages 127-132, Morristown, NJ, USA. Association for Computational Linguistics.
2,660,018 | On the durational reduction of repeated mentions: recency and speaker effects | There are conflicting views in the literature as to the role of listener-adaptive processes in language production in general and articulatory reduction in particular. We present two novel pieces of corpus evidence that corroborate the hypothesis that non-lexical variation of durations is related to the speed of retrieval of stored motor code chunks and durational reduction is the result of facilitatory priming. | [] | On the durational reduction of repeated mentions: recency and speaker effects
Viktor Trón v.tron@ed.ac.uk
International Graduate College of Language Technology and Cognitive Systems Edinburgh & Saarbrücken
On the durational reduction of repeated mentions: recency and speaker effects
speech production, corpus study, reduction, duration, priming
There are conflicting views in the literature as to the role of listener-adaptive processes in language production in general and articulatory reduction in particular. We present two novel pieces of corpus evidence that corroborate the hypothesis that non-lexical variation of durations is related to the speed of retrieval of stored motor code chunks and durational reduction is the result of facilitatory priming.
Introduction
Over the past decade, it has been demonstrated that discourse context systematically affects articulatory variation in speech over and above lexical, social and individual factors. It has long been noticed that frequent words are hypoarticulated relative to infrequent words of the same phonological composition (0; 0). In a series of studies (0), Jurafsky and colleagues showed that predictability within the discourse context is also a key determinant of articulatory variation. In particular they found a unidirectional link between redundancy and reduction, which they distilled into the mnemonic: "inform less, less form in" and formulated the Probabilistic Reduction Hypothesis (PRH): More probable words are more reduced. The PRH has the potential to subsume various static as well as dynamic sources of redundancy which are known to enhance reduction such as frequency, bigram probability, semantic association and repetition. However, it leaves open the question of how one can pin down a causal connection between information-theoretic notions of probability or redundancy on the one hand and reduced articulation on the other.
Listener-adaptive accounts.
The earliest known statements about the relation between redundancy and reduction framed the phenomenon in functional terms somewhat similar in vein to Jurafsky et al's mnemonic. In this view articulatory reduction is the implementation of the Principle of Least Effort and the PRH is a constraint on its application: its interaction with the Principle of Clarity. According to this, speakers can afford attenuating their pronunciation more in contexts where more information is available for the listener to identify what is said (0; 0; 0). Taking this for granted implies that articulatory reduction is a listener-adaptive process which presupposes that the speaker has to maintain and update a model of the listener. Explicit listener-modelling has been shown to be very limited in natural spontaneous discourse and thought to be restricted to monitoring and adjustment stages of language use. Even at higher levels of linguistic processing, the way people plan referential expressions is known to be predominantly speaker-centered (0) and inferencing from common ground is subject to available resources. This leads one to question that computationally costly listener-modelling could underlie such low-level processes as articulation (0).
Priming and reduction.
Balota et al (0) perform a series of experiments, where they prompt speakers to produce a target word in the context of semantically related and unrelated primes. They show that semantic relatedness leads to shorter target productions, and this durational reduction is more pronounced at shorter latencies between prime and target. To our knowledge this study is the first to hypothesize that durational reduction might be directly related to memory retrieval insomuch as chunks of motor codes are sequentially accessed during production, and access facilitated by priming can lead to speeded execution, i.e., shortening.
Why duration?
Various reductive processes can be viewed as resulting from attenuated gestures due to temporal overlap and therefore regarded as side-effects of durational reduction. Also, the durational aspect of reduction is a fortunate choice for experiments since it can directly be measured on speech output and requires only minimally tagged speech corpora.
Why repetitions?
Absolute measures of durational reduction are problematic, since norms are difficult to obtain due to the large amount of variables influencing durations (speech rate, style, prosody, segmental composition, etc.). If word durations are compared across mentions of the same word in the same discourse, most of these methodological problems are solved. Therefore we decided to explore durational reduction of repeated mentions in the hope that it tells us about the relationship between contextual redundancy and articulatory reduction in general described in the PRH. Fowler and Housum (0) were the first to demonstrate that second mentions of words in a discourse are shorter than first mentions and proposed a listener-adaptive functionalist account in line with the PRH.
Synopsis.
This paper presents novel results on durational reduction of repetitions that corroborate a non-functionalist mechanistic account of reduction mediated by priming. If durational reduction is the result of priming, we expect repetitional reduction to show a recency effect similar to the one found in semantic priming by Balota et al. In particular, we hypothesize that consecutive mentions of a word show more reduction for shorter latencies and asymptotically level out as the time lapse between the mentions increases. Since repetition is the strongest possible prime-target association, we expect that the sensitive time window showing reduction is much longer than for semantic priming and also that its magnitude is larger. Section 2. describes the corpus used in our study, overviews dataset preparation and introduces our terminology. In section 3. we test the recency effect for repetitional reduction. Section 4. contrasts reduction in self-and cross-speaker repetitions in dialogue to provide further argument against the listener-adaptive account of repetitional shortening.
Materials and method
We present results on the Edinburgh Maptask Corpus (Maptask), a collection of spontaneous dialogue transcribed and aligned at the word level. The Maptask Corpus contains 128 dialogues totalling 14.5 hours of speech. The corpus was cleaned of fillers, pauses, fragmentary utterances and overlapping speech in dialogue. Mentions of the same word type w (based on orthographic identity) within a dialogue define a mention chain, w_1, w_2, ..., w_n. Datapoints in our initial dataset are repetitions, i.e., pairs of consecutive mentions in mention chains, (w_i, w_{i+1}). Repetitions are indexed for their position in the mention chain, e.g., (w_i, w_{i+1}) has position index i. We extracted start and end times and word durations. In addition to this, for each mention pair we record durational reduction and latency. Durational reduction is measured directly as the duration difference between the later and earlier mention of the pair in milliseconds; this gives an intuitive scale where a value for repetitional shortening is smaller than for lengthening. The latency of a repetition is defined as the time lapse from the end of the earlier mention to the onset of the later mention in seconds. More precisely, latency for the pair (w_i, w_{i+1}) is end(w_i) − start(w_{i+1}) (with range (−∞, 0]), which yields a mnemonic measure where repetitions with recent earlier mentions are to the right while ones with long-lapsed earlier mentions are to the left. We also recorded token frequency of the word type based on occurrences within the corpus. The resulting database contains some 100,000 mention pairs with 1,000 word types. Mentions were also tagged for speaker, which allowed us to classify repetitions as self-repetitions if the utterers of the two consecutive mentions were identical or cross-speaker repetitions in case they were different. Approximately two-thirds of all mention pairs are self-repetitions.
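The construction of this dataset can be sketched as follows. The record layout and field names (word, start, end, speaker) are assumptions about the word-aligned transcripts, not the exact format of the Maptask annotations.

```python
from collections import defaultdict

def repetitions(tokens):
    """tokens: list of dicts with 'word', 'start', 'end' (seconds) and 'speaker'.
    Yields one record per pair of consecutive mentions of the same word type."""
    chains = defaultdict(list)
    for tok in tokens:
        chains[tok["word"].lower()].append(tok)
    for word, chain in chains.items():
        for i in range(len(chain) - 1):
            earlier, later = chain[i], chain[i + 1]
            dur_e = earlier["end"] - earlier["start"]
            dur_l = later["end"] - later["start"]
            yield {
                "word": word,
                "position": i + 1,                           # index in the mention chain
                "reduction": (dur_l - dur_e) * 1000.0,       # ms; negative = shortening
                "latency": earlier["end"] - later["start"],  # <= 0; closer to 0 = more recent
                "self_repetition": earlier["speaker"] == later["speaker"],
            }
```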
Latency
Fowler and Housum's earlier finding that second mentions are shorter was confirmed. A paired t-test directly comparing durations of earlier and later mentions was highly significant (see Table 1). An alternative hypothesis would be that repeated mentions are reduced because they start later in the discourse. A control test was performed where a later mention was paired with a first mention of the same word in another dialogue with a matching onset (it starts at the same time in their respective dialogue). A paired t-test showed that repeated mentions are significantly shorter than onset-matched first mentions, ruling out the possibility that the durational difference between consecutive mentions is solely an artefact of different amounts of preceding discourse. The priming account of durational reduction predicts a recency effect of reduction: the magnitude of durational reduction is larger for short latencies between the consecutive mentions. Latency (-log scale) shows a highly significant negative correlation with reduction (r = −0.06, t = −19.5, df = 102602, p ≈ 0). Fig 1 demonstrates that the temporal relationship is monotonic and near linear for latency quantiles (which almost perfectly align with log latency). Reduction is attested even in the latency range of 50-60 seconds and asymptotically levels out at longer latencies. In order to quantify the predictive power of these factors, a linear model was run with reduction as the response variable and latency (log), token frequency (log), onset, position index (log) and speaker as independent variables. ANOVA shows the main effect of latency as highly significant (F = 558.32, p ≈ 0). Frequent words tend to reduce less than infrequent words, and there is a significant interaction between frequency and latency (F = 23.98, p ≈ 0) suggesting a floor effect: frequent words are typically short and cannot reduce as much.
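The analysis above can be approximated with standard Python tooling. The sketch below uses synthetic data as a stand-in for the repetition records of the earlier sketch, and the exact model specification (transformations, interactions) is our guess rather than the one used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# toy stand-in for the repetition records built from the corpus
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "latency": -rng.uniform(0.5, 120.0, n),      # seconds, <= 0 by definition
    "reduction": rng.normal(-20.0, 40.0, n),     # ms, negative = shortening
    "word": rng.choice(["right", "left", "castle"], n),
    "position": rng.integers(1, 10, n),
})
df["log_latency"] = np.log(-df["latency"])
df["log_freq"] = np.log(df.groupby("word")["word"].transform("count"))
df["log_position"] = np.log(df["position"])

r, p = stats.pearsonr(df["log_latency"], df["reduction"])            # correlation test
model = smf.ols("reduction ~ log_latency * log_freq + log_position", data=df).fit()
print(round(r, 3), round(p, 3))
print(model.params)
```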
The magnitude of reduction also decreases steadily with position index (see Fig 1). Note that for higher position indexes we actually see lengthening for longer latencies. The higher the position index, the shorter the expected average latency, the more likely the earlier mention will be reduced due to a short-latency prime before. These results are compatible with the PRH since we expect predictability of a word to decrease as its previous mention is farther behind in the discourse. The temporal profile of reduction, however, is reminiscent of the laboratory findings of the Balota et al study and raises the possibility that we are dealing with a recency effect in repetitional priming. Since repetition serves as the most robust prime-target association possible, the sensitive range of latencies is longer than for semantic priming and the magnitude of reduction is larger.
Figure 1: Reduction (more reduced is down) as a function of latency (more recent on the right). Duration difference between consecutive mentions is averaged over 6 latency quantiles and plotted for 4 different frequency groups (top) and 3 mention quantiles (bottom) in the Maptask Corpus.
repetitions with high position index and long latency. A priming account, however, allows us to link such low-level temporal dependency of reduction to known recency effects on facilitation.
We found similar effects with word type: content words reduce more than function words for the same latencies and frequency; however, word length is still a confound. This possibility is eliminated by using normalized measures of reduction for the two corpora specifying alignment at the phoneme level, which show exactly the same interactions.
Self-repetitions
In a second experiment we compared durational reduction of self-repetitions and cross-speaker repetitions. According to listener-adaptive accounts, durational reduction depends on the redundancy of a word given the speaker's model of the listener. Speakers cannot be sure that the listener processed a word, whereas listeners are expected to update their speaker's model upon successful comprehension of the message. This implies that the redundancy of a word is less influenced by an earlier mention when the speaker is also the utterer of the earlier mention than when she is the listener interpreting it. According to this self-repetitions are to be equally or less reduced than cross-speaker ones. More mechanistic views of dialogue assume a strict parity of representations which predicts identical activations on all levels in production and comprehension (0). This approach predicts no difference between self-and cross-speaker repetitions.
A priming account allows that comprehension and production drive different processes so representational parity does not imply quantitatively identical degrees of activation. We hypothesize that the actual execution of motor codes in production channels more activation to articulatory representations than comprehension processes do. More activation in production would predict larger facilitation of retrieval with self-repetitions and thus more durational reduction.
In order to compare these competing hypotheses, we compiled a dataset out of first-second mentions where self-repetitions and cross-speaker repetitions were paired up and were matched for word type and log latency. A paired t-test shows that self-repetitions are significantly more reduced than cross-speaker repetitions (see Table 2). The contrast in reduction between the speaker identity conditions is more robust for short latencies, levelling out at around 10-15 seconds. Overall, our results are different from a similar experiment by Bard et al (0) who found no difference in reduction between self- and cross-speaker repetitions. We conjecture that this may be because repetition latency was not controlled for in their study; the lack of difference may be an artefact of shorter cross-speaker latencies or predominantly long latencies where the contrast is neutralized. More reduction of self-repetitions contradicts a purely listener-adaptive account. Moreover, it does not support mechanistic views of discourse which assume a strict parity in the use of representations in production and comprehension. In particular, it suggests that the motor theory of speech perception (0) needs to be refined, at least allowing for longer-lasting activation of articulatory representations during production than in perception.
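A rough sketch of the pairing and test: self- and cross-speaker repetitions of the same word type are matched on log latency, and the matched reductions are compared with a paired t-test. The greedy matching strategy and the tolerance value are our own assumptions for illustration.

```python
import numpy as np
from scipy import stats

def match_pairs(self_reps, cross_reps, tol=0.1):
    """Greedily pair self- and cross-speaker repetitions of the same word type
    whose log latencies differ by at most `tol` (records as in the earlier sketch)."""
    pairs, used = [], set()
    for s in self_reps:
        for j, c in enumerate(cross_reps):
            if j in used or c["word"] != s["word"]:
                continue
            if abs(np.log(-s["latency"] + 1e-3) - np.log(-c["latency"] + 1e-3)) <= tol:
                pairs.append((s["reduction"], c["reduction"]))
                used.add(j)
                break
    return pairs

# paired t-test on the matched reductions:
# self_red, cross_red = zip(*match_pairs(self_reps, cross_reps))
# t, p = stats.ttest_rel(self_red, cross_red)
```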
Conclusion
This paper presented two novel results about the durational reduction of repeated mentions: (i) Latency between consecutive mentions is inversely proportional to the magnitude of repetitional shortening. (ii) Self-repetitions are more reduced than cross-speaker repetitions at short latencies. Both results falsify a purely listener-adaptive account of reduction but are compatible with a priming account: (i) is explained by a recency effect in repetitional priming while (ii) is explained by assuming a higher degree (or longer decay) of activation of articulatory representations in production relative to perception. In sum, the results corroborate the hypothesis that non-lexical aspects of durational variation are modulated by the speed of retrieval of motor code chunks in speech production. This proposal explains durational reduction as a result of facilitatory priming and provides the causal link between redundancy and reduction stated descriptively in the PRH.
Since the temporal aspects of articulation can even be taken under conscious control, we do not dispel the possibility that inferring listener needs from common ground can prompt listener-adaptive choice in phonetic realization. However, we second the literature suggesting a limited, resource-dependent role for this computationally costly process of explicit other-modelling. As long as more mechanistic explanations of articulatory reduction are feasible, there is no need to invoke listener modelling at the phonetic end of speech production.
Table 2: Durational reduction in self- and cross-speaker repetitions.
D. A. Balota, J. E. Boland, and L. W. Shields. 1989. Priming in pronunciation: Beyond pattern recognition and onset latency. Journal of Memory and Language, 28, 14-36.
E. G. Bard, A. Anderson, C. Sotillo, M. Aylett, G. Doherty-Sneddon, and A. Newlands. 2000. Controlling the intelligibility of referring expressions in dialogue. Journal of Memory and Language, 42, 1-22.
J. L. Bybee. 2001. Phonology and Language Use, volume 94 of Cambridge Studies in Linguistics. Cambridge: Cambridge University Press.
C. Fowler. 1988. Differential shortening of repeated content words produced in various communicative contexts. Language and Speech, 28, 47-56.
C. Fowler and J. Housum. 1987. Talkers' signaling of 'new' and 'old' words in speech and listeners' perception and use of the distinction. Journal of Memory and Language, 26, 489-504.
W. S. Horton and B. Keysar. 1996. When do speakers take into account common ground. Cognition, 59, 91-117.
D. Jurafsky, A. Bell, M. Gregory, and W. D. Raymond. 2001. Probabilistic relations between words: evidence from reduction in lexical production. In J. Bybee and P. Hopper (eds), Frequency and the Emergence of Linguistic Structure, number 45 in Typological Studies in Language, pages 229-253. John Benjamins.
A. M. Liberman and I. G. Mattingly. 1985. The motor theory of speech perception revised. Cognition, 21, 1-36.
B. Lindblom. 1990. Explaining variation: a sketch of the H and H theory. In W. J. Hardcastle and A. Marchal (eds), Speech Production and Speech Modelling, pages 403-439. Dordrecht, Netherlands: Kluwer.
M. Pickering and S. Garrod. 2004. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27(02), 169-190.
C. Wright. 1979. Duration differences between rare and common words and their implications for the interpretation of word frequency effects. Memory and Cognition, 7(6), 411-419.
18,957,362 | An Experimental Methodology for an End-to-End Evaluation in Speech-to-Speech Translation | This paper describes the evaluation methodology used to evaluate the TC-STAR speech-to-speech translation (SST) system and their results from the third year of the project. It follows the results presented in , dealing with the first end-to-end evaluation of the project. In this paper, we try to experiment with the methodology and the protocol during the second end-to-end evaluation, by comparing outputs from the TC-STAR system with interpreters from the European parliament. For this purpose, we test different criteria of evaluation and type of questions within a comprehension test. The results reveal that interpreters do not translate all the information (as opposed to the automatic system), but the quality of SST is still far from that of human translation. The experimental comprehension test used provides new information to study the quality of automatic systems, but without settling the issue of what protocol is best. This depends on what the evaluator wants to know about the SST: either to have a subjective end-user evaluation or a more objective one. | [
18967544
] | An Experimental Methodology for an End-to-End Evaluation in Speech-to-Speech Translation
Olivier Hamon hamon@elda.org
) Evaluation and Language Resources Distribution Agency (ELDA)
55-57 rue Brillat-Savarin, 75013 Paris, France
Laboratoire d'Informatique de Paris-Nord (UMR 7030)
Université Paris 13
CNRS, 99 av. J.-B. Clément
93430 Villetaneuse, France
Djamel Mostefa mostefa@elda.org
) Evaluation and Language Resources Distribution Agency (ELDA)
55-57 rue Brillat-Savarin, 75013 Paris, France
An Experimental Methodology for an End-to-End Evaluation in Speech-to-Speech Translation
This paper describes the evaluation methodology used to evaluate the TC-STAR speech-to-speech translation (SST) system and their results from the third year of the project. It follows the results presented in , dealing with the first end-to-end evaluation of the project. In this paper, we try to experiment with the methodology and the protocol during the second end-to-end evaluation, by comparing outputs from the TC-STAR system with interpreters from the European parliament. For this purpose, we test different criteria of evaluation and type of questions within a comprehension test. The results reveal that interpreters do not translate all the information (as opposed to the automatic system), but the quality of SST is still far from that of human translation. The experimental comprehension test used provides new information to study the quality of automatic systems, but without settling the issue of what protocol is best. This depends on what the evaluator wants to know about the SST: either to have a subjective end-user evaluation or a more objective one.
Introduction
A Speech-to-Speech Translation (SST) system is composed of an Automatic Speech Recognizer (ASR) chained to a Spoken Language Translation (SLT) module and to a Text-To-Speech (TTS) component in order to produce the speech in the target language. In TC-STAR (http://www.tc-star.org), evaluations of individual components (ASR, SLT and TTS) are carried out and their performance is measured with methodologies and metrics specific to each component. Here, we focus on the evaluation of the SST as a whole by comparing the SST output speech with human interpreter speech. We first give a description of the tasks and languages, then recall the evaluation protocol, the parts of the methodology we modified, and how we set up the experiment. Finally, we present part of the results obtained by the TC-STAR system and compare them to the human interpreters' results.
Tasks and Languages
For this second end-to-end evaluation of the TC-STAR project, we adopted the general features of the first end-to-end evaluation. Only the English-to-Spanish direction was considered, automatic systems being applied to data from audio recordings in English of the European Parliament Plenary Sessions (EPPS). The raw resources consist of 20 audio recordings of around three minutes each, from the parliamentary debates in English dating from June and July 2006. The total adds up to one hour of speech, namely around 8,000 words. Professional interpreters from the European Parliament produce oral human translation in several European languages including Spanish. Translations are done in real time, which allows us to evaluate human translation in the same way as automatic translation in order to compare the automatic and human speech translation performance. For this purpose, meaning preservation is checked between the audio input, in English, and the audio output, in Spanish. For this second evaluation, the evaluated TC-STAR system includes the following modules:
- The ASR module made of a combination of several ASR engines (Lamel et al., 2006), using the Recognizer Output Voting Error Reduction (ROVER) method (Fiscus, 1997);
- The SLT module made of a combination of several SLT engines, as a ROVER (Matusov et al., 2006);
- The TTS module developed by UPC (Bonafonte et al., 2006).
Therefore, if we exclude the transit from one module to another, the system is fully automatic: no manual modifications are done on the outputs of the modules. For each audio sample in English, an automatic transcription is produced by several ASR systems and an ASR ROVER output is built up. This ASR output is automatically translated into Spanish by several SLT systems and a SLT ROVER output is also built up. Finally, the SLT output is synthesized in Spanish by the TTS module.
Protocol
In this experiment, we kept the same protocol as that used for the first end-to-end evaluation, with a few exceptions in order to experiment with new methods.
The concepts of adequacy and fluency are borrowed from machine translation evaluation (White et al., 1994) and are calculated over a five-point scale filled in by several judges. We only change the content of the questions. In our case, we decided to select 20 judges who were not familiar with the speech-to-speech translation domain. They were native Spanish speakers and were able to do the subjective tests online, through a Web interface. In order to process the evaluation, we extracted 20 samples containing around three minutes of English speech each. Each sample is a monologue. The objective of this evaluation is twofold: on the one hand, we want to look at how much of the meaning is preserved and, on the other hand, we want to estimate the quality of the audio output. Thus, we decided to ask questions built on the English speeches in order to work on what the speaker would mean (and, for instance, not what the interpreter understood and reformulated). These questions are first translated into Spanish and then, the translated questions are asked to human judges in order to observe the information loss or preservation in the target speech, in Spanish.
Using this protocol, the evaluation is carried out in three steps:
- First, a questionnaire is established for each English sample and then translated into Spanish;
- Then, judges assess the Spanish samples according to the evaluation protocol described below;
- Finally, subjective evaluation results (answers given by judges) are checked by a single person.
We also try to compare the TC-STAR system with the professional interpreters, and to do that correctly, judges were not informed about the presence of audio data from interpreters in the evaluation. Judges act like end-users, inasmuch as we aim at observing to what extent the information is preserved and whether the quality is sufficient. Thus, each judge receives four audio samples to evaluate: two from the TC-STAR system and two others from the interpreters. So, distribution is fair and audio samples are presented anonymously and distributed randomly among the judges.
With 40 audio samples (20 from the TC-STAR system and 20 from the interpreters) and 20 judges, each audio sample is evaluated twice: this helps to observe the agreement between judges, and most of all, it permits to compute a mean between judgments, in case some judges are mistaken.
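One way to realise this balanced, anonymised distribution is sketched below. This is our own construction, assuming 20 samples per condition and 20 judges as in the setup described above; it is not the actual assignment procedure used in the evaluation.

```python
import random

def assign_samples(system_ids, interp_ids, n_judges=20, seed=0):
    """Give each judge 2 system + 2 interpreter samples in random order,
    so that every sample is evaluated by exactly two different judges
    (assumes len(system_ids) == len(interp_ids) == n_judges)."""
    rng = random.Random(seed)
    sys_order = list(system_ids); rng.shuffle(sys_order)
    int_order = list(interp_ids); rng.shuffle(int_order)
    half = n_judges // 2
    assignments = {}
    for j in range(n_judges):
        batch = [sys_order[j % len(sys_order)],
                 sys_order[(j + half) % len(sys_order)],
                 int_order[j % len(int_order)],
                 int_order[(j + half) % len(int_order)]]
        rng.shuffle(batch)   # samples are presented anonymously, in mixed order
        assignments[f"judge_{j + 1}"] = batch
    return assignments

plan = assign_samples([f"sys_{i}" for i in range(20)], [f"int_{i}" for i in range(20)])
```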
Adequacy Evaluation
Adequacy evaluation is a comprehension test on potential users which allows estimating the rate of intelligibility of the audio outputs. The level of adequacy is computed as the rate of answers that are found by the judges, for each audio they assessed. The final objective of the adequacy evaluation is to determine whether the meaning is preserved or not.
In order to check this meaning preservation, we prepare a comprehension questionnaire of 10 questions for each sample. The manual transcriptions of the source English speech are used to prepare the 10 questions set per sample. We hold onto the answers to all 200 questions and use them as "reference answers". It means those reference answers are used as a gold standard when answers drawn up by the judges are checked manually. Then, all questions and answers are translated into Spanish. For this evaluation, we tried to classify questions into three categories, partly coming from information retrieval (Voorhees and Dang, 2005): Factoid (70% of questions), Boolean (20%) and List (10%). This could determine the quality of the system according to the type of question. Finally, after being translated, questions and audio files are put into the interface and judges can start the evaluations. First, they are informed about the evaluation procedure and its context. Then, they can listen to each of their assigned audio files and answer the respective questions.
They are not informed about the provenance of the audio (i.e. either from the TC-STAR system or from one interpreter). Once all questionnaires have been filled out by judges, a single assessor checks all the answers manually, looking at whether they are correct or not. To that end, the assessor, who is a Spanish native speaker, uses the reference answers and compares them to the answers provided by judges. He then gives scores to each answer, according to the following criteria (the values given for the scoring are provided between brackets):
- Wrong (0): the answer is not correct;
- Incomplete (1): the answer is not perfect, but could be considered as good;
- Right (2): the answer is most certainly correct.
We were inspired by criteria which are widely used in the evaluation of systems from the information retrieval domain (Magnini et al., 2004). However, to be more consistent with the previous end-to-end evaluation, we split the Right and Incomplete criteria to obtain two criteria of assessment 2 . Then, after presenting the corresponding results, we study the behaviors when three criteria are used. This part of the protocol differs slightly from the previous evaluation, for which two criteria (Right or Wrong) were used. We decided to revise the method of assessment to be able to be stricter with the answers of the judges. Finally, when all the answers are assessed, the adequacy score (i.e. the meaning preservation) is computed by audio, then by output.
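A small sketch of how the adequacy score can be computed from the assessor's judgements, under both a strict reading (only Right counts as correct) and a lenient one (Right or Incomplete), which is the contrast discussed in the results below. The record format is an assumption.

```python
def adequacy_scores(assessments):
    """assessments: list of dicts with 'audio_id' and 'score' in {0: Wrong, 1: Incomplete, 2: Right}.
    Returns per-audio adequacy under a strict (Right only) and a lenient
    (Right or Incomplete) reading of the three criteria."""
    per_audio = {}
    for a in assessments:
        per_audio.setdefault(a["audio_id"], []).append(a["score"])
    results = {}
    for audio_id, scores in per_audio.items():
        n = len(scores)
        results[audio_id] = {
            "strict": sum(s == 2 for s in scores) / n,
            "lenient": sum(s >= 1 for s in scores) / n,
        }
    return results

print(adequacy_scores([{"audio_id": "S1", "score": 2},
                       {"audio_id": "S1", "score": 1},
                       {"audio_id": "S1", "score": 0}]))
# -> strict 1/3, lenient 2/3 for sample S1
```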
Fluency Evaluation
Further to the meaning comprehension test, we carried out a quality test. This fluency test is more subjective and several questions related to features such as the quality of the audio or the utility of the output are asked to the judges. Each fluency score is the mean of a five-point scale answer. For each sample and after each adequacy judgment, judges are asked to fill in a fluency questionnaire. They have to rate the sample they have just listened to according to the four fluency questions shown in Table 2. A five-point scale is provided for each question. Only extreme marks (1 and 5) are explicitly defined, ranging from the lowest level (1) to the highest (5). Questions have been slightly modified between the previous experiment and this one, for the Fluent Speech and the Overall Quality criteria, in order to improve the inter-judge agreement. This has been done following the comments and agreement scores of the judges from the previous evaluation. In this way, two questions have been simplified; they previously read:
- For the Fluent Speech criterion: Is the system fluent?
- For the Overall Quality criterion: Rate the overall quality of this translation system.
Here, the notion of "system" disappears and the interest in the audio output is reinforced. Finally, when all the samples have been rated by all the judges, the average values of each fluency rate are computed for both interpreters and TC-STAR system outputs. Scores can then be compared.
Results
After all the samples have been listened to, answered and rated, answers are checked by the assessor and validated or not, according to the three criteria of assessment. Then, the final results are computed for each output: we obtain scores for the adequacy and for the four fluency judgments. We first give the overall results using two criteria then we observe the results when three criteria are used in a specific section.
Judges Agreement
Each sample is evaluated twice by two different judges, so we can compute the inter-judge agreement. In general, judges give similar answers: 75% of the 400 questions get the same assessment. This means that 25% of the questions raise problems, but some of them were not easy to answer. The agreement is slightly higher when judges answer questions from the interpreter samples (79%) than from the TC-STAR system (71%). Finally, we observe that agreement is quite the same as in the previous experiment, which achieved 77% agreement between judges. For fluency, the agreement is quite low: 30% for understanding, 52% for fluent speech, 35% for effort and 45% for overall quality. However, it corresponds to the state of the art and agreements are better than in the previous experiment (15% for understanding and above 30% for the other criteria). However, what is more interesting is the 1-agreement, the ratio of scores that did not differ by more than 1 unit between the evaluation from the first judge and the evaluation from the second one. Judges provide similar rates about the quality of the samples, whether it is on the TC-STAR system or the interpreter ones. However, the effort criterion still causes problems, especially for the interpreters' samples: effort 1-agreement is low compared to the 1-agreement of the other criteria. That is probably due to differences in perception between judges, linked to the difficulty for interpreters to speak both smoothly and quickly because of the real-time translation constraints.
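The exact agreement and the 1-agreement reported here can be computed as in the short sketch below; the pairing of the two judgements per item is assumed to be given.

```python
def agreement(pairs, within=0):
    """pairs: list of (rating_judge1, rating_judge2) for the same item.
    within=0 gives exact agreement; within=1 gives the '1-agreement'
    (ratings differing by at most one unit)."""
    same = sum(abs(a - b) <= within for a, b in pairs)
    return same / len(pairs)

fluency_pairs = [(4, 4), (3, 5), (2, 3), (5, 4)]
print(agreement(fluency_pairs))             # exact agreement: 0.25
print(agreement(fluency_pairs, within=1))   # 1-agreement: 0.75
```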
Adequacy
Overall Results
In order to compare with the previous end-to-end evaluation, we propose the results as if there were still two criteria, considering the Incomplete criterion as being Right. This corresponds to the definition we had for the previous evaluation, which was less strict about the correctness of the answers. Table 4 presents the adequacy results for the interpreter and TC-STAR system speeches, indicating:
- The subjective results of the end-to-end evaluation ("Subj." column) done by the judges and checked by the assessor;
- An objective verification of the presence of the answers in each component ("Obj.", "SLT output" and "ASR output" columns), in order to determine in which component of the TC-STAR system the information is lost. To do that, individual outputs from each component (recognition output from ASR, translated output from SLT, synthesized audio from TTS, corresponding to the "Obj." column, and speaker audio) are checked by the assessor.
Table 4: Adequacy results for the interpreter and TC-STAR system speeches.

Audio Output System   Subj.      Obj.       SLT Output   ASR Output
Interp.               74% (50)   91% (72)   -            -
TC-STAR               64% (58)   89% (83)   92% (83)     97% (91)

Regardless of the type of evaluation (whether subjective or objective), interpreters' speeches obtain higher results than TC-STAR system speeches. Only 9% of the information has not been translated by the interpreters. The difference between subjective and objective evaluations is quite strong (but similar for both TC-STAR system and interpreters): judges did not find 25% of the information for the TC-STAR system and 27% for the interpreters' speeches.
In the same way, we can see that 3% is lost by the ASR module, 5% by the SLT module and 3% by the TTS module. It seems that some questions were difficult to answer out of context.
As in , we decided to compare the TC-STAR system and the interpreters in a fair manner by only selecting questions for which answers are in interpreters' samples and corresponding to the objective evaluation.
We make the assumption that interpreters select important information because of their hard task of real-time oral translation. We then get a new subset of 182 questions, for which information has been kept by the interpreters. As with earlier experiments, the outcome of the study is presented in Table 5.
If we consider interpreters' translation as perfect (100% of the questions could be answered), then the TC-STAR system obtains rather good results.
Table 5: Adequacy results restricted to the questions whose answers are present in the interpreters' speeches.

Audio Output System   Subj.      Obj.       SLT Output   ASR Output
Interp.               80% (67)   100%       -            -
TC-STAR               66% (63)   91% (86)   93% (86)     97% (95)

In any case, results seem to be lower than for the previous evaluation and actual scores of interpreters' quality. The subjective loss is really deep for the TC-STAR system: judges do not find the information in the translated speech easily. Finally, it is the SLT module that loses the most information, and interestingly enough, the TTS module also loses information and quality decreases.
Comparison with the Previous Experiment
The comparison is only indicative, since questions and answers are not the same for both evaluations and data is checked in different contexts. Nevertheless, it gives an idea of system improvement. Globally, results seem to be better in this evaluation than in the previous one. But that should be put into context, since interpreters also get better results. This is mainly explained by the increase in performance of the TC-STAR system but also by the fact that questionnaires seem to be less difficult for this experiment. In fact, the improvement of the TC-STAR system is not so good. While the improvement on the interpreters' data is 48% absolute for the subjective evaluation (and 36% for the objective one), it is 10% for the TC-STAR system (and 7% for the objective one). So, even if scores are better, we cannot say the TC-STAR system improved for this second end-to-end evaluation, since the improvement is weaker than for the interpreters. That could be due to either the use of the SLT ROVER or the change of topics and context of data.
4.2.3 Criteria of Answers Assessment
After the general results presented above, we report the results according to the three new criteria (Wrong, Incomplete and Right) for the meaning comprehension test, and we examine the differences and the utility of the new method. In order not to lose information, we did not combine answers given by two different judges on the same questions. For both outputs, Incomplete answers obtain the same rate, around 15%, which represents a rather small proportion of the answers. In any case, decomposing the assessments into three criteria gives more accurate information about system performance. Taking (R+I) assessments as correct answers, performance is acceptable, while taking only (R) assessments, performance is quite low. That makes a strong difference in the perception of the results. However, the (R+I) results are closer to the objective evaluations than the (R) ones. It means that combining the (R) and (I) criteria corresponds to a better assessment (i.e. a better way of determining the quality of a speech-to-speech system output).
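A small sketch of how the strict (R) and lenient (R+I) rates follow from the raw answer counts given in Table 6 (the layout below is ours; Table 6 rounds to whole percentages):

```python
# Answer counts per assessment criterion (Table 6); 400 answers per system
# (200 questions x 2 judges).
counts = {
    "Interp.": {"R": 236, "I": 58, "W": 106},
    "Tc-star": {"R": 197, "I": 59, "W": 144},
}

for system, c in counts.items():
    total = sum(c.values())                 # 400 for both systems
    strict = c["R"] / total                 # only Right answers count as correct
    lenient = (c["R"] + c["I"]) / total     # Right + Incomplete count as correct
    print(f"{system}: strict {strict:.1%}, lenient (R+I) {lenient:.1%}")
# -> Interp.: strict 59.0%, lenient 73.5%; Tc-star: strict 49.2%, lenient 64.0%
```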
It also means that we should not be extremely strict when assessing the answers. As shown by the judges' agreement, errors and doubts may very well occur when judges answer the questions. In any case, this is also the aim of the end-to-end evaluation, and the difficulty of subjective judgments: we would like to know the quality of the outputs from an end-user point of view. That is probably what is most interesting, since it tests the usability of the system.
4.2.4 Type of Questions
When the questions were built, we tried to respect the proportion of Factoid, List and Boolean questions. At the end of the process, there were, for the subjective evaluation, 144 Factoid questions (127 for the objective evaluation), 15 List questions (14 for the objective evaluation) and 41 Boolean questions (41 for the objective evaluation). Thus, Factoid questions are especially truncated when the selection is made for the objective evaluation. That would mean that their respective contexts are not dealt with in the same way by the interpreters. This is understandable as regards the List questions: when an interpreter hears something like an enumeration, he pays attention to translating each point of the enumeration correctly, because these are, in general, particular and important points of the discourse. For instance, in the part of the sentence:
[…] as they have been in their condemnation of racism, xenophobia, anti-Semitism, homophobia and indeed other hate speech and hate crimes
the focus is on the enumeration made by the speaker, and the sentence would lose its consistency without these terms. What is particularly odd is the difference in the handling of Factoid and Boolean questions. All the Boolean questions are selected for the objective evaluation, whereas more than 10% of the Factoid questions are not used for it. A priori, the decrease should be the same. The single hypothesis we have for this phenomenon is that Factoid questions require exact and detailed answers. On the contrary, Boolean questions already contain details in the question, and their answers can then be detected in the output easily. Results according to each type of question are presented in Table 7. The "Subj./Obj." criteria indicate whether the evaluation is made by judges and checked by the assessor, or directly made by the assessor with the help of the reference answers. The "Fair/Unfair" criteria indicate whether or not the evaluation is restricted to the selected questions for which answers may be present in the interpreter output. Boolean questions are easier to answer than Factoid and List ones, as we could expect, since Boolean questions contain more information. Moreover, the interpreters' scores are higher than the TC-STAR system's, and this is not really surprising. The gap between the interpreters' results and the TC-STAR system's results increases when the evaluation is Fair instead of Unfair, whatever the type of question: it shows the real difference between the systems. But this gap is reduced when the evaluation is objective rather than subjective. This last point reveals how important the evaluation made by judges is: their perception and comprehension of the information remain different from the "real quality" of a system.
Fluency Results
For the interpreters, at first sight, the scores are good and the averages are above 3 points for all the fluency questions, but the results are not as good as one may expect. This is explained by the working conditions of the interpreters, who have to translate in real time. As we noted in the previous experiment, there are some noises (background recordings, speaker's noises, etc.) and contexts (speaker hesitations) which make it difficult to understand and follow the speaker. For the TC-STAR system, the quality is much lower than the interpreters'. Even if Understanding is slightly higher, the audio quality is constraining for the judges, as reflected in particular by the Effort of listening. Actually, the interpreters fail with regard to Effort and Understanding, while the TC-STAR system fails with regard to Effort, but also to Fluent Speech and Overall Quality. That corresponds to the results of the previous evaluation too. In the same way, all scores for both the interpreters and the TC-STAR system are higher than those of the previous evaluation.
Data Analysis
Many analyses can be carried out on both the interpreter and TC-STAR speeches, with respect to the Adequacy or the Fluency criteria. Indeed, most of the errors may reduce quality. We try here to outline issues from both kinds of speech, in addition to those already found in the previous evaluation.
In many audio outputs, the interpreters hesitate and repeat themselves. That is probably due to the delivery of the speakers: there is no feedback while they talk, and most of the time the speech is fast, since speakers have a short time to utter it. Interpreters have difficulties following the speakers and therefore give fewer details. So interpreters are de facto forced to select information and, inevitably, they restrict the comprehension of the topic and disturb the listener (a judge in our case). For instance, a speaker gives many details in his speech while speaking quickly: as a consequence, the interpreter limits the translation and does not provide any details, in order to resume the translated speech at a "quieter" moment (i.e. at the end of the speech or when the speaker breathes/makes a pause).
In the same way, some questions are general and the absence of major details prevents the judge from answering them. For instance, in one audio sample, a speaker talks about information published in the German newspaper "Der Spiegel". But the interpreter simply omits the name of the newspaper, even though the rest of the information is translated. Judges therefore could not answer the (otherwise informative) question (in English) "Which main German newspaper published a report denying the link between the World Cup and an increase in trafficking and forced prostitution?", since there was no link with the audio output from which to find the corresponding information.
Interpreters also interpolate or reformulate the source coming from the speaker. In the same way, they summarize the speech. For instance, we found in one audio output five sentences from the speaker summarized into two sentences by the interpreter.
We also found cases in which an interpreter has to wait for the end of the speaker's sentence (in English) to be able to translate it into Spanish, otherwise it is not possible to understand it. As a consequence, the quality of the current sentence is lower (the interpreter has to speed up), but above all, the next sentence is also damaged, since the speaker continues to speak during the translation, and so on.
An interesting point concerns the impact of the prosody of the TC-STAR system on the comprehension of the output speech. That is probably one of the most surprising facts, the TTS output being (normally) the exact synthesis of the SLT output and the TTS systems obtaining good results for synthesis. Actually, the explanation is rather simple and is due, in part, to the quality of the translation. Indeed, when the quality of the SLT output is quite low, the prosody breaks the flow and the output speech is less understandable. When looking at the context, the translation is still understandable when read. However, the problem arises when the sentence is synthesized and thus listened to. In one example, the TTS module stops its utterance just before the verb "comprobar" (i.e. "to check"), which introduces a long pause and gives the impression of starting a completely new sentence afterwards, thus dissociating the subject from the verb (which is in fact in the infinitive) and completely misleading and even confusing the listener/judge. As a consequence, the question ("In which condition would Member States examine thoroughly a financial services company?") requires an answer which is, theoretically, in between the two sentences "perceived" by the judge, and thus he cannot find the information he is looking for unless he makes some "strange" deduction. Moreover, even though the assessor had the reference answer in front of him, he decided to mark the answer as "impossible to answer" with respect to the audio, which also implies that the objective assessments differ between the SLT and TTS scores. In this particular case, the TTS score is lower than the SLT one.
Another case that shows a typical error attributable to the TTS module is the following: named entities are not always well synthesized. For instance, in the translated sentence:
necesitamos acciones como Sophie Veld dicho de la Comisión y necesitamos actuar como han dicho muchos de la Presidencia finlandesa
translation of:
we need action as Sophie Veld said from the Commission and we need action as many have said from the Finnish Presidency
the name "Sophie Veld" is translated correctly and can easily be read in the text file, but the name is badly synthesized. Indeed, instead of the name, one hears something like "comoso biebeld" with a small distortion right in the middle:
- the word "como" is combined with the phoneme "so", the beginning of the name "Sophie", and a pause is inserted between the two created "words";
- the phoneme "ph" is synthesized as "b" (which may be due to the distortion);
- the "v" of "Veld" is pronounced "b", as in Spanish.
Since the answer of the question "Who said that we need action from the Commission and from the Finnish Presidency?" is the name "Sophie Veld", nobody managed to find the correct answer from the audio output, regardless of being a judge or an assessor listening to the audio.
Occasionally, judges make deductions/guesses from the translated speech and answer a question correctly. This is clearly the case when the topic is about quantities or general knowledge (or, sometimes, named entities). For instance, the Party of European Socialists Women is translated by the TC-STAR system as "BSE" instead of something like "PS" (and moreover, the TTS module could not synthesize "BSE" correctly). Even though the "P.S." acronym was not in the audio, the judges answered the question "How many signatures did P.S. Women collect for its petition in two months?" correctly, because the audio contains the sentence "la recopilación de más de veinte tres mil firmas en dos meses" (automatic translation of the sentence "we collected more than twenty-three thousand signatures in two months"). So judges managed to answer the question without the information on who collected the signatures.
In this regard, judges should be better informed about the evaluation task in order to avoid this kind of "under-evaluation".
Finally, and generally speaking, an objective validation still remains slightly subjective, and results should be interpreted carefully. Some questions may be ambiguous and, whichever output is observed (ASR, SLT or TTS), the quality of the answers is limited by the understanding of the speech or the text. This can be difficult, even with the reference answer available.
Conclusion
An evaluation of a speech-to-speech translation system has been presented. A methodology has been reused and modified in order to experiment with different methods of evaluation. Similar results have been obtained on a different data set, with different judges and different questionnaires. This allows us to conclude that we have performed a rather robust evaluation. The TC-STAR speech-to-speech system has been compared with interpreters of the European Parliament, demonstrating the gap between an automatic system and humans. However, it also shows that, in a certain context, people are able to understand the audio output of an automatic system and can answer questions about its meaning. Even if the audio quality is lower than one would wish, the translation of a politician's speech can be understood, at least in outline. The methodology of the Adequacy evaluation (the subjective part) has been studied in more detail, regarding the type of questions asked and the number of criteria for the assessment of answers. Moving from two to three assessment criteria shows different results and gives two different interpretations of them. However, the result is closer to the objective evaluation when two criteria are used. Moreover, studying the system according to the type of questions asked allows finding other sources of errors and helps to diagnose the output. The analysis of the end-to-end output is costly and becomes very time-consuming, since many parameters are involved, starting with the different modules. The advantage of the methodology proposed here is that it helps developers (among others) to diagnose issues related to an SST system.
Acknowledgments
This work was supported by the TC-STAR project (grant number IST-506738). We would like to thank Victoria Arranz and Fernando Villavicencio for their help. We are also very grateful to all the participating sites of the evaluation, who built the TC-STAR system, and to the human judges.
Understanding | Do you think that you have understood the message? | 1: Not at all / 5: Yes, absolutely
Fluent Speech | Is the speech in good Spanish? | 1: No, it is very bad! / 5: Yes, it is perfect Spanish
Effort | Rate the listening effort. | 1: Very high / 5: Low, as natural speech
Overall Quality | Rate the overall quality of this audio sample. | 1: Very bad, unusable / 5: It is very useful

Table 2: Fluency questionnaire.
Table 3 presents those scores.

System | Understanding | Fluent Speech | Effort | Overall Quality
Overall | 82.5% | 80% | 72.5% | 90%
Interp. | 85% | 85% | 60% | 85%
Tc-star | 80% | 75% | 85% | 95%

Table 3: Fluency 1-agreements.
Table 4: Adequacy results. Scores are shown in percentage, a score of 100% means all the answers are correct. Scores from the previous experiment are shown between brackets.

Table 5: Fair Adequacy results. Scores are shown in percentage, a score of 100% means all the answers are correct. Scores from the previous experiment are shown between brackets.
Table 6 presents statistics for answers assessed as Wrong (W), Incomplete (I) and Right (R), and the combinations of results. The overall set contains 800 answers (there are 200 questions for the interpreters and 200 questions for the TC-STAR system, each answered by two different judges).

System | (R) | (I) | (W) | (R+I) | (I+W)
Interp. | 236 (59%) | 58 (15%) | 106 (26%) | 294 (74%) | 164 (41%)
Tc-star | 197 (49%) | 59 (15%) | 144 (36%) | 256 (64%) | 203 (51%)

Table 6: Adequacy results for each criterion of assessment.
Table 7: Adequacy results for each type of question.
Table 8 presents the fluency results for the interpreter and the TC-STAR system samples and shows the results for the four fluency questions. A score of 1 means the speech is of bad quality while a score of 5 means the speech is good.

System | Understanding | Fluent Speech | Effort | Overall Quality
Interp. | 3.85 | 4.08 | 3.38 | 4.03
Tc-star | 2.43 | 2.03 | 1.63 | 2.05

Table 8: Fluency results.
We discuss in the next sections the way to split the results. The assumption is that an incomplete answer may be considered as correct and does not really indicate a problem of comprehension.
Bonafonte A., Agüero P., Adell J., Pérez J., Moreno A. (2006). Ogmios: The UPC Text-to-Speech Synthesis System for Spoken Translation. In Proceedings of the TC-STAR Workshop on Speech-to-Speech Translation, Barcelona, Spain, pp. 31-36.
Hamon O., Mostefa D., Choukri K. (2007). End-to-End Evaluation of a Speech-to-Speech Translation System in TC-STAR. In Proceedings of MT Summit XI, Copenhagen, Denmark.
Lamel L., Gauvain J.L., Adda G., Barras C., Bilinski E., Galibert O., Pujol A., Schwenk H., Zhu X. (2006). The LIMSI 2006 TC-STAR Transcription Systems. In Proceedings of the TC-STAR Workshop on Speech-to-Speech Translation, Barcelona, Spain, pp. 123-128.
Magnini B., Vallin A., Ayache C., Erbach G., Peñas A., De Rijke M., Rocha P., Simov K., Sutcliffe R. (2004). Overview of the CLEF 2004 Multilingual Question Answering Track. In Working Notes of the Workshop of CLEF 2004, Bath, 15-17 September 2004.
Matusov E., Ueffing N., Ney H. (2006). Automatic Sentence Segmentation and Punctuation Prediction for Spoken Language Translation. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT), Trento, Italy, pp. 158-165.
Mostefa D., Hamon O., Moreau N., Choukri K. (2007). Technological Showcase and End-to-End Evaluation Architecture, Technology and Corpora for Speech to Speech Translation (TC-STAR) project. Deliverable D30, May 2007.
Somers H. and Sugita Y. (2003). Evaluating Commercial Spoken Language Translation Software. In Proceedings of the Ninth Machine Translation Summit, pp. 370-377, New Orleans.
Voorhees E. and Dang H. (2005). Overview of the TREC 2005 Question Answering Track. In Proceedings of TREC 2005.
White J. S. and O'Connell T. A. (1994). The ARPA MT Evaluation Methodologies: Evolution, Lessons, and Future Approaches. In Proceedings of AMTA Conference, Columbia, MD, USA.
2,983,741 | Exploring Speech-Enabled Dialogue with the Galaxy Communicator Infrastructure | This demonstration will motivate some of the significant properties of the Galaxy Communicator Software Infrastructure and show how they support the goals of the DARPA Communicator program. | [] | Exploring Speech-Enabled Dialogue with the Galaxy Communicator Infrastructure
Samuel Bayer
The MITRE Corporation
202 Burlington Rd., Bedford, MA 01730
Christine Doran
The MITRE Corporation
202 Burlington Rd., Bedford, MA 01730
Bryan George bgeorge@mitre.org
The MITRE Corporation
11493 Sunset Hills Rd., Reston, VA 20190
Exploring Speech-Enabled Dialogue with the Galaxy Communicator Infrastructure
Spoken dialogue, speech interfaces
This demonstration will motivate some of the significant properties of the Galaxy Communicator Software Infrastructure and show how they support the goals of the DARPA Communicator program.
INTRODUCTION
The DARPA Communicator program [1], now in its second fiscal year, is intended to push the boundaries of speechenabled dialogue systems by enabling a freer interchange between human and machine. A crucial enabling technology for the DARPA Communicator program is the Galaxy Communicator software infrastructure (GCSI), which provides a common software platform for dialogue system development. This infrastructure was initially designed and constructed by MIT [2], and is now maintained and enhanced by the MITRE Corporation. This demonstration will motivate some of the significant properties of this infrastructure and show how they support the goals of the DARPA Communicator program.
HIGHLIGHTED PROPERTIES
The GCSI is a distributed hub-and-spoke infrastructure which allows the programmer to develop Communicator-compliant servers in C, C++, Java, Python, or Allegro Common Lisp. This system is based on message passing rather than CORBA- or RPC-style APIs. The hub in this infrastructure supports routing of messages consisting of key-value pairs, but also supports logging and rule-based scripting. Such an infrastructure has the following desirable properties:
• The scripting capabilities of the hub allow the programmer to weave together servers which may not otherwise have been intended to work together, by rerouting messages and their responses and transforming their keys.
• The scripting capabilities of the hub allow the programmer to insert simple tools and filters to convert data among formats.
• The scripting capabilities of the hub make it easy to modify the message flow of control in real time.
• The scripting capabilities of the hub and the simplicity of message passing make it simple to build up systems bit by bit.
• The standard infrastructure allows the Communicator program to develop platform- and programming-language-independent service standards for recognition, synthesis, and other better-understood resources.
• The standard infrastructure allows members of the Communicator program to contribute generally useful tools to other program participants.
This demonstration will illustrate a number of these properties.
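As a rough illustration of the hub-and-spoke, message-passing design sketched above, the toy code below routes key-value messages through a central hub according to simple scripted rules. It is not the Galaxy Communicator API; the class, server names and rule format are invented solely to make the idea concrete.

```python
# Toy illustration of hub-and-spoke routing of key-value messages.
# NOT the Galaxy Communicator API; everything here is hypothetical.

class Hub:
    def __init__(self):
        self.servers = {}   # server name -> handler function
        self.rules = []     # (condition, destination) pairs, i.e. a tiny "hub script"

    def register(self, name, handler):
        self.servers[name] = handler

    def add_rule(self, condition, destination):
        self.rules.append((condition, destination))

    def send(self, message):
        # Route the message to the first server whose rule condition matches.
        for condition, destination in self.rules:
            if condition(message):
                return self.servers[destination](message)

hub = Hub()
hub.register("recognizer", lambda msg: {"type": "text", "text": "i'd like to fly to tacoma"})
hub.register("synthesizer", lambda msg: {"type": "audio", "audio": b"..."})

# Rule-based "script": audio frames go to recognition, text goes to synthesis.
hub.add_rule(lambda msg: msg["type"] == "audio", "recognizer")
hub.add_rule(lambda msg: msg["type"] == "text", "synthesizer")

text_msg = hub.send({"type": "audio", "audio": b"..."})  # recognizer output
audio_msg = hub.send(text_msg)                            # synthesizer output
```

Changing the routing rules at run time, as the demonstration does through the graphical display, amounts to editing the list of (condition, destination) pairs while the servers themselves stay untouched.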
DEMO CONFIGURATION AND CONTENT
By way of illustration, this demo will simulate a process of assembling a Communicator-compliant system, while at the same time exemplifying some of the more powerful aspects of the infrastructure. The demonstration has three phases, representing three successively more complex configuration steps. We use a graphical display of the Communicator hub to make it easy to see the behavior of this system.
As you can see in Figure 1, the hub is connected to eight servers, including MITRE's JDAS audio server. We will use the flexibility of the GCSI, and the hub scripting language in particular, to change the path that messages follow among these servers.
Phase 1
In phase 1, we establish audio connectivity. JDAS is MITRE's contribution to the problem of reliable access to audio resources. It is based on JavaSound 1.0 (distributed with JDK 1.3), and supports barge-in. We show the capabilities of JDAS by having the system echo the speaker's input; we also demonstrate the barge-in capabilities of JDAS by showing that the speaker can interrupt the playback with a new utterance/input. The goal in building JDAS is that anyone who has a desktop microphone and the Communicator infrastructure will be able to use this audio server to establish connectivity with any Communicator-compliant recognizer or synthesizer.
Changing the message path
The hub maintains a number of information states. The Communicator hub script which the developer writes can both access and update these information states, and we can invoke "programs" in the Communicator hub script by sending messages to the hub. This demonstration exploits this capability by using messages sent from the graphical display to change the path that messages follow, as illustrated in Figure 2. In phase 1, the hub script routed messages from JDAS back to JDAS (enabled by the message named "Echo"). In the next phase, we will change the path of messages from JDAS and send them to a speech recognizer.
Phase 2
Now that we've established audio connectivity, we can add recognition and synthesis. In this configuration, we will route the output of the preferred recognizer to the preferred synthesizer. When we change the path through the hub script using the graphical display, the preferred servers are highlighted. Figure 3 shows that the initial configuration of phase 2 prefers SUMMIT and Festival.
Figure 3: Initial recognition/synthesis configuration
The SUMMIT recognizer and the Festival synthesizer were not intended to work together; in fact, while there is a good deal of activity in the area of establishing data standards for various aspects of dialogue systems (cf. [3]), there are no programming-language-independent service definitions for speech. The hub scripting capability, however, allows these tools to be incorporated into the same configuration and to interact with each other. The remaining incompatibilities (for instance, the differences in markup between the recognizer output and the input the synthesizer expects) are addressed by the string server, which can intervene between the recognizer and synthesizer. So the GCSI makes it easy both to connect a variety of tools to the hub and make them interoperate, as well as to insert simple filters and processors to facilitate the interoperation.
In addition to being able to send general messages to the hub, the user can use the graphical display to send messages associated with particular servers. So we can change the preferred recognizer or synthesizer (as shown in Figure 4), or change the Festival voice (as shown in Figure 5). All these messages are configurable from the hub script.
Phase 3
Now that we've established connectivity with recognition and synthesis, we can add parsing and generation (or, in this case, input paraphrase). Figure 6 illustrates the final configuration, after changing recognizer and synthesizer preferences. In this phase, the output of the recognizer is routed to the parser, which produces a structure which is then paraphrased and then sent to the synthesizer. So for instance, the user might say "I'd like to fly to Tacoma", and after parsing and paraphrase, the output from the synthesizer might be "A trip to Tacoma".
CONCLUSION
The configuration at the end of phase 3 is obviously not a complete dialogue system; this configuration is missing context management and dialogue control, as well as an application backend, as illustrated by the remaining components in white in Figure 7. However, the purpose of the demonstration is to illustrate the ease of plug-and-play experiments within the GCSI, and the role of these capabilities in assembling and debugging a complex Communicator interface. The GCSI is available under an open source license at http://fofoca.mitre.org/download .
Figure 1: Initial demo configuration
Figure 2: Modifying the hub information state
Figure 4: Preferring
Figure 5: Changing the Festival voice
Figure 6: Adding parsing and paraphrase
Figure 7: A sample full dialogue system configuration

5. ACKNOWLEDGMENTS
This work was funded by the DARPA Communicator program under contract number DAAB07-99-C201. © 2001 The MITRE Corporation. All rights reserved.
[2] S. Seneff, E. Hurley, R. Lau, C. Pao, P. Schmid, and V. Zue. Galaxy-II: A Reference Architecture for Conversational System Development. Proc. ICSLP 98, Sydney, Australia, November 1998.
[3] "'Voice Browser' Activity." http://www.w3.org/Voice.
10,560,403 | A Corpus-Based Tool for Exploring Domain-Specific Collocations in English | Coxhead's (2000)Academic Word List (AWL) has been frequently used in EAP classrooms and re-examined in light of various domain-specific corpora. Although well-received, the AWL has been criticized for ignoring the fact that words tend to show irregular distributions and be used in different ways across disciplines (Hyland and Tse, 2007). One such difference concerns collocations. Academic words (e.g. analyze) often co-occur with different words across domains and contain different meanings. What EAP students need is a "disciplinebased lexical repertoire" (p.235). Inspired by Hyland and Tse, we develop an online corpus-based tool, TechCollo, which is meant for EAP students to explore collocations in one domain or compare collocations across disciplines. It runs on textual data from six specialized corpora and utilizes frequency, traditional mutual information, and normalized MI (Wible et al., 2004) as measures to decide whether co-occurring word pairs constitute collocations. In this article we describe the current released version of TechCollo and how to use it in EAP studies. Additionally, we discuss a pilot study in which we used TechCollo to investigate whether the AWL words take different collocates in different domainspecific corpora. This pilot basically confirmed Hyland and Tse and demonstrates that many AWL words show uneven distributions and collocational differences across domains. | [
9558665,
15273358,
16151922,
8898462,
1487550
] | A Corpus-Based Tool for Exploring Domain-Specific Collocations in English
Ping-Yu Huang
General Education Center
Ming Chi University of Technology
Chien-Ming Chen
Institute of Information Science
Academia Sinica
Nai-Lung Tsao
Graduate Institute of Learning and Instruction
National Central University
David Wible wible@stringnet.org
Graduate Institute of Learning and Instruction
National Central University
A Corpus-Based Tool for Exploring Domain-Specific Collocations in English
Coxhead's (2000) Academic Word List (AWL) has been frequently used in EAP classrooms and re-examined in light of various domain-specific corpora. Although well-received, the AWL has been criticized for ignoring the fact that words tend to show irregular distributions and be used in different ways across disciplines (Hyland and Tse, 2007). One such difference concerns collocations. Academic words (e.g. analyze) often co-occur with different words across domains and contain different meanings. What EAP students need is a "discipline-based lexical repertoire" (p.235). Inspired by Hyland and Tse, we develop an online corpus-based tool, TechCollo, which is meant for EAP students to explore collocations in one domain or compare collocations across disciplines. It runs on textual data from six specialized corpora and utilizes frequency, traditional mutual information (MI), and normalized MI (Wible et al., 2004) as measures to decide whether co-occurring word pairs constitute collocations. In this article we describe the current released version of TechCollo and how to use it in EAP studies. Additionally, we discuss a pilot study in which we used TechCollo to investigate whether the AWL words take different collocates in different domain-specific corpora. This pilot basically confirmed Hyland and Tse and demonstrates that many AWL words show uneven distributions and collocational differences across domains.
Introduction
There has long been a shared belief among English for academic or specific purposes (EAP and ESP) instructors that it is necessary to provide students with a list of academic vocabulary 1 irrespective of their specialized domain(s). There are two main reasons why academic vocabulary receives so much attention in EAP instruction. First, academic vocabulary accounts for a substantial proportion of words in academic texts (Nation, 2001). Sutarsyah et al. (1994), for example, found that academic vocabulary accounted for about 8.4% of the tokens in the Learned and Scientific sections of the Lancaster-Oslo/Bergen (Johansson, 1978) and Wellington corpora (Bauer, 1993). Second, academic words very often are non-salient in written texts and less likely to be emphasized by content teachers in class (Flowerdew, 1993). Consequently, EAP researchers have been convinced that students need a complete list of academic vocabulary, and several lists have thus been compiled. Among the attempts to collect academic lexical items, Coxhead's (2000) Academic Word List (AWL) has been considered the most successful work to date. In the AWL, Coxhead offered 570 word families which were relatively frequent in a 3.5-million-token corpus of academic texts. The corpus was composed of writings from four disciplines: arts, commerce, law, and science. By considering certain selection principles such as frequency and range, Coxhead gathered a group of word families which were specialized in academic discourse and generalized across different fields of specialization. On average, the AWL accounted for 10% of Coxhead's academic corpus and showed distributions of 9.1-12% across the four disciplines. Since its publication, the AWL has been frequently used in EAP classes, covered by numerous teaching materials, and re-examined against various domain-specific corpora (e.g. Vongpumivitch et al., 2009; Ward, 2009). The AWL, as Coxhead (2011) herself claims, indeed exerts much greater effects than the author ever imagined.
1 Academic words are also variously termed sub-technical vocabulary (Yang, 1986), semi-technical vocabulary (Farrell, 1990), or specialized non-technical lexis (Cohen et al., 1979) in the literature. They generally refer to words which are common in academic discourse but not so common in other types of texts.
Although well-received, the AWL is not without criticisms. For instance, Chen and Ge (2007), while confirming the significant proportion of the AWL in medical texts (10.07%), found that only half of the AWL words were frequent in the field of medicine. Hancioğlu et al. (2008) argued that the distinction Coxhead (2000) drew between academic and general service words was questionable; in actuality, there were several general service words contained in the AWL (e.g. drama and injure). Arguably the strongest criticism came from Hyland and Tse (2007), who questioned whether there was a single core academic word list. Hyland and Tse called Coxhead's corpus compilation "opportunistic" (p. 239) and built a new database, better controlled for its selection of texts, to examine Coxhead's findings. Utilizing a more rigorous standard, Hyland and Tse found that only 192 families in the AWL were frequent in their corpus. Furthermore, many of the most frequent AWL families did not show such high-frequency distributions in Hyland and Tse's dataset. In addition to these methodological problems, as Hyland and Tse emphasized, the AWL, like previous academic word lists, ignored the important fact that words tend to behave semantically and phraseologically differently across disciplines. Many academic words, such as analyze, tend to co-occur with different words and contain different meanings across research areas. What EAP learners actually need and have to study, accordingly, should be "a more restricted, discipline-based lexical repertoire" (p. 235).
Inspired by Hyland and Tse's (2007) insights and analyses, we devise and create a learning tool which is able to generate domain-specific lexico-grammatical knowledge for EAP students. The knowledge that we focus on here concerns collocations. Specifically, we develop an online corpus-based tool, TechCollo, which can be used by EAP students to search for and explore frequent word combinations in their specialized area(s). The tool, by processing written texts in several medium-sized domain-specific corpora, enables students to study collocational patterns in their own domain, compare collocations in different disciplines, and check whether certain combinations or word usages are restricted to a specific field. To decide whether a pair of co-occurring words constitutes a candidate collocation, TechCollo uses measures such as frequency, traditional mutual information (MI) (Church and Hanks, 1990), and normalized MI (Wible et al., 2004). We will discuss these measures in more detail in Section 3. This paper is structured as follows. In Section 2 we briefly discuss some related work. Section 3 describes the online learning tool and the corpora from which TechCollo extracts collocations. In Section 4, we present results of a pilot study to exemplify how to exploit TechCollo to discover differences in collocations across two domains. Finally, we propose our future plans for improving TechCollo in Section 5.
Related Work
In electronic lexicography and automatic term recognition (ATR), a number of studies have investigated how to retrieve multiword terminology from texts (e.g. Collier et al., 2002; Rindflesch et al., 1999). Basically, those studies identified candidate patterns of words (e.g. noun-noun or adjective-noun combinations) from texts and used various frequency-based or association-based measures to determine the termhood of those candidates. Other ATR studies took more sophisticated approaches. Wermter and Hahn (2005), for example, distinguished domain-specific from non-domain-specific multiword terms on the basis of paradigmatic modifiability degrees. The assumption behind this approach is that the component words of a multiword term have stronger association strength, so that any component of it is less likely to be substituted by other words. However, although the identification of multiword terms has been an active field of research, few studies have explored ways of making the terminology accessible to EAP students. To our knowledge, Barrière's (2009) TerminoWeb has been the only work addressing this issue in the literature. Below we describe Barrière's platform.
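Before turning to Barrière's platform, a minimal sketch of the paradigmatic-modifiability idea mentioned above may help: for a two-word candidate term, count how many distinct words can fill each slot while the other word is held fixed; genuinely terminological pairs admit few substitutes. The counting below is a simplification of Wermter and Hahn's published measure, and the toy bigram counts are invented.

```python
# Toy corpus of adjacent word pairs (bigrams) with frequencies.
bigram_freq = {
    ("hidden", "markov"): 40, ("hidden", "agenda"): 2,
    ("first", "markov"): 1, ("observable", "markov"): 1,
}

def slot_fillers(bigrams, term):
    """For each slot of a two-word term, collect the distinct words that
    appear in that slot while the other word is held fixed."""
    w1, w2 = term
    left = {a for (a, b) in bigrams if b == w2}   # substitutes for slot 1
    right = {b for (a, b) in bigrams if a == w1}  # substitutes for slot 2
    return left, right

left, right = slot_fillers(bigram_freq, ("hidden", "markov"))
# Few distinct fillers per slot -> the pair behaves like a fixed term.
print(len(left), len(right))
```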
TerminoWeb, as its name suggests, was created with the aim of helping learners in different professional areas explore and learn domain-specific knowledge from the Web. To get access to the knowledge, a user had to follow several steps. The starting point was to upload a technical paper to the platform. This paper was used as a source text in which the user selected unknown terms, and TerminoWeb also automatically identified certain terms. Then, a set of queries was performed on the Web to collect texts relevant to the source text (i.e. belonging to the same domain) or including the same user-selected and computer-identified terms. The collected texts then formed a large domain-specific corpus. Within this corpus, the user could perform concordance searches to understand the usage of an unknown term in larger contexts. The user could also make collocation searches for the term. The calculation of collocations performed by Barrière (2009) was based on Smadja's (1993) algorithm, which, as Smadja claimed, reached a precision rate of 80% for extracting collocations.
Unlike the technical corpora compiled via TerminoWeb, which draw texts from the whole Web and are likely to include lots of messy data, the corpora underlying TechCollo are basically composed of texts edited in advance, which are assumed to be cleaner and more reliable. TechCollo, furthermore, offers an interface which allows users to compare collocations in two different specialized domains, or in a specialized and a general-purpose corpus. These convenient search functions will more effectively enable EAP learners to discover and explore specialized collocational knowledge online.
TechCollo: A Corpus-Based Domain-Specific Collocation Learning Tool
TechCollo, which stands for technical collocations, is an online tool with which EAP students can explore specialized collocations. To illustrate the functions of TechCollo, we respectively describe: (1) the compilation of ESP corpora underlying it, (2) the determination of a word pair as a candidate for a true collocation, and (3) the interface designed for EAP students.
Corpora
Currently, TechCollo extracts collocations from six domain-specific corpora. All of the six databases are medium-sized, containing 1.8-5.5 million running tokens. Among them, three were composed of texts coming from the largest online encyclopedia, Wikipedia. Specifically, the Wikipedia texts that we processed were provided by the Wacky team of linguists and information technology specialists (Baroni et al., 2009), 2 who compiled large Wikipedia corpora for various European languages such as English, Italian, and French. Based on an English corpus created by the Wacky team, we established corpora for three domains: medicine, engineering, and law, which were named Medical Wiki, Engineering Wiki, and Legal Wiki Corpora, respectively. The other three ESP textual archives contained writings from high-quality academic journals. That is, for the same medical, engineering, and legal domains, we consulted sixty academic journals and respectively downloaded 280, 408, and 106 articles from those journals online. We utilized the tools offered by Stanford CoreNLP (Klein and Manning, 2003) to POS-tag and parse the three academic corpora. The three corpora then were termed: Medical Academic, Engineering Academic, and Legal Academic Corpora.
In addition to the domain-specific corpora, TechCollo also provides collocation searches in two general-purpose corpora: Wikipedia and British National Corpus (2001). We offer collocation exploration for the two corpora for users to compare and identify collocations in subject areas and general use.
Collocation Extraction
In computational linguistics, various measures have been utilized in order to automatically extract collocations from texts. Those measures can be roughly divided into three categories (Wermter and Hahn, 2004): (1) frequency-based measures, (2) information-theoretical measures (e.g. mutual information), and (3) statistical
measures (e.g. t test and log-likelihood test). To evaluate whether a measure is effective or to compare the effectiveness of several measures, one often needs to collect a set of true collocations and non-collocations and examine how a measure ranks those word combinations (see, for example, Pecina, 2008). An important lesson learned from the examinations of those measures is that there is no single measure which is perfect in all situations. To identify target collocations, one is suggested to exploit several association measures with a correct understanding of their notions and behaviors.
TechCollo employs three main measures to decide whether a two-word combination constitutes a candidate collocation in a five-word window in our textual databases: frequency, traditional mutual information (tradMI) (Church and Hanks, 1990), and normalized MI (normMI, Wible et al., 2004). A learner using TechCollo can set or change the values of these measures to show candidate collocations in the six technical corpora (a detailed description of the user interface for TechCollo is given in section 3.3). First, the measure of frequency refers to the raw co-occurrence count of a word pair. However, to filter out the pairs which are extremely frequent as a result of one or both of their component words but are not true collocations, 3 TechCollo offers the common association measure tradMI, which is formulated as follows:

tradMI(x, y) = log_2 [ P(x, y) / ( P(x) * P(y) ) ]
This information-theoretical measure works by comparing the joint probability of two expressions x and y (i.e. the probability of the two expressions appearing together) with the independent probabilities of x and y. In other words, MI expresses to what extent the observed frequency of a combination differs from the expected one. Although tradMI effectively removes word pairs containing high-frequency words, it inevitably suffers from the problem that it also filters out certain pairs which contain high-frequency words but are interesting and actual collocations. In English, for example, word combinations such as take medicine, make (a) decision, and run (a) risk are real collocations which include very frequent component words. To solve this problem with tradMI, Wible et al. introduce the alternative association measure normMI, which attempts to minimize the effects caused by sheer high frequency. To achieve this, Wible et al. normalize the tradMI by dividing the lexeme frequency by its number of senses (based on WordNet). The formula for normMI is shown below, where sn(w) is the number of WordNet senses of w:

normMI(x, y) = log_2 [ P(x, y) / ( (P(x)/sn(x)) * (P(y)/sn(y)) ) ]

Basically, the notion of normMI is based on the one sense per collocation assumption proposed by Yarowsky (1995). A highly frequent word (e.g. take, make, and run) is generally polysemous. However, as Wible et al. indicate, when such a word appears in a collocation, it is very common that only one of its senses is used (e.g. the word run in the collocation run a risk). Wible et al. compare tradMI with normMI using several pairs containing high-frequency words (e.g. make effort and make decision) and find that these combinations are ranked higher among the identified candidate collocations. It is important to note that, although normMI produces higher recall than tradMI, precision does not decrease accordingly. On the TechCollo interface, we provide normMI to enable EAP learners to find and learn word combinations which include highly frequent words but are still true and specialized collocations in their domain(s).
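As a minimal sketch of how the two association scores could be computed from co-occurrence counts (the counts, collected over a five-word window, are assumed to be available; function and variable names are ours, and the numbers in the example are invented):

```python
import math

def trad_mi(pair_count, x_count, y_count, total):
    """Pointwise mutual information from co-occurrence counts
    (window-based counts assumed to be collected elsewhere)."""
    p_xy = pair_count / total
    p_x, p_y = x_count / total, y_count / total
    return math.log2(p_xy / (p_x * p_y))

def norm_mi(pair_count, x_count, y_count, total, senses_x, senses_y):
    """Wible et al.-style normalization: divide each marginal
    probability by the word's number of (WordNet) senses."""
    p_xy = pair_count / total
    p_x = (x_count / total) / senses_x
    p_y = (y_count / total) / senses_y
    return math.log2(p_xy / (p_x * p_y))

# A frequent, polysemous verb: tradMI penalizes it, normMI much less so.
print(trad_mi(pair_count=50, x_count=20000, y_count=800, total=1_000_000))
print(norm_mi(pair_count=50, x_count=20000, y_count=800, total=1_000_000,
              senses_x=40, senses_y=5))
```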
User Interface
The main page of TechCollo is shown in Figure 1. Basically, this online collocation exploration tool allows users to choose from the six mediumsized domain-specific corpora: MWC, EWC, LWC, MAC, EAC, and LAC, and the two largescale general-purpose corpora: BNC and Wikipedia. A user accessing the website can key in a keyword that he/she intends to study and the system will automatically search for words which tend to co-occur with the keyword in the selected databases. The current released version of TechCollo (i.e. TechCollo 1.0) provides searches of verb-noun collocations. The
measures of frequency and tradMI, as specified earlier, can be changed and decided by users so that the system will respond with either a shorter list of word pairs with higher frequency counts and MI or a longer list containing more candidate collocations.
Here we take the noun procedure and its verb collocates in MWC and EWC as examples. We feed this word into the TechCollo system with the frequency and tradMI thresholds set at 1 and 4, respectively. That is, only the verbs which appear together with procedure at least two times and have mutual information larger than 4 will be identified as candidate collocates. The search results are shown in Figure 2. According to the results offered by TechCollo, there are, respectively, 934 and 591 tokens of procedure in Medical Wiki and Engineering Wiki. Furthermore, the two corpora (or the two fields of profession) share several common collocations, including: perform procedure, follow procedure, describe procedure, etc. Taking a closer look at the unshared verb collocates in the two corpora (i.e. only in MWC or EWC), however, we find that procedure tends to co-occur with undergo and die only in MWC. These specialized collocations suggest that procedure is a technical term in medicine which refers to an operation. We expect and encourage EAP students to use TechCollo to explore and further discover such specialized collocations by:
(1) searching collocations in a specific domain, (2) comparing collocations in two domain-specific corpora (e.g. MWC vs. EWC), and (3) comparing collocations in a specialized and a general-purpose corpus (e.g. MWC vs. BNC).
On TechCollo, for the extracted candidate collocations, a user can change their ordering by clicking on the icons frequency or MI (which refers to tradMI). The other measure offered by TechCollo is NMI, which is the normMI described earlier; we provide it on our website in the hope that it allows EAP learners to find certain collocations containing high-frequency component words. To examine the effectiveness of normMI, we test it with certain legal collocations in the LAC, with the results shown in Table 2. In the three cases, specifically, we use the three nouns law, trial, and obligation as keywords to search in the LAC and examine how tradMI and normMI decide the rankings of the three high-frequency verb collocates: break, push, and carry. As Table 2 shows, normMI changes the rankings of these collocations, with the three verbs being ranked in higher positions. The three verbs might not be noticed by learners using tradMI, and normMI successfully raises them into more advantageous positions for learners. A more thorough examination, nevertheless, is required to investigate whether normMI is indeed an effective measure for identifying collocations in domain-specific texts.
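A small sketch of the filtering and ranking behaviour described above, using an invented candidate list (the verbs, counts and scores are illustrative only, not TechCollo output):

```python
# Hypothetical candidate verb collocates for a query noun, with the kinds of
# scores TechCollo exposes; the numbers are invented for illustration only.
candidates = [
    {"verb": "undergo", "freq": 12, "mi": 6.1, "nmi": 7.0},
    {"verb": "perform", "freq": 30, "mi": 4.8, "nmi": 8.2},
    {"verb": "break",   "freq": 9,  "mi": 2.3, "nmi": 7.5},  # frequent, polysemous verb
    {"verb": "have",    "freq": 40, "mi": 0.9, "nmi": 1.2},
]

min_freq, min_mi = 1, 4  # thresholds as in the procedure example above

kept = [c for c in candidates if c["freq"] > min_freq and c["mi"] > min_mi]
by_mi = sorted(candidates, key=lambda c: c["mi"], reverse=True)
by_nmi = sorted(candidates, key=lambda c: c["nmi"], reverse=True)

print([c["verb"] for c in kept])    # pairs passing both thresholds
print([c["verb"] for c in by_mi])   # tradMI ranking
print([c["verb"] for c in by_nmi])  # normMI promotes the "break"-type verb
```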
Comparing Collocational Patterns across Disciplines: A Pilot Study
To specify and illustrate how to use TechCollo in EAP studies, we ran a pilot study in which we examined the verb-noun collocations in two different domains: medicine and engineering. More specifically, we focused on the nouns included in the Sublist 1 of the Academic Word List 4 (Coxhead, 2000) and explored and analyzed their verb collocates in the MWC and EWC. Our purpose, then, was to investigate whether it is true that words tend to show differences in collocations in different professional areas, as Hyland and Tse (2007) point out. First, from the sixty word families contained in the Sublist 1, we identified 109 nouns. Those nouns were fed into TechCollo in order to extract their frequent co-occurring verbs in MWC and EWC. The very first observation that we made in the data generated by TechCollo was that many nouns showed uneven distributions in the two domain-specific corpora. Some examples of those nouns are given in Table 3. These distributional variations suggest that an academic word which is highly frequent and important in one discipline may be less important for students in another domain (e.g. the words contractor, finance, and specification for medical school students). EAP students who are required to study the AWL for their academic studies are very likely to be exposed to more lexical items than they actually need (Hyland and Tse, 2007).
In addition to the comparisons of numbers of occurrence, what interests us more concerns their relations with verbs in medicine and engineering. We present some of the verb-noun collocation data in Table 4. As Table 4 displays, there are several nouns which share verb collocates in the MWC and EWC, including: context, method, and role. In other words, these verb-noun combinations are of equal importance for EAP students, at least for medicine and engineering majors. This table, however, reveals that there are many more so-called generalized academic words which tend to take different collocates and even refer to different meanings across disciplines. The word area, for example, co-occurs with rub and scratch in MWC and not in EWC, and refers to the specialized meaning of a part of the surface of the human body. Several other nouns, such as consistency, formula, function, procedure, and response, also carry such medicine-specific senses as they co-occur with boil, feed, impair, die, and induce, respectively. Another notable cross-disciplinary difference revealed by these collocations is that, while expressing a similar idea, people in medicine and engineering appear to prefer different verbs. Examples of this include: confer/offer benefit, employ/utilize concept, induce/lead creation, approach/deal issue, undergo/undertake research, exhibit/display variation, etc. These field-specific idiomatic and habitual usages do not suggest that they are the only expressions that people in medicine or engineering use. Rather, they provide evidence showing that people in different areas tend to select different word combinations which form "a variety of subject-specific literacies" (Hyland and Tse, 2007: p.247). What EAP students need to study, then, should be these common specialized collocations and usages which make their writings and speech professional in their own domain(s).
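The cross-domain comparison underlying Table 4 can be sketched with simple set operations; the collocate sets below echo the research row of the table and are meant only as an illustration:

```python
# Sketch of the comparison behind Table 4: given the verb collocates
# extracted for one noun in each corpus, split them into shared and
# domain-only sets. The example sets echo the "research" row.
mwc_collocates = {"conduct", "undergo"}      # Medical Wiki
ewc_collocates = {"conduct", "undertake"}    # Engineering Wiki

shared = mwc_collocates & ewc_collocates     # {"conduct"}
mwc_only = mwc_collocates - ewc_collocates   # {"undergo"}
ewc_only = ewc_collocates - mwc_collocates   # {"undertake"}

print(shared, mwc_only, ewc_only)
```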
Conclusion
The pilot study reported in this article basically suggests that academic words, though collected for EAP students irrespective of their subject areas, tend to show different numbers of occurrence and to co-occur with different words in different domains. If students depend on word lists such as the AWL to learn academic words, they are very likely to memorize more lexical items than they actually need for studies in their own domain. Moreover, they will not be familiar with the common collocations that their colleagues frequently use in speech or writing. What the students need, or more specifically, what EAP researchers are encouraged to develop, should be discipline-based vocabulary and collocation lists. Accordingly, we developed the online corpus-based collocation exploration tool, TechCollo, with the aim of providing the specialized lexico-grammatical knowledge that EAP students need to master at college. The tool, with its ability to allow students to learn specialized collocations in a discipline, compare collocations across disciplines, and explore collocations in domain-specific and general-purpose corpora, is of great help for EAP students who need to check word usages as they write technical papers. Furthermore, as we can expect, TechCollo will be very useful for researchers doing interdisciplinary studies who have to check word combinations across disciplines.
We have made several plans for improving TechCollo. First, for pedagogical purposes, we plan to provide discipline-specific word lists on the TechCollo website. Those lists, compiled on the basis of our domain-specific corpora, will be indexed with frequency information for various domains (e.g. in MWC, the academic corpora, or BNC). EAP students can conveniently click on each listed word and study its collocational patterns in different areas. Second, for technical purposes, we will continue to improve our techniques for extracting domain-specific collocations. We plan to use the techniques and methods developed by, for example, Wermter and Hahn (2005) and Pecina (2008) and examine whether the revised techniques increase the precision of collocation extraction. Specifically, we intend to investigate whether taking into account paradigmatic modifiability degrees and combining several association measures outperform the tradMI and normMI measures used by the current version of TechCollo. These new techniques will further be tested on various domain-specific corpora, which may enable us to make some interesting discoveries in terminology extraction.
Figure 1: Main Page of TechCollo
Figure 2: Search Results for procedure
Table 1 shows the corpus sizes of the six technical and two general-purpose corpora behind TechCollo.

Corpus | Token Count
Medical Wiki Corpus (MWC) | 2,812,082
Engineering Wiki Corpus (EWC) | 3,706,525
Legal Wiki Corpus (LWC) | 5,556,661
Medical Academic Corpus (MAC) | 1,821,254
Engineering Academic Corpus (EAC) | 1,989,115
Legal Academic Corpus (LAC) | 2,232,982
Wikipedia | 833,666,975
British National Corpus (BNC) | 94,956,136

Table 1: Sizes for Domain-Specific and General-Purpose Corpora
Collocation | tradMI ranking for the verb | normMI ranking for the verb
break law | 63 | 1
push trial | 14 | 7
carry obligation | 5 | 1

Table 2: Comparison of tradMI and normMI with Legal Collocations
Table 3: Nouns with Irregular Distributions in MWC and EWC
4 As Coxhead (2000) explains, the word families of the AWL are categorized into ten sublists according to their frequency. Each of the sublists contains sixty families, with the last one containing thirty.
Noun | Shared Collocates | Verbs in MWC Only | Verbs in EWC Only
analysis | perform | | conduct
area | | rub, scratch |
assessment | | allow, perform |
benefit | receive | confer | provide, offer
concept | use | employ | utilize
consistency | | boil |
context | depend | |
contract | | | negotiate, cancel
creation | result | induce | lead
environment | create | | build
evidence | show | yield, reinforce | trace
factor | | activate, inhibit |
formula | | feed, determine | derive
function | | affect, impair | replicate
issue | address | approach | deal
majority | make | | constitute
method | devise, employ | |
policy | influence, implement | |
principle | operate | | apply
procedure | | undergo, die |
requirement | meet, fulfill | | satisfy, comply
research | conduct | undergo | undertake
response | trigger, evoke | induce, stimulate |
role | play, fulfill | |
structure | describe | elucidate, depict |
theory | develop, propose | | formulate
variation | show | exhibit | display

Table 4: Verb Collocates in MWC and EWC
The corpus that we downloaded from the Wacky website (http://wacky.sslmit.unibo.it/) was WaCkypedia_EN, which was POS-tagged, lemmatized, and syntactically parsed.
A typical example of a frequent non-collocational pair is the string of the, which appears more than 2.7 million times in the Corpus of Contemporary American English (Davies, 2008).
Acknowledgements
The research reported in this paper was supported in part by a grant from Taiwan's National Science Council, Grant #NSC 100-2511-S-008-005-MY3.
References

Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky Wide Web: A Collection of Very Large Linguistically Processed Web-crawled Corpora. Language Resources and Evaluation 43(3): 209-226.
Caroline Barrière. 2009. Finding Domain Specific Collocations and Concordances on the Web. Proceedings of the Workshop on Natural Language Processing Methods and Corpora in Translation, Lexicography, and Language Learning.
Laurie Bauer. 1993. Manual of Information to Accompany the Wellington Corpus of Written New Zealand English. Victoria University of Wellington.
British National Corpus, Version 2 (BNC World). 2001. Distributed by Oxford University Computing Services on behalf of the BNC Consortium. URL: http://www.natcorp.ox.ac.uk/
Qi Chen and Guang-Chun Ge. 2007. A Corpus-Based Lexical Study on Frequency and Distribution of Coxhead's AWL Word Families in Medical Research Articles (RAs). English for Specific Purposes 26(4): 502-514.
Kenneth Ward Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information, and Lexicography. Computational Linguistics 16(1).
Andrew Cohen, Hilary Glasman, Phyllis R. Rosenbaum-Cohen, Jonathan Ferrara, and Jonathan Fine. 1979. Reading English for Specialized Purposes: Discourse Analysis and the Use of Student Informants. TESOL Quarterly 34: 551-564.
Nigel Collier, Chikashi Nobata, and Junichi Tsujii. 2002. Automatic Acquisition and Classification of Terminology Using a Tagged Corpus in the Molecular Biology Domain. Terminology 7(2): 239-257.
Averil Coxhead. 2000. A New Academic Word List. TESOL Quarterly 34(2): 213-238.
Mark Davies. 2008. The Corpus of Contemporary American English (COCA): 400+ Million Words, 1990-present. http://www.americancorpus.org
Paul Farrell. 1990. Vocabulary in ESP: A Lexical Analysis of the English of Electronics and a Study of Semi-Technical Vocabulary. CLCS Occasional Paper No. 25.
John Flowerdew. 1993. Concordancing as a Tool in Course Design. System 21(2): 231-244.
Nilgün Hancioğlu, Steven Neufeld, and John Eldridge. 2008. Through the Looking Glass and into the Land of Lexico-Grammar. English for Specific Purposes 27(4): 459-479.
Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Volume 1.
Ken Hyland and Polly Tse. 2007. Is there an "academic vocabulary"? TESOL Quarterly 41(2): 235-253.
I. S. P. Nation. 2001. Learning Vocabulary in Another Language. Cambridge University Press, Cambridge, UK.
Stig Johansson. 1978. Manual of Information to Accompany the Lancaster-Oslo/Bergen Corpus of British English, for Use with Digital Computers. University of Oslo, Oslo, Norway.
Pavel Pecina. 2008. A Machine Learning Approach to Multiword Expression Extraction. Proceedings of the LREC MWE 2008 Workshop.
Thomas C. Rindflesch, Lawrence Hunter, and Alan R. Aronson. 1999. Mining Molecular Binding Terminology from Biomedical Text. Proceedings of the AMIA Symposium. American Medical Informatics Association.
Frank Smadja. 1993. Retrieving Collocations from Text: Xtract. Computational Linguistics 19(1): 143-177.
Cucu Sutarsyah, Paul Nation, and Graeme Kennedy. 1994. How Useful Is EAP Vocabulary for ESP? A Corpus Based Case Study. RELC Journal 25(2): 34-50.
Viphavee Vongpumivitch, Ju-yu Huang, and Yu-Chia Chang. 2009. Frequency Analysis of the Words in the Academic Word List (AWL) and Non-AWL Content Words in Applied Linguistics Research Papers. English for Specific Purposes 28(1): 33-41.
Jeremy Ward. 2009. A Basic Engineering English Word List for Less Proficient Foundation Engineering Undergraduates. English for Specific Purposes 28(3): 170-182.
Joachim Wermter and Udo Hahn. 2004. Collocation Extraction Based on Modifiability Statistics. Proceedings of the 20th International Conference on Computational Linguistics. Association for Computational Linguistics.
Joachim Wermter and Udo Hahn. 2005. Paradigmatic Modifiability Statistics for the Extraction of Complex Multi-word Terms. Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
David Wible, Chin-Hwa Kuo, and Nai-Lung Tsao. 2004. Improving the Extraction of Collocations with High Frequency Words. Proceedings of the International Conference on LREC.
Huizhong Yang. 1986. A New Technique for Identifying Scientific/Technical Terms and Describing Science Texts. Literary and Linguistic Computing 1: 93-103.
David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. |
5,220,140 | Syntactic Patterns versus Word Alignment: Extracting Opinion Targets from Online Reviews | Mining opinion targets is a fundamental and important task for opinion mining from online reviews. To this end, there are usually two kinds of methods: syntax based and alignment based methods. Syntax based methods usually exploited syntactic patterns to extract opinion targets, which were however prone to suffer from parsing errors when dealing with online informal texts. In contrast, alignment based methods used word alignment model to fulfill this task, which could avoid parsing errors without using parsing. However, there is no research focusing on which kind of method is more better when given a certain amount of reviews. To fill this gap, this paper empirically studies how the performance of these two kinds of methods vary when changing the size, domain and language of the corpus. We further combine syntactic patterns with alignment model by using a partially supervised framework and investigate whether this combination is useful or not. In our experiments, we verify that our combination is effective on the corpus with small and medium size. | [
10109731,
631855,
1578481,
16031155,
13573624,
9331037,
988830
] | Syntactic Patterns versus Word Alignment: Extracting Opinion Targets from Online Reviews
Association for Computational Linguistics. Copyright Association for Computational Linguistics. August 4-9, 2013.
Kang Liu kliu@nlpr.ia.ac.cn
Institute of Automation
National Laboratory of Pattern Recognition
Chinese Academy of Sciences
Liheng Xu lhxu@nlpr.ia.ac.cn
Institute of Automation
National Laboratory of Pattern Recognition
Chinese Academy of Sciences
Jun Zhao jzhao@nlpr.ia.ac.cn
Institute of Automation
National Laboratory of Pattern Recognition
Chinese Academy of Sciences
Syntactic Patterns versus Word Alignment: Extracting Opinion Targets from Online Reviews
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics
The 51st Annual Meeting of the Association for Computational Linguistics, Sofia, Bulgaria, August 4-9, 2013. Association for Computational Linguistics.
Mining opinion targets is a fundamental and important task for opinion mining from online reviews. To this end, there are usually two kinds of methods: syntax-based and alignment-based methods. Syntax-based methods usually exploit syntactic patterns to extract opinion targets, but they are prone to suffer from parsing errors when dealing with informal online texts. In contrast, alignment-based methods use a word alignment model to fulfill this task, which avoids parsing errors because no parsing is required. However, there is no research focusing on which kind of method is better when given a certain amount of reviews. To fill this gap, this paper empirically studies how the performance of these two kinds of methods varies when changing the size, domain and language of the corpus. We further combine syntactic patterns with the alignment model by using a partially supervised framework and investigate whether this combination is useful or not. In our experiments, we verify that the combination is effective on corpora of small and medium size.
Introduction
With the rapid development of Web 2.0, huge amounts of user reviews are springing up on the Web. Mining opinions from these reviews becomes more and more urgent, since customers expect to obtain fine-grained information about products and manufacturers need immediate feedback from customers. In opinion mining, extracting opinion targets is a basic subtask. It aims to extract a list of the objects on which users express opinions and can provide prior information about targets for opinion mining, so this task has attracted much attention. To extract opinion targets, previous approaches usually relied on opinion words, which are the words used to express opinions (Hu and Liu, 2004a; Popescu and Etzioni, 2005; Liu et al., 2005; Wang and Wang, 2008; Qiu et al., 2011; Liu et al., 2012). Intuitively, opinion words often appear around and modify opinion targets, so there are opinion relations and associations between them. If we know some words to be opinion words, the words that those opinion words modify have a high probability of being opinion targets.
Therefore, identifying the aforementioned opinion relations between words is important for extracting opinion targets from reviews. To fulfill this aim, previous methods exploited word co-occurrence information to indicate such relations (Hu and Liu, 2004a; Hu and Liu, 2004b). Obviously, these methods cannot obtain precise extractions because of the diverse expressions used by reviewers, such as long-span modification relations between words. To handle this problem, several methods exploited syntactic information, where heuristic patterns based on syntactic parsing were designed (Popescu and Etzioni, 2005; Qiu et al., 2009; Qiu et al., 2011). However, sentences in online reviews usually have informal writing styles, including grammar mistakes, typos, and improper punctuation, which make parsing prone to errors. As a result, syntax-based methods, which depend heavily on parsing performance, suffer from parsing errors. To improve the extraction performance, we can only employ a few carefully designed high-precision patterns, but this strategy is likely to miss many opinion targets and yields lower recall as the corpus grows. To resolve these problems, Liu et al. (2012) formulated identifying opinion relations between words as a monolingual alignment process: a word can find its corresponding modifiers by using a word alignment model (WAM). Without syntactic parsing, the noise from parsing errors is effectively avoided. Nevertheless, the alignment model is a statistical model that needs sufficient data to estimate its parameters. When the data is insufficient, it suffers from data sparseness, and its performance may decline.
Thus, from the above analysis, we can observe that the size of the corpus has an impact on both kinds of methods, which raises some important questions: how can we choose between syntax-based methods and alignment-based methods for opinion target extraction when given a certain amount of reviews? And which kind of method obtains better extraction performance as the size of the dataset varies? Although Liu et al. (2012) proved the effectiveness of WAM, they mainly performed experiments on a dataset of medium size. We are still curious whether the same conclusion holds when the dataset is larger or smaller. To the best of our knowledge, these problems have not been studied before. Moreover, opinions may be expressed in different ways as the domain and language of the corpus vary. When the domain or language of the corpus is changed, what conclusions can we obtain? To answer these questions, in this paper we adopt a unified framework to extract opinion targets from reviews, in whose key component we vary the method between syntactic patterns and the alignment model. We then run the whole framework on corpora of different sizes (from 500 to 1,000,000 sentences), domains (three domains) and languages (Chinese and English) to empirically assess the performance variations and discuss which method is more effective.
Furthermore, this paper naturally addresses another question: is it useful for opinion target extraction to combine syntactic patterns and the word alignment model into a unified model? To this end, we employ a partially supervised alignment model (PSWAM), as in (Gao et al., 2010; Liu et al., 2013). Based on carefully designed high-precision syntactic patterns, we can obtain some precise modification relations between words in sentences, which provide a portion of the links of the full alignments. These partial alignment links are then regarded as constraints for a standard unsupervised word alignment model, and each target candidate finds its modifier under this partial supervision. In this way, errors generated by the standard unsupervised WAM can be corrected. For example, in Figure 1, "kindly" and "courteous" are incorrectly regarded as modifiers of "foods" if the WAM is performed in a fully unsupervised way. However, using some high-precision syntactic patterns, we can assert that "courteous" should be aligned to "services" and "delicious" should be aligned to "foods". Through this combination under partial supervision, "kindly" and "courteous" are correctly linked to "services". Thus, it is reasonable to expect better performance than traditional methods. As mentioned in (Liu et al., 2013), using PSWAM not only inherits the advantage of WAM, namely effectively avoiding noise from syntactic parsing errors when dealing with informal texts, but can also improve the mining performance through partial supervision. However, is this kind of combination always useful for opinion target extraction? To address this question, we also compare the PSWAM-based method with the aforementioned methods on the same corpora with different sizes, languages and domains. The experimental results show that the combination using PSWAM is effective on datasets of small and medium size.
Related Work
Opinion target extraction is not a new task in opinion mining. There is much work focusing on this task, such as (Hu and Liu, 2004b; Ding et al., 2008; Li et al., 2010; Popescu and Etzioni, 2005). Overall, previous studies can be divided into two main categories: supervised and unsupervised methods.
In supervised approaches, the opinion target extraction task is usually regarded as a sequence labeling problem (Jin and Huang, 2009; Li et al., 2010; Ma and Wan, 2010). The goal is not only to extract a lexicon or list of opinion targets, but also to find each opinion target mention in reviews. Thus, contextual words are usually selected as features to indicate opinion targets in sentences, and classical sequence labeling models such as CRFs (Li et al., 2010) and HMMs (Jin and Huang, 2009) are used to train the extractor. Jin et al. (2009) proposed a lexicalized HMM model to perform opinion mining. Both Li et al. (2010) and Ma et al. (2010) used CRF models to extract opinion targets from reviews. Specifically, Li et al. proposed a Skip-Tree CRF model for opinion target extraction, which exploited three structures: a linear-chain structure, a syntactic structure, and a conjunction structure. However, the main limitation of these supervised methods is the need for labeled training data. If the labeled training data is insufficient, the trained model will have unsatisfactory extraction performance. Labeling sufficient training data is time-consuming and labor-intensive, and data for different domains need to be labeled independently, which is impractical.
Thus, much research has focused on unsupervised methods, which mainly aim to extract a list of opinion targets from reviews. Similar to ours, most approaches regard opinion words as indicators of opinion targets. Hu and Liu (2004a) regarded the adjective nearest to a noun/noun phrase as its modifier and exploited an association rule mining algorithm to mine the associations between them; frequent explicit product features were then extracted in a bootstrapping process by further considering each item's frequency in the dataset. However, using only the nearest-neighbor rule to mine the modifier of each candidate cannot yield precise results. Thus, Popescu and Etzioni (2005) used syntactic information to extract opinion targets, designing syntactic patterns to capture the modification relations between words; their experimental results showed better performance than (Hu and Liu, 2004a). Moreover, Qiu et al. (2011) proposed a Double Propagation method to expand sentiment words and opinion targets iteratively, also exploiting syntactic relations between words. Specifically, Qiu et al. (2011) designed syntactic patterns not only for capturing modification relations, but also for capturing relations among opinion targets and among opinion words. However, the main limitation of Qiu's method is that patterns based on the dependency parse tree may miss many targets on large corpora. Therefore, later work extended Qiu's method: besides the patterns used by Qiu, it adopted additional specially designed patterns to increase recall, and it used the HITS algorithm (Kleinberg, 1999) to compute opinion target confidences to improve precision. Liu et al. (2012) formulated identifying opinion relations between words as an alignment process. They used a completely unsupervised WAM to capture opinion relations in sentences, and opinion targets were then extracted in a standard random walk framework that considered two factors: opinion relevance and target importance. Their experimental results showed that WAM was more effective than traditional syntax-based methods for this task. Liu et al. (2013) extended this method and, similar to our approach, also used a partially supervised alignment model to extract opinion targets from reviews. We notice that these two methods ((Liu et al., 2012) and (Liu et al., 2013)) only performed experiments on corpora of medium size. Although both proved that the WAM model is better than methods based on syntactic patterns, they did not discuss how performance varies for corpora of different sizes, especially when the size of the corpus is less than 1,000 or more than 10,000. Based on their conclusions, we still do not know which kind of method should be selected for opinion target extraction when given a certain amount of reviews.
Opinion Target Extraction Methodology
To extract opinion targets from reviews, we adopt the framework proposed by (Liu et al., 2012), which is a graph-based extraction framework and has two main components as follows.
1) The first component captures opinion relations in sentences and estimates associations between opinion target candidates and potential opinion words. In this paper, we assume opinion targets to be nouns or noun phrases and opinion words to be adjectives or verbs, as is commonly done in (Hu and Liu, 2004a; Qiu et al., 2011; Wang and Wang, 2008; Liu et al., 2012). A potential opinion relation consists of an opinion target candidate and its corresponding modifier word.
2) The second component is to estimate the confidence of each candidate. The candidates with higher confidence scores than a threshold will be extracted as opinion targets. In this procedure, we formulate the associations between opinion target candidates and potential opinion words in a bipartite graph. A random walk based algorithm is employed on this graph to estimate the confidence of each target candidate.
In this paper, we fix the method in the second component and vary the algorithm in the first component. In the first component, we respectively use syntactic patterns and the unsupervised word alignment model (WAM) to capture opinion relations. In addition, we employ a partially supervised word alignment model (PSWAM) to incorporate syntactic information into WAM. In the experiments, we run the whole framework on different corpora to discuss which method is more effective. In the following subsections, we present these components in detail.
The First Component: Capturing Opinion Relations and Estimating Associations between Words
Syntactic Patterns
To capture opinion relations in sentences using syntactic patterns, we employ the manually designed syntactic patterns proposed by (Qiu et al., 2011). As in Qiu's work, only syntactic patterns based on direct dependencies are employed, to guarantee extraction quality. A direct dependency has two types. The first type indicates that one word depends on the other word without any additional words on their dependency path. The second type denotes that two words both depend directly on a third word. Specifically, we employ Minipar (http://webdocs.cs.ualberta.ca/lindek/minipar.htm) to parse sentences. To make the syntactic patterns more precise, we only use a few of the dependency relation labels output by Minipar, such as mod, pnmod, subj and desc. For clarity, we give some syntactic pattern examples in Table 1. In these patterns, OC is a potential opinion word, which is an adjective or a verb, and TC is an opinion target candidate, which is a noun or noun phrase. The item on the arrow denotes the dependency relation type, and the item in parentheses denotes the part-of-speech of the other word.
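Such direct-dependency patterns can be implemented as simple checks over dependency triples. The sketch below is a minimal illustration, assuming a generic (head, relation, dependent) triple format and simplified part-of-speech tag sets; it is not Minipar's actual output format, nor the paper's full pattern set.

```python
# Rough sketch of matching first-type direct-dependency patterns:
# an opinion word (adjective/verb) linked to a target candidate (noun/NP)
# by one of a few direct dependency relations.
ADJ_VERB = {"JJ", "VB"}                      # potential opinion words (OC), assumed tags
NOUN = {"NN", "NP"}                          # opinion target candidates (TC), assumed tags
DIRECT_RELATIONS = {"mod", "pnmod", "subj", "desc"}

def extract_relations(dependencies, pos_tags):
    """Return (target_candidate, opinion_word) pairs linked by a direct dependency."""
    pairs = []
    for head, rel, dep in dependencies:
        if rel not in DIRECT_RELATIONS:
            continue
        if pos_tags[head] in NOUN and pos_tags[dep] in ADJ_VERB:
            pairs.append((head, dep))
        elif pos_tags[head] in ADJ_VERB and pos_tags[dep] in NOUN:
            pairs.append((dep, head))
    return pairs

# Toy parse of "The quality of LCD is good"
deps = [("quality", "mod", "good")]
tags = {"quality": "NN", "good": "JJ"}
print(extract_relations(deps, tags))         # [('quality', 'good')]
```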
Among the examples in Table 1, the first three patterns are based on the first direct dependency type, and the last two patterns are based on the second direct dependency type.

Unsupervised Word Alignment Model

In this subsection, we present our method for capturing opinion relations using the unsupervised word alignment model. Similar to (Liu et al., 2012), every sentence in the reviews is replicated to generate a parallel sentence pair, and the word alignment algorithm is applied in this monolingual scenario to align a noun/noun phrase with its modifiers. We select the IBM-3 model (Brown et al., 1993) as the alignment model. Formally, given a sentence $S = \{w_1, w_2, ..., w_N\}$, we have
$$P_{ibm3}(A|S) \propto \prod_{i=1}^{N} n(\phi_i|w_i) \prod_{j=1}^{N} t(w_j|w_{a_j})\, d(j|a_j, N) \qquad (1)$$
where $t(w_j|w_{a_j})$ models the co-occurrence information of two words in the dataset, $d(j|a_j, N)$ models word position information, describing the probability of a word in position $a_j$ being aligned with a word in position $j$, and $n(\phi_i|w_i)$ describes the ability of a word to modify (or be modified by) several words, where $\phi_i$ denotes the number of words aligned with $w_i$. In our experiments, we set $\phi_i = 2$.
Since we are only interested in capturing opinion relations between words, we only pay attention to alignments between opinion target candidates (nouns/noun phrases) and potential opinion words (adjectives/verbs). If we used the alignment model directly, a noun (noun phrase) might align with other unrelated words, such as prepositions or conjunctions. Thus, we set two constraints on the model: 1) alignment links must be assigned among nouns/noun phrases, adjectives/verbs and null words, where aligning to a null word means that a word has no modifier or modifies nothing; 2) other unrelated words can only align with themselves.
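For illustration, the sketch below scores one candidate alignment under Eq. (1), assuming the fertility table n, the translation table t and the distortion table d have already been estimated and are stored as Python dicts; parameter estimation itself (EM over the replicated sentence pairs) and the null-word handling are not shown.

```python
# Minimal sketch of Eq. (1): unnormalized IBM-3 probability of one alignment.
from math import prod

def ibm3_score(sentence, alignment, n_table, t_table, d_table):
    """sentence:  list of words w_1..w_N
    alignment: dict mapping position j (1-based) to the position a_j it aligns to
    n_table, t_table, d_table: fertility, translation and distortion parameters,
    assumed pre-estimated; missing entries fall back to a tiny smoothing value."""
    N = len(sentence)
    # fertility of position i = number of positions aligned to it
    fertility = {i: sum(1 for j in range(1, N + 1) if alignment[j] == i)
                 for i in range(1, N + 1)}
    score = prod(n_table.get((fertility[i], sentence[i - 1]), 1e-9)
                 for i in range(1, N + 1))
    score *= prod(t_table.get((sentence[j - 1], sentence[alignment[j] - 1]), 1e-9)
                  * d_table.get((j, alignment[j], N), 1e-9)
                  for j in range(1, N + 1))
    return score
```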
Combining Syntax-based Method with Alignment-based Method
In this subsection, we combine syntactic information with the word alignment model. As mentioned in the first section, we adopt a partially supervised alignment model to make this combination. Here, the opinion relations obtained through the high-precision syntactic patterns (Section 3.1.1) are regarded as ground truth, but they only provide a part of the full alignments in sentences; they are treated as constraints for the word alignment model. Given some partial alignment links $\hat{A} = \{(k, a_k) \mid k \in [1, n], a_k \in [1, n]\}$, the optimal word alignment $A^* = \{(i, a_i) \mid i \in [1, n], a_i \in [1, n]\}$ can be obtained as $A^* = \operatorname{argmax}_A P(A|S, \hat{A})$, where $(i, a_i)$ means that a noun (noun phrase) at position $i$ is aligned with its modifier at position $a_i$. Since the labeled data provided by the syntactic patterns is not a full alignment, we adopt an EM-based algorithm, the constrained hill-climbing algorithm (Gao et al., 2010), to estimate the parameters of the model. During training, the constrained hill-climbing algorithm ensures that the final model is marginalized on the partial alignment links. In particular, in the E-step, the method aims to find alignments that are consistent with the alignment links provided by the syntactic patterns, which involves two main steps. 1) Optimize towards the constraints. This step aims to generate an initial alignment for the alignment model (IBM-3 in our method) that is close to the constraints. First, a simpler alignment model (IBM-1, IBM-2, HMM, etc.) is trained. Then, evidence inconsistent with the partial alignment links is removed using the move operator $m_{i,j}$, which sets $a_j = i$, and the swap operator $s_{j_1, j_2}$, which exchanges $a_{j_1}$ and $a_{j_2}$. The alignment is updated iteratively until no additional inconsistent links can be removed.
2) Optimize towards the optimal alignment under the constraints. This step optimizes towards the optimal alignment under the constraints, starting from the aforementioned initial alignment. Gao et al. (2010) set the cost of any invalid move or swap operation in M and S to be negative, where M and S are respectively called the Moving Matrix and the Swapping Matrix and record all possible move and swap costs between two different alignments. In this way, invalid operators are never picked, which guarantees that the final alignment links have a high probability of being consistent with the partial alignment links provided by the high-precision syntactic patterns.
Then, in the M-step, evidence from the neighborhood of the final alignments is collected to estimate the parameters for the next iteration. In this process, statistics that come from inconsistent alignment links are not picked up. Thus, we have

$$P(w_i|w_{a_i}, \hat{A}) = \begin{cases} \lambda, & \text{if } (i, a_i) \text{ is inconsistent with } \hat{A} \\ P(w_i|w_{a_i}) + \lambda, & \text{otherwise} \end{cases} \qquad (2)$$

where $\lambda$ indicates that we place soft constraints on the alignment model. As a result, we expect that some errors generated by the high-precision patterns (Section 3.1.1) may be revised during the alignment process.
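A minimal sketch of this soft constraint is given below: translation probabilities are re-weighted so that candidate links contradicting a pattern-provided partial link contribute almost nothing. The dict-based data structures, the value of λ, and the toy example are assumptions for illustration, not the paper's exact implementation.

```python
LAMBDA = 1e-4  # hypothetical small smoothing value

def constrained_translation_prob(i, a_i, words, t_table, partial_links):
    """P(w_i | w_{a_i}, A_hat) in the spirit of Eq. (2):
    return a tiny value if the candidate link (i, a_i) contradicts a fixed
    partial link for position i, otherwise the smoothed translation probability."""
    fixed = dict(partial_links)              # position -> pattern-asserted alignment
    if i in fixed and fixed[i] != a_i:       # inconsistent with A_hat
        return LAMBDA
    return t_table.get((words[i], words[a_i]), 0.0) + LAMBDA

# Toy example following Figure 1: the pattern asserts "courteous" modifies "services".
words = {1: "foods", 2: "delicious", 3: "services", 4: "courteous"}
t = {("courteous", "foods"): 0.4, ("courteous", "services"): 0.3}
fixed_links = {(4, 3)}                       # "courteous" (4) -> "services" (3)
print(constrained_translation_prob(4, 1, words, t, fixed_links))  # contradicts A_hat -> LAMBDA
print(constrained_translation_prob(4, 3, words, t, fixed_links))  # consistent -> 0.3001
```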
Estimating Associations between Words
After capturing opinion relations in sentences, we obtain a set of word pairs, each of which consists of an opinion target candidate and its corresponding modifier word. The conditional probabilities between a potential opinion target $w_t$ and a potential opinion word $w_o$ can then be estimated by maximum likelihood estimation: $P(w_t|w_o) = \frac{Count(w_t, w_o)}{Count(w_o)}$, where $Count(\cdot)$ denotes an item's frequency in the corpus. In the same way we obtain the conditional probability $P(w_o|w_t)$. Then, similar to (Liu et al., 2012), the association between an opinion target candidate and its modifier is estimated as $Association(w_t, w_o) = (\alpha / P(w_t|w_o) + (1-\alpha) / P(w_o|w_t))^{-1}$, where $\alpha$ is the harmonic factor. We set $\alpha = 0.5$ in our experiments.
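The sketch below shows one way these associations could be computed from the extracted pairs; the data layout (a flat list of (target, opinion word) pairs) is an assumption for illustration.

```python
# Minimal sketch of the association estimation from aligned word pairs.
from collections import Counter

def associations(pairs, alpha=0.5):
    """Harmonic combination of the two conditional probabilities estimated by MLE."""
    pair_counts = Counter(pairs)
    target_counts = Counter(t for t, _ in pairs)
    opinion_counts = Counter(o for _, o in pairs)
    assoc = {}
    for (t, o), c in pair_counts.items():
        p_t_given_o = c / opinion_counts[o]          # P(w_t | w_o)
        p_o_given_t = c / target_counts[t]           # P(w_o | w_t)
        assoc[(t, o)] = 1.0 / (alpha / p_t_given_o + (1 - alpha) / p_o_given_t)
    return assoc

print(associations([("screen", "clear"), ("screen", "clear"), ("screen", "big")]))
```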
The Second Component: Estimating Candidate Confidence
In the second component, we adopt a graph-based algorithm used in (Liu et al., 2012) to compute the confidence of each opinion target candidate, and the candidates with higher confidence than the threshold will be extracted as the opinion targets.
Here, opinion words are regarded as the important indicators. We assume that two target candidates are likely to belong to the similar category, if they are modified by similar opinion words. Thus, we can propagate the opinion target confidences through opinion words.
To model the mined associations between words, a bipartite graph is constructed, defined as a weighted undirected graph $G = (V, E, W)$. It contains two kinds of vertices, opinion target candidates and potential opinion words, denoted as $v_t \in V$ and $v_o \in V$ respectively. As shown in Figure 2, the white vertices represent opinion target candidates and the gray vertices represent potential opinion words. An edge $e_{v_t, v_o} \in E$ between two vertices represents an opinion relation, and the weight $w$ on the edge represents the association between the two words.
Figure 2: Modeling Opinion Relations between Words in a Bipartite Graph
To estimate the confidence of each opinion target candidate, we employ a random walk algorithm on our graph, which iteratively computes the weighted average of opinion target confidences from neighboring vertices. Thus we have
$$C^{i+1} = (1 - \beta) \times M \times M^{T} \times C^{i} + \beta \times I \qquad (3)$$
where $C^{i+1}$ and $C^{i}$ respectively represent the opinion target confidence vectors in the $(i+1)$-th and $i$-th iterations. $M$ is the matrix of word associations, where $M_{i,j}$ denotes the association between opinion target candidate $i$ and potential opinion word $j$. $I$ is the prior confidence of each candidate being an opinion target. Similar to (Liu et al., 2012), we set each item to $I_v = \frac{tf(v)\, idf(v)}{\sum_{v} tf(v)\, idf(v)}$, where $tf(v)$ is the term frequency of $v$ in the corpus and $idf(v)$ is computed using the Google n-gram corpus 2. $\beta \in [0, 1]$ represents the impact of the candidates' prior knowledge on the final estimation results; in our experiments, we set $\beta = 0.4$. The algorithm runs until convergence, which is reached when the confidence of each node ceases to change within a tolerance value.
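A minimal sketch of this confidence propagation is shown below, assuming the association matrix M and the prior vector have already been built (here as NumPy arrays with toy values); it is an illustration of the update in Eq. (3), not the paper's exact implementation.

```python
# Minimal sketch of the candidate-confidence random walk in Eq. (3).
import numpy as np

def estimate_confidence(M, I_prior, beta=0.4, tol=1e-6, max_iter=1000):
    C = I_prior.copy()
    for _ in range(max_iter):
        C_next = (1 - beta) * (M @ M.T @ C) + beta * I_prior
        if np.max(np.abs(C_next - C)) < tol:   # confidence stops changing within tolerance
            return C_next
        C = C_next
    return C

M = np.array([[0.6, 0.1], [0.5, 0.2], [0.0, 0.9]])   # toy target-by-opinion-word associations
I_prior = np.array([0.5, 0.3, 0.2])                  # toy tf-idf-style priors
print(estimate_confidence(M, I_prior))
```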
Experiments
Datasets and Evaluation Metrics
In this section, to answer the questions raised in the first section, we collect a large collection named LARGE, which includes reviews from three different domains and different languages. This collection was also used in (Liu et al., 2012). In the experiments, reviews are first segmented into sentences according to punctuation. Detailed statistics of the collection are shown in Table 2, where Restaurant is crawled from the Chinese Web site www.dianping.com, and Hotel and MP3, used in (Wang et al., 2011), are respectively crawled from www.tripadvisor.com and www.amazon.com. For each dataset, we perform random sampling to generate test sets of different sizes, using sampled subsets with 5×10^2, 10^3, 5×10^3, 10^4, 5×10^4, 10^5 and 10^6 sentences respectively. Each sentence is tokenized and part-of-speech tagged using the Stanford NLP tool 3 and parsed using the Minipar toolkit, and the method of (Zhu et al., 2009) is used to identify noun phrases.
We select precision and recall as the evaluation metrics. To obtain the ground truth, we manually label all opinion targets for each subset. Three annotators are involved in this process. First, every noun/noun phrase and its context in the review sentences are extracted. Two annotators are then asked to judge whether each noun/noun phrase is an opinion target or not; in case of conflict, a third annotator makes the final judgment. The average inter-annotator agreement is 0.74. We also perform a significance test, i.e., a t-test with a default significance level of 0.05.
Compared Methods
We select three methods for comparison as follows.
• Syntax: It uses the syntactic patterns described in Section 3.1.1 in the first component to capture opinion relations in reviews. The associations between words are then estimated, and the graph-based algorithm of the second component (Section 3.3) is applied to extract opinion targets.

• WAM: It is similar to Syntax; the only difference is that WAM uses the unsupervised word alignment model (Section 3.1.2) to capture opinion relations.

• PSWAM: It is similar to Syntax and WAM; the difference is that PSWAM uses the method described in Section 3.1.3 to capture opinion relations, which incorporates syntactic information into the word alignment model through a partially supervised framework.

The experimental results on the different domains are shown in Figures 3, 4 and 5 respectively.
Syntax-based Methods vs. Alignment-based Methods

Comparing Syntax with WAM and PSWAM, we make the following observations. 1) When the size of the corpus is small, Syntax has better precision than the alignment-based methods (WAM and PSWAM). We believe the reason is that the high-precision syntactic patterns employed in Syntax can effectively capture opinion relations in a small amount of text, whereas the methods based on the word alignment model may suffer from data sparseness when estimating their parameters, so their precision is lower.

2) However, when the size of the corpus increases, the precision of Syntax decreases and even becomes worse than that of the alignment-based methods. We believe this is because more noise is introduced by parsing errors as the corpus grows, which has an increasingly negative impact on the extraction results. In contrast, the data becomes more sufficient for estimating the parameters of the alignment-based methods, so their precision is better than that of the syntax-based method.

3) We also observe that the recall of Syntax is worse than that of the other two methods. This is because opinions are expressed in diverse ways, and the manually designed syntactic patterns cannot capture all opinion relations in sentences, so a number of correct opinion targets are missed.

4) Interestingly, the performance gap between these three methods becomes smaller as the size of the corpus increases (beyond 50,000 sentences). We suspect the reason is that when the data is sufficient, we obtain sufficient statistics for each opinion target. In this situation, the graph-based ranking algorithm in the second component tends to be dominated by the frequency information, so the final performance is not sensitive to the performance of opinion relation identification in the first component. Thus, in this situation, we conclude that there is no obvious performance difference between the syntax-based and alignment-based approaches. 5) From the results on datasets with different languages and domains, we obtain similar observations, which indicates that the choice between syntactic patterns and the word alignment model for extracting opinion targets need take little account of the language and domain of the corpus.
Thus, based on the above observations, we can draw the following conclusions: the choice between the two kinds of methods depends only on the size of the corpus. The method based on syntactic patterns is more suitable for small corpora (fewer than 5×10^3 sentences in our experiments), and the word alignment model is more suitable for medium-sized corpora (between 5×10^3 and 5×10^4 sentences). Moreover, when the corpus is large enough, the performance of the two kinds of methods tends to become the same (at least 10^5 sentences in our experiments).
Is It Useful to Combine Syntactic Patterns with the Word Alignment Model?

In this subsection, we examine whether combining syntactic information with the alignment model through PSWAM is effective for opinion target extraction. From the results in Figures 3, 4 and 5, we can see that PSWAM has recall similar to WAM on all datasets and outperforms WAM on precision on all datasets. However, the precision gap between PSWAM and WAM decreases as the size of the corpus increases, and when the size is larger than 5×10^4 sentences the performance of the two methods is almost the same. We suspect this is because more noise from parsing errors is introduced by the syntactic patterns as the corpus grows, which negatively affects the alignment performance. At the same time, as mentioned above, a large number of reviews provides sufficient statistics for estimating the parameters of the alignment model, so the role of the partial supervision from syntactic information is overshadowed by the frequency information used in our graph-based ranking algorithm.
Compared with State-of-the-art Methods. However, this does not mean that the combination is not useful. From the results, we still see that PSWAM outperforms WAM on precision on all datasets when the corpus has fewer than 5×10^4 sentences. To further demonstrate the effectiveness of the combination, we compare PSWAM with several state-of-the-art methods, including Hu (Hu and Liu, 2004a), which extracts frequent opinion target words based on association mining rules, DP (Qiu et al., 2011), which extracts opinion targets through syntactic patterns, and LIU (Liu et al., 2012), which fulfills this task using the unsupervised WAM. The parameter settings of these baselines are the same as in the original papers. Because of space limitations, we only show the results on Restaurant and Hotel, in Figures 6 and 7. From the experimental results, we obtain the following observations. PSWAM outperforms the other methods on most datasets, which indicates that our PSWAM-based method is effective for opinion target extraction. Especially when comparing PSWAM with LIU, both of which are based on the word alignment model, we see that PSWAM identifies opinion relations by performing WAM under partial supervision, which effectively improves precision on small and medium corpora. However, these improvements are limited when the size of the corpus increases, in line with the observations above.
The Impact of Syntactic Information on the Word Alignment Model. Although we have shown the effectiveness of PSWAM on corpora of small and medium size, we are still curious about how the performance varies when different amounts of syntactic information are incorporated into WAM. In this experiment, we rank the syntactic patterns mentioned in Section 3.1.1 according to the number of alignment links they extract. Then, to capture opinion relations, we use the top N syntactic patterns by this ranking to generate partial alignment links for PSWAM (Section 3.1.3), with N ranging from 1 to 7; the larger N is, the more syntactic information is incorporated. Because of space limitations, only the average performance over all datasets is shown in Figure 8. In Figure 8, we observe that the syntactic information mainly affects precision. When the size of the corpus is small, the opinion relations mined by the high-precision syntactic patterns are usually correct, so incorporating more syntactic information improves the precision of the word alignment model more. However, when the size of the corpus increases, incorporating more syntactic information has little impact on precision.
Conclusions and Future Work
This paper discusses how the performance of syntax-based methods and alignment-based methods on the opinion target extraction task varies for datasets of different sizes, languages and domains. The experimental results show that the choice of method is not related to the corpus domain or language, but is strongly associated with the size of the corpus. We conclude that the syntax-based method is likely to be more effective when the corpus is small, and that alignment-based methods are more useful for medium-sized corpora. We further verify that incorporating syntactic information into the word alignment model through PSWAM is effective when dealing with corpora of small or medium size. As the corpus grows larger, the performance gap between the syntax-based method, WAM and PSWAM decreases.
In future work, we will extract opinion targets based on more than opinion relations alone. Other semantic relations, such as topical associations between opinion targets (or between opinion words), should also be employed. We believe that considering multiple semantic associations will help improve performance, so how to model heterogeneous relations in a unified model for opinion target extraction is worth studying.
Figure 1: Mining Opinion Relations between Words using the Partially Supervised Alignment Model
Example: The quality of LCD is good
Figure 3: Experimental results on Restaurant
Figure 4: Experimental results on Hotel
Figure 5: Experimental results on MP3
Figure 6: Compared with the State-of-the-art Methods on Restaurant
Figure 7: Compared with the State-of-the-art Methods on Hotel
Figure 8: The Impacts of Different Syntactic Information on Word Alignment Model
Table 1: Some Examples of Used Syntactic Patterns
Table 2: Experimental Dataset
2 http://books.google.com/ngrams/datasets
3 http://nlp.stanford.edu/software/tagger.shtml
Acknowledgement
This work was supported by the National Natural Science Foundation of China (No. 61070106, No. 61272332 and No. 61202329).
References

Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311.
Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the Conference on Web Search and Web Data Mining (WSDM).
Qin Gao, Nguyen Bach, and Stephan Vogel. 2010. A semi-supervised word alignment algorithm with partial manual alignments. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 1-10, Uppsala, Sweden. Association for Computational Linguistics.
Minqing Hu and Bing Liu. 2004a. Mining opinion features in customer reviews. In Proceedings of the Conference on Artificial Intelligence (AAAI).
Minqing Hu and Bing Liu. 2004b. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '04), pages 168-177, New York, NY, USA. ACM.
Wei Jin and Hay Ho Huang. 2009. A novel lexicalized HMM-based learning framework for web opinion mining. In Proceedings of the International Conference on Machine Learning (ICML).
Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632.
Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Yingju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Chu-Ren Huang and Dan Jurafsky, editors, COLING, pages 653-661. Tsinghua University Press.
Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opinions on the web. In Allan Ellis and Tatsuya Hagino, editors, WWW, pages 342-351. ACM.
Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1346-1356, Jeju Island, Korea. Association for Computational Linguistics.
Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially supervised word alignment model.
Tengfei Ma and Xiaojun Wan. 2010. Opinion target extraction in Chinese news comments. In Chu-Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 782-790. Chinese Information Processing Society of China.
Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT '05), pages 339-346, Stroudsburg, PA, USA. Association for Computational Linguistics.
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation.
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9-27.
Bo Wang and Houfeng Wang. 2008. Bootstrapping both product features and opinion words from Chinese customer reviews with cross-inducing.
Hongning Wang, Yue Lu, and ChengXiang Zhai. 2011. Latent aspect rating analysis without aspect keyword supervision. In Chid Apte, Joydeep Ghosh, and Padhraic Smyth, editors, KDD, pages 618-626. ACM.
Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In EMNLP, pages 1533-1541. ACL.
Qi Zhang, Yuanbin Wu, Tao Li, Mitsunori Ogihara, Joseph Johnson, and Xuanjing Huang. 2009. Mining product reviews based on shallow dependency parsing. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '09), pages 726-727, New York, NY, USA. ACM.
Lei Zhang, Bing Liu, Suk Hwan Lim, and Eamonn O'Brien-Strain. 2010. Extracting and ranking product features in opinion documents. In Chu-Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 1462-1470. Chinese Information Processing Society of China.
Jingbo Zhu, Huizhen Wang, Benjamin K. Tsou, and Muhua Zhu. 2009. Multi-aspect opinion polling from textual reviews. In David Wai-Lok Cheung, Il-Yeol Song, Wesley W. Chu, Xiaohua Hu, and Jimmy J. Lin, editors, CIKM, pages 1799-1802. ACM. |
248,780,043 | Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis | Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models. As GPT-3 appears, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks. However, directly using a fixed predefined template for crossdomain research cannot model different distributions of the [MASK] token in different domains, thus making underuse of the prompt tuning technique. In this paper, we propose a novel Adversarial Soft Prompt Tuning method (AdSPT) to better model cross-domain sentiment analysis. On the one hand, AdSPT adopts separate soft prompts instead of hard templates to learn different vectors for different domains, thus alleviating the domain discrepancy of the [MASK] token in the masked language modeling task. On the other hand, AdSPT uses a novel domain adversarial training strategy to learn domain-invariant representations between each source domain and the target domain. Experiments on a publicly available sentiment analysis dataset show that our model achieves new state-of-the-art results for both single-source domain adaptation and multi-source domain adaptation. | [
220045953,
51879969,
3626819,
21736097,
52967399,
14015791,
7403346,
227231444
] | Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis
Association for Computational Linguistics. Copyright Association for Computational Linguistics. May 22-27, 2022. © 2022.
Hui Wu
Xiaodong Shi
Department of Artificial Intelligence
School of Informatics
Xiamen University
China
National Institute for Data Science in Health and Medicine
Xiamen University
China
Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan
Ministry of Culture and Tourism
Xiamen University
China
Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Volume 1. Association for Computational Linguistics. May 22-27, 2022. © 2022.
Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models. As GPT-3 appears, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks. However, directly using a fixed predefined template for crossdomain research cannot model different distributions of the [MASK] token in different domains, thus making underuse of the prompt tuning technique. In this paper, we propose a novel Adversarial Soft Prompt Tuning method (AdSPT) to better model cross-domain sentiment analysis. On the one hand, AdSPT adopts separate soft prompts instead of hard templates to learn different vectors for different domains, thus alleviating the domain discrepancy of the [MASK] token in the masked language modeling task. On the other hand, AdSPT uses a novel domain adversarial training strategy to learn domain-invariant representations between each source domain and the target domain. Experiments on a publicly available sentiment analysis dataset show that our model achieves new state-of-the-art results for both single-source domain adaptation and multi-source domain adaptation.
Introduction
In recent years, with the emergence of a series of large-scale pre-trained language models (PLMs), such as GPT (Radford et al., 2018, 2019), BERT (Devlin et al., 2019), and RoBERTa (Liu et al., 2019), fine-tuning PLMs has achieved promising results on a wide range of natural language processing (NLP) tasks. However, as PLMs become larger and larger, fine-tuning larger PLMs becomes more challenging in most real-world applications. More recently, Brown et al. (2020) show that designing task descriptions (a.k.a. prompts) can make accurate predictions without updating any of the parameters of GPT-3 (which has 175B parameters). This inspires a new PLM-tuning method named "prompt tuning". Such a prompt tuning method has achieved state-of-the-art results on text classification and natural language inference (Gao et al., 2020), relation classification (Han et al., 2021), and natural language generation (Li and Liang, 2021). It is common to use a predefined template (e.g., "It was [MASK].") in prompt tuning for binary sentiment analysis, and the classification results of positive or negative depend on the probabilities of predefined label words (e.g., "{good, bad}") in the masked language modeling (MLM) task. However, the distributions of MLM prediction results can be different for different domains. An example is shown in Figure 1: the discrepancy between the book-domain review and the video-domain review leads to different possibilities of label words. The high-frequency label word in the book-domain review is "useful", and in the video-domain review it is "real", neither of which is in the predefined "{good, bad}". Therefore, it is unreasonable to predict predefined label words with fixed templates (a.k.a. hard prompts) for different domain datasets.

[Figure 1: two example reviews, "Reading it can learn calculus better." and "Watching it is like being on the scene.", each combined with the template "It was [MASK]." and scored by the MLM head against label words.]
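As a concrete illustration of the template-and-label-words setup described above, the following minimal Python sketch wraps a review with the hard prompt and defines the predefined verbalizer. The helper names are ours and the toy reviews are taken from Figure 1; nothing here is the paper's code.

```python
# Minimal illustration of a hard prompt and a label-word verbalizer.
# Function and variable names are illustrative, not from the paper.

TEMPLATE = "{review} It was [MASK]."                    # fixed hard template
VERBALIZER = {"positive": "good", "negative": "bad"}    # predefined label words

def apply_hard_prompt(review: str) -> str:
    """Wrap a raw review with the hard template so the PLM fills in [MASK]."""
    return TEMPLATE.format(review=review)

book_review = "Reading it can learn calculus better."
video_review = "Watching it is like being on the scene."

print(apply_hard_prompt(book_review))
print(apply_hard_prompt(video_review))
# The MLM then scores VERBALIZER["positive"] vs. VERBALIZER["negative"] at the
# [MASK] position; domain-specific words such as "useful" or "real" may be more
# probable than either predefined label word.
```

The point of the example is only that the same fixed template and label words are applied regardless of the domain of the review, which is exactly the limitation AdSPT targets.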
The intuition is that the feature distributions corresponding to the [MASK] position learned from the hard prompt are distinct among different domains. And the discrepancy among different domains can have serious effects on the cross-domain setting where we train a classifier on source domain data, e.g., the book reviews, and test it on the target domain, e.g., the video reviews. So domain adaptation (Ben-David et al., 2007; Mansour et al., 2009) based on the cluster hypothesis (Zhu and Goldberg, 2009) becomes a key point of the cross-domain research. In order to improve the cross-domain sentiment analysis with the help of PLMs, we propose AdSPT: an Adversarial Soft Prompt Tuning method, which sheds new light on solving the domain adaptation problem. Specifically, we use soft prompts composed of multiple learnable vectors and the [MASK] token instead of hard templates for tuning. For different domains, we use independent soft prompts to represent domain-specific information, thus equipping them with domain-aware knowledge. With different domain soft prompts, the MLM head classifier can mitigate the domain discrepancy of the [MASK] token. To enhance the effectiveness of the target domain, we design a novel adversarial training strategy to learn the domain-invariant knowledge of the [MASK] token, which can be seen as a two-player minimax game between the target domain and each source domain under the multi-source domain adaptation setting. As a result, the collaborative effect of soft prompt tuning and domain adversarial training can more properly predict the feature distribution of the [MASK] token on the ground of domain-specific soft prompts and the domain invariance of the [MASK] token.

In experiments, we evaluate on a publicly available sentiment analysis dataset for both single-source domain adaptation and multi-source domain adaptation. Our results show the effectiveness of collaboratively leveraging domain-specific soft prompt tuning and domain adversarial training. To summarize, the main contributions of this work are as follows:
(1) In prompt tuning, we adopt separate soft prompts to learn embeddings enriched with the domain knowledge, thus alleviating the domain discrepancy of the [MASK] position.
(2) We design a novel adversarial training strategy to learn the domain-invariant representation of the [MASK] position.
(3) Experiments on the Amazon reviews dataset show that our method AdSPT obtains an average accuracy of 93.14% (0.46 absolute improvement) under single-source domain adaptation and an average accuracy of 93.75% (0.81 absolute improvement) under multi-source domain adaptation.
Related Work
Prompt tuning. Fine-tuning PLMs with task-specific heads on downstream tasks has become the main paradigm and yields strong performance on many NLP tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019). But there is a big gap between the fine-tuning objectives of downstream tasks and the pre-training objectives of PLMs, which could limit the exploitation of knowledge in PLMs (Liu et al., 2021b). Subsequently, GPT-3 (Brown et al., 2020) brings a new paradigm, "prompt tuning", for downstream tasks, which leverages natural-language prompts and task demonstrations as context to make downstream tasks similar to language modeling.
Early works explore manually defined templates (a.k.a. hard templates) for text classification and natural language inference (Schick and Schütze, 2020, 2021). However, suitable templates require strong domain knowledge. Therefore, some automatically generated hard templates are explored (Shin et al., 2020; Gao et al., 2020; Ben-David et al., 2021). Since prompt construction is to find a method that allows PLMs to effectively perform downstream tasks, it is not necessary to limit templates to human-interpretable natural language. Some works attempt to perform prompting directly with several learnable vectors, such as soft prompts (Vu et al., 2021), prefix-tuning (Li and Liang, 2021) and P-tuning v2 (Liu et al., 2021a). Moreover, Schick et al. (2020) explore automatically identifying label words. Hu et al. (2021) use an external knowledge base to expand label words. This paper focuses on improving cross-domain sentiment analysis via different soft prompts for different domains.
Domain Adaptation. Research on domain adaptation (DA) uses labeled or unlabeled target data to transfer labeled source information to a specific target domain (Pan and Yang, 2009; Mansour et al., 2009). Popular methods for unsupervised DA optimize domain discrepancy based on adversarial training (Ganin et al., 2016; Zhao et al., 2018; Saito et al., 2018). As for cross-domain sentiment analysis, some early works use pivot-based methods to capture the shared feature representation of different domains (Yu and Jiang, 2016; Ziser and Reichart, 2018; Li et al., 2018; Peng et al., 2018). Some other works adopt different adversarial learning methods to learn the domain-common sentiment knowledge (Li et al., 2017; Qu et al., 2019; Li et al., 2019).
Recently, with the promising performance of PLMs in NLP, many works on cross-domain sentiment analysis focus on how to improve language model pre-training and fine-tuning, e.g., Du et al. (2020) use a target domain MLM task and a domain-distinguish task in pre-training; Zhou et al. (2020) utilize several pre-training tasks based on existing lexicons and annotations. Different from these works, our method is the first to use the combination of soft prompt tuning and adversarial training to solve the DA problem.
Problem Formulation
In this paper, we study cross-domain sentiment analysis in the unsupervised domain adaptation setting, which contains two scenarios: a source domain and a target domain, or multiple source domains and a target domain. Given m (m ≥ 1) source domains, the l-th (l ∈ [1, . . . , m]) source domain contains an annotated dataset S_l = {x_i^s, y_i^s}_{i=1}^{N_l^s}, where x_i^s = [w_1^s, . . . , w_n^s] is an input sentence with n words, y_i^s is the corresponding polarity label, and N_l^s represents the number of examples of the l-th source domain. In the target domain, there is an unannotated dataset T = {x_i^t}_{i=1}^{N_t}, where x_i^t = [w_1^t, . . . , w_n^t] is an unlabeled sentence of the target domain and N_t is the number of unlabeled examples. The goal of cross-domain sentiment analysis is to learn a function F that can both retain in-domain knowledge for different domains and learn the domain invariance between the target domain and each source domain, so as to better predict the polarity of unlabeled sentences from the target domain.
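For readers who prefer code, a tiny, hypothetical container for this data setup (m labeled source domains and one unlabeled target domain) could look as follows; the class and field names are ours, not the paper's.

```python
# Illustrative container for the unsupervised DA setting described above:
# m labeled source domains and one unlabeled target domain.
# Names and toy data are ours, not from the paper.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Example:
    words: List[str]              # x = [w_1, ..., w_n]
    label: Optional[int] = None   # y in {0, 1}; None for target-domain data

source_domains = {
    "books": [Example("this book is great".split(), 1)],
    "dvd":   [Example("boring plot".split(), 0)],
}
target_domain = [Example("the blender broke after a week".split())]  # unlabeled

print(len(source_domains), "source domains,", len(target_domain), "target example(s)")
```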
Method
In this section, we first introduce a soft prompt tuning method for sentiment classification that utilizes soft prompts to capture domain-specific knowledge. Then we present a domain adversarial training method for domain adaptation. Finally, we describe the overall learning procedure.
Soft Prompt Tuning for Sentiment Classification
Prompt tuning is an approach to add extra information for PLMs by reformulating downstream tasks as cloze questions. The primary components include a template and a set of label words, where the template is a background description of the current task and the label words are the high-probability vocabulary predicted by PLMs in the current context. In binary sentiment classification, we denote the input sentence as x = [w_1, . . . , w_n] and the output label as y, where y ∈ Y and the label space Y = {positive, negative}.
Prompt tuning formalizes the classification task into an MLM task. Given a PLM M and its vocabulary V, a prompt consists of a template function T(·) that converts the input sentence x to a prompt input x_prompt = T(x) with the [MASK] token, and a set of label words V* ⊂ V, which are connected with the label space through a mapping function v : Y → V*. As shown in Figure 2, the soft prompted input x_prompt contains the embeddings of the original sentence e(x), k learnable vectors [h_0, . . . , h_{k−1}], the embedding of the [MASK] token e("[MASK]"), and the embeddings of two positional tokens e("[CLS]") and e("[SEP]"). So the actual input of M is represented as:

x_prompt = e("[CLS]"), e(x), h_0, . . . , h_{k−1}, e("[MASK]"), e("[SEP]")    (1)
where e(·) represents the embedding function of M.
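A rough PyTorch sketch of Eq. (1) is given below. It only shows how the k learnable vectors of a domain-specific soft prompt could be concatenated with the word embeddings and the special-token embeddings; the module name, the default k = 3, and the token ids are illustrative assumptions rather than the paper's implementation (which builds on RoBERTa).

```python
import torch
import torch.nn as nn

class SoftPromptBuilder(nn.Module):
    """Builds x_prompt = e([CLS]), e(x), h_0..h_{k-1}, e([MASK]), e([SEP]).
    One prompt table per domain; vocabulary size and token ids are illustrative."""
    def __init__(self, embed: nn.Embedding, num_domains: int, k: int = 3,
                 cls_id: int = 0, mask_id: int = 50264, sep_id: int = 2):
        super().__init__()
        self.embed = embed                        # e(.) of the PLM
        self.k = k
        dim = embed.embedding_dim
        # separate soft prompts for each domain (k learnable vectors each)
        self.prompts = nn.Parameter(torch.randn(num_domains, k, dim) * 0.02)
        self.cls_id, self.mask_id, self.sep_id = cls_id, mask_id, sep_id

    def forward(self, input_ids: torch.Tensor, domain: int) -> torch.Tensor:
        b = input_ids.size(0)
        special = lambda i: self.embed(torch.full((b, 1), i, dtype=torch.long))
        prompt = self.prompts[domain].unsqueeze(0).expand(b, -1, -1)
        return torch.cat([special(self.cls_id), self.embed(input_ids),
                          prompt, special(self.mask_id), special(self.sep_id)], dim=1)

embed = nn.Embedding(50265, 768)                  # stand-in for RoBERTa embeddings
builder = SoftPromptBuilder(embed, num_domains=4, k=3)
x_prompt = builder(torch.randint(0, 50265, (2, 12)), domain=0)
print(x_prompt.shape)                             # (2, 12 + 3 + 3, 768)
```

Only the prompt table depends on the domain index, which is how separate soft prompts can carry domain-specific information while the rest of the input pipeline is shared.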
Here we can denote a PLM M as a function mapping from x prompt to the feature representation and vocabulary distribution of the [MASK] token, represented as:
h_{[MASK]}, s_{[MASK]} = M(x_prompt)    (2)
where h_{[MASK]} ∈ R^h and s_{[MASK]} ∈ R^{|V|} are the hidden representation and vocabulary distribution of the [MASK] token respectively, and s_{[MASK]} = f(h_{[MASK]}) is obtained by the MLM head function f. The probability p(y|x) is formalized according to the distribution of the label word w ∈ V* w.r.t. the [MASK] position. In binary sentiment classification, we set the label words as V* = {good, bad}. So,

p(y|x) = p(V^*_y \leftarrow [MASK] \mid x_{prompt}) = \frac{\exp(s_{[MASK]}(V^*_y))}{\sum_{y' \in Y} \exp(s_{[MASK]}(V^*_{y'}))}    (3)
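Assuming the MLM head has already produced the vocabulary scores s_[MASK], Eq. (3) reduces to a softmax over the two label-word logits. A minimal sketch follows; the function name and toy vocabulary ids are our own.

```python
import torch
import torch.nn.functional as F

def label_word_probs(mask_logits: torch.Tensor, pos_id: int, neg_id: int) -> torch.Tensor:
    """Eq. (3): softmax over the label-word logits s_[MASK](V*_y).
    mask_logits: (batch, |V|) vocabulary scores at the [MASK] position.
    Returns (batch, 2) probabilities in the order [negative, positive]."""
    picked = mask_logits[:, [neg_id, pos_id]]     # keep only "bad" and "good"
    return F.softmax(picked, dim=-1)

# toy example: vocabulary of 10 words, ids 3 = "bad", 7 = "good" (illustrative ids)
s_mask = torch.randn(4, 10)
print(label_word_probs(s_mask, pos_id=7, neg_id=3))
```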
Given an annotated dataset S = {x_i, y_i}_{i=1}^{N}, the training objective for soft prompt tuning is obtained using the binary cross-entropy loss,

\mathcal{L}_{class}(S; \theta_{M,p,f}) = -\sum_{i=1}^{N} \big[ \mathbb{I}\{\hat{y}_i = 1\} \log p(y_i \mid x_i) + \mathbb{I}\{\hat{y}_i = 0\} \log(1 - p(y_i \mid x_i)) \big]    (4)

where \hat{y}_i represents the ground-truth label (1 for the positive label and 0 for the negative label), and θ_{M,p,f} represents the overall trainable parameters of the PLM M, the learnable vectors p and the MLM head function f.
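Continuing the previous sketch, Eq. (4) can be computed with an off-the-shelf binary cross-entropy once p(y = positive | x) is available; note that the default mean reduction here only rescales the sum in Eq. (4). The helper below assumes the (negative, positive) column order of the earlier `label_word_probs` sketch and is our own illustration.

```python
import torch
import torch.nn.functional as F

def classification_loss(probs: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Eq. (4): binary cross-entropy over the label-word distribution.
    probs: (batch, 2) probabilities ordered [negative, positive];
    gold:  (batch,) ground-truth labels with 1 = positive, 0 = negative."""
    p_pos = probs[:, 1]                                   # p(y = positive | x)
    return F.binary_cross_entropy(p_pos, gold.float())

probs = torch.tensor([[0.2, 0.8], [0.6, 0.4]])
gold = torch.tensor([1, 0])
print(classification_loss(probs, gold))                   # mean BCE over the batch
```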
Domain Adversarial Training
For the same task in different domains, domain adversarial training can not only transfer the generic knowledge from source domains to the target domain, but also train more domain-aware classifiers. As shown in Figure 2, domain adversarial training aims to make the feature distributions of the [MASK] position from different domains closer.
More intuitively, it will encourage the MLM head classifier to obtain domain-invariant features across domains.
Based on the hidden representation h_{[MASK]} produced by the PLM, the detailed process of domain adversarial training is as follows: given m (m ≥ 1) source domains, we assume that each source domain S_l (l ∈ [1, . . . , m]) and the target domain T share a domain discriminative function g_l : R^h → D that discriminates between the source domain and the target domain, where the domain label set is represented as D = {0, 1}, 0 is the source domain label, and 1 is the target domain label. To this end, there are m domain discriminators, denoted as g = {g_l}_{l=1}^{m}. Given an input example x from either the l-th (l ∈ [1, . . . , m]) source domain or the target domain, we first obtain the task-specific head representation h_{[MASK]} by M and then model the probability p(d|x) for discriminating the domain label d ∈ D as:

p(d \mid x) = \frac{\exp(g_l^{d}(h_{[MASK]}))}{\sum_{d' \in D} \exp(g_l^{d'}(h_{[MASK]}))}    (5)

Given the m source domain datasets Ŝ = {S_l}_{l=1}^{m} = {{x_i^s}_{i=1}^{N_l^s}}_{l=1}^{m} and a target domain dataset T = {x_i^t}_{i=1}^{N_t}, where N_l^s is the number of samples in the l-th source domain and N_t is the number of samples in the target domain, the domain discriminative objective is to minimize the following cross-entropy loss,

\mathcal{L}_{domain}(\hat{S}, T; \theta_{M,p,g}) = -\sum_{l=1}^{m} \sum_{i=1}^{N_l^s + N_t} \big[ \mathbb{I}\{\hat{d}_i = 1\} \log p(d_i \mid x_i) + \mathbb{I}\{\hat{d}_i = 0\} \log(1 - p(d_i \mid x_i)) \big]    (6)

where \hat{d}_i represents the ground-truth domain label and θ_{M,p,g} represents the overall trainable parameters of the PLM M, the learnable vectors p and the m domain discriminators g.
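The paper does not spell out the architecture of g_l, so the sketch below simply uses a linear head over h_[MASK] as a stand-in; Eq. (5) and Eq. (6) then correspond to a standard cross-entropy over the two domain labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """g_l: R^h -> D = {source, target}; the linear head is our guess, not the paper's."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.proj = nn.Linear(hidden, 2)

    def forward(self, h_mask: torch.Tensor) -> torch.Tensor:
        return self.proj(h_mask)                 # logits over {source = 0, target = 1}

def domain_loss(discriminator: nn.Module, h_mask: torch.Tensor,
                d_gold: torch.Tensor) -> torch.Tensor:
    """Eq. (6) for one source/target pair: cross-entropy of p(d|x) from Eq. (5)."""
    return F.cross_entropy(discriminator(h_mask), d_gold)

g_l = DomainDiscriminator(hidden=768)
h = torch.randn(6, 768)                          # [MASK] representations
d = torch.tensor([0, 0, 0, 1, 1, 1])             # first half source, second half target
print(domain_loss(g_l, h, d))
```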
The domain adversarial training among m source domains and the target domain can be seen as a two-player minimax game where the domain classifiers g = {g l } m l=1 tend to minimize the domain discrimination loss so as to make the domain discriminators strong while the PLM M tends to maximize the domain discrimination loss so as to weaken the domain discrimination.
Formally, the domain adversarial training objective w.r.t. g, p and M is folded into a single overall objective: given the m source domains Ŝ and the target domain T, the sentiment classifier and the domain discriminators are jointly trained to optimize the PLM M, the soft prompt embeddings p, the MLM head function f and the domain discriminators g, and the final training objective is represented as:

\min_{M,p,f} \; \lambda \mathcal{L}_{class}(S; \theta_{M,p,f}) - \min_{g} \mathcal{L}_{domain}(\hat{S}, T; \theta_{M,p,g})    (7)

where λ is a trade-off parameter. The sentiment classification objective L_class and the domain discrimination objective L_domain are defined in Eq. (4) and Eq. (6), respectively.

Training procedure. The iterative training procedure is summarized in Algorithm 1. In each iteration, the input samples of each source domain are first used for training the PLM M, the learnable vectors p and the MLM head function f; the sentiment classification loss is computed in line 5. Then the samples of each source domain and the target domain are mapped to different domain discriminators to train the PLM M, the learnable vectors p and the domain discriminator g_l; the corresponding domain discrimination loss is computed in line 6. The sentiment classification loss is used for updating the parameters of the PLM, the learnable vectors and the MLM head function (lines 7 and 10). The domain discrimination loss is used for updating the parameters of the PLM, the learnable vectors and the domain discriminators. The parameters of the PLM and the learnable vectors are therefore updated jointly by the above two losses.

Algorithm 1: Training Process of AdSPT
Input: training samples of the m source domain datasets Ŝ = {S_l}_{l=1}^{m} and the target domain dataset T = {x_i^t}_{i=1}^{N_t}; the number of training iterations n.
Output: configuration of AdSPT θ_{M,p,f,g}.
Initialize: PLM θ_M; soft prompt embeddings θ_p; MLM head function θ_f; domain discriminators {θ_{g_l}}_{l=1}^{m}; learning rate η; trade-off parameter λ.
1:  while training steps not end do
2:    for d in {Source, Target} do
3:      if d = Source then
4:        for l in {1, . . . , m} do
5:          L_class ← L_class(S_l; θ_{M,p,f})
6:          L_domain ← L_domain(S_l, T; θ_{M,p,g_l})
7:          θ_f ← θ_f − ∇_{θ_f} L_class            # minimizing the MLM head classification loss
8:          θ_{g_l} ← θ_{g_l} − ∇_{θ_{g_l}} L_domain    # minimizing the domain discrimination loss
9:        end for
10:       θ_{M,p} ← θ_{M,p} − ∇_{θ_{M,p}} (λ L_class − L_domain)    # minimizing the sentiment classification loss
11:     end if
12:   end for
13: end while
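To make the update order of Algorithm 1 concrete, here is a self-contained toy sketch of one source-domain iteration. The linear layers stand in for the PLM with soft prompts, the MLM head and the discriminator, and the repeated forward passes are a simplification for clarity; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins so the update order of Algorithm 1 can be run end to end.
# Sizes, modules and data are illustrative, not the paper's actual model.
backbone = nn.Linear(16, 8)          # stands in for the PLM + soft prompts (theta_{M,p})
mlm_head = nn.Linear(8, 2)           # stands in for the MLM head f (theta_f)
g_l = nn.Linear(8, 2)                # domain discriminator for source domain l (theta_{g_l})

opt_f = torch.optim.Adam(mlm_head.parameters(), lr=5e-5)
opt_g = torch.optim.Adam(g_l.parameters(), lr=5e-5)
opt_backbone = torch.optim.Adam(backbone.parameters(), lr=2e-5)
lam = 1.0

x_s, y_s = torch.randn(4, 16), torch.randint(0, 2, (4,))   # labeled source batch
x_t = torch.randn(4, 16)                                    # unlabeled target batch

def class_loss():                    # Eq. (4) on the source batch
    return F.cross_entropy(mlm_head(backbone(x_s)), y_s)

def domain_loss():                   # Eq. (6) on source (d=0) + target (d=1)
    h = backbone(torch.cat([x_s, x_t]))
    d = torch.cat([torch.zeros(4, dtype=torch.long), torch.ones(4, dtype=torch.long)])
    return F.cross_entropy(g_l(h), d)

# lines 5 and 7: minimize the MLM-head classification loss w.r.t. theta_f
opt_f.zero_grad(); class_loss().backward(); opt_f.step()
# lines 6 and 8: minimize the domain discrimination loss w.r.t. theta_{g_l}
opt_g.zero_grad(); domain_loss().backward(); opt_g.step()
# line 10: update theta_{M,p} on lambda * L_class - L_domain (adversarial for the domain part)
opt_backbone.zero_grad()
(lam * class_loss() - domain_loss()).backward()
opt_backbone.step()
print("one source-domain iteration done")
```

The key design choice mirrored here is that θ_f and θ_{g_l} are each updated to minimize their own loss, while θ_{M,p} is updated on λL_class − L_domain, so that the backbone and prompts are pushed toward features the discriminator cannot separate.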
Experiments
In this section, we conduct experiments to evaluate the effectiveness of our methods. Our experiments are carried out on single-source domain adaptation and multi-source domain adaptation settings ( § 5.3). In addition, we also investigate how different components in the model impact the performance of cross-domain sentiment analysis with different settings.
Experimental Setup
Dataset. We evaluate on the Amazon reviews dataset, which has been widely used for cross-domain sentiment classification. This dataset contains reviews of binary categories from four domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K).
Each domain has a total of 2,000 manually labeled reviews. We use different settings for single-source domain adaptation and multi-source domain adaptation. For each domain, there are 2,000 labeled reviews, including 1,000 positive and 1,000 negative, and 4,000 unlabeled reviews. Following previous work (Ruder and Plank, 2017), we randomly select a small part (20%) of examples in each domain as the development set to save the best training model and perform 5-fold cross-validation.
In single-source domain adaptation, we follow previous work (Ziser and Reichart, 2018) to construct 12 cross-domain sentiment analysis tasks (corresponding to 12 ordered domain pairs). In multi-source domain adaptation, we choose three domains as the multiple source domains and the remaining one as the target domain, e.g., "BDE → K". So there are 4 combinations, corresponding to 4 tasks.
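The task construction can be reproduced mechanically; the short sketch below (our own naming) enumerates the 12 ordered single-source pairs and the 4 multi-source combinations.

```python
from itertools import permutations

DOMAINS = ["B", "D", "E", "K"]   # Books, DVDs, Electronics, Kitchen appliances

# 12 single-source tasks: all ordered (source, target) pairs
single_source_tasks = [f"{s} -> {t}" for s, t in permutations(DOMAINS, 2)]

# 4 multi-source tasks: three domains as sources, the remaining one as target
multi_source_tasks = [
    f"{''.join(d for d in DOMAINS if d != target)} -> {target}" for target in DOMAINS
]

print(len(single_source_tasks), single_source_tasks[:3])  # 12 ['B -> D', 'B -> E', 'B -> K']
print(multi_source_tasks)  # ['DEK -> B', 'BEK -> D', 'BDK -> E', 'BDE -> K']
```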
Training details. In the Amazon reviews experiments, we adopt a 12-layer Transformer (Vaswani et al., 2017; Devlin et al., 2019) initialized with RoBERTa_BASE (Liu et al., 2019) as the PLM. During training, we train with a batch size of 2 for 10 epochs. The optimizer is Adam with a learning rate of 2e-5 for the PLM optimization and 5e-5 for optimizing the domain discriminators. All experiments are conducted with an NVIDIA GeForce RTX 2080 Ti.
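As a hedged configuration sketch, the reported hyperparameters could be wired up roughly as follows; the Transformer stand-in and the number of discriminators shown are illustrative (a real run would load the pretrained RoBERTa-base weights and create one discriminator per source domain).

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the text; the module split below is our own
# simplification rather than the authors' code.
BATCH_SIZE, EPOCHS = 2, 10
LR_PLM, LR_DISCRIMINATOR = 2e-5, 5e-5

plm = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=768, nhead=12), num_layers=12)
discriminators = nn.ModuleList(nn.Linear(768, 2) for _ in range(3))  # e.g., 3 source domains

optimizer_plm = torch.optim.Adam(plm.parameters(), lr=LR_PLM)
optimizer_disc = torch.optim.Adam(discriminators.parameters(), lr=LR_DISCRIMINATOR)
print(sum(p.numel() for p in plm.parameters()) // 10**6, "M parameters in the stand-in encoder")
```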
Baselines
We compare our method against 2 state-of-the-art methods, and also design several variants of fine-tuning and prompt tuning as baselines to demonstrate the effectiveness of the adversarial training strategy in soft prompt tuning for DA.
(1) BERT-DAAT (Du et al., 2020): Use BERT post-training for cross-domain sentiment analysis with adversarial training.
(2) SENTIX_Fix (Zhou et al., 2020): Pre-train a sentiment-aware language model by several pre-training tasks.

(3) Fine-tuning: Standard fine-tuning of vanilla PLMs on the source-domain labeled data, which uses the hidden representation of [CLS] for classification.

(4) Fine-tuning + AT: Add the adversarial training operation on top of standard fine-tuning of vanilla PLMs.
(5) Prompt-tuning(Hard): Use a manually defined template "It is [MASK]" for prompt-tuning.
(6) Prompt-tuning(Hard) + AT: Add the adversarial training operation on top of Prompt-tuning(Hard).
Following previous work (Du et al., 2020; Zhou et al., 2020), we adopt accuracy to evaluate the performance.
Main Results
Main results contain results of single-source domain adaptation (Table 1) and multi-source domain adaptation (Table 2).

Results of Single-source Domain Adaptation. Table 1 shows our main experimental results under single-source domain adaptation. We can observe that our method AdSPT outperforms all other methods in most of the single-source domain adaptation tasks. Compared with previous state-of-the-art methods, AdSPT is significantly superior to BERT-DAAT and SENTIX_Fix on average (3.02 absolute improvement and 0.46 absolute improvement, respectively). More specifically, prompt-tuning methods achieve better results than BERT-DAAT on most of the single-source domain adaptation tasks. This indicates that prompt tuning can stimulate pre-encoded knowledge in PLMs to solve the DA problem. But the performance of PT(HARD) and PT(HARD) + AT is lower than that of SENTIX_Fix on average (91.12% vs. 92.68% and 92.06% vs. 92.68%), showing that the feature representation of the [MASK] token in hard prompt tuning learns more domain knowledge of the source domains, which leads to degraded performance on the target domain. Conversely, PT(SOFT) is comparable to SENTIX_Fix on average (92.45% vs. 92.68%) and AdSPT achieves better results than SENTIX_Fix on average (0.46 absolute improvement). It shows that soft prompt tuning not only learns domain-aware continuous vectors, but also weakens the domain discrepancy of the feature distribution of the [MASK] position. In addition, prompt-tuning methods are consistently superior to FT and FT + AT, either using a hard prompt or a soft prompt.
In prompt-tuning, soft prompt tuning methods achieve better performances than the corresponding hard prompt tuning methods (1.33 absolute improvement and 1.08 absolute improvement, respectively). This indicates that these separate soft prompts can flexibly learn in-domain knowledge of different domains, which makes the feature representation of the [MASK] token more suitable for predicting the predefined label words. So a soft prompt is more applicable to the DA problem than a hard prompt. When we add a domain adversarial training operation on soft prompt tuning, AdSPT achieves the new state-of-the-art result on average. It shows that the domain adversarial training strategy can enhance the domain-invariant feature of the [MASK] token among different domain datasets.
Results of Multi-source Domain Adaptation. Table 2 shows our main experimental results under multi-source domain adaptation.
Compared with fine-tuning methods, variants of prompt tuning achieve better performances (at least a 0.55 absolute improvement on average). This is mainly because prompt tuning uses the feature representation of the [MASK] token for classification, rather than the feature representation of the [CLS] token. On the one hand, it is difficult for fine-tuning to train the domain-specific classifier accurately from scratch on the unlabeled dataset. On the other hand, prompt tuning classifies by predicting the feature distribution of the [MASK] token over the set of label words, which can activate some prior knowledge in PLMs.
Compared with hard prompt tuning methods, soft prompt tuning methods achieve significant improvements on average (92.94% vs. 91.39% and 93.75% vs. 92.94%). Constructing a sophisticated hard template not only requires expert knowledge and time; the unified predefined hard template also leads to a domain discrepancy in the feature representation of the [MASK] position that is unsuitable for multi-domain adaptation.
Besides, PT(HARD) + AT achieves a better result than PT(HARD) on average (0.61 absolute improvement), which shows the domain adversarial training can obtain domain-invariant features among different domains by domain discriminators for DA. So when adding the domain adversarial training into soft prompt tuning, AdSPT achieves the best results under the multi-source domain adaptation setting. This shows the effectiveness of the collaboration of soft prompt tuning and the domain adversarial training strategy. In the domain adversarial training, using the feature representation of the [MASK] token to obtain domain invariance is better for predicting the predefined set of label words.
Analysis
Multi-source vs. Single-source. We make more detailed comparisons to explore the effect of the multi-source and single-source domain adaptation settings. Figure 3 illustrates the influence of multi-source and single-source training on the predicted results of the same target domain. When the target domain is "E", "D", or "B", multi-source achieves better results on the target domain than single-source, showing that in most cases, multi-source domain adaptation is superior to single-source domain adaptation in cross-domain research. However, when the target domain is "K", the result of "E → K" is superior to that of "BDE → K" (94.75% vs. 93.75%). This is mainly because the feature distributions of "E" and "K" are closer.
Effect of Soft Prompts. As stated in previous works (Gao et al., 2020), the choice of hard templates may have a huge impact on the performance of prompt tuning. In this subsection, we carry out experiments in "BDE → K" and "B → K" respectively to investigate the influence of different soft prompts under multi-source domain adaptation and single-source domain adaptation settings.
As shown in Figure 4, we use 6 different soft prompts (by changing the number of prompt tokens k). The results demonstrate that the choice of templates exerts a considerable influence on the performance of prompt tuning. For soft prompts, surprisingly, prompt tuning yields the best result with the fewest special tokens. Here k = 3.
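For completeness, a trivial sketch of such a sweep is shown below; apart from the best-performing k = 3, the candidate values are our guess, since the paper does not list them in the text.

```python
import torch

# Illustrative sweep over the number of soft-prompt tokens k (the paper tries
# six settings; only k = 3 is confirmed as the best value in the text).
for k in (1, 3, 5, 10, 20, 50):
    prompt = torch.randn(k, 768, requires_grad=True)   # k learnable vectors
    print(f"k={k}: {prompt.numel()} trainable prompt parameters")
```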
Conclusion
In this paper, we proposed a novel Adversarial Soft Prompt Tuning method (AdSPT) for cross-domain sentiment analysis. Firstly, we use domain-specific soft prompts instead of hard templates to represent domain-specific knowledge. The domain-specific soft prompts can alleviate the domain discrepancy w.r.t. the [MASK] representations in the MLM task. Meanwhile, we also design a novel adversarial training strategy to learn the domain-invariant knowledge of the [MASK] token among different domains. Experiments on the Amazon reviews dataset show state-of-the-art performance.
Figure 1: How domain discrepancy affects prompt tuning. Examples of a book review on the top and a video review on the bottom.
Figure 2: Overall structure of the proposed method.
Figure 3: Analysis of multi-source and single-source.
Figure 4: Results of different soft prompts k on "BDE → K" and "B → K".
| S → T | BERT-DAAT | SENTIX_Fix | FT | FT + AT | PT(HARD) | PT(HARD) + AT | PT(SOFT) | AdSPT |
|-------|-----------|------------|------|---------|----------|---------------|----------|-------|
| B → D | 89.70 | 91.30 | 88.96 | 89.70 | 89.75 | 90.75 | 90.50 | 92.00 |
| B → E | 89.57 | 93.25 | 86.15 | 87.30 | 91.75 | 92.45 | 93.05 | 93.75 |
| B → K | 90.75 | 96.20 | 89.05 | 89.55 | 91.90 | 92.70 | 92.75 | 93.10 |
| D → B | 90.86 | 91.15 | 89.40 | 89.55 | 90.90 | 91.50 | 91.75 | 92.15 |
| D → E | 89.30 | 93.55 | 86.55 | 86.05 | 91.75 | 92.75 | 93.55 | 94.00 |
| D → K | 87.53 | 96.00 | 87.53 | 87.69 | 91.05 | 92.35 | 92.50 | 93.25 |
| E → B | 88.91 | 90.40 | 86.50 | 87.15 | 90.00 | 91.90 | 91.90 | 92.70 |
| E → D | 90.13 | 91.20 | 87.98 | 88.20 | 92.10 | 92.55 | 93.25 | 93.15 |
| E → K | 93.18 | 96.20 | 91.60 | 91.91 | 92.90 | 93.55 | 93.95 | 94.75 |
| K → B | 87.98 | 89.55 | 87.55 | 87.65 | 89.15 | 90.75 | 91.75 | 92.35 |
| K → D | 88.81 | 89.85 | 87.30 | 87.72 | 90.05 | 91.00 | 91.35 | 92.55 |
| K → E | 91.72 | 93.55 | 90.45 | 90.25 | 92.15 | 92.50 | 93.10 | 93.95 |
| Avg.  | 90.12 | 92.68 | 88.25 | 88.56 | 91.12 | 92.06 | 92.45 | 93.14 |

Table 1: Results of single-source domain adaptation on Amazon reviews. There are four domains, B: Books, D: DVDs, E: Electronics, K: Kitchen appliances. In the table header, S: Source domain; T: Target domain; FT: Fine-tuning; AT: Adversarial training; PT(HARD): Prompt-tuning with the hard prompt; PT(SOFT): Prompt-tuning with the soft prompt; + represents the combination, e.g., "PT(HARD) + AT" represents hard prompt tuning with the domain adversarial training. AdSPT is also called "PT(SOFT) + AT". We report mean performances over 5 fold cross-validation.
| S → T | FT | FT + AT | PT(HARD) | PT(HARD) + AT | PT(SOFT) | AdSPT |
|-------|------|---------|----------|---------------|----------|-------|
| BDE → K | 89.70 | 91.30 | 91.50 | 92.25 | 93.25 | 93.75 |
| BDK → E | 90.57 | 91.25 | 91.30 | 93.00 | 93.75 | 94.25 |
| BEK → D | 88.56 | 89.05 | 90.75 | 91.25 | 92.00 | 93.50 |
| DEK → B | 89.86 | 91.75 | 92.00 | 92.25 | 92.75 | 93.50 |
| Avg.    | 89.67 | 90.84 | 91.39 | 92.00 | 92.94 | 93.75 |

Table 2: Results of multi-source domain adaptation on Amazon reviews.
Acknowledgements

We thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by the Project of Technological Innovation 2030 "New Generation Artificial Intelligence" (Grant no. 2020AAA0107904), the Major Scientific Research Project of the State Language Commission in the 13th Five-Year Plan (Grant no. WT135-38), and the Key Support Project of NSFC-Liaoning Joint Foundation (Grant no. U1908216).
Eyal Ben-David, Nadav Oved, and Roi Reichart. 2021. Pada: A prompt-based autoregressive approach for adaptation to unseen domains.
Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. 2007. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems, 19:137.
John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 440-447.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. pages 1877-1901.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Chunning Du, Haifeng Sun, Jingyu Wang, Qi Qi, and Jianxin Liao. 2020. Adversarial and domain-aware BERT for cross-domain sentiment analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4019-4028.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners. pages 3816-3830.
Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. PTR: Prompt tuning with rules for text classification. arXiv preprint arXiv:2105.11259.
Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, and Maosong Sun. 2021. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. arXiv preprint arXiv:2108.02035.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. pages 4582-4597.
Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, and Qiang Yang. 2019. Transferable end-to-end aspect-based sentiment analysis with selective adversarial learning. arXiv preprint arXiv:1910.14192.
Zheng Li, Ying Wei, Yu Zhang, and Qiang Yang. 2018. Hierarchical attention transfer network for cross-domain sentiment classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Zheng Li, Yun Zhang, Ying Wei, Yuxiang Wu, and Qiang Yang. 2017. End-to-end adversarial memory network for cross-domain sentiment classification. In IJCAI, pages 2237-2243.
Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, and Jie Tang. 2021a. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv preprint arXiv:2110.07602.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. GPT understands, too. arXiv preprint arXiv:2103.10385.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430.
Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.
Minlong Peng, Qi Zhang, Yu-Gang Jiang, and Xuan-Jing Huang. 2018. Cross-domain sentiment classification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2505-2513.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Xiaoye Qu, Zhikang Zou, Yu Cheng, Yang Yang, and Pan Zhou. 2019. Adversarial category alignment network for cross-domain sentiment classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2496-2508.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with bayesian optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372-382.
Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. 2018. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723-3732.
Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. pages 5569-5578.
Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference. pages 255-269.
Timo Schick and Hinrich Schütze. 2021. It's not just size that matters: Small language models are also few-shot learners. pages 2339-2352.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. AutoPrompt: Eliciting knowledge from language models with automatically generated prompts.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2021. SPoT: Better frozen model adaptation through soft prompt transfer. arXiv preprint arXiv:2110.07904.
Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 236-246.
Han Zhao, Shanghang Zhang, Guanhang Wu, José M. F. Moura, Joao P. Costeira, and Geoffrey J. Gordon. 2018. Adversarial multiple source domain adaptation. Advances in Neural Information Processing Systems, 31:8559-8570.
Jie Zhou, Junfeng Tian, Rui Wang, Yuanbin Wu, Wenming Xiao, and Liang He. 2020. SentiX: A sentiment-aware pre-trained model for cross-domain sentiment analysis. In Proceedings of the 28th International Conference on Computational Linguistics, pages 568-579.
Xiaojin Zhu and Andrew B. Goldberg. 2009. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1-130.
Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1241-1251.
7,255,624 | Flexible Answer Typing with Discriminative Preference Ranking | An important part of question answering is ensuring a candidate answer is plausible as a response. We present a flexible approach based on discriminative preference ranking to determine which of a set of candidate answers are appropriate. Discriminative methods provide superior performance while at the same time allow the flexibility of adding new and diverse features. Experimental results on a set of focused What ...? and Which ...? questions show that our learned preference ranking methods perform better than alternative solutions to the task of answer typing. A gain of almost 0.2 in MRR for both the first appropriate and first correct answers is observed along with an increase in precision over the entire range of recall. | [
7831164,
223717
] | Flexible Answer Typing with Discriminative Preference Ranking
Christopher Pinchak pinchak@cs.ualberta.ca
Department of Computing Science
Google Inc. University of Alberta
1600 Amphitheatre Parkway EdmontonAlberta, Mountain ViewCACanada, USA
Dekang Lin
Department of Computing Science
Google Inc. University of Alberta
1600 Amphitheatre Parkway EdmontonAlberta, Mountain ViewCACanada, USA
Davood Rafiei drafiei@cs.ualberta.ca
Department of Computing Science
Google Inc. University of Alberta
1600 Amphitheatre Parkway EdmontonAlberta, Mountain ViewCACanada, USA
Flexible Answer Typing with Discriminative Preference Ranking
March -3 April 2009. c 2009 Association for Computational Linguistics
An important part of question answering is ensuring a candidate answer is plausible as a response. We present a flexible approach based on discriminative preference ranking to determine which of a set of candidate answers are appropriate. Discriminative methods provide superior performance while at the same time allow the flexibility of adding new and diverse features. Experimental results on a set of focused What ...? and Which ...? questions show that our learned preference ranking methods perform better than alternative solutions to the task of answer typing. A gain of almost 0.2 in MRR for both the first appropriate and first correct answers is observed along with an increase in precision over the entire range of recall.
Introduction
Question answering (QA) systems have received a great deal of attention because they provide both a natural means of querying via questions and because they return short, concise answers. These two advantages simplify the task of finding information relevant to a topic of interest. Questions convey more than simply a natural language query; an implicit expectation of answer type is provided along with the question words. The discovery and exploitation of this implicit expected type is called answer typing.
We introduce an answer typing method that is sufficiently flexible to use a wide variety of features while at the same time providing a high level of performance. Our answer typing method avoids the use of pre-determined classes that are often lacking for unanticipated answer types. Because answer typing is only part of the QA task, a flexible answer typing model ensures that answer typing can be easily and usefully incorporated into a complete QA system. A discriminative preference ranking model with a preference for appropriate answers is trained and applied to unseen questions. In terms of Mean Reciprocal Rank (MRR), we observe improvements over existing systems of around 0.2 both in terms of the correct answer and in terms of appropriate responses. This increase in MRR brings the performance of our model to near the level of a full QA system on a subset of questions, despite the fact that we rely on answer typing features alone.
The amount of information given about the expected answer can vary by question. If the question contains a question focus, which we define to be the head noun following the wh-word such as city in "What city hosted the 1988 Winter Olympics?", some of the typing information is explicitly stated. In this instance, the answer is required to be a city. However, there is often additional information available about the type. In our example, the answer must plausibly host a Winter Olympic Games. The focus, along with the additional information, give strong clues about what are appropriate as responses.
We define an appropriate candidate answer as one that a user, who does not necessarily know the correct answer, would identify as a plausible answer to a given question. For most questions, there exist plausible responses that are not correct answers to the question. For our above question, the city of Vancouver is plausible even though it is not correct. For the purposes of this paper, we assume correct answers are a subset of appropriate candidates. Because answer typing is only intended to be a component of a full QA system, we rely on other components to help establish the true correctness of a candidate answer.
The remainder of the paper is organized as follows. Section 2 presents the application of discriminative preference rank learning to answer typing. Section 3 introduces the models we use for learning appropriate answer preferences. Sections 4 and 5 discuss our experiments and their results, respectively. Section 6 presents prior work on answer typing and the use of discriminative methods in QA. Finally, concluding remarks and ideas for future work are presented in Section 7.
Preference Ranking
Preference ranking naturally lends itself to any problem in which the relative ordering between examples is more important than labels or values assigned to those examples. The classic example application of preference ranking (Joachims, 2002) is that of information retrieval results ranking. Generally, information retrieval results are presented in some ordering such that those higher on the list are either more relevant to the query or would be of greater interest to the user.
In a preference ranking task we have a set of candidates c 1 , c 2 , ..., c n , and a ranking r such that the relation c i < r c j holds if and only if candidate c i should be ranked higher than c j , for 1 ≤ i, j ≤ n and i = j. The ranking r can form a total ordering, as in information retrieval, or a partial ordering in which we have both c i ≮ r c j and c j ≮ r c i . Partial orderings are useful for our task of answer typing because they can be used to specify candidates that are of an equivalent rank.
Given some c i < r c j , preference ranking only considers the difference between the feature representations of c i and c j (Φ(c i ) and Φ(c j ), respectively) as evidence. We want to learn some weight vector w such that w · Φ(c i ) > w · Φ(c j ) holds for all pairs c i and c j that have the relation c i < r c j . In other words, we want w · (Φ(c i ) − Φ(c j )) > 0 and we can use some margin in the place of 0. In the context of Support Vector Machines (Joachims, 2002), we are trying to minimize the function:
V ( w, ξ) = 1 2 w · w + C ξ i,j(1)
subject to the constraints:
∀(c i < r c j ) : w · (Φ(c i ) − Φ(c j )) ≥ 1 − ξ i,j (2) ∀i, j : ξ i,j ≥ 0(3)
The margin incorporates slack variables ξ i,j for problems that are not linearly separable. This ranking task is analogous to the SVM classification task on the pairwise difference vectors (Φ(c i ) − Φ(c j )), known as rank constraints. Unlike classification, no explicit negative evidence is required as w·(Φ(c i )−Φ(c j )) = (−1) w·(Φ(c j )− Φ(c i )). It is also important to note that no rank constraints are generated for candidates for which no order relation exists under the ranking r. Support Vector Machines (SVMs) have previously been used for preference ranking in the context of information retrieval (Joachims, 2002). We adopt the same framework for answer typing by preference ranking. The SVM light package (Joachims, 1999) implements the preference ranking of Joachims (2002) and is used here for learning answer types.
Application to Answer Typing
Assigning meaningful scores for answer typing is a difficult task. For example, given the question "What city hosted the 1988 Winter Olympics?" and the candidates New York, Calgary, and the word blue, how can we identify New York and Calgary as appropriate and the word blue as inappropriate? Scoring answer candidates is complicated by the fact that a gold standard for appropriateness scores does not exist. Therefore, we have no a priori notion that New York is better than the word blue by some amount v. Because of this, we approach the problem of answer typing as one of preference ranking in which the relative appropriateness is more important than the absolute scores.
Preference ranking stands in contrast to classification, in which a candidate is classified as appropriate or inappropriate depending on the values in its feature representation. Unfortunately, simple classification does not work well in the face of a large imbalance in positive and negative examples. In answer typing we typically have far more inappropriate candidates than appropriate candidates, and this is especially true for the experiments described in Section 4. This is indeed a problem for our system, as neither re-weighting nor attempting to balance the set of examples with the use of random negative examples were shown to give better performance on development data. This is not to say that some means of balancing the data would not provide comparable or superior performance, but rather that such a weighting or sampling scheme is not obvious.
An additional benefit of preference ranking over classification is that preference ranking models the better-than relationship between candidates. Typically a set of candidate answers are all related to a question in some way, and we wish to know which of the candidates are better than others. In contrast, binary classification simply deals with the is/is-not relationship and will have difficulty when two responses with similar feature values are classified differently. With preference ranking, violations of some rank constraints will affect the resulting order of candidates, but sufficient ordering information may still be present to correctly identify appropriate candidates.
To apply preference ranking to answer typing, we learn a model over a set of questions q 1 , ..., q n . Each question q i has a list of appropriate candidate answers a (i,1) , ..., a (i,u) and a list of inappropriate candidate answers b (i,1) , ..., b (i,v) . The partial ordering r is simply the set
∀i, j, k : {a (i,j) < r b (i,k) }(4)
This means that rank constraints are only generated for candidate answers a (i,j) and b (i,k) for question q i and not between candidates a (i,j) and b (l,k) where i = l. For example, the candidate answers for the question "What city hosted the 1988 Winter Olympics?" are not compared with those for "What colour is the sky?" because our partial ordering r does not attempt to rank candidates for one question in relation to candidates for another. Moreover, no rank constraints are generated between a (i,j) and a (i,k) nor b (i,j) and b (i,k) because the partial ordering does not include orderings between two candidates of the same class. Given two appropriate candidates to the question "What city hosted the 1988 Winter Olympics?", New York and Calgary, rank constraints will not be created for the pair (New York, Calgary).
Methods
We begin with the work of Pinchak and Lin (2006) in which question contexts (dependency tree paths involving the wh-word) are extracted from the question and matched against those found in a corpus of text. The basic idea is that words that are appropriate as answers will appear in place of the wh-word in these contexts when found in the corpus. For example, the question "What city hosted the 1988 Winter Olympics?" will have as one of the question contexts "X hosted Olympics." We then consult a corpus to discover what replacements for X were actually mentioned and smooth the resulting distribution. We use the model of Pinchak and Lin (2006) to produce features for our discriminative model.
Table 1: Feature templates

Pattern          Description
E(t, c)          Estimated count of term t in context c
C(t, c)          Observed count of term t in context c
Σ_t' C(t', c)    Count of all terms appearing in context c
Σ_c' C(t, c')    Count of term t in all contexts
S(t)             Count of the times t occurs in the candidate list
These features are mostly based on question contexts, and are briefly summarized in Table 1. Following Pinchak and Lin (2006), all of our features are derived from a limited corpus (AQUAINT); large-scale text resources are not required for our model to perform well. By restricting ourselves to relatively small corpora, we believe that our approach will easily transfer to other domains or languages (provided parsing resources are available).
To address the sparseness of question contexts, we remove lexical elements from question context paths. This removal is performed after feature values are obtained for the fully lexicalized path; the removal of lexical elements simply allows many similar paths to share a single learned weight. For example, the term Calgary in context X ← subject ← host → object → Olympics (X hosted Olympics) is used to obtain a feature value v that is assigned to a feature such as C(Calgary, X ← subject ← * → object → * ) = v. Removal of lexical elements results in a space of 73 possible question contexts. To facilitate learning, all counts are log values and feature vectors are normalized to unit length.
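The feature-value bookkeeping described in this paragraph (log-scaled counts over delexicalized context patterns and unit-length normalization) can be sketched as follows. The context patterns and counts are invented, and the log(1 + count) scaling is one reasonable reading of "all counts are log values", not necessarily the exact transformation used.

```python
import math

def feature_vector(raw_counts):
    """Log-scale raw context counts and normalize the vector to unit length.

    `raw_counts` maps a delexicalised context pattern to an observed count
    for one candidate term."""
    feats = {patt: math.log(1.0 + c) for patt, c in raw_counts.items()}
    norm = math.sqrt(sum(v * v for v in feats.values())) or 1.0
    return {patt: v / norm for patt, v in feats.items()}

# Invented counts for the candidate "Calgary" in two delexicalised contexts.
counts = {
    "X <-subject<- * ->object-> *": 12,   # e.g. "X hosted Olympics"
    "X <-subject<- be ->pred-> *": 87,    # e.g. "X is a city"
}
print(feature_vector(counts))
```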
The estimated count of term t in context c, E(t, c), is a component of the model of Pinchak and Lin (2006) and is calculated according to:

E(t, c) = Σ_χ Pr(χ|t) · C(χ, c)    (5)
Essentially, this equation computes an expected count for term t in context c by observing how likely t is to be part of a cluster χ (Pr(χ|t)) and then observing how often terms of cluster χ occur in context c (C(χ, c)). Although the model of Pinchak and Lin (2006) is significantly more complex, we use their core idea of cluster-based smoothing to decide how often a term t will occur in a context c, regardless of whether or not t was actually observed in c within our corpus. The Pinchak and Lin (2006) system is unable to assign individual weights to different question contexts, even though not all question contexts are equally important. For example, the Pinchak and Lin (2006) model is forced to consider a question focus context (such as "X is a city") to be of equal importance to non-focus contexts (such as "X host Olympics"). However, we have observed that it is more important that candidate X is a city than that it hosted an Olympics in this instance. Appropriate answers are required to be cities even though not all cities have hosted Olympics. We wish to address this problem with the use of discriminative methods.
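Equation (5) itself is a short weighted sum, as the following sketch shows; the cluster memberships and cluster-context counts are invented numbers used only for illustration.

```python
def expected_count(term, context, p_cluster_given_term, cluster_context_counts):
    """Equation (5): E(t, c) = sum over clusters chi of Pr(chi | t) * C(chi, c).

    `p_cluster_given_term[t][chi]` is the soft cluster membership of term t and
    `cluster_context_counts[chi][c]` is how often words of cluster chi occur
    in context c."""
    return sum(p * cluster_context_counts.get(chi, {}).get(context, 0.0)
               for chi, p in p_cluster_given_term.get(term, {}).items())

p_cluster = {"Calgary": {"CITY": 0.9, "NAME": 0.1}}
cluster_counts = {"CITY": {"X hosted Olympics": 40.0},
                  "NAME": {"X hosted Olympics": 2.0}}
print(expected_count("Calgary", "X hosted Olympics", p_cluster, cluster_counts))
# 0.9 * 40 + 0.1 * 2 = 36.2
```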
The observed count features of term t in context c, C(t, c), are included to allow for combination with the estimated values from the model of Pinchak and Lin (2006). Because Pinchak and Lin (2006) make use of cluster-based smoothing, errors may occur. By including the observed counts of term t in context c, we hope to allow for the use of more accurate statistics whenever they are available, and for the smoothed counts in cases for which they are not.
Finally, we include the frequency of a term t in the list of candidates, S(t). The idea here is that the correct and/or appropriate answers are likely to be repeated many times in a list of candidate answers. Terms that are strongly associated with the question and appear often in results are likely to be what the question is looking for.
Both the C(t, c) and S(t) features are extensions to the Pinchak and Lin (2006) model and can be incorporated into the Pinchak and Lin (2006) model with varying degrees of difficulty. The value of S(t) in particular is highly dependent on the means used to obtain the candidate list, and the distribution of words over the candidate list is often very different from the distribution of words in the corpus. Because this feature value comes from a different source than our other features, it would be difficult to use in a non-discriminative model.
Correct answers to our set of questions are obtained from the TREC 2002-2006 results (Voorhees, 2002). For appropriateness labels we turn to human annotators. Two annotators were instructed to label a candidate as appropriate if that candidate was believable as an answer, even if that candidate was not correct. For a question such as "What city hosted the 1988 Winter Olympics?", all cities should be labeled as appropriate even though only Calgary is correct. This task comes with a moderate degree of difficulty, especially when dealing with questions for which appropriate answers are less obvious (such as "What kind of a community is a Kibbutz?"). We observed an interannotator (kappa) agreement of 0.73, which indicates substantial agreement. This value of kappa conveys the difficulty that even human annotators have when trying to decide which candidates are appropriate for a given question. Because of this value of kappa, we adopt strict gold standard appropriateness labels that are the intersection of the two annotators' labels (i.e., a candidate is only appropriate if both annotators label it as such, otherwise it is inappropriate).
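For reference, the kappa statistic mentioned above is the standard chance-corrected agreement measure; a small sketch with invented appropriateness judgements is shown below.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from each annotator's
    label proportions."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Invented judgements (1 = appropriate) from two annotators over ten candidates.
annotator_1 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
annotator_2 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(cohens_kappa(annotator_1, annotator_2))   # 0.8 on this toy example
```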
We introduce four different models for the ranking of appropriate answers, each of which makes use of appropriateness labels in different ways.

Correctness Model: Although appropriateness and correctness are not equivalent, this model deals with distinguishing correct from incorrect candidates in the hope that the resulting model will be able to perform well on finding both correct and appropriate answers. For learning, correct answers are placed at a rank above that of incorrect candidates, regardless of whether or not those candidates are appropriate. This represents the strictest definition of appropriateness and requires no human annotation.

Appropriateness Model: The correctness model assumes only correct answers are appropriate. In reality, this is seldom the case. For example, documents or snippets returned for the question "What country did Catherine the Great rule?" will contain not only Russia (the correct answer), but also Germany (the nationality of her parents) and Poland (her modern-day birthplace). To better address this overly strict definition of appropriateness, we rank all candidates labeled as appropriate above those labeled as inappropriate, without regard to correctness. Because we want to learn a model for appropriateness, training on appropriateness rather than correctness information should produce a model closer to what we desire.

Combined Model: Discriminative preference ranking is not limited to only two ranks. We combine the ideas of correctness and appropriateness to form a three-rank combined model. This model places correct answers above appropriate-but-incorrect candidates, which are in turn placed above inappropriate-and-incorrect candidates.

Reduced Model: Both the appropriateness model and the combined model incorporate a large number of rank constraints. We can reduce the number of rank constraints generated by simply removing all appropriate, but incorrect, candidates from consideration and otherwise following the correctness model. The main difference is that some appropriate candidates are no longer assigned a low rank. By removing appropriate, but incorrect, candidates from the generation of rank constraints, we no longer rank correct answers above appropriate candidates.
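The rank that each model assigns to a training candidate can be made concrete with the following hypothetical helper (higher rank means preferred at training time); it only restates the definitions above and is not taken from the system.

```python
def training_rank(model, correct, appropriate):
    """Rank assigned to a candidate under each of the four models;
    `correct` and `appropriate` are booleans, None means the candidate
    generates no rank constraints."""
    if model == "correctness":
        return 2 if correct else 1
    if model == "appropriateness":
        return 2 if appropriate else 1
    if model == "combined":
        return 3 if correct else (2 if appropriate else 1)
    if model == "reduced":
        # Appropriate-but-incorrect candidates are simply removed.
        return None if (appropriate and not correct) else (2 if correct else 1)
    raise ValueError(model)

for cand, corr, appr in [("Calgary", True, True),
                         ("New York", False, True),
                         ("blue", False, False)]:
    ranks = {m: training_rank(m, corr, appr)
             for m in ("correctness", "appropriateness", "combined", "reduced")}
    print(cand, ranks)
```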
Experiments
To compare with the prior approach of Pinchak and Lin (2006), we use a set of what and which questions with question focus (questions with a noun phrase following the wh-word). These are a subset of the more general what, which, and who questions dealt with by Pinchak and Lin (2006). Although our model can accommodate a wide range of what, which, when, and who questions, the focused what and which questions are an easily identifiable subclass that are rarely definitional or otherwise complex in terms of the desired answer. We take the set of focused what and which questions from TREC 2002-2006 (Voorhees, 2002), comprising a total of 385 questions, and perform 9-fold cross-validation, with one dedicated development partition (the tenth partition). The development partition was used to tune the regularization parameter of the SVM used for testing.
Candidates are obtained by submitting the question as-is to the Google search engine and chunking the top 20 snippets returned, resulting in an average of 140 candidates per question. Google snippets create a better confusion set than simply random words for appropriate and inappropriate candidates; many of the terms found in Google snippets are related in some way to the question. To ensure a correct answer is present (where possible), we append the list of correct answers to the list of candidates.
As a measure of performance, we adopt Mean Reciprocal Rank (MRR) for both correct and appropriate answers, as well as precision-recall for appropriate answers. MRR is useful as a measure of overall QA system performance (Voorhees, 2002), but is based only on the top correct or appropriate answer encountered in a ranked list. For this reason, we also show the precision-recall curve to better understand how our models perform.
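MRR itself is simple to restate; the following sketch uses an invented candidate ranking and gold set, and scores a question as 0 when no relevant candidate appears in its list.

```python
def mean_reciprocal_rank(ranked_lists, is_relevant):
    """Mean over questions of 1 / rank of the first relevant (correct or
    appropriate) candidate in the ranked candidate list."""
    total = 0.0
    for qid, candidates in ranked_lists.items():
        for rank, cand in enumerate(candidates, start=1):
            if is_relevant(qid, cand):
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Invented rankings for two questions.
ranked = {"q1": ["New York", "Calgary", "blue"], "q2": ["Russia", "Germany"]}
gold = {"q1": {"Calgary"}, "q2": {"Russia"}}
print(mean_reciprocal_rank(ranked, lambda q, c: c in gold[q]))   # (1/2 + 1) / 2 = 0.75
```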
We compare our models with three alternative approaches, the simplest of which is random. For random, the candidate answers are randomly shuffled and performance is averaged over a number of runs (100). The snippet frequency approach orders candidates based on their frequency of occurrence in the Google snippets, and is simply the S(t) feature of our discriminative models in isolation. We remove terms comprised solely of question words from all approaches to prevent question words (which tend to be very frequent in the snippets) from being selected as answers. The last of our alternative systems is an implementation of the work of Pinchak and Lin (2006) in which the output probabilities of their model are used to rank candidates.

Results

Figures 1 and 2 show the MRR results and precision-recall curve of our correctness model against the alternative approaches. In comparison to these alternative systems, we show two versions of our correctness model. The first uses a linear kernel and is able to outperform the alternative approaches. The second uses a radial basis function (RBF) kernel and exhibits performance superior to that of the linear kernel. This suggests a degree of non-linearity present in the data that cannot be captured by the linear kernel alone. Both the training and running times of the RBF kernel are considerably larger than those of the linear kernel. The accuracy gain of the RBF kernel must therefore be weighed against the increased time required to use the model. Figures 3 and 4 give the MRR results and precision-recall curves for our additional models in comparison with that of the correctness model. Although losses in MRR and precision are observed for both the appropriateness and combined models using the RBF kernel, the linear kernel versions of these models show slight performance gains.

Discussion of Results
The results of our correctness model, found in Figures 1 and 2, show considerable gains over our alternative systems, including that of Pinchak and Lin (2006). The Pinchak and Lin (2006) system was specifically designed with answer typing in mind, although it makes use of a brittle generative model that does not account for ranking of answer candidates nor for the variable strength of various question contexts. These results show that our discriminative preference ranking approach creates a better model of both correctness and appropriateness via weighting of contexts, preference rank learning, and the incorporation of additional related features (Table 1). The last feature, snippet frequency, is not particularly strong on its own, but can be easily incorporated into our discriminative model. The ability to add a wide variety of potentially helpful features is one of the strengths of discriminative techniques in general. By moving away from simply correct answers in the correctness model and incorporating labeled appropriate examples in various ways, we are able to further improve upon the performance of our approach. Training on appropriateness labels instead of correct answers results in a loss in MRR for the first correct answer, but a gain in MRR for the first appropriate candidate. Unfortunately, this does not carry over to the entire range of precision over recall. For the linear kernel, our three additional models (appropriateness, combined, and reduced) show consistent improvements over the correctness model, but with the RBF kernel only the reduced model produces a meaningful change. The precision-recall curves of Figures 2 and 4 show remarkable consistency across the full range of recall, despite the fact that candidates exist for which feature values cannot easily be obtained. Due to tagging and chunking errors, ill-formed candidates may exist that are judged appropriate by the annotators. For example, "explorer Hernando Soto" is a candidate marked appropriate by both annotators for the question "What Spanish explorer discovered the Mississippi River?" However, our context database does not include the phrase "explorer Hernando Soto", meaning that only a few features will have non-zero values. Despite these occasional problems, our models are able to rank most correct and appropriate candidates high in a ranked list.
Finally, we examine the effects of training set size on MRR. The learning curve for a single partitioning under the correctness model is presented in Figure 5. Although the model trained with the RBF kernel exhibits some degree of instability below 100 training questions, both the linear and RBF models gain little benefit from additional training questions beyond 100. This may be due to the fact that the most common unlexicalized question contexts have already been observed in the first examples, and so additional questions simply repeat the same information. Requiring only a relatively small number of training examples means that an effective model can be learned with relatively little input in the form of question-answer pairs or annotated candidate lists.
Prior Work
The expected answer type can be captured in a number of possible ways. By far the most common is the assignment of one or more predefined types to a question during a question analysis phase. Although the vast majority of the approaches to answer type detection make use of rules (either partly or wholly) (Harabagiu et al., 2005; Sun et al., 2005; Wu et al., 2005; Mollá and Gardiner, 2004), a few notable learned methods for answer type detection exist. One of the first attempts at learning a model for answer type detection was made by Ittycheriah et al. (2000; 2001), who learn a maximum entropy classifier over the Message Understanding Conference (MUC) types. Those same MUC types are then assigned by a named-entity tagger to identify appropriate candidate answers. Because of the potential for unanticipated types, Ittycheriah et al. (2000; 2001) include a Phrase type as a catch-all class that is used when no other class is appropriate. Although the classifier and named-entity tagger are shown to be among the components with the lowest error rate in their QA system, it is not clear how much benefit is obtained from using a relatively coarse-grained set of classes. The approach of Li and Roth (2002) is similar in that it uses learning for answer type detection. They make use of multi-class learning with a Sparse Network of Winnows (SNoW) and a two-layer class hierarchy comprising a total of fifty possible answer types. These finer-grained classes are of more use when computing a notion of appropriateness, although one major drawback is that no entity tagger is discussed that can identify these types in text. Li and Roth (2002) also rely on a rigid set of classes and so run the risk of encountering a new question of an unseen type. Pinchak and Lin (2006) present an alternative in which the probability of a term being appropriate to a question is computed directly. Instead of assigning an answer type to a question, the question is broken down into a number of possibly overlapping contexts. A candidate is then evaluated as to how likely it is to appear in these contexts. Unfortunately, Pinchak and Lin (2006) use a brittle generative model when combining question contexts that assumes all contexts are equally important. Pinchak and Lin (2006) dealt with this assumption by discarding all non-focus contexts when a focus context is present, but this is not an ideal solution.
Learning methods are abundant in QA research and have been applied in a number of different ways. Ittycheriah et al. (2000) created an entire QA system based on maximum entropy components in addition to the question classifier discussed above. Ittycheriah et al. (2000) were able to obtain reasonable performance from learned components alone, although future versions of the system use non-learned components in addition to learned components (Prager et al., 2003). The JAVELIN I system (Nyberg et al., 2005) uses an SVM during the answer/information extraction phase. Although learning is applied in many QA tasks, very few QA systems rely solely on learning. Compositional approaches, in which multiple distinct QA techniques are combined, also show promise for improving QA performance. Echihabi et al. (2003) use three separate answer extraction agents and combine the output scores with a maximum entropy re-ranker. Surdeanu et al. (2008) explore preference ranking for advice or "how to" questions in which a unique correct answer is preferred over all other candidates. Their focus is on complex-answer questions in addition to the use of a collection of user-generated answers rather than answer typing. However, their use of preference ranking mirrors the techniques we describe here, in which the relative difference between two candidates at different ranks is more important than the individual candidates.
Conclusions and Future Work
We have introduced a means of flexible answer typing with discriminative preference rank learning. Although answer typing does not represent a complete QA system, it is an important component to ensure that those candidates selected as answers are indeed appropriate to the question being asked. By casting the problem of evaluating appropriateness as one of preference ranking, we allow for the learning of what differentiates an appropriate candidate from an inappropriate one.
Experimental results on focused what and which questions show that a discriminatively trained preference rank model is able to outperform alternative approaches designed for the same task. This increase in performance comes from both the flexibility to easily combine a number of weighted features and because comparisons only need to be made between appropriate and inappropriate candidates. A preference ranking model can be trained from a relatively small set of example questions, meaning that only a small number of question/answer pairs or annotated candidate lists are required.
The power of an answer typing system lies in its ability to identify, in terms of some given query, appropriate candidates. Applying the flexible model described here to a domain other than question answering could allow for a more focused set of results. One straight-forward application is to apply our model to the process of information or document retrieval itself. Ensuring that there are terms present in the document appropriate to the query could allow for the intelligent expansion of the query. In a related vein, queries are occasionally comprised of natural language text fragments that can be treated similarly to questions. Rarely are users searching for simple mentions of the query in pages; we wish to provide them with something more useful. Our model achieves the goal of finding those appropriate related concepts.
Figure 1: MRR of the correctness model compared with the alternative approaches.
Figure 2: Precision-recall of the correctness model compared with the alternative approaches.
Figure 3: MRR of the additional models compared with the correctness model.
Figure 4: Precision-recall of the additional models compared with the correctness model.
Figure 5: Learning curve for the correctness model.
Acknowledgments

We would like to thank Debra Shiau for her assistance annotating training and test data and the anonymous reviewers for their insightful comments. We would also like to thank the Alberta Informatics Circle of Research Excellence and the Alberta Ingenuity Fund for their support in developing this work.
References

A. Echihabi, U. Hermjakob, E. Hovy, D. Marcu, E. Melz, and D. Ravichandran. 2003. Multiple-Engine Question Answering in TextMap. In Proceedings of the Twelfth Text REtrieval Conference (TREC-2003), Gaithersburg, Maryland.
S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, A. Hickl, and P. Wang. 2005. Employing Two Question Answering Systems in TREC-2005. In Proceedings of the Fourteenth Text REtrieval Conference (TREC-2005), Gaithersburg, Maryland.
A. Ittycheriah, M. Franz, W-J. Zhu, A. Ratnaparkhi, and R. Mammone. 2000. IBM's Statistical Question Answering System. In Proceedings of the 9th Text REtrieval Conference (TREC-9), Gaithersburg, Maryland.
A. Ittycheriah, M. Franz, and S. Roukos. 2001. IBM's Statistical Question Answering System - TREC-10. In Proceedings of the 10th Text REtrieval Conference (TREC-10), Gaithersburg, Maryland.
T. Joachims. 1999. Making Large-Scale SVM Learning Practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press.
T. Joachims. 2002. Optimizing Search Engines Using Clickthrough Data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD). ACM.
X. Li and D. Roth. 2002. Learning Question Classifiers. In Proceedings of the International Conference on Computational Linguistics (COLING 2002), pages 556-562.
D. Mollá and M. Gardiner. 2004. AnswerFinder - Question Answering by Combining Lexical, Syntactic and Semantic Information. In Proceedings of the Australian Language Technology Workshop (ALTW 2004), pages 9-16, Sydney, December.
E. Nyberg, R. Frederking, T. Mitamura, M. Bilotti, K. Hannan, L. Hiyakumoto, J. Ko, F. Lin, L. Lita, V. Pedro, and A. Schlaikjer. 2005. JAVELIN I and II Systems at TREC 2005. In Proceedings of the Fourteenth Text REtrieval Conference (TREC-2005), Gaithersburg, Maryland.
C. Pinchak and D. Lin. 2006. A Probabilistic Answer Type Model. In Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics (EACL 2006), Trento, Italy, April.
J. Prager, J. Chu-Carroll, K. Czuba, C. Welty, A. Ittycheriah, and R. Mahindru. 2003. IBM's PIQUANT in TREC2003. In Proceedings of the Twelfth Text REtrieval Conference (TREC-2003), Gaithersburg, Maryland.
R. Sun, J. Jiang, Y.F. Tan, H. Cui, T-S. Chua, and M-Y. Kan. 2005. Using Syntactic and Semantic Relation Analysis in Question Answering. In Proceedings of the Fourteenth Text REtrieval Conference (TREC-2005), Gaithersburg, Maryland.
M. Surdeanu, M. Ciaramita, and H. Zaragoza. 2008. Learning to rank answers on large online QA collections. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT), pages 719-727, Columbus, Ohio, June. Association for Computational Linguistics.
E.M. Voorhees. 2002. Overview of the TREC 2002 Question Answering Track. In Proceedings of TREC 2002, Gaithersburg, Maryland.
M. Wu, M. Duan, S. Shaikh, S. Small, and T. Strzalkowski. 2005. ILQUA - An IE-Driven Question Answering System. In Proceedings of the Fourteenth Text REtrieval Conference (TREC-2005), Gaithersburg, Maryland. |
15,505,354 | Testing semantic similarity measures for extracting synonyms from a corpus | The definition of lexical semantic similarity measures has been the subject of lots of works for many years. In this article, we focus more specifically on distributional semantic similarity measures. Although several evaluations of this kind of measures were already achieved for determining if they actually catch semantic relatedness, it is still difficult to determine if a measure that performs well in an evaluation framework can be applied more widely with the same success. In the work we present here, we first select a semantic similarity measure by testing a large set of such measures against the WordNet-based Synonymy Test, an extended TOEFL test proposed in(Freitag et al., 2005), and we show that its accuracy is comparable to the accuracy of the best state of the art measures while it has less demanding requirements. Then, we apply this measure for extracting automatically synonyms from a corpus and we evaluate the relevance of this process against two reference resources, WordNet and the Moby thesaurus. Finally, we compare our results in details to those of (Curran and Moens, 2002). | [
15698938,
968883,
5629501
] | Testing semantic similarity measures for extracting synonyms from a corpus
Olivier Ferret olivier.ferret@cea.fr
CEA
LIST, Vision and Content Engineering Laboratory
F-92265 Fontenay-aux-Roses, France
Testing semantic similarity measures for extracting synonyms from a corpus
The definition of lexical semantic similarity measures has been the subject of a large amount of work for many years. In this article, we focus more specifically on distributional semantic similarity measures. Although several evaluations of such measures have already been carried out to determine whether they actually capture semantic relatedness, it is still difficult to determine if a measure that performs well in an evaluation framework can be applied more widely with the same success. In the work we present here, we first select a semantic similarity measure by testing a large set of such measures against the WordNet-based Synonymy Test, an extended TOEFL test proposed in (Freitag et al., 2005), and we show that its accuracy is comparable to the accuracy of the best state-of-the-art measures while it has less demanding requirements. Then, we apply this measure to automatically extract synonyms from a corpus and we evaluate the relevance of this process against two reference resources, WordNet and the Moby thesaurus. Finally, we compare our results in detail to those of (Curran and Moens, 2002).
Introduction
This article takes place in the field of what is called lexical semantic similarity or, even more generally, lexical semantic relatedness. The objective of the work done in this field is to determine how close two words are from a semantic viewpoint and, if their similarity is high enough, the type of the semantic relation they share. A part of this work is dedicated to the design of similarity measures that exploit more or less structured sources of knowledge, such as dictionaries or lexical networks (see (Zesch and Gurevych, 2010) for an overview). In this article, we focus more particularly on corpus-based approaches. Most of them rely on the distributional hypothesis, according to which words found in similar contexts tend to have similar meanings (Firth, 1957). Following (Grefenstette, 1994) and (Lin, 1998), this hypothesis is generally implemented by collecting co-occurrences from a large corpus and characterizing each term T from the corpus by the vector of its co-occurrents. These co-occurrents, also considered as features, are weighted according to the strength of their link with T. Finally, the semantic similarity of two terms is evaluated by applying a similarity measure between their vectors. This perspective was adopted for instance by (Curran and Moens, 2002) and (Weeds, 2003), where a wide set of similarity measures and feature weighting functions were tested. Some works propose variants of this basic schema but without changing the core principles of the distributional approach. One of these variants is based on a probabilistic viewpoint: each term is characterized by a probability distribution over its co-occurrents and the semantic similarity of two terms is evaluated by a distance between their probability distributions (Weeds, 2003). The application of dimensionality reduction techniques to the co-occurrent vectors covers another set of variants in which the semantic similarity between terms is evaluated in the semantic space resulting from the dimensionality reduction. The Latent Semantic Analysis from (Landauer and Dumais, 1997) and the Random Indexing from (Salgren, 2006) are the most significant representatives of this trend.

Works about lexical semantic similarity can also be characterized through the way they evaluate the semantic measures they propose. One common way to perform this evaluation is to apply these measures to a set of TOEFL synonym questions, as initially proposed by (Landauer and Dumais, 1997). Each question consists of a headword and a set of 4 words among which a synonym of the headword has to be identified. After the results for the TOEFL questions had reached a high level (Turney et al., 2003), several extensions of this evaluation approach were proposed, either by using questions from similar tests such as the ESL test (Moraliyski and Dias, 2007), building larger sets of questions by relying on a resource such as WordNet (Freitag et al., 2005; Piasecki et al., 2007) or extending the kind of relations covered by the test, as with the presence of analogies in the SAT test (Turney, 2008). Another common way to evaluate semantic measures is to compare their results to a gold standard. Human judgments about the similarity of pairs of words are sometimes used as a direct gold standard (Weeds, 2003) but such resources are rare and small.
As a consequence, a more indirect evaluation is generally performed (Lin, 1998;Curran and Moens, 2002): the semantic measures to test are used for finding the most similar neighbors of a headword and these neighbors are evaluated against a reference set of synonyms or related words for this headword taken from resources such as WordNet (Miller, 1990) or the Roget's thesaurus (Roget, 1911). In this article, our overall objective is to extract synonyms for nouns from a corpus by relying on the distributional hypothesis, which starts by selecting an appropriate semantic similarity measure. Although we have seen that many works were done about lexical semantic similarity, it is still difficult to know if their results can be transposed to our problem: most of them are about TOEFL-like tests, which are less difficult tasks than ours; when they come from the evaluation against a gold standard, they are generally given only for a restricted set of words (Curran and Moens, 2002) or the evaluation measure takes into account a larger set of semantically similar words than only synonyms (van der Plas and Bouma, 2004). Hence, in this article, we first report our experiments for finding a semantic similarity measure that performs well on an extended TOEFL test within a set of constraints. Then, we study the results of this measure for extracting synonyms. This is an attempt to have a more global view on semantic similarity, following (Turney, 2008) or (Baroni and Lenci, 2009).
Test of semantic similarity measures
Definition of similarity measures
A semantic similarity measure based on the distributional hypothesis heavily depends on the corpus from which distributional data are taken and the means used for extracting these data. Although corpora for distributional similarity tend to be bigger and bigger, such as in (Pantel et al., 2009), we decided to rely in our case on the AQUAINT-2 corpus, which is a middle-size corpus made of around 380 million words coming from news articles. This choice is motivated by the fact that collecting huge sets of textual data is not always possible for all domains and for all languages. Concerning the extraction of distributional data, we also chose to use limited means because advanced linguistic tools are not available, or at least freely available, for all languages. While many works, but not all of them, follow (Lin, 1998) and (Curran and Moens, 2002) and rely on a syntactic parser, we only applied lemmatization and selected content words (nouns, verbs and adjectives) as a pre-processing step, which was done by the TreeTagger tool (Schmid, 1994). As a consequence, the distributional data associated with each word took the form of a vector of co-occurrents collected by a fixed-size window and not a vector of syntactic co-occurrents based on syntactic dependency relations. We call this vector a context vector and we classically refer to its elements, i.e. the co-occurrents of the headword, as features (f). These features were nouns, verbs and adjectives 1. Within this framework, we defined a semantic similarity measure between word x and word y through four characteristics:
• a measure to compare the context vectors of x and y;
• a function to weight the significance of the features of a context vector;
• the size of the window used for collecting cooccurrents;
• the threshold applied for discarding low-frequency co-occurrents before building context vectors.

Table 1 shows the context similarity measures and the feature weighting functions we tested, as they are defined in (Curran and Moens, 2002) for some of them. The measure proposed by Ehlert (Ehlert, 2003) is a special case: as it is a probabilistic measure, it relies on the probability of features and not on their weight, which means that no weighting function is applied.
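To make the construction of these context vectors and of the Cosine + PMI similarity concrete, here is a minimal sketch. The toy sentences, the symmetric window counting and the positive-PMI filter are simplifying assumptions made for illustration; they do not reproduce the exact corpus processing described above.

```python
import math
from collections import Counter, defaultdict

def build_vectors(sentences, window=1):
    """Collect window-based co-occurrence counts for each (lemmatized) word."""
    cooc = defaultdict(Counter)
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    cooc[w][sent[j]] += 1
    return cooc

def pmi_weight(cooc):
    """Weight each co-occurrent f of a word x by pointwise mutual information,
    keeping only positive values (a common simplification)."""
    total = sum(sum(c.values()) for c in cooc.values())
    freq = {w: sum(c.values()) for w, c in cooc.items()}
    wgt = defaultdict(dict)
    for x, ctx in cooc.items():
        for f, n in ctx.items():
            pmi = math.log((n / total) / ((freq[x] / total) * (freq[f] / total)))
            if pmi > 0:
                wgt[x][f] = pmi
    return wgt

def cosine(u, v):
    num = sum(u[f] * v[f] for f in u.keys() & v.keys())
    den = math.sqrt(sum(a * a for a in u.values())) * math.sqrt(sum(b * b for b in v.values()))
    return num / den if den else 0.0

sents = [["judge", "rule", "court"], ["magistrate", "rule", "court"], ["sky", "blue", "colour"]]
weights = pmi_weight(build_vectors(sents, window=1))
print(cosine(weights["judge"], weights["magistrate"]), cosine(weights["judge"], weights["sky"]))
```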
Results and evaluation
As mentioned in the introduction, the selection of the semantic similarity measure we used for synonym extraction was based on an extended TOEFL test and, more precisely, on the WordNet-based Synonymy Test (WBST) proposed in (Freitag et al., 2005) 2. WBST was produced by automatically generating a large set of TOEFL-like questions from the synonyms in WordNet. (Freitag et al., 2005) shows that this test is more difficult than the initial TOEFL test made of 80 questions that was first used in (Landauer and Dumais, 1997). The part of WBST restricted to nouns is made of 9887 questions. All the possible associations between the context similarity measures and the feature weighting functions presented in the previous section were tested with window sizes between 1 and 5 and frequency thresholds between 1 and 5 3. For each question of the test, the tested similarity measure was computed between the headword and each of the four possible choices. These choices were then sorted according to the decreasing values of their similarity score and the choice with the highest score was taken as a candidate synonym. In the rare cases where no choice could be made from the distributional data (between 3.7 and 6.7% of questions according to the measure), a random choice was performed. We classically used the percentage of relevant candidate synonyms as our evaluation measure, which can also be seen as the precision at rank 1 as our similarity measures sorted candidates.

Table 2 shows the results of this evaluation. The first thing to notice is that for almost all our context similarity measures, the best results are obtained with a window size and a frequency threshold equal to 1. Moreover, we can observe that the accuracy of similarity measures tends to decrease while the frequency threshold and the window size increase 4. This means that semantic similarity is preferably characterized by very short range co-occurrents among which only a weak selection has to be performed for discarding co-occurrences that are the most likely to be present only by chance 5. The second main thing to notice is that the Cosine measure with Pointwise Mutual Information and the Ehlert measure have good results, which agrees with the findings of (Freitag et al., 2005). However, (Freitag et al., 2005) had found that Ehlert outperforms Cosine while we found the opposite. More precisely, our best accuracy for Cosine is equal to their best accuracy (without supervised optimization) for Ehlert. Moreover, their measures had been defined with a one-billion word corpus, hence much larger than ours, and the frequency of the WBST nouns in their corpus was at least 1000 while we only discarded words with frequency lower than 11. This evaluation also shows that measures such as Jaccard, Dice† or Lin, whose precision is high for extracting similar words according to (Curran and Moens, 2002), have close accuracy values that are significantly lower than Cosine or Ehlert's accuracies. For these measures, T-test is the best weighting function, which is compatible with (Curran and Moens, 2002), while Tf.idf is the worst. Jaccard† is clearly the worst choice as a context similarity measure. Finally, our best measure compares favorably with (Broda et al., 2009), which uses the nouns of WBST for evaluation as in our case but relies on syntactic co-occurrences collected from the British National Corpus, a 100 million word corpus. For nouns with frequency > 10, its best accuracy is equal to 68.04%.
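The question-answering loop over WBST items described above can be sketched as follows. The similarity scores and the single toy question are invented; only the selection rule (take the choice with the highest similarity, with a random fallback when no distributional evidence is available) mirrors the procedure described in the text.

```python
import random

def wbst_accuracy(questions, similarity, seed=0):
    """Accuracy on TOEFL-style questions: each item is (headword, choices,
    synonym); the choice with the highest similarity to the headword is
    selected, with a random pick when no distributional evidence is available."""
    rng = random.Random(seed)
    correct = 0
    for headword, choices, synonym in questions:
        scores = {c: similarity(headword, c) for c in choices}
        best = max(scores.values())
        ties = [c for c, s in scores.items() if s == best]
        picked = rng.choice(ties) if best > 0 else rng.choice(choices)
        correct += picked == synonym
    return correct / len(questions)

# Invented similarity scores standing in for the Cosine + PMI measure.
sim = {("judge", "magistrate"): 0.8, ("judge", "sky"): 0.1,
       ("judge", "table"): 0.2, ("judge", "blue"): 0.0}
questions = [("judge", ["magistrate", "sky", "table", "blue"], "magistrate")]
print(wbst_accuracy(questions, lambda a, b: sim.get((a, b), 0.0)))   # 1.0
```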
Table 1: Tested similarity measures for contexts and weighting functions for features 6

Context similarity measures:
  Cosine = Σ_i wgt(x_i)·wgt(y_i) / √(Σ_j wgt(x_j)² · Σ_j wgt(y_j)²)
  Jaccard = Σ_i min(wgt(x_i), wgt(y_i)) / Σ_j max(wgt(x_j), wgt(y_j))
  Jaccard† = Σ_i min(wgt(x_i), wgt(y_i)) / Σ_i max(wgt(x_i), wgt(y_i))
  Dice = 2·Σ_i min(wgt(x_i), wgt(y_i)) / (Σ_j wgt(x_j) + Σ_j wgt(y_j))
  Dice† = 2·Σ_i min(wgt(x_i), wgt(y_i)) / Σ_i (wgt(x_i) + wgt(y_i))
  Lin = Σ_i (wgt(x_i) + wgt(y_i)) / (Σ_j wgt(x_j) + Σ_j wgt(y_j))

Feature weighting functions:
  Pointwise Mutual Information (pmi): wgt(x, f) = log(p(x, f) / (p(x)·p(f)))
  T-test: wgt(x, f) = (p(x, f) − p(x)·p(f)) / √(p(x)·p(f))
  Tf.Idf: wgt(x, f) = N(x, f) · log(N_x / N_{x,f})
Applying a lexical similarity measure for extracting synonyms and similar words
Principles
Results from the previous section show that we have built a distributional semantic similarity measure that performs at least as well as state-of-the-art measures on a standard benchmark for evaluating semantic similarity. We now examine in this section to what extent this measure can be used to extract synonyms and similar words. Our extraction process is simple: the possible synonyms of a word are found by retrieving its N nearest neighbors according to our similarity measure. In our case, the retrieval process only consists in applying the similarity measure between the target word and all the other words of the considered vocabulary with the same part-of-speech. Finally, all these words are sorted according to their similarity value and only the first N, which is equal to 100 in our experiments, of them are kept 7. As we use the Cosine measure for evaluating the semantic similarity of words, we could use techniques such as the ones described in (Bayardo et al., 2007) to face the scalability problem of our basic approach for retrieving the nearest neighbors of a word. (Pantel et al., 2009) also addresses this problem for huge sets of data.

7 It was performed approximately in 4 hours on 48 cores of a cluster.
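A minimal sketch of this nearest-neighbor retrieval is given below. The tiny vocabulary and the dot-product stand-in for the weighted Cosine measure are assumptions made purely for illustration.

```python
import heapq

def nearest_neighbors(target, vectors, similarity, n=100):
    """Rank every other word of the vocabulary by its similarity to `target`
    (same part of speech assumed) and keep only the N most similar ones."""
    scored = ((similarity(vectors[target], vec), word)
              for word, vec in vectors.items() if word != target)
    return [w for _, w in heapq.nlargest(n, scored)]

def dot(u, v):
    # Stand-in for the Cosine measure over weighted context vectors.
    return sum(u[f] * v[f] for f in u.keys() & v.keys())

vecs = {"judge": {"court": 1.0, "rule": 0.5},
        "magistrate": {"court": 0.9, "rule": 0.4},
        "sky": {"blue": 1.0}}
print(nearest_neighbors("judge", vecs, dot, n=2))   # ['magistrate', 'sky']
```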
Results and evaluation
Table 3: Evaluation of synonym extraction

Table 3 shows the results of the application of the best similarity measure of the previous section to the extraction of synonyms and similar words. Two well-known resources were taken as reference: WordNet, more precisely its version 3.0, and the Moby thesaurus (Ward, 1996). As we focus on the ability of a semantic similarity measure to extract reliable synonyms more than on the coverage of these resources, we filtered these two references by removing from them all the words that weren't part of the set of mono-term nouns of the AQUAINT 2 corpus for which our distributional data were collected. We also built a third reference (WM) by merging the data coming from WordNet (W) and the Moby thesaurus (M). In distributional approaches, the frequency of words related to the size of the corpus is an important factor. Hence, we give our results globally but also for three ranges of frequencies that split our vocabulary into roughly equal parts (see first column of Table 3): high frequency nouns (frequency > 1000), middle frequency nouns (100 < frequency ≤ 1000) and low frequency nouns (10 < frequency ≤ 100). The third column of Table 3 gives for each resource the number of words for which the evaluation was actually performed. This number is lower than the number of nouns of the first column as some nouns of the AQUAINT 2 corpus have no entry in our resources. The fourth column corresponds to the number of synonyms and similar words in our reference resources that have to be found for the nouns of the AQUAINT 2 corpus, while the fifth column gives the number of synonyms and similar words that were actually found among the first 100 semantic neighbors of each target word of our distributional base. As these neighbors are ranked according to their similarity value with their target word, the evaluation measures can be taken from the Information Retrieval field by replacing documents with synonyms and queries with target words (see the three last columns of Table 3). The R-precision (R-prec.) is the precision after the first R neighbors were retrieved, R being the number of reference synonyms; the Mean Average Precision (MAP) is the average of the precision value after a reference synonym is found; precision at different cut-offs is given for the 1, 5, 10 and 100 first neighbors.

The results of Table 3 are globally low in spite of the good results on the WBST test of the similarity measure we have used. This weakness concerns both the recall of synonyms (around 25% for WordNet and 10% for the Moby thesaurus) and their rank among semantic neighbors (see R-precision, MAP and P@1, 5, 10, 100). This observation goes beyond our particular experiments as the similarity measure we relied on is not specific to our framework. However, the situation is somewhat different depending on the frequency range of target words: the best results are obtained for high-frequency words and evaluation measures significantly decrease for words whose frequency is less than 100 occurrences. More globally, the ability for a distributional approach to catch the semantic relatedness of words seems to be closely correlated with the frequency of these words in the corpus from which distributional data are collected. While this is an argument in favor of the use of larger and larger corpora, as illustrated by (Pantel et al., 2009), it doesn't invalidate the idea that rare words may have a different distributional behavior that should be taken into account specifically. Table 3 also shows that the characteristics of the reference resources have a significant impact on results. WordNet provides a restricted number of synonyms for each noun (2.8 on average) while the Moby thesaurus contains for each entry a larger number of synonyms and similar words (50 on average). This difference directly explains why the precision at rank 1, for words whose frequency is higher than 1000, is equal to 0.413 for the Moby thesaurus while it is only equal to 0.171 for WordNet.
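The evaluation measures used in Table 3 can be restated in a few lines. The neighbor list and reference synonyms below are invented; the MAP variant shown divides by the number of reference synonyms, as described above.

```python
def r_precision(ranked, gold):
    """Precision after the first R = |gold| retrieved neighbors."""
    r = len(gold)
    return sum(w in gold for w in ranked[:r]) / r if r else 0.0

def average_precision(ranked, gold):
    """Average of the precision values measured each time a reference synonym
    is found, divided by the number of reference synonyms."""
    hits, precisions = 0, []
    for i, w in enumerate(ranked, start=1):
        if w in gold:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(gold) if gold else 0.0

def precision_at(ranked, gold, k):
    """Precision over the first k retrieved neighbors."""
    return sum(w in gold for w in ranked[:k]) / k

# Invented neighbor list and reference synonyms for one target word.
neighbors = ["magistrate", "court", "jury", "referee", "trial"]
reference = {"magistrate", "justice", "referee"}
print(r_precision(neighbors, reference),
      average_precision(neighbors, reference),
      precision_at(neighbors, reference, 5))
```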
Discussion
As a reference evaluation framework doesn't exist for the extraction of synonyms by distributional methods, the comparison of our results with already existing works faces some difficulties. The main one is the lack of consensus about the type of the target relations to find. The extraction of synonyms such as those of WordNet is a difficult task because their low number (see previous Section) requires an extraction method with a very high precision to obtain acceptable results. As a consequence, the type of the reference relations goes generally beyond synonymy and is extended to the notion of similar words, which is supposed to account for semantic relatedness. A part of the relations of the Moby thesaurus can be put into this category in our case. (van der Plas and Bouma, 2004) followed a similar trend: although it relied on the Dutch EuroWordNet, it made use for evaluation of a WordNet similarity measure that also took into account the hierarchy of hypernyms. (Pantel et al., 2009) is another variant: it evaluated its results against Entity Sets, which gathered entities that were not only similar but more generally analogous. (Curran and Moens, 2002) is more directly comparable to our work. It tested a large number of similarity measures based on syntactic co-occurrences by using them for extracting semantic neighbors. The evaluation of this extraction was done against the fusion of three thesauri: the Macquarie (Bernard, 1990), Roget's and Moby thesauri. It focused on 70 nouns randomly chosen from WordNet such that they were representative of WordNet's nouns in terms of frequency, number of senses, specificity (depth in the hierarchy of WordNet) and domains. Among all tested measures, the best results were obtained by the pair Dice† + T-test, with 0.76 as precision at rank 1, 0.52 at rank 5 and 0.45 at rank 10 for 70 nouns while our best precision is 0.413 at rank 1, 0.280 at rank 5 and 0.219 at rank 10 for 3,732 nouns. Apart from the fact that our test set is much larger than (Curran and Moens, 2002)'s one, the gold standards are partly different in the two cases, which can have a significant influence on results as we pointed out in the previous section. For our 3,732 nouns, the Moby thesaurus provides 69 synonyms on average while 331 synonyms are available for each of the 70 nouns of (Curran and Moens, 2002) 8. Moreover, we can observe that the recall rate is different for the two evaluations, equal to 8.3% for (Curran and Moens, 2002) and to 11.4% in our case. Even if the difference in the average number of relations for each entry in the two reference resources has an impact that is difficult to estimate, this comparison suggests that using syntactic co-occurrences is a way to increase precision while graphical co-occurrences are more interesting for favoring recall.
Conclusion and future work
In this article, we have first presented our experiments for selecting the similarity measure based on the distributional paradigm that is the most likely to catch the semantic relatedness of words. This selection relied on an extended version of a TOEFL test, which is a classical way to evaluate semantic similarity measures. We then applied this selected measure for automatically extracting synonyms from a corpus and we evaluated the resulting set of candidate synonyms against two complementary resources: WordNet and the Moby thesaurus. Although the results of this evaluation are coherent with the state of the art, they show that results about semantic similarity for tasks such as TOEFL-like tests must be considered with caution when they are transposed to more difficult tasks such as finding synonyms. From our viewpoint, they represent a starting point for studying more precisely the kind of relations that are covered by distributional semantic similarity measures. The most straightforward extension to this work is to substitute syntactic co-occurrences for graphical co-occurrences to determine if the use of syntactic features leads to increased precision, as is suggested by our analysis of the results of (Curran and Moens, 2002). Furthermore, we would like to test methods for improving the quality of our distributional data, such as those proposed in (Zhitomirsky-Geffet and Dagan, 2009) or (Broda et al., 2009), and to extend them by taking into account new criteria such as word senses coming from a word sense discrimination method (Ferret, 2004). Finally, we plan to make publicly available A2ST 9, the similarity thesaurus we have built from the AQUAINT 2 corpus, similarly to the similarity thesaurus of Dekang Lin (Lin, 1998).
6 i: index on shared features of x and y; j: index on all features of x or y; N(x, c): frequency of c as a co-occurrent of x; N_x: number of words; N_{x,c}: number of words having c as co-occurrent.
                          window size 1            window size 3            window size 5
                          frequency threshold      frequency threshold      frequency threshold
                          1     3     5            1     3     5            1     3     5
cosine     pmi            71.6  69.7  67.6         65.7  63.7  62.8         62.5  60.6  59.4
           t-test         68.9  66.7  65.0         65.4  64.6  63.8         63.3  62.9  62.0
           tf.idf         64.0  63.1  62.0         63.3  62.9  62.5         62.6  62.4  61.7
ehlert     -              70.2  68.5  66.2         68.9  67.2  65.9         66.9  65.9  64.4
jaccard    pmi            64.8  63.0  61.7         57.1  55.0  54.1         54.6  52.6  51.3
           t-test         68.1  65.8  63.9         61.3  58.8  57.7         58.4  55.9  54.6
           tf.idf         54.2  53.9  53.6         49.7  49.6  49.3         48.0  47.9  47.4
dice       pmi            64.8  63.0  61.7         57.1  55.0  54.1         54.6  52.6  51.3
           t-test         68.1  65.8  63.9         61.3  58.8  57.7         58.4  55.9  54.6
           tf.idf         54.2  53.9  53.6         49.7  49.6  49.3         48.0  47.9  47.4
lin        pmi            65.6  63.5  61.7         57.0  54.6  53.6         54.2  52.1  51.1
           t-test         67.3  65.3  63.3         61.0  59.5  58.9         58.5  57.3  55.9
           tf.idf         60.6  59.6  58.3         57.9  56.6  55.9         56.6  54.9  53.9
dice†      pmi            65.0  63.2  61.5         58.7  57.5  57.0         56.5  55.9  55.3
           t-test         66.0  64.3  62.3         59.7  57.9  57.0         57.5  56.0  55.1
           tf.idf         51.6  52.3  52.7         48.4  47.9  48.3         47.2  47.2  46.6
jaccard†   pmi            56.1  54.7  53.2         54.3  54.3  53.4         54.0  54.3  53
           t-test         39.6  37.9  38.2         46.7  43.7  42.2         48.1  45.7  43.0
           tf.idf         35.3  34.3  34.4         40.2  38.1  37.3         41.4  39.7  38.4
Table 2: Evaluation of semantic similarity measures
More precisely, only the words whose frequency is strictly higher than 10 are kept, both in context vectors and for headwords.
Available at the following address: http://www.cs.cmu.edu/ dayne/wbst-nanews.tar.gz.
3 Frequency must be higher than or equal to the threshold.
4 There are some rare exceptions, which mainly concern the Jaccard† measure.
5 A frequency threshold equal to 1 discards around half of the co-occurrences.
This difference shows that the Macquarie thesaurus is much richer than the Moby thesaurus and WordNet.
A2ST stands for AQUAINT 2 Similarity Thesaurus.
Acknowledgements

This work was partly supported by the Jean-Luc Lagardère Foundation (http://www.fondation-jeanluclagardere.com).

References
Olivier Ferret. 2004. Discovering word senses from a network of lexical cooccurrences. In 20th International Conference on Computational Linguistics (COLING 2004), pages 1326-1332, Geneva, Switzerland.
John R. Firth. 1957. A synopsis of linguistic theory 1930-1955. In Studies in Linguistic Analysis, pages 1-32. Blackwell, Oxford.
Dayne Freitag, Matthias Blume, John Byrnes, Edmond Chow, Sadik Kapadia, Richard Rohwer, and Zhiqiang Wang. 2005. New experiments in distributional representations of synonymy. In Ninth Conference on Computational Natural Language Learning (CoNLL), pages 25-32, Ann Arbor, Michigan, USA.
Gregory Grefenstette. 1994. Explorations in automatic thesaurus discovery. Kluwer Academic Publishers.
Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240.
Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics (ACL-COLING'98), pages 768-774, Montréal, Canada.
George A. Miller. 1990. WordNet: An On-Line Lexical Database. International Journal of Lexicography, 3(4).
Rumen Moraliyski and Gaël Dias. 2007. One sense per discourse for synonym detection. In 5th International Conference on Recent Advances in Natural Language Processing (RANLP 2007), Borovets, Bulgaria.
Patrick Pantel, Eric Crestan, Arkady Borkovsky, Ana-Maria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In 2009 Conference on Empirical Methods in Natural Language Processing, pages 938-947, Singapore, August.
Maciej Piasecki, Stanisław Szpakowicz, and Bartosz Broda. 2007. Extended similarity test for the evaluation of semantic similarity functions. In Language Technology Conference (LTC).
Peter Roget. 1911. Thesaurus of English words and phrases. Longmans, Green and Co., London, UK.
Magnus Salgren. 2006. The Word-space model. Ph.D. thesis, Stockholm University.
Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing.
Peter D. Turney, Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems. In 4th International Conference on Recent Advances in Natural Language Processing (RANLP 2003), pages 482-489, Borovets, Bulgaria.
Peter D. Turney. 2008. A uniform approach to analogies, synonyms, antonyms, and association. In COLING 2008, pages 905-912.
Lonneke van der Plas and Gosse Bouma. 2004. Syntactic contexts for finding semantically related words. In Ton van der Wouden, Michaela Poß, Hilke Reckman, and Crit Cremers, editors, Computational Linguistics in the Netherlands 2004, Selected Papers from the Fifteenth CLIN Meeting, Leiden, Netherlands.
Grady Ward. 1996. Moby thesaurus. Moby Project.
Julie Weeds. 2003. Measures and Applications of Lexical Distributional Similarity. Ph.D. thesis, Department of Informatics, University of Sussex.
Torsten Zesch and Iryna Gurevych. 2010. Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words. Natural Language Engineering, 16(1):25-59.
Maayan Zhitomirsky-Geffet and Ido Dagan. 2009. Bootstrapping distributional feature vector quality. Computational Linguistics, 35(3):435-461, September. |
220,445,356 | Etude par EMA des mouvements de la mâchoire inférieure durant les consonnes de l'arabe marocain | [] | Etude par EMA des mouvements de la mâchoire inférieure durant les consonnes de l'arabe marocain
Chakir Zeroual chakirzeroual@yahoo.fr
Faculté Polydisciplinaire de Taza, BP 1223, Taza, Morocco
Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS, Paris, France
Phil Hoole hoole@phonetik.uni-muenchen.de
Institut fuer Phonetik und Sprachverarbeitung, University of Munich, Germany
Adamantios Gafos adamantios.gafos@uni-potsdam.de
University of Potsdam, Germany
Etude par EMA des mouvements de la mâchoire inférieure durant les consonnes de l'arabe marocain
Introduction
Few articulatory studies have attempted to characterise the spatiotemporal properties of the lower jaw (henceforth Minf) during consonants. They have generally focused on labials, coronals and/or velars, and have shown that the articulatory properties of Minf vary according to place and manner of articulation and are partly associated with biomechanical constraints, possibly combined with perceptual constraints. Here we build on the studies of Keating et al. (1994: English and Swedish), Lee et al. (1995: French, Korean, Arabic), Mooshammer et al. (2006 and 2007), Recasens et al. (2012: Spanish), Elgendy (1999: Egyptian Arabic), Lindblom (1983: Swedish) and Zeroual et al. (2007: Moroccan Arabic, henceforth MA).
/s / have the highest Minf position, which remains almost stable whatever the adjacent vowels (Keating et al., 1994; Lee et al., 1995; Mooshammer et al. 2007). This posture is attributed to the articulatory precision required during these consonants, as well as to the presence of an additional source of frication noise between the lower and upper incisors (Shadle, 1985), which reinforces their acoustic characteristics. Minf is more fronted during / / compared with /s/ (Mooshammer et al. 2007), most probably because of lip rounding during the former.
/t/ (with a long VOT) also has a high and invariable Minf position; its height is similar (Keating et al. 1994) or lower compared with /s/ (Lindblom, 1983; Mooshammer et al. 2007). According to Mooshammer et al. (2006), the substantial raising of Minf during /t/ is necessary to produce a long and salient release burst. The maximal raising of Minf is often aligned with the release of /t/, but located well before it during /d/ (Mooshammer et al., 2006; Zeroual et al. 2007). Mooshammer et al. (2007) show that Minf is lower during /d/ compared with /t/, which they attribute to the fact that /d/ can be apical and /t/ laminal (see also Dart, 1991).
Compared with the other coronals, notably the obstruents, /l r/ (apical) often have a lower Minf position, which seems necessary to avoid, during /l/, a lateral contact between the tongue and the palate (Mooshammer et al., 2006) and to facilitate, during /r/, the raising and retraction of the tongue tip. Minf height during /l/ can be higher (Elgendy, 1999), identical (Keating, 1994), or lower (Lindblom, 1983) compared with /k g/. /p b/ generally have a Minf height between the dentals and the velars: /t, d/ > /p, b/ > /k, g/ (Keating et al., 1994; Lee, 1995). The lowered Minf position during /p b/, compared with /t d/, is due to the lips, which can move relatively independently of Minf. Keating et al. (1994) and Elgendy (1999) reported a higher Minf position during /f/ compared with /b/, most probably linked to the labiodental constriction. /k g/ are generally associated with a lower Minf position, compared with the labial and coronal obstruents, which coarticulates fairly strongly with the adjacent vowels. The lowered Minf position during /k g/ is generally attributed to its rotation axis, which is closer to the dorsal articulator (Hoole & Kühnert, 1996; Keating et al. 1994). Indeed, the dorso-velar contact does not require a substantial raising of Minf, and the rotation of the latter only weakly affects the height of the tongue back.
Very few studies have been devoted to post-velar consonants; that of Boff (1983, cineradiography) showed that, in aCa, the MA pharyngeals display the largest lowering of Minf, more marked than for /a/ (realised [ ] in MA). Then come the laryngeals, then the uvulars. A similar gradation was reported by Elgendy (1999), for whom the lowering of Minf during the pharyngeals would be active and would allow the tongue root to retract more easily. For Nolan (1995), this lowering of Minf is a passive consequence of larynx raising during the pharyngeals. The very marked coarticulation of Minf with the vowels during the laryngeals and the pharyngeals, combined with the fact that the pharyngeals show a higher Minf position in Iraqi Arabic and a lower one in MA, led Goldstein (1994) to suggest that the laryngeals and the pharyngeals are produced without active involvement of this articulator. In fact, for Goldstein (1994), all the gutturals / /, which constitute a natural class, would be characterised by the absence of active involvement of Minf.
This comprehensive study, based on the data of 3 speakers producing several MA consonants (Table 1), tests some of the articulatory and perceptual hypotheses cited above: (i) the salient frication noise of coronals necessarily requires a very high Minf position; (ii) apical consonants have a lower Minf position compared with their laminal counterparts; (iii) retraction of the tongue root requires a lowering of Minf; (iv) the gutturals are produced without active involvement of the lower jaw. Note that several criteria can show the active involvement of Minf (Goldstein, 1994; Mooshammer et al., 2006). We rely here on the one that consists in comparing, in the same vocalic contexts, the spatial positions of Minf during the production of consonants with different places and manners of articulation. We also analyse the magnitude of the effect (coarticulation) of the vowels on Minf movements during these consonants. 4 native MA speakers took part in a 3-dimensional EMA experiment (AG500, Carstens Medizinelektronik). This technique allowed us to record the movements of several articulators through sensors attached near the tongue tip, on its mid part, on its back, on the central outer edges of the lower and upper lips, and on the outer base of the lower incisors to record the movements of Minf.
Methodology
The Mview program, developed in Matlab by M. Tiede (Haskins), allowed us to identify automatically, from the velocity curve (20% threshold), the temporal and spatial positions of the movements of each articulator, as well as the peak velocity values and the amplitude of its closing and opening phases. With this program we also extracted measurements of the vertical and horizontal position of Minf at the onset, midpoint and offset of the three segments of the iCi and aCa sequences. This study is essentially based on the analysis of the measurements taken at the midpoint of /C/ (Figure 1).
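As an illustration of this kind of velocity-threshold segmentation (a simplified sketch, not the Mview implementation; the function name, units and the exact criterion are our assumptions), a movement's onset and offset can be located where the absolute velocity first and last exceeds 20% of its peak:

```python
import numpy as np

def segment_gesture(position, fs, threshold=0.20):
    """Illustrative onset/offset detection on a 1-D articulator trajectory.

    position : sensor coordinate over time (e.g. in mm)
    fs       : sampling rate in Hz
    Returns the first and last samples where |velocity| exceeds
    `threshold` times its peak, plus the peak speed itself.
    """
    velocity = np.gradient(position) * fs            # units per second
    speed = np.abs(velocity)
    peak = speed.max()
    above = speed >= threshold * peak
    onset = int(np.argmax(above))                    # first sample above the cutoff
    offset = len(speed) - 1 - int(np.argmax(above[::-1]))  # last sample above the cutoff
    return onset, offset, peak
```

A real pipeline would apply this criterion separately to each closing and opening phase between velocity zero crossings; the sketch only shows the thresholding idea.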
Vertical movements of the lower jaw (Minf) as a function of /C/
Our results show that /t d T D s S/ have the same vertical Minf position, which is significantly higher than for the other consonants /b l r k g q / (only /d/ vs /b/ is non-significant). These results seem to confirm the hypothesis that producing a very salient frication noise during /s S/ and /t/ (which has a long VOT) requires a substantial raising of Minf. The identical Minf height during the emphatics /T D S/, compared with their non-emphatic counterparts /t d s/, shows that the retraction of the tongue root, needed to produce their secondary articulation in the pharyngeal cavity, is not necessarily correlated with a lowering of Minf. We believe that the substantial raising of Minf during /T D S/, even though they are apical, compensates for the tongue-root retraction due to their pharyngealisation.
The fact that the coronal obstruents have a much higher Minf position than /l r/ agrees with the observations of several studies. /l r/ have a slightly lower Minf than /k g/, but not statistically different. Note that only Lindblom (1983: Swedish) reports a significantly lower Minf position during /l r/ compared with /k g/. Keating's (1994) observations show an identical Minf position during /l r k g/ with the tendency [l r] [k g] (for English). In Elgendy's (1999) data, Minf during /l r/ is significantly higher compared with /k g/. In our data, the slightly higher position of the lower jaw during /k g/ compared with /l r/ may be due to the fact that MA /k/ is dorso-palatal before /a, i/ (Boff, 1983) and that MA /a/ is realised [ ] except next to an emphatic. Zeroual et al. (2011a and 2011b) also show that MA /k/ is produced before /a i/ with the tongue back in the same horizontal position, which is more fronted than its position before /u/. /b/ is produced with a Minf position that is higher, but not significantly so, than that of /l r k g/. This tendency agrees with the studies showing that Minf height during /p b/ follows the gradation /t d/ > /p b/ > /k g/ (Keating et al., 1994; Lee, 1995). Minf is higher during /k g/ compared with /q h/; only the differences /k g/ vs. /q/ are non-significant. The EMA observations of Zeroual et al. (2011a) show that /q/ is strongly palatalised in iCi, which may explain the very similar Minf height during /k g q/.
The gutturals / / have the lowest Minf position compared with all the other consonants. Only / / show a Minf position that is statistically lower compared with all the other oral consonants. Even though Minf is lower during /q / compared with /l r/, this difference is not statistically significant. We believe that this particular behaviour of /q /, close at once to the oral consonants and to the gutturals, is linked to the fact that they are "complex segments" with an oral articulation involving the tongue back and a pharyngeal one produced by the tongue root (Goldstein, 1994).
Horizontal movements of the lower jaw (Minf) as a function of /C/
/s S/ have the most fronted Minf position compared with all the other consonants, including /d T D/ (only /t/ vs. /s S/ is non-significant). /t/ is also more fronted than the remaining consonants with the exception of /d/. Recall that /s S t d T D/ are produced with a statistically similar Minf height. These results suggest that the horizontal translation movement of Minf can be controlled independently of its rotation movement. /s/ and /d/ have the same horizontal position as their respective emphatic counterparts /S/ and /D/. These two results show that the retraction of the tongue root during the emphatic coronal consonants / S D/ does not require a retraction of Minf. This result may also constitute an argument against the hypothesis that emphatic consonants are accompanied by lip rounding (Jakobson, 1962). This deduction is based on the observations of Mooshammer et al. (2007) and Lee (1995), which show a more fronted Minf position during / / compared with /s/, attributed to lip rounding during the former.
/d T D l r k g/ have a horizontal Minf position that remains statistically identical, even though its vertical position is significantly higher during /d T D/ than during /l r k g/. These results again suggest that the horizontal (front-back) translation movement of Minf can be controlled independently of the rotation movement. / / shows the most retracted Minf position compared with all the other consonants (only / / vs /q / is non-significant). /q / have a horizontal Minf position located between the velars (non-significant difference) and the pharyngeals (non-significant difference). Minf during /h/ is significantly more retracted than during /s S t/, similar to that during /b d T D l r k g/, but more fronted than during /q /. Note that /h/ shows a very particular behaviour: its height varies significantly as a function of the adjacent vowel, whereas its horizontal position remains almost stable (Table 3). This result, observed for each of our 3 speakers taken separately, confirms that Minf movements cannot be reduced to a simple rotation.
Variation of Minf positions during /C/ as a function of the adjacent vowels
[Fragment of Table 3 (mean vertical and horizontal Minf positions per consonant in aCa, iCi and the pooled contexts); see the Table 3 caption below.]
Conclusion
This study attempted to explain the causes of the variation in the spatial positions of the lower jaw during several Moroccan Arabic consonants with different places and manners of articulation, produced in the iCi and aCa contexts. We showed that the involvement of the lower jaw is crucial during the coronal obstruents /s S t T/. Tongue-root retraction is not necessarily correlated with a lowering of Minf, and likewise apical consonants are not always associated with a lowering of Minf. Our data also show that the lower jaw does not seem to be involved during the laryngeals and the pharyngeals, which agrees with Goldstein's (1994) conclusions. The last major result of our study shows that the vertical (up/down) and horizontal (front/back) movements of Minf are not always correlated; Minf displacements during consonant production therefore cannot be reduced to a simple rotation movement.
[Residue of Tables 4 and 5: consonant-by-consonant significance matrices of the TukeyHSD post-hoc tests; see the Table 4 and Table 5 captions below.]
Figure 1: Curves showing the evolution (in mm) of the vertical (Y: green lines) and horizontal (X: red lines) positions of the tongue tip (TTIPPOS) and of the lower jaw (JAWPOS) during /ada/ in /madab /. The blue vertical lines correspond (from left to right) to the onset, midpoint and offset of /d/ (/C/), where the (x, y) measurements were taken.
Figure 2: Variation of the vertical (left) and horizontal (right) position of the lower jaw (in mm) during MA /b s S t d T D l r k g q h/ produced in iCi and aCa. Each value is the mean of 5 repetitions produced by 3 speakers in the two contexts.
TABLE 4: TukeyHSD post-hoc tests for the vertical Minf positions (*, p<0.05; **, p<0.01; ***, p<0.001). Shaded cell: Minf during the column /C/ lower than during the row /C/.
TABLE 1: MA consonants included in this study, placed in the aCa and iCi contexts. "Emphatic" refers to a secondary articulation characterised as pharyngealisation or uvularisation, hence the non-standard symbols /T D S/. /t/ has a very long VOT and a laminal articulation; /T/ a very short VOT and an apical articulation (Zeroual et al. 2007). /r/ is realised [ ]. /a/ is phonetically [ ] except next to an emphatic, where it is realised [ɑ]. *: non-word.
MA items (Table 1), completed with a few non-words and presented in random order, were produced (8 times) by 4 native speakers in the carrier phrase [galha ____ hnaja] "he said __ to her here". In these items, several types of consonants appear in the symmetrical contexts aCa and iCi. The items are verbs conjugated in the perfective /ma+CaC+ / and imperfective /yCiC/ forms, 3rd person masculine singular. /yCiC/ is phonetically realised [iCiC]. In /ma+CaC+ /, [ma… ] is a discontinuous negation morpheme. In all items, stress falls on the second vowel. This study is limited to the data of 3 speakers producing 5 repetitions per item.
3 Results and discussion
A two-way ANOVA shows that Minf height varies significantly as a function of the consonant [F(14; 420) = 37.1; p < 0.001] and of the vocalic context [F(1; 420) = 62.7; p < 0.001]; their interaction is significant (p < 0.001). A second two-way ANOVA shows that the horizontal Minf position varies as a function of the consonant [F(14; 420) = 25.35; p < 0.001] and of the vocalic context [F(14; 420) = 17.94; p < 0.001]; their interaction is not significant (p = 0.23). The mean vertical and horizontal Minf positions are given in Table 3; the post-hoc analyses (TukeyHSD) of the first ANOVA are summarised in Table 4 (vertical positions) and those of the second in Table 5 (horizontal positions).
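For readers who wish to reproduce this type of analysis, the sketch below runs a two-way ANOVA (consonant by vowel context) followed by TukeyHSD post-hoc comparisons with Python's statsmodels. It is only an illustrative recipe, not the authors' original analysis: the file name and column names are hypothetical, and it ignores speaker as a random factor.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one row per token, with
#   y_mm       vertical jaw position at the consonant midpoint (mm)
#   consonant  /b s S t d .../ label
#   context    "aCa" or "iCi"
df = pd.read_csv("jaw_midpoints.csv")

# Two-way ANOVA with interaction (type-II sums of squares)
model = smf.ols("y_mm ~ C(consonant) * C(context)", data=df).fit()
print(anova_lm(model, typ=2))

# TukeyHSD post-hoc comparisons over the consonant factor
print(pairwise_tukeyhsd(df["y_mm"], df["consonant"]))
```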
TABLE 2: Mean differences between the horizontal (x-val) and vertical (y-val) position values of /C/ in aCa compared with iCi (*, p<0.05; **, p<0.01; ***, p<0.001). (-): negative difference.
Our data (Table 2) show that the variations in the horizontal Minf position during each consonant in aCa, compared with its counterpart in iCi, are smaller than the variations in its vertical position. On the horizontal dimension, the posterior consonants seem to undergo a larger effect of the vocalic context than the anterior consonants (smaller mean differences). On the vertical dimension, the coronal obstruents show the smallest mean differences, whereas the pharyngeals and the laryngeals display the largest ones. Minf height is thus more stable during the coronal obstruents, but strongly influenced by the vocalic context during the pharyngeals and the laryngeals; /b l r k g q x/ show an intermediate behaviour.
TABLE 3: Mean values and (standard deviations) of the vertical (y-val) and horizontal (x-val) Minf position (in mm) during /b s S t d T D l r k g q h/ produced 5 times by 3 speakers in aCa and iCi, as well as in the two contexts pooled.
TABLE 5: TukeyHSD post-hoc tests for the horizontal Minf positions (*, p<0.05; **, p<0.01; ***, p<0.001). Shaded cell: Minf during the column /C/ more fronted than during the row /C/.
References
DART S. (1991). Articulatory and acoustic properties of apical and laminal articulations. UCLA Working Papers in Phonetics 79, 1-155.
ELGENDY A.M. (1999). Jaw contribution to the timing control of pharyngeal consonants production. Proc. of the XIVth ICPhS, San Francisco: 2415-2418.
GOLDSTEIN L. (1994). Possible articulatory bases for the class of guttural consonants. In P. Keating (ed.), Phonological Structure and Phonetic Form: Papers in Laboratory Phonology III. Cambridge: Cambridge University Press, 234-241.
HOOLE P. & KÜHNERT B. (1996). Tongue-jaw coordination in German vowel production. Proc. of the 4th Speech Production Seminar, Autrans, 97-100.
JAKOBSON R. (1962). Mufaxxama, the 'emphatic' phonemes in Arabic. In E. Pulgram (ed.), Studies presented to Joshua Whatmough on his 60th Birthday. The Hague: Mouton, 105-115.
KEATING P., LINDBLOM B., LUBKER J. & KREIMAN J. (1994). Variability in jaw height for segments in English and Swedish VCVs. Journal of Phonetics 22, 407-422.
LEE S.H. (1995). Orals, gutturals, and the jaw. In B. Connell & A. Arvaniti (eds.), Phonology and Phonetic Evidence: Papers in Laboratory Phonology IV. Cambridge: Cambridge University Press.
LINDBLOM B. (1983). Economy of speech gestures. In P.F. MacNeilage (ed.), The Production of Speech, 217-245. New York: Springer.
MOOSHAMMER C., HOOLE P. & GEUMANN A. (2007). Jaw and order. Language and Speech 50, 145-176.
MOOSHAMMER C., HOOLE P. & GEUMANN A. (2006). Interarticulator cohesion within coronal consonant production. Journal of the Acoustical Society of America 120, 1028-1039.
NOLAN F. (1995). The role of the jaw: active or passive? In B. Connell & A. Arvaniti (eds.), Phonology and Phonetic Evidence: Papers in Laboratory Phonology IV. Cambridge: Cambridge University Press.
RECASENS D. (2012). A study of jaw coarticulatory resistance and aggressiveness for Catalan consonants and vowels. Journal of the Acoustical Society of America 132, 412-420.
SHADLE C. (1985). The acoustics of fricative consonants. PhD thesis, MIT.
ZEROUAL C., HOOLE P., FUCHS S. & ESLING J.H. (2007). EMA study of the coronal emphatic and non-emphatic plosive consonants of Moroccan Arabic. Proc. of the XVIth ICPhS, Saarbrücken: 397-340.
ZEROUAL C., HOOLE P. & ESLING J.H. (2011a). Contraintes articulatoires et acoustico-perceptives liées à la production de /k/ emphatisée en arabe marocain. In M. Embarki & C. Dodane (eds.), La Coarticulation. Des indices à la représentation, 227-240. Paris: L'Harmattan.
ZEROUAL C., ESLING J.H., HOOLE P. & RIDOUANE R. (2011b). Ultrasound study of Moroccan Arabic labiovelarization. Proc. of the XVIIth ICPhS, Hong Kong.
|
220,060,205 | [] | Topic Balancing with Additive Regularization of Topic Models
Topic Balancing with Additive Regularization of Topic Models
Eugeniia Veselova veselova.er@phystech.edu
Moscow Institute of Physics and Technology, Moscow, Russia
Konstantin Vorontsov
Moscow Institute of Physics and Technology, Moscow, Russia
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, July 5 - July 10, 2020. Copyright Association for Computational Linguistics.
This article proposes a new approach for building topic models on unbalanced collections, based on existing methods and on our experiments with them. Real-world data collections contain topics in various proportions, and documents of a relatively small theme often become spread over the larger topics instead of being grouped into one topic of their own. To address this issue, we design a new regularizer for the Θ and Φ matrices of the probabilistic Latent Semantic Analysis (pLSA) model. We show that this regularizer increases the quality of topic models trained on unbalanced collections, and we support it conceptually through our experiments.
Introduction
Topic modelling is a widespread approach to unsupervised text analysis and clustering. Given the number of latent variables (topics), topic models extract hidden word×topic and topic×document probability distributions from text corpora. Topic models have proven to be relevant in a wide range of contexts and uni- and multilingual tasks (Uys et al., 2008; De Smet and Moens, 2009; Boyd-Graber et al., 2017).
Two fundamental topic models are probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Various extensions of the pLSA and LDA models have emerged over the past years, e.g. Additive Regularization of Topic Models (ARTM) (Vorontsov and Potapenko, 2015), a modification of pLSA in which required solution properties are induced by an additional regularizer term in the model. Regularizers make it possible to take various problem-specific features of the data into account, which is why we apply the ARTM framework in our work.
Despite almost 30 years of model development, many problems and issues have been raised in the topic modelling field, for example the "order effect" in LDA (Agrawal et al., 2018): the model converges to different topic sets when trained on unstructured data. Even with structured data, the solution of the pLSA or LDA model is non-unique and unstable. Such instability may be reduced by tuning the model with regularizers, as in the ARTM model. Inserting prior Φ and Θ distributions into the model, following Wallach et al. (2009), promotes convergence to a better and more stable solution, as does regularization. However, many problems with the models themselves and with quality metrics remain unsolved.
In this article, we address the topic balancing problem. At present, the problem of training topic models on unbalanced collections has not been studied thoroughly and is far from a comprehensive solution. We examine previously suggested approaches to topic balancing and propose a balancing procedure based on an a priori ratio between topic capacities.
2 Problem statement
Topic modelling introduction
Let D denote the text corpus, W the set of words in the corpus (the vocabulary), and T the set of topics. Every document $d \in D$ is represented as a token sequence $(w_1, w_2, \ldots, w_{n_d})$ of length $n_d$ over the vocabulary of size n. In models based on the "bag-of-words" hypothesis, a more compact way to represent a document is as a multiset over the vocabulary, in which each token $w \in d$ occurs $n_{dw}$ times.
A topic model describes the conditional probabilities p(w|d) of the appearance of tokens w in documents d through the probabilities of tokens in topics $\phi_{wt} = p(w|t)$ and of topics in documents $\theta_{td} = p(t|d)$. To build a probabilistic generative model, we assume the following hypotheses hold:
• conditional independence hypothesis: each topic generates tokens regardless of the document, p(w|d, t) = p(w|t);
• "bag-of-words" hypothesis: word order in the document does not affect the desired distributions;
• a finite set of topics T exists in the corpus, and each token occurrence in each document refers to some latent topic from T.
According to the law of total probability and the assumption of conditional independence
$$p(w|d) = \sum_{t \in T} \phi_{wt}\,\theta_{td}$$
This probabilistic model describes how the collection D is generated from the known distributions p(w|t) and p(t|d). Learning a topic model is an inverse problem: obtaining the tokens-topics and topics-documents distributions p(w|t) and p(t|d) given a corpus D. This problem is equivalent to finding a stochastic matrix decomposition of the counter matrix as a product F ≈ ΦΘ, where the matrix Φ contains the token probabilities for the topics and Θ the topic probabilities for the documents:
$$F = \big(\hat p(w|d)\big)_{W \times D},\quad \hat p(w|d) = \frac{n_{dw}}{n_d};\qquad \Phi = (\phi_{wt})_{W \times T},\ \phi_{wt} = p(w|t);\qquad \Theta = (\theta_{td})_{T \times D},\ \theta_{td} = p(t|d)$$
In pLSA the topic model is learned by log-likelihood maximization through the EM-algorithm:
$$L(\Phi, \Theta) = \sum_{d \in D}\sum_{w \in d} n_{dw}\,\log \sum_{t \in T}\phi_{wt}\theta_{td}\ \to\ \max_{\Phi,\Theta} \qquad (1)$$
Further details can be found in Appendix A.
Since the matrix product ΦΘ is defined only up to a linear transformation, the solution of the problem is not unique and is therefore unstable. Additional objectives called regularizers, depending on the Θ and Φ matrices, can be included in the log-likelihood along with their non-negative regularization coefficients τ to reduce the solution domain.
The likelihood maximization problem (1) with r regularizers then takes the following form:
$$L(\Phi, \Theta) + \sum_{i=1}^{r}\tau_i R_i(\Phi, \Theta)\ \to\ \max_{\Phi,\Theta} \qquad (2)$$
The solution of the problem then takes the form
$$p_{tdw} = \frac{\phi_{wt}\theta_{td}}{\sum_{t \in T}\phi_{wt}\theta_{td}}$$
$$\phi_{wt} = \underset{w \in W}{\mathrm{norm}}\Big(n_{wt} + \phi_{wt}\frac{\partial R}{\partial \phi_{wt}}\Big),\qquad \theta_{td} = \underset{t \in T}{\mathrm{norm}}\Big(n_{td} + \theta_{td}\frac{\partial R}{\partial \theta_{td}}\Big)$$
where $n_{wt} = \sum_{d \in D} n_{dw}\,p_{tdw}$ and $n_{td} = \sum_{w \in d} n_{dw}\,p_{tdw}$.
The regularization approach and theorem proofs can be found in (Vorontsov and Potapenko, 2015).
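To make the update above concrete, here is a minimal numpy sketch of one regularized E/M step. It is an illustration written for this text, not the authors' code nor the BigARTM library implementation, and all function and variable names are our own. With both regularizer gradients left as None it reduces to plain pLSA (1).

```python
import numpy as np

def em_step(F, Phi, Theta, dR_dPhi=None, dR_dTheta=None):
    """One rational EM iteration of (regularized) pLSA.

    F     : |W| x |D| matrix of token counts n_dw
    Phi   : |W| x |T| matrix of p(w|t)
    Theta : |T| x |D| matrix of p(t|d)
    dR_dPhi, dR_dTheta : optional regularizer gradients (same shapes).
    """
    # E-step: p(t|d,w) enters only through the ratio F / (Phi @ Theta)
    P = Phi @ Theta                               # model p(w|d)
    ratio = F / np.maximum(P, 1e-30)

    # M-step counters n_wt and n_td
    n_wt = Phi * (ratio @ Theta.T)                # |W| x |T|
    n_td = Theta * (Phi.T @ ratio)                # |T| x |D|

    # Regularizer contributions phi * dR/dphi and theta * dR/dtheta
    if dR_dPhi is not None:
        n_wt += Phi * dR_dPhi
    if dR_dTheta is not None:
        n_td += Theta * dR_dTheta

    # norm(.): truncate negatives to zero, then normalize columns
    Phi_new = np.maximum(n_wt, 0.0)
    Phi_new /= np.maximum(Phi_new.sum(axis=0, keepdims=True), 1e-30)
    Theta_new = np.maximum(n_td, 0.0)
    Theta_new /= np.maximum(Theta_new.sum(axis=0, keepdims=True), 1e-30)
    return Phi_new, Theta_new
```

The `np.maximum` calls implement the convention that negative regularized counters are truncated to zero before normalization.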
Topic balancing problem statement
Let $n_t = \sum_{d \in D} p(t|d)\,n_d$ denote the topic capacity of topic t. Let $k = n_{t_{\max}} / n_{t_{\min}}$ denote the imbalance degree of the model; with $p(t) = n_t / n$ denoting the topic probability and $N(t) = |\{d \in D \mid \arg\max_{t'} \theta_{t'd} = t\}|$, we can also define a document imbalance degree $k = N_{t_{\max}} / N_{t_{\min}}$. Probabilistic topic models based on matrix factorization tend to spread documents over topics uniformly and to extract topics of equal capacity. In order to maximize the log-likelihood, the model has to engage all of its parameters in describing the data. Reducing the number of topics, i.e. the number of available parameters, is unprofitable for the model in terms of EM optimization, and a strong reduction of the proportion of a particular topic is therefore unprofitable as well. Experiments show that in pLSA and LDA models the imbalance degree rarely exceeds 3-4.
A similar problem arises in multiclass classification with imbalanced data, where the classifier prefers to predict the label of the most common class for every object in order to reduce the number of classification errors. The standard approach to the imbalanced data problem is class weighting: it provides some bias towards the minority classes while training the model, and thus helps to improve its performance across classes. Document imbalance leads to an overweighting of the vocabulary of the predominant topics in the collection. This effect exaggerates "word burstiness" in the model (Doyle and Elkan, 2009; Lei et al., 2011) at the level of documents: if a collection has a disproportion of topics, a document is likely to be assigned to a widely represented topic.
Let us call the model imbalanced if it can extract and maintain topics with the imbalance degree k up to 10. In this article, we examine different ways of balancing topics in topic models and building imbalanced models.
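As an illustration (not code from the paper), the quantities $n_t$, N(t) and both imbalance degrees can be computed directly from a fitted Θ matrix and the document lengths:

```python
import numpy as np

def imbalance_degrees(Theta, doc_lengths):
    """Theta: |T| x |D| matrix of p(t|d); doc_lengths: array of n_d values."""
    n_t = Theta @ np.asarray(doc_lengths, dtype=float)      # topic capacities n_t
    N_t = np.bincount(Theta.argmax(axis=0),                  # documents "won" by each topic
                      minlength=Theta.shape[0])
    k_capacity = n_t.max() / n_t.min()
    k_documents = N_t.max() / max(N_t.min(), 1)              # guard against empty topics
    return n_t, N_t, k_capacity, k_documents
```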
3 Topic balancing hypotheses
3.1 Iterative renormalization of the parameter in the Dirichlet distribution
When the probabilistic generative model is formulated in terms of LDA, the topic distributions over words and the document distributions over topics are generated from prior Dirichlet distributions. A learning algorithm for LDA can also be considered as an EM-like algorithm with a modified M-step (Asuncion et al., 2009). The simplest and most frequently used modification is the following:
$$\phi_{wt} \propto n_{wt} + \beta_w, \qquad \theta_{td} \propto n_{td} + \alpha_t$$
Thus the probabilities of words in topics and of topics in documents are estimated with an a priori shift. This LDA modification is covered by the ARTM framework through the LDA regularizer
$$R(\Phi, \Theta) = \sum_t\sum_w(\beta_w - 1)\log\phi_{wt} + \sum_d\sum_t(\alpha_t - 1)\log\theta_{td}$$
and the parameters of the Dirichlet distributions can be adjusted manually. We put forward the hypothesis that increasing the Dirichlet parameters in proportion to the topic capacities, similarly to class weighting in unbalanced classification, can counteract the tendency of the EM-algorithm to decrease the capacity of large topics and increase the capacity of small topics.
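A minimal sketch of this renormalization idea, under our own reading of it (the exact schedule used in the experiments is not stated here): keep a fixed total Dirichlet mass and redistribute it over topics in proportion to the current topic capacities before each smoothed Θ update.

```python
import numpy as np

def renormalize_alpha(alpha_total, n_t):
    """Spread a fixed Dirichlet mass alpha_total over topics
    in proportion to the current topic capacities n_t."""
    return alpha_total * n_t / n_t.sum()

def smoothed_theta(n_td, alpha_t):
    """Smoothed M-step update: theta_td proportional to n_td + alpha_t."""
    theta = n_td + alpha_t[:, None]          # broadcast alpha over documents
    return theta / theta.sum(axis=0, keepdims=True)
```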
For the modelling experiment we chose a synthetic collection consisting of two themes, business and music, with 1000 and 150 documents respectively. Two pairs of models were built to compare the modelling results and evaluate the balancing opportunity. The first pair was trained with two topics, with and without renormalization, the second with six topics. In the second pair, the separation of topics was evident from each topic's size and top tokens: five topics had top tokens from the big theme (with ∼200 documents in each topic), and the last topic had top tokens from the small theme. However, better topics were obtained with balanced Dirichlet parameters. In the first pair of models we expected that, by rebalancing the Dirichlet parameters, we could obtain two topics with ∼150 and ∼1000 documents each and different top tokens. This hypothesis was not fully confirmed in the experiment: without parameter renormalization the EM-algorithm converged to topics with almost equal topic capacities, while with parameter renormalization the model maintained a document imbalance degree of 2 instead of 7. The results of the experiment can be seen in Figure 1.
Rebalancing p(t|d, w)
Following the class-weighting approach used for unbalanced classification, we considered the possibility of rebalancing p(t|d, w) (4). We proposed dividing $n_{tdw}$ by $n_t$. However, the same experiment as with the LDA model gave no positive results, and later in this subsection we prove why this hypothesis fails.
We show that dividing p(t|d, w) by any value $Z_t$ that depends on t only does not change Φ, but only leads to a minor redistribution of topics in documents. The proof can be found in Appendix B. We prove that under this renormalization the M-step formulas for Φ in the EM-algorithm do not change, because the normalizing multiplier $Z_t$ cancels out. Therefore, pLSA renormalization does not influence the topics.
Φ initialization
According to Wallach et al. (2009), prior Φ and Θ distributions inserted into the model can promote the stability of the solution. We followed this assumption and conducted an experiment in which the Φ matrix was initialized not randomly, as in unmodified topic models, but with probabilities calculated in advance from the known distribution of documents over topics. We suppose that the "real" Φ initialization, together with the Θ calculated from this Φ, is the optimal factorization of the counter matrix F in terms of log-likelihood. Therefore, the overall topic balance should be preserved and the relative change of the Φ matrix values should remain small (∼1-3%).
For this experiment we chose four synthetic collections with two themes, business and music: the first collection consisted of 1000 and 10 documents per theme respectively, the second of 1000 and 100 documents, the third of 1000 and 300 documents, and the fourth of 1000 and 600 documents.
The experiment was split into two levels: at the first level, we trained models without a priori Φ initialization; at the second level, the precomputed Φ matrix was used as the initial tokens-topics distribution of each model. All zero a priori probabilities in the calculated Φ matrix were replaced with a minimal probability value of about 10^-5; zero probabilities emerge when a word does not occur in any document of the corresponding topic, and replacing them avoids artificially limiting the topic vocabularies. For each collection we trained and compared a pair consisting of a basic model with two topics and a Φ-initialized model with two topics, eight models in total. Regardless of the data collection, after the first 10 training iterations the uninitialized models converged to balanced solutions with almost equal N(t), whereas the a priori initialization maintained a document imbalance degree of up to 6. This result is represented in Figure 2 through the topics' N(t). The left column represents the model without initialization, the right column the model with initialization, for true topic balances of [10:1000, 100:1000, 300:1000, 600:1000] respectively.
4 Topic prior regularizer
4.1 Description of the regularizer
According to our experiments and modelling experience, optimizing the log-likelihood functional does not preserve the topic balance in models and does not converge to the solution that is optimal from the user's point of view. We want an optimal solution to allow topics with relatively small topic capacities, or topics with relatively small p(t|d) for most corpus documents. Optimality in these terms can be achieved by a solution in which some topic variables, or degrees of freedom, are not fully utilized. During optimization via the EM-algorithm, the current functional tends to redistribute p(t|d) in the most efficient way, without degenerate distributions. Thus topic capacities obtain similar values during the training process.
From our experiments we formed the hypothesis that an additional shift in the tokens-topics matrix Φ may act on the EM-algorithm as a restriction of the degrees of freedom, supporting topic imbalance. By setting the relative collection balance in Φ in advance, we can control the possible collection balance after the training process. During optimization, all $\phi_{wt}$ are specified according to the distribution of tokens in documents. We implemented this hypothesis in a new ARTM regularizer $R_{TopicPrior}$, called TopicPriorRegularizer, with a parameter β describing the a priori topic balance in the collection.
$$R_{TopicPrior}(\Phi, \Theta) = \sum_t\sum_w \beta_t \log\phi_{wt}$$
To better understand the influence of $R_{TopicPrior}$ on the EM-algorithm, we calculated its partial derivative:
$$\frac{\partial R}{\partial \phi_{wt}} = \frac{\beta_t}{\phi_{wt}}$$
so that, in the case of one additional regularizer with regularization coefficient τ determining the regularization strength, maximization of the modified log-likelihood yields
$$\phi_{wt} \propto n_{wt} + \tau\beta_t$$
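In code, the regularizer simply adds a constant, topic-specific amount τβ_t to each column of the n_wt counters before normalization. The helper below is an illustrative sketch written for this text, not the library implementation of TopicPriorRegularizer:

```python
import numpy as np

def topic_prior_m_step(n_wt, beta, tau):
    """Phi update with the TopicPrior regularizer: phi_wt proportional to n_wt + tau * beta_t.

    n_wt : |W| x |T| counters from the E-step
    beta : length-|T| vector of a priori topic shares (sums to 1)
    tau  : regularization coefficient
    """
    shifted = np.maximum(n_wt + tau * beta[None, :], 0.0)
    return shifted / shifted.sum(axis=0, keepdims=True)
```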
In most cases we lack knowledge about the topic capacities of the collection under study, so we cannot set a precise β value. We generalize our regularization approach and propose the $R_{TopicPriorSampled}$ regularizer, in which the β parameter is sampled from a Dirichlet distribution with parameter $\gamma \in \mathbb{R}^1$. γ accounts for the estimated data sparsity: γ = 1 corresponds to random topic capacities in a model, γ ≫ 1 to equal topic capacities, and γ ≪ 1 to significantly uneven topic capacities.
$$\beta \sim \mathrm{Dir}(\gamma), \qquad \gamma \in \mathbb{R}^1$$
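Sampling β for $R_{TopicPriorSampled}$ is a one-liner with numpy; the γ values below merely illustrate the two regimes described above and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8
beta_uneven = rng.dirichlet([0.1] * T)    # gamma << 1: a few topics take most of the mass
beta_equal = rng.dirichlet([100.0] * T)   # gamma >> 1: nearly equal topic shares
```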
Modelling experiments
For the first modelling experiment we chose a synthetic collection with two themes, business and music, with 1000 and 100 documents respectively. We built two models with two topics each and trained them for 15 epochs; the second model was trained with $R_{TopicPrior}$, where β = [0.1, 0.9]. After training we evaluated both models by their perplexity, top tokens and n(t) for every topic in the model. The second model extracted the small theme as a distinct topic, while the first, unregularized model has two similar topics. The training results are presented in Figure 3: the first row represents the model without the regularizer, the second row the regularized model; the left column represents the N(t) of the topics, the right column the n(t) of the topics.
For the second modelling experiment we chose a collection with eight themes, with the following document proportions: doc_prop = [3000, 2000, 1500, 1000, 1000, 1000, 700, 350]. Two models were trained on this collection: an unregularized model and a regularized model, whose regularizer was initialized with β = doc_prop / sum(doc_prop). Figure 4 and Figure 5 show a better topic composition in the second model compared with the first.
Discussion and conclusion
Learning an unbalanced topic model from an unbalanced text collection is a non-trivial task for all existing modelling methods. In this paper we discussed the problem of training topic models on unbalanced text collections. No previous research provides a thorough analysis of this problem or an efficient training procedure for unbalanced models. After reviewing the problem, we proposed an approach to building topic models that is able to maintain a relatively high imbalance degree. We described our approach in terms of pLSA regularization and gave a theoretical justification for the $R_{TopicPrior}$ regularizer.
A pLSA and ARTM model optimization problem
In pLSA the topic model is learned by log-likelihood maximization through the EM-algorithm:
$$L(\Phi, \Theta) = \sum_{d \in D}\sum_{w \in d} n_{dw}\,\log \sum_{t \in T}\phi_{wt}\theta_{td}\ \to\ \max_{\Phi,\Theta} \qquad (3)$$
The calculation of the auxiliary variables $p_{tdw}$ is the E-step, while the update of the model parameters from the calculated $p_{tdw}$ is the M-step of the EM-algorithm.
B Proof of rebalancing failure
We considered the possibility of rebalancing p(t|d, w) in accordance with the class-weighting approach: we proposed dividing $n_{tdw}$ by $n_t$.
We show that dividing p(t|d, w) by any value $Z_t$ that depends on t only does not change Φ, but only leads to a minor redistribution of topics in documents. We put R = 0 in (2) for the sake of simplicity.
Investigating the M-step of the EM-algorithm, we write down the log-likelihood with the renormalizing factor $1/Z_t$:
$$\sum_{d \in D}\sum_{w \in d} n_{dw}\sum_{t \in T}\frac{p_{tdw}}{Z_t}\,\log\big(\phi_{wt}\theta_{td}\big)\ \to\ \max_{\Phi,\Theta}$$
and then separate the variables Φ and Θ:
$$\sum_{w \in W}\sum_{t \in T}\frac{n_{wt}}{Z_t}\log\phi_{wt} + \sum_{d \in D}\sum_{t \in T}\frac{n_{td}}{Z_t}\log\theta_{td}\ \to\ \max_{\Phi,\Theta}$$
To solve this optimization task under the linear constraints, we apply the Karush-Kuhn-Tucker conditions. We write the Lagrangian:
$$\mathcal{L}(\Phi, \Theta) = \sum_{w \in W}\sum_{t \in T}\frac{n_{wt}}{Z_t}\log\phi_{wt} - \sum_{t \in T}\lambda_t\Big(\sum_w \phi_{wt} - 1\Big) + \sum_{d \in D}\sum_{t \in T}\frac{n_{td}}{Z_t}\log\theta_{td} - \sum_{d \in D}\mu_d\Big(\sum_{t \in T}\theta_{td} - 1\Big)$$
and equate its derivatives to zero:
$$\frac{\partial \mathcal{L}}{\partial \phi_{wt}} = \frac{n_{wt}}{Z_t\,\phi_{wt}} - \lambda_t = 0,\qquad \lambda_t\phi_{wt} = \frac{n_{wt}}{Z_t}\ \Rightarrow\ \lambda_t = \frac{n_t}{Z_t},\qquad \phi_{wt} = \underset{w \in W}{\mathrm{norm}}(n_{wt})$$
and
$$\frac{\partial \mathcal{L}}{\partial \theta_{td}} = \frac{n_{td}}{Z_t\,\theta_{td}} - \mu_d = 0,\qquad \mu_d\theta_{td} = \frac{n_{td}}{Z_t}\ \Rightarrow\ \mu_d = \sum_{t \in T}\frac{n_{td}}{Z_t},\qquad \theta_{td} = \underset{t \in T}{\mathrm{norm}}\Big(\frac{n_{td}}{Z_t}\Big)$$
The M-step formulas for Φ do not change, because the normalizing multiplier $Z_t$ cancels out. Therefore, pLSA renormalization has no influence on the topics.
Figure 1: Results of LDA renormalization.
Figure 2: Results of a priori Φ initialization in the pLSA model.
Figure 3: Results of unregularized and regularized pLSA model training with 2 topics.
Figure 4: Results (N(t)) of unregularized and regularized pLSA model training with 8 topics.
Figure 5: Results (n(t)) of unregularized and regularized pLSA model training with 8 topics.
with linear constraints of non-negativity and normalization:
$$\sum_{w \in W}\phi_{wt} = 1,\ \ \phi_{wt} \ge 0;\qquad \sum_{t \in T}\theta_{td} = 1,\ \ \theta_{td} \ge 0$$
The solution of the pLSA problem satisfies the following system of equations with auxiliary variables $p_{tdw}$:
$$p_{tdw} = \frac{\phi_{wt}\theta_{td}}{\sum_{t \in T}\phi_{wt}\theta_{td}}$$
$$\phi_{wt} = \underset{w \in W}{\mathrm{norm}}(n_{wt}),\quad n_{wt} = \sum_{d \in D} n_{dw}\,p_{tdw};\qquad \theta_{td} = \underset{t \in T}{\mathrm{norm}}(n_{td}),\quad n_{td} = \sum_{w \in d} n_{dw}\,p_{tdw}$$
References
Amritanshu Agrawal, Wei Fu, and Tim Menzies. 2018. What is wrong with topic modeling? And how to fix it using search-based software engineering. Information and Software Technology, 98:74-88.
Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and inference for topic models. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 27-34. AUAI Press.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022.
Jordan Boyd-Graber, Yuening Hu, David Mimno, et al. 2017. Applications of topic models. Foundations and Trends in Information Retrieval, 11(2-3):143-296.
Wim De Smet and Marie-Francine Moens. 2009. Cross-language linking of news stories on the web using interlingual topic modelling. In Proceedings of the 2nd ACM Workshop on Social Web Search and Mining, pages 57-64. ACM.
Gabriel Doyle and Charles Elkan. 2009. Accounting for burstiness in topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 281-288.
Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 289-296. Morgan Kaufmann Publishers Inc.
Shaoze Lei, Jianwen Zhang, Shifeng Weng, and Changshui Zhang. 2011. Topic model with constrainted word burstiness intensities. In The 2011 International Joint Conference on Neural Networks, pages 68-74. IEEE.
JW Uys, ND Du Preez, and EW Uys. 2008. Leveraging unstructured information using topic modelling. In PICMET'08 - 2008 Portland International Conference on Management of Engineering & Technology, pages 955-961. IEEE.
Konstantin Vorontsov and Anna Potapenko. 2015. Additive regularization of topic models. Machine Learning, 101(1-3):303-323.
Hanna M. Wallach, David M. Mimno, and Andrew McCallum. 2009. Rethinking LDA: Why priors matter. In Advances in Neural Information Processing Systems, pages 1973-1981.
||
239,020,526 | MT adaptation from TMs in ModernMT | [] | MT adaptation from TMs in ModernMT
Marcello Federico, FBK, Italy
MT adaptation from TMs in ModernMT
Proceedings of AMTA 2016, Oct 28 - Nov 1, 2016
Suffix array indexed with TMs. The phrase table is built on the fly by sampling from the SA. Phrases of the TMs with the highest weights are sampled first.
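The sketch below illustrates, in spirit only, how translation options could be gathered on the fly for one source phrase by taking suffix-array matches from the highest-weighted TMs first; the data structures and names are hypothetical, and this is not ModernMT's actual code.

```python
def sample_phrase_occurrences(occurrences, tm_weights, sample_size=1000):
    """Rank phrase occurrences by the weight of the TM they come from.

    occurrences : list of (tm_id, sentence_id, position) tuples for one source
                  phrase, e.g. as returned by a suffix-array lookup
    tm_weights  : dict mapping tm_id -> context matching weight (e.g. 0.50, 0.45, 0.05)
    Returns up to `sample_size` occurrences, highest-weighted TMs first;
    translation options would then be extracted from this sample.
    """
    ranked = sorted(occurrences,
                    key=lambda occ: tm_weights.get(occ[0], 0.0),
                    reverse=True)
    return ranked[:sample_size]
```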
Roadmap
2015 Q1: development started
2016 Q2: first alpha release (10 langs, fast training, context aware, distributed)
2016 Q4: first beta release (45 langs, incremental learning)
2017 Q4: final release (enterprise ready)
Roadmap
2015 Q1: development started
2016 Q2: first alpha release (2 langs, fast training, context aware, distributed)
2016 Q4: first beta release (plug-in, 10 langs, incremental learning)
2017 Q4: final release (enterprise ready)
[Slide diagram: three TMs labelled A, B and C with weights 50%, 45% and 5%.]
Analyze the input text (tokenization, stop words), retrieve the best matching TMs, and compute the matching score.
[Slide diagram: Adaptive Phrase Table; Suffix Array with Ranked Sampling; 1000.]
|
5,892,397 | The Counselor Project at the Un/versity of Massachusetts | [] | The Counselor Project at the Un/versity of Massachusetts
David D. McDonald, James D. Pustejovsky, Marie M. Vaughan, Brian Stucky, Penelope Sibun, Seth Rosenberg, Kelly Murray, Kevin Gallagher, JoAnn M. Brooks, John Brolio, Sabine Bergler, Kevin D. Ashley, Scott D. Anderson
Department of Computer and Information Science, University of Massachusetts, Amherst, Massachusetts 01003
The Counselor Project at the University of Massachusetts
Participants in the Counselor Project, Fall 1984 through Summer 1986: Principal Investigators: Edwina L. Rissland, David D. McDonald, Wendy G. Lehnert. Research Associates: Beverly Woolf, James D. Pustejovsky. Graduate Students:
Introduction
The COUNSELOR PROJECT began in the fall of 1984 with the goal of exploring basic problems in discourse structure and text processing within an integrated interface to a strong expert system. The program that we have developed, COUNSELOR, integrates separately developed components for natural language generation (MUMBLE, see [7], [8], [9]), parsing (PLUM [5]), and case-based legal reasoning (HYPO [1], [2]). It adds a newly developed component, CICERO ([10]), positioned between the two text processors and the expert system; CICERO is responsible for managing textual inferences ("reading between the lines") by using common sense models of legal events. COUNSELOR can provide advice to an attorney about how to argue cases involving violations of trade secret law in the computer field. The attorney presents the facts of their case to the system, which may ask questions to elicit other facts that it knows to be relevant. The system then suggests lines of argument that the attorney might use, drawing on its library of litigated cases to find ones with analogous dimensions.
Motivations
Consequential results in natural language research will only come from working with a strong underlying program whose communicative needs will challenge the capabilities of state-of-the-art language interfaces. As a group, we are not interested in building yet another question answering system: our goal is to understand the structure of discourse. We believe that an effective place to begin is with task-specific, mixed-initiative dialog where the participants' goals cannot be satisfied by single utterances.
Working with a legal reasoning system like Kevin Ashley and Edwina Rissland's HYPO provides particular challenges to natural language research: (1) Legal text is structurally complex. The need to avoid ambiguity leads to deeply embedded clauses and heavy noun phrases.
(2) As both the user and the system have a thorough knowledge of the law, they communicate vastly more information in conversations about legal arguments than ever appears in their literal utterances.
(3) HYPO's role as an advisory system creates a natural motivation to communicate through language.
(4) Legal cases are large, complex objects that can be viewed from many alternative perspectives. The purpose for which a case is being described strongly influences which of its attributes are salient and how that information should be structured as a text.
Component Parts
We began the project with three partially developed components, HYPO, MUMBLE, and PLUM, each designed with independent motivations. An initial tension was whether to convert aspects of these programs that did not seem apt in their new setting, or alternatively to interpose new components between them to smooth out the differences. We concluded that the motivations underlying each component were strong enough that we should not change them just because they were now working together.
HYPO reasons with cases and hypotheticals. Actually litigated legal cases are encoded and indexed by "dimensions", which capture the utility of a case for making a particular kind of argument. When evaluating new cases, HYPO first analyzes them in terms of the dimensions they involve. Relevant cases are then retrieved to guide the reasoning. The system may ask pertinent questions about facts now found to be relevant. When the analysis is complete, HYPO describes the arguments available to the user, and responses and counter responses that may follow.
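To make the dimension-based indexing concrete, here is a minimal Python sketch of the retrieval idea described above; the case names, dimension labels, and fact fields are invented for illustration and are not taken from HYPO's actual knowledge base.

```python
# Illustrative sketch of dimension-indexed case retrieval in the spirit of HYPO.
# Case names, dimension labels, and fact fields are hypothetical.

CASE_BASE = [
    {"name": "Case A", "dimensions": {"employee-switched-sides", "secrets-disclosed"}},
    {"name": "Case B", "dimensions": {"agreed-not-to-disclose"}},
]

def analyze_dimensions(facts):
    """Map the facts of a new case onto the dimensions it involves."""
    dims = set()
    if facts.get("key_employee_moved_to_defendant"):
        dims.add("employee-switched-sides")
    if facts.get("nondisclosure_agreement"):
        dims.add("agreed-not-to-disclose")
    return dims

def retrieve_relevant_cases(facts):
    """Return litigated cases sharing at least one dimension, most on point first."""
    dims = analyze_dimensions(facts)
    scored = [(len(dims & case["dimensions"]), case) for case in CASE_BASE]
    return [case for overlap, case in sorted(scored, key=lambda s: -s[0]) if overlap > 0]

print(retrieve_relevant_cases({"key_employee_moved_to_defendant": True}))
```

A real system would, of course, also rank retrieved cases as more or less on point and generate the follow-up questions mentioned above.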
MUMBLE, the linguistic component for generation, is responsible for realizing conceptual specifications as grammatical text cohesive with the discourse which precedes it. MUMBLE works within a description directed framework. Its input specification is a description of the message the underlying program wants to communicate. This description is executed incrementally, producing an intermediate linguistic representation which defines the text's grammatical relations and imposes constraints on further realization. This surface structure description is concurrently executed, producing the actual text.
PLUM is a conceptual analyzer which has been given a well defined schematic structure so that it can be easily extended. It parses by doing prediction and completion over semantic concepts implied by the words rather than over syntactic categories. As in other conceptual analyzers, no explicit surface structure is recovered. PLUM's output is the set of completed frames.
CICERO is a new component, a discourse and inference manager between the language components and the expert system. From the understanding side, CICERO must integrate the clause-by-clause output of the parser into the larger discourse context, recognizing, for example, when noun phrases refer to the same object. In interpreting these small, lexically derived frames, CICERO draws on its own representation of events which bridges the gap between the way such information is expressed in language and the way it is organized for expert legal reasoning. For generation, CICERO is responsible for planning the message that is given to the generator. In particular, it determines what information should be included and what may be omitted as inferable, and it selects pivotal lexical items with appropriate perspective and rhetorical force.
Future Directions
While the accomplishments of the individual components of Counselor are interesting in their own right, the greatest effect of the project has been to provide a workbench for studying the problems of language in an integrated context. Perennial problems in anaphora, lexical semantics, aspect, etc. become more tractable in an integrated system where there is a discourse context and intensional motivation. There are also semantic generalizations between the level at which the text processors operate and the level of the expert system which are more easily captured when parsing and generation can be studied in unison. On a larger scale, an explicit discourse manager, a requisite for more complex dialogs, can only be developed once an integrated system exists.
At its present state of development, COUNSELOR can handle simple variations on a single scenario, exemplified by the following dialog:

User: I represent a client named HackInc, who wants to sue SwipeInc and Leroy Soleil for misappropriating trade secrets in connection with software developed by my client. HackInc markets the software, known as Autotell, a program to automate some of a bank teller's functions, to the banking industry.
Counselor: Did Soleil work for HackInc.?
User: Yes, he was a key employee on the Autotell project.
Counselor: Did he later work for SwipeInc.?
User: Yes.
Counselor: You can argue that there is an implied agreement arising out of Soleil's employment with HackInc. that he not disclose any trade secret information to which he gained access by virtue of his employment.
Modelling Legal Argument: Reasoning with cases and hypotheticals -- a thesis proposal. Kevin D Ashley, Technical Report 10, The Counselor Project, Department of Computer and Information Science, University of Massachusetts at Amherst. Ashley, Kevin D. (1986) "Modelling Legal Argument: Reasoning with cases and hypotheticals -- a thesis proposal", Technical Report 10, The Counselor Project, Department of Computer and Information Science, University of Massachusetts at Amherst.
Toward Modelling Legal Argument. Kevin D Ashley, Edwina L Rissland, Proceedings of the 2nd International Congress LOGICA, INFORMATICA, DIRITTO, Istituto per la Documentazione Giuridica, Florence, Italy. Ashley, Kevin D. and Edwina L. Rissland (1985) "Toward Modelling Legal Argument". Proceedings of the 2nd International Congress LOGICA, INFORMATICA, DIRITTO. Istituto per la Documentazione Giuridica, Florence, Italy.
Themis: A Discourse Manager. JoAnn M Brooks, unpublished Master's thesis, Department of Computer and Information Science, University of Massachusetts at Amherst. Brooks, JoAnn M. (1985) "Themis: A Discourse Manager", unpublished Master's thesis, Department of Computer and Information Science, University of Massachusetts at Amherst.
The Design and Implementation of CICERO. Kevin Gallagher, unpublished Master's thesis, Department of Computer and Information Science, University of Massachusetts at Amherst. Gallagher, Kevin (1986) "The Design and Implementation of CICERO", unpublished Master's thesis, Department of Computer and Information Science, University of Massachusetts at Amherst.
The PLUM User's Manual. Wendy G Lehnert, Seth Rosenberg, Technical Report 1, The Counselor Project, Department of Computer and Information Science, University of Massachusetts at Amherst. Lehnert, Wendy G. and Seth Rosenberg (1985) "The PLUM User's Manual", Technical Report 1, The Counselor Project, Department of Computer and Information Science, University of Massachusetts at Amherst.
Natural Language Generation: Complexities and Techniques. David D McDonald, to appear in Nirenburg (ed.), Theoretical and Methodological Issues in Machine Translation, Cambridge University Press. McDonald, David D. (1986) "Natural Language Generation: Complexities and Techniques", to appear in Nirenburg (ed.) Theoretical and Methodological Issues in Machine Translation. Cambridge University Press.
Description Directed Natural Language Generation. David D McDonald, James Pustejovsky, Proceedings of IJCAI-85. McDonald, David D. and James Pustejovsky (1985) "Description Directed Natural Language Generation", Proceedings of IJCAI-85, pp. 799-805.
TAGs as a Grammatical Formalism for Generation. David D McDonald, James Pustejovsky, Proceedings of the 23rd Meeting of the Association for Computational Linguistics. McDonald, David D. and James Pustejovsky (1985) "TAGs as a Grammatical Formalism for Generation", Proceedings of the 23rd Meeting of the Association for Computational Linguistics, pp. 94-103.
SAMSON: A Computational Theory of Prose Style for Natural Language Generation. David D McDonald, James Pustejovsky, Proceedings of the 1985 meeting of the European Association for Computational Linguistics. McDonald, David D. and James Pustejovsky (1985) "SAMSON: A Computational Theory of Prose Style for Natural Language Generation", Proceedings of the 1985 meeting of the European Association for Computational Linguistics.
An Integrated Theory of Discourse Analysis. James Pustejovsky, Technical Report 11, The Counselor Project, Department of Computer and Information Science, University of Massachusetts at Amherst. Pustejovsky, James (1986) "An Integrated Theory of Discourse Analysis", Technical Report 11, The Counselor Project, Department of Computer and Information Science, University of Massachusetts at Amherst.
Explaining and Arguing with Examples. Edwina L Rissland, Edward Valcarce, Kevin Ashley, Proceedings of AAAI-84. Rissland, Edwina L., Edward Valcarce, and Kevin Ashley (1984) "Explaining and Arguing with Examples", Proceedings of AAAI-84.
A Model of Revision in Natural Language Generation. Marie M Vaughan, David D McDonald, Proceedings of the 24th Meeting of the Association for Computational Linguistics. Vaughan, Marie M. and David D. McDonald (1986) "A Model of Revision in Natural Language Generation", Proceedings of the 24th Meeting of the Association for Computational Linguistics.
|
216,914,289 | Exploring Contextualized Neural Language Models for Temporal Dependency Parsing | Extracting temporal relations between events and time expressions has many applications such as constructing event timelines and timerelated question answering. It is a challenging problem which requires syntactic and semantic information at sentence or discourse levels, which may be captured by deep language models such as BERT (Devlin et al., 2019). In this paper, we developed several variants of BERT-based temporal dependency parser, and show that BERT significantly improves temporal dependency parsing (Zhang and Xue, 2018a). Source code and trained models will be made available at github.com. | [
1957433,
52967399
] | Exploring Contextualized Neural Language Models for Temporal Dependency Parsing
Hayley Ross hayleyross@brandeis.edu
Raytheon BBN Technologies, Cambridge, MA
Brandeis University, Waltham, MA
Jonathan Cai jonathon.cai@raytheon.com
Raytheon BBN Technologies, Cambridge, MA
Brandeis University, Waltham, MA
Bonan Min bonan.min@raytheon.com
Raytheon BBN Technologies, Cambridge, MA
Brandeis University, Waltham, MA
Exploring Contextualized Neural Language Models for Temporal Dependency Parsing
Extracting temporal relations between events and time expressions has many applications such as constructing event timelines and time-related question answering. It is a challenging problem which requires syntactic and semantic information at sentence or discourse levels, which may be captured by deep language models such as BERT (Devlin et al., 2019). In this paper, we developed several variants of a BERT-based temporal dependency parser, and show that BERT significantly improves temporal dependency parsing (Zhang and Xue, 2018a). Source code and trained models will be made available at github.com.
Introduction
Temporal relation extraction has many applications including constructing event timelines for news articles or narratives and time-related question answering. Recently, Zhang and Xue (2018b) presented Temporal Dependency Parsing (TDP), which organizes time expressions and events in a document to form a Temporal Dependency Tree (TDT). Consider the following example:
Example 1: Kuchma and Yeltsin signed a cooperation plan on February 27, 1998. Russia and Ukraine share similar cultures, and Ukraine was ruled from Moscow for centuries. Yeltsin and Kuchma called for the ratification of the treaty, saying it would create a "strong legal foundation". Figure 1 shows the corresponding TDT; DCT is the Document Creation Time (March 1, 1998). Compared to previous pairwise approaches for temporal relation extraction such as Cassidy et al. (2014), a TDT is much more concise but preserves the same (if not more) information. However, TDP is challenging because it requires syntactic and semantic information at sentence and discourse levels. Recently, deep language models such as BERT (Devlin et al., 2019) have been shown to be successful at many NLP tasks, because (1) they provide contextualized word embeddings that are pretrained with very large corpora, and (2) BERT in particular is shown to capture syntactic and semantic information (Tenney et al., 2019; Clark et al., 2019), which may include but is not limited to tense and temporal connectives. Such information is relevant for temporal dependency parsing.
In this paper, we investigate the potential for applying BERT to this task. We developed two models that incorporate BERT into TDP, starting from a straightforward usage of pre-trained BERT word embeddings, to using BERT as an encoder and training it within an end-to-end system. Experiments showed that BERT improves TDP performance in all models, with the best model achieving a 13 absolute F1 point improvement over our reimplementation of the neural model in (Zhang and Xue, 2019) 1 . We present technical details, experiments, and analysis in the rest of this paper.
Related Work
Much previous work has been devoted to classification of relations between events and time expressions, notably TimeML (Pustejovsky et al., 2003a), TimeBank (Pustejovsky et al., 2003b), and recently TimeBank-Dense (Cassidy et al., 2014) which annotates all n^2 pairs of relations. Pair-wise annotation has two problems: O(n^2) complexity, and the possibility of inconsistent predictions such as A before B, B before C, C before A. To address these issues, Zhang and Xue (2018b) present a tree structure of relations between time expressions and events. There, all time expressions are children of the root (if they are absolute), of the special time expression node Document Creation Time (DCT), or of other time expressions. All events are children of either a time expression or another event. Each edge is labelled with before, after, overlap, or depends on. Organizing time expressions and events into a tree reduces the annotation complexity to O(n) and avoids cyclic inconsistencies.
This paper builds on the chain of work done by Zhang and Xue (2018b), Zhang and Xue (2018a) and Zhang and Xue (2019), which presents an English corpus annotated with this schema as well as a first neural architecture. Zhang and Xue (2018a) uses a BiLSTM model with simple attention and randomly initialized word embeddings. This paper capitalizes on recent advances in pre-trained, contextualized word embeddings such as ELMo (Peters et al., 2018), ULMFit (Howard and Ruder, 2018) and BERT (Devlin et al., 2019). Besides offering richer contextual information, BERT in particular is shown to capture syntactic and semantic properties (Tenney et al., 2019; Clark et al., 2019) relevant to TDP, which we show yield improvements over the original model.
BERT-based Models
Following Zhang and Xue (2018a), we transformed temporal dependency parsing (TDP) to a ranking problem: given a child mention (event or time expression) x_i, the problem is to select the most appropriate parent mention from among the root node, DCT, or an event or time expression from the window x_{i-k}, ..., x_i, ..., x_{i+m} around x_i (we set k = 10, m = 3 in all experiments), along with the relation label (before, after, overlap, depends on). A Temporal Dependency Tree (TDT) is assembled by selecting the highest-ranked prediction (parent, relation type) for each event and time expression in a document (while avoiding cycles).

Figure 2: Model architecture for TDP with three different encoders (orange, blue, green boxes). Shown with the (parent, child) input pairs for a given child (event or time expression) x_i. For simplicity, we did not show <x_i, root> and <x_i, DCT>, which are included as candidate pairs for all x_i.
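As a rough illustration of the assembly step (not the authors' implementation), the following Python sketch greedily attaches each child to its highest-scoring candidate parent while rejecting attachments that would create a cycle; the `scores` structure and node identifiers are hypothetical.

```python
# Sketch: assemble a temporal dependency tree from ranked candidates per child.
# scores[child] is assumed to be a list of (score, parent, label) tuples over
# root, DCT and the mentions in the window around the child.

def would_create_cycle(parent_of, child, parent):
    """Walking up from `parent`, do we reach `child` again?"""
    node = parent
    while node is not None:
        if node == child:
            return True
        node = parent_of.get(node)
    return False

def assemble_tree(children, scores):
    parent_of, label_of = {}, {}
    for child in children:
        ranked = sorted(scores[child], key=lambda t: t[0], reverse=True)
        for score, parent, label in ranked:
            if not would_create_cycle(parent_of, child, parent):
                parent_of[child], label_of[child] = parent, label
                break
    return parent_of, label_of

# Toy usage with hypothetical mentions e1, e2 and the special nodes "root"/"DCT".
scores = {
    "e1": [(0.9, "DCT", "overlap"), (0.2, "root", "depends on")],
    "e2": [(0.8, "e1", "after"), (0.5, "DCT", "before")],
}
print(assemble_tree(["e1", "e2"], scores))
```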
As shown in Figure 2, we developed three models that share a similar overall architecture: the model takes a pair of mentions (child and parent) as input and passes each pair through an encoder which embeds the nodes and surrounding context into a dense representation. Hand-crafted features are concatenated onto the dense representation, which is then passed to one or two feed-forward layers and a softmax function to generate scores for each relation label for each pair. We tested three types of encoder:
• BiLSTM with non-contextualized embeddings feeds the document's word embeddings (one per word) to a BiLSTM to encode the pair as well as the surrounding context. The word embeddings can be either randomly initialized (identical to Zhang and Xue (2018a)), or pre-trained from a large corpus -we used GloVe (Pennington et al., 2014).
• BiLSTM with frozen BERT embeddings replaces the above word embeddings with frozen (pre-trained) BERT contextualized word embeddings. We used the BERT-base uncased model 3 , which has been trained on English Wikipedia and the BookCorpus.
• BERT as encoder: BERT's encoder architecture (with pre-trained weights) is used directly to encode the pairs. Its weights are fine-tuned in the end-to-end TDP training process.
All models use the same loss function and scoring as in Zhang and Xue (2018a).
Model 1: BiLSTM with Frozen BERT
The first model adjusts the model architecture from Zhang and Xue (2018a) to replace its word embeddings with frozen BERT embeddings. That is, word embeddings are computed via BERT for every sentence in the document; then, these word embeddings are processed as in the original model by a BiLSTM. The BiLSTM output is passed to an attention mechanism (which handles events / time expressions with multiple words), then combined with the hand-crafted features (listed in Table 2) and passed to a feed-forward network with one hidden layer, which ranks each relation label for each (possible) parent / child pair.
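A schematic PyTorch sketch of this pipeline is given below. It is our own paraphrase, not the authors' TensorFlow code: layer sizes, the number of relation labels, and the attention formulation are placeholders.

```python
import torch
import torch.nn as nn

class FrozenBertBiLSTMScorer(nn.Module):
    """Sketch of Model 1: frozen BERT embeddings -> BiLSTM -> attention -> FFN.
    Sizes (hidden units, feature count, label count) are placeholders."""

    def __init__(self, bert_dim=768, hidden=128, n_features=11, n_labels=4):
        super().__init__()
        self.bilstm = nn.LSTM(bert_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)  # simple attention over a node's tokens
        self.ffn = nn.Sequential(
            nn.Linear(4 * hidden + n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_labels))      # one score per relation label

    def forward(self, doc_bert_emb, parent_token_idx, child_token_idx, handcrafted):
        # doc_bert_emb: (1, doc_len, bert_dim), precomputed with a frozen BERT model
        states, _ = self.bilstm(doc_bert_emb)

        def pool(token_idx):                  # attention-pool the tokens of one node
            span = states[:, token_idx, :]    # (1, span_len, 2*hidden)
            weights = torch.softmax(self.attn(span), dim=1)
            return (weights * span).sum(dim=1)

        pair = torch.cat([pool(parent_token_idx), pool(child_token_idx), handcrafted], dim=-1)
        return self.ffn(pair)                 # (1, n_labels)

# Toy usage with random stand-in "BERT" embeddings for a 20-token document.
model = FrozenBertBiLSTMScorer()
scores = model(torch.randn(1, 20, 768), [3, 4, 5], [10], torch.zeros(1, 11))
```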
Model 2: BERT as Encoder
This model takes advantage of BERT's encoding and classification capabilities since BERT uses the Transformer architecture (Vaswani et al., 2017). The embedding of the first token [CLS] is interpreted as a classification output and fine-tuned.
To represent a child-parent pair with context, BERT as encoder constructs a "sentence" for the (potential) parent node and a "sentence" for the child node. These are passed to BERT in that order and concatenated with BERT's [SEP] token. Each "sentence" is formed of the word(s) of the node, the node's label (TIMEX or EVENT), a separator token ':' and the sentence containing the node, as shown in Table 1.
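The following sketch shows how such paired "sentences" could be constructed and passed to BERT using the Hugging Face transformers API; the authors' exact preprocessing and scoring setup may differ, and the number of labels here is only illustrative.

```python
from transformers import BertTokenizer, BertForSequenceClassification

# The classification head on top of [CLS] is randomly initialized here and would
# be fine-tuned end to end; four labels is only illustrative.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)

def node_sentence(words, label, sentence):
    """Node words, node label (TIMEX or EVENT), a ':' separator, the containing sentence."""
    return f"{words} {label} : {sentence}"

parent = node_sentence("February 27, 1998", "TIMEX",
                       "Kuchma and Yeltsin signed a cooperation plan on February 27 1998.")
child = node_sentence("called", "EVENT",
                      "Yeltsin and Kuchma called for the ratification of the treaty ...")

# The tokenizer adds [CLS] and joins the two "sentences" with [SEP].
inputs = tokenizer(parent, child, return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # scores for the relation labels of this pair
```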
Additional Features
We used several hand-crafted binary and scalar features (Table 2) in all models, expanding on the features in Zhang and Xue (2018a).
Table 2: Binary and scalar features used in all models.
Node distance features:
• parent is previous node in document
• parent is before child in same sentence
• parent is before child, more than one sentence away
• parent is after child
• parent and child are in same sentence
• scaled distance between nodes
• scaled distance between sentences
Time expression / event label features:
• child is time expression and parent is root
• child and parent both time expressions
• child is event and parent is DCT
• parent is padding node 4
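For illustration, a few of the node distance features above could be computed as follows; the mention representation (dicts with 'index' and 'sent' fields) and the scaling constants are our own assumptions, not the authors'.

```python
def pair_features(parent, child, doc_len, max_sent_dist=10):
    """parent/child: dicts with 'index' (mention order in the document) and
    'sent' (sentence index); special nodes such as root or DCT could use
    sentinel values. Returns a dict of binary and scalar features."""
    same_sent = parent["sent"] == child["sent"]
    return {
        "parent_is_previous_node": parent["index"] == child["index"] - 1,
        "parent_before_child_same_sentence": same_sent and parent["index"] < child["index"],
        "parent_after_child": parent["index"] > child["index"],
        "same_sentence": same_sent,
        "scaled_node_distance": abs(parent["index"] - child["index"]) / max(doc_len, 1),
        "scaled_sentence_distance": min(abs(parent["sent"] - child["sent"]), max_sent_dist) / max_sent_dist,
    }

print(pair_features({"index": 2, "sent": 1}, {"index": 3, "sent": 1}, doc_len=12))
```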
Experiments
We use the training, development and test data from Zhang and Xue (2019) for all experiments. We evaluated four configurations of the encoders above. Firstly BiLSTM (re-implemented) reimplements Zhang and Xue (2018a)'s model 5 in TensorFlow (Abadi et al., 2016) for fair comparison. Replacing its randomly-initialized embeddings with GloVe (Pennington et al., 2014) yields BiLSTM with GloVe. We also test the models BiLSTM with frozen BERT and BERT as encoder as described in Section 3. We used Adam (Kingma and Ba, 2014) as the optimizer and performed coarse-to-fine grid search for key parameters such as learning rate and number of epochs using the dev set. We observed that when fine-tuning BERT in the BERT as encoder model, a lower learning rate (0.0001) paired with more epochs (75) achieves higher performance, compared to using learning rate 0.001 with 50 epochs for the BiLSTM models.
Table 3: Performance of the models.
Model | F1-score
Baseline (Zhang and Xue, 2019) | 0.18
BiLSTM (re-implemented) | 0.55
BiLSTM (Zhang and Xue, 2019) | 0.60
BiLSTM with GloVe | 0.58
BiLSTM with frozen BERT | 0.61
BERT as encoder | 0.68

Table 3 summarizes the F1 scores 6 of our models. We also include the rule-based baseline and the performance reported in Zhang and Xue (2019) 7 as a baseline.
BiLSTM with frozen BERT outperforms the re-implemented baseline BiLSTM model by 6 points and BiLSTM with GloVe by 3 points in F1 score, respectively. This indicates that the frozen, pre-trained BERT embeddings improve temporal relation extraction compared to either kind of non-contextualized embedding. Fine-tuning the BERT-based encoder (BERT as encoder) resulted in an absolute improvement of as much as 13 absolute F1 points over the BiLSTM re-implementation, and 8 F1 points over the reported results in Zhang and Xue (2019). This demonstrates that contextualized word embeddings and the BERT architecture, pretrained with large corpora and fine-tuned for this task, can significantly improve TDP.
We also calculated accuracies for each model on time expressions or events subdivided by their type of parent: DCT, a time expression other than DCT, or another event. Difficult categories are children of DCT and children of events. By this breakdown, the main difference between the BiLSTM and the BiLSTM with frozen BERT is its performance on children of DCT: with BERT, it scores 0.48 instead of 0.38. Conversely BERT as encoder sees improvements across the board, with a 0.21 increase on children of DCT over the BiLSTM, a 0.14 increase for children of other time expressions, and a 0.11 increase for children of events.
Analysis
Why BERT helps: Comparing the temporal dependency trees produced by the models for the test set, we see that these improvements correspond to the phenomena below.
Firstly, unlike the original BiLSTM, BERT as encoder is able to properly relate time expressions occurring syntactically after the event, such as Kuchma and Yeltsin signed a cooperation plan on February 27, 1998 in Example 1. (The BiLSTM falsely relates signed to the "previous" time expression DCT). This shows BERT's ability to "look forward", attending to information indicating a parent appearing after the child.
Secondly, BERT as encoder is able to capture verb tense, and use it to determine the correct label in almost all cases, both for DCT and for chains of events. It knows that present tense sentences (share similar cultures) overlap DCT, while past perfect events (was ruled from Moscow) happen either before DCT or before the event immediately adjacent (salient) to them. Similarly, progressive tense (saying) may indicate overlapping events.
Thirdly, BERT as encoder captures syntax related to time. It is particularly adept at progressive and reported speech constructions such as Yeltsin and Kuchma called for the ratification of the treaty, saying [that] it would create . . . where it identifies that called and saying overlap and create is after saying. Similarly, BERT's ability to handle syntactic properties (Tenney et al., 2019; Clark et al., 2019) may allow it to detect in which direction adverbs such as since should be applied to the events. This means that while all models may identify the correct parent in these cases, BERT as encoder is much more likely to choose the correct label, whereas the non-contextualized BiLSTM models almost always choose either before for DCT or after for children of events.
Lastly, both BERT as encoder and BiLSTM with frozen BERT are much better than the BiLSTM at identifying context changes (new "sections") and linking these events to DCT rather than to a time expression in the previous sections (evidenced by the scores reported above on children of DCT). Because BERT's word embeddings use the sentence as context, the models using BERT may be able to "compare" the sentences and judge that they are unrelated despite being adjacent.
Equivalent TDP trees: We note that in cases where BERT as encoder is incorrect, it sometimes produces an equivalent or very similar tree (since relations such as overlap are transitive, there may be multiple equivalent ways of arranging the tree). Future work could involve developing a more flexible scoring function to account for this.
Limitations: There are also limitations to BERT as encoder. For example, it is still fooled by syntactic ambiguity. Consider:
Example 2: Foreign ministers agreed to set up a panel to investigate who shot down the Rwandan president's plane on April 6, 1994.
A human reading this sentence will infer based on world knowledge that April 6, 1994 should be attached to the subclause who shot down . . . , not to the matrix clause (agreed), but a syntactic parser would produce both parses. BERT as encoder incorrectly attaches agreed to April 6, 1994: even BERT's contextualized embeddings are not sufficient to identify the correct parse.
Conclusion and Future Work
We present two models that incorporate BERT into temporal dependency parsers, and observe significant gains compared to previous approaches. We present an analysis of where and how BERT helps with this challenging task.
For future research, we plan to explore the interaction between the representation learnt by BERT and the hand-crafted features added at the final layer, as well as develop a more flexible scoring function which can handle equivalent trees.
* Work done during an internship at BBN.
Figure 1: Temporal Dependency Tree of Example 1.
Table 1: "Sentence" inputs to BERT in BERT as encoder, for potential parent "February 27, 1998" and child "called" in Example 1. (The correct parent here is DCT.)
word(s): February 27, 1998 | label: TIMEX | sep: : | sentence: Kuchma and Yeltsin signed a cooperation plan on February 27 1998.
word(s): called | label: EVENT | sep: : | sentence: Yeltsin and Kuchma called for the ratification . . .
3 https://github.com/google-research/bert
4 Window x_{i-k}, ..., x_i, ..., x_{i+m} is of fixed size, so it must be padded near the start or end of a document.
5 The original model was implemented in DyNet (Neubig et al., 2017).
6 Following (Zhang and Xue, 2019), F1 scores are reported. For a document with n nodes, the TDP task aims at constructing a tree of n + 1 edges, so F1 is essentially the same as the accuracy or recall (their denominators are the same).
7 We were unable to replicate the F1-score reported in Zhang and Xue (2019) despite using similar hyperparameters. Therefore, we include performances for our re-implementation and the reported score in Zhang and Xue (2019) in Table 3.
1 We were unable to replicate the F1-score reported in Zhang and Xue (2019). The improvement over the reported, state-of-the-art result is 8 absolute F1 points.

Acknowledgments
Tensorflow: A system for large-scale machine learning. Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16). Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16), pages 265-283.
An annotation framework for dense event ordering. Taylor Cassidy, Bill Mcdowell, Nathanel Chambers, Steven Bethard, Carnegie-Mellon Univ Pittsburgh PATechnical reportTaylor Cassidy, Bill McDowell, Nathanel Chambers, and Steven Bethard. 2014. An annotation frame- work for dense event ordering. Technical report, Carnegie-Mellon Univ Pittsburgh PA.
What does bert look at? an analysis of bert's attention. Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D Manning, arXiv:1906.04341arXiv preprintKevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert's attention. arXiv preprint arXiv:1906.04341.
Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Universal language model fine-tuning for text classification. Jeremy Howard, Sebastian Ruder, arXiv:1801.06146arXiv preprintJeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, arXiv:1701.03980The dynamic neural network toolkit. arXiv preprintGraham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopou- los, Miguel Ballesteros, David Chiang, Daniel Cloth- iaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). the 2014 conference on empirical methods in natural language processing (EMNLP)Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.
E Matthew, Mark Peters, Mohit Neumann, Matt Iyyer, Christopher Gardner, Kenton Clark, Luke Lee, Zettlemoyer, arXiv:1802.05365Deep contextualized word representations. arXiv preprintMatthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.
Timeml: Robust specification of event and temporal expressions in text. James Pustejovsky, M José, Robert Castano, Roser Ingria, Sauri, J Robert, Andrea Gaizauskas, Graham Setzer, Katz, Dragomir R Radev, New directions in question answering. 3James Pustejovsky, José M Castano, Robert Ingria, Roser Sauri, Robert J Gaizauskas, Andrea Set- zer, Graham Katz, and Dragomir R Radev. 2003a. Timeml: Robust specification of event and temporal expressions in text. New directions in question an- swering, 3:28-34.
The timebank corpus. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, Corpus linguistics. Lancaster, UK40James Pustejovsky, Patrick Hanks, Roser Sauri, An- drew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003b. The timebank corpus. In Corpus linguistics, volume 2003, page 40. Lancaster, UK.
Ian Tenney, Dipanjan Das, Ellie Pavlick, arXiv:1905.05950Bert rediscovers the classical nlp pipeline. arXiv preprintIan Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.
Yuchen Zhang, Nianwen Xue, arXiv:1809.00370Neural ranking models for temporal dependency structure parsing. arXiv preprintYuchen Zhang and Nianwen Xue. 2018a. Neural rank- ing models for temporal dependency structure pars- ing. arXiv preprint arXiv:1809.00370.
Yuchen Zhang, Nianwen Xue, arXiv:1808.07599Structured interpretation of temporal relations. arXiv preprintYuchen Zhang and Nianwen Xue. 2018b. Structured interpretation of temporal relations. arXiv preprint arXiv:1808.07599.
Acquiring structured temporal representation via crowdsourcing: A feasibility study. Yuchen Zhang, Nianwen Xue, Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (* SEM 2019). the Eighth Joint Conference on Lexical and Computational Semantics (* SEM 2019)Yuchen Zhang and Nianwen Xue. 2019. Acquiring structured temporal representation via crowdsourc- ing: A feasibility study. In Proceedings of the Eighth Joint Conference on Lexical and Computa- tional Semantics (* SEM 2019), pages 178-185. |
21,689,265 | A Corpus of eRulemaking User Comments for Measuring Evaluability of Arguments | eRulemaking is a means for government agencies to directly reach citizens to solicit their opinions and experiences regarding newly proposed rules. The effort, however, is partly hampered by citizens' comments that lack reasoning and evidence, which are largely ignored since government agencies are unable to evaluate the validity and strength. We present Cornell eRulemaking Corpus -CDCP, an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. The corpus consists of 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB); the resulting dataset contains 4931 elementary unit and 1221 support relation annotations. It is a resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify what additional information is necessary for readers to understand and evaluate a given argument. Immediate applications include providing real-time feedback to commenters, specifying which types of support for which propositions can be added to construct better-formed arguments. | [
141282,
16290344,
14440175,
18733074,
1070708,
14764893,
3083231
] | A Corpus of eRulemaking User Comments for Measuring Evaluability of Arguments
Joonsuk Park jpark@cs.williams.edu
Department of Computer Science
Williams College
Massachusetts, USA
Claire Cardie cardie@cs.cornell.edu
Department of Computer Science
Cornell University
New York, USA
A Corpus of eRulemaking User Comments for Measuring Evaluability of Arguments
argument mining, e-government, e-rulemaking, text analytics
eRulemaking is a means for government agencies to directly reach citizens to solicit their opinions and experiences regarding newly proposed rules. The effort, however, is partly hampered by citizens' comments that lack reasoning and evidence, which are largely ignored since government agencies are unable to evaluate the validity and strength. We present Cornell eRulemaking Corpus -CDCP, an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. The corpus consists of 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB); the resulting dataset contains 4931 elementary unit and 1221 support relation annotations. It is a resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify what additional information is necessary for readers to understand and evaluate a given argument. Immediate applications include providing real-time feedback to commenters, specifying which types of support for which propositions can be added to construct better-formed arguments.
Introduction
The U.S. federal agencies amend rules in a highly transparent manner, inviting public participation as they are finalized. This is legally ensured in part by the requirement that agencies publish descriptions and rationale behind newly proposed rules and solicit feedback from the public (Park et al., 2012;Farina and Newhart, 2013). However, the public participation tends to be dominated by large corporations and interest groups (CSFFR, 2009); eRulemaking is an ongoing effort to promote citizens' participation in federal policymaking by using the latest information technology to directly reach citizens and incorporate their feedback (Lubbers et al., 2012). Government agencies consider reasoning and validity of supporting evidence, rather than a mere number of citizens supporting an argument, to determine how the rules should be adjusted to meet the needs of those who are directly affected. Thus, useful feedback consists of clear reasoning and objective evidence supporting factual claims (Park et al., 2015). However, many comments are not written this way, thwarting the government agencies' effort to communicate with citizens.
Consider the following comments from www.regulationroom.org, an eRulemaking website:
(1) $400 is enough compensation, A as it can cover a one-way fare across the US. B I checked in a passenger on a $98.00 fare from east coast to Las Vegas the other day. C
(2) All airfare costs should include the passenger's right to check at least one standard piece of baggage. A All fees should be fully disclosed at the time of airfare purchase, regardless of nature (i.e. optional or mandatory). B Any changes in fees should be identified by air carriers at least 6 months prior to taking effect. C

Comment 1 consists of propositions in support relations that collectively form a single argument: Proposition 1.C is anecdotal evidence supporting Proposition 1.B, which in turn is a reason explaining why Proposition 1.A is true. Readers are able to make sense of the argument and evaluate its validity and strength, because each proposition is accompanied with a support of an appropriate type. (Figure 1 shows a sample annotation capturing the above discussion; see Section 3 for more details on the types of support and when they are appropriate.) In contrast, the propositions in Comment 2 are in no support relation with one another. In fact, each proposition functions as the conclusion of its own argument, where each argument contains no support for its conclusion. This renders it difficult for readers to understand the arguments, let alone evaluate them. Thus, Comment 1 is much more desirable for readers, whether it be government agencies or fellow citizens.
Figure 1: Annotated Example Comments

The aforementioned difference between the arguments made in Comments 1 and 2 is captured by the notion of evaluability of argument proposed by Park et al. (2015): are the propositions comprising a given argument adequately supported so as for readers to understand and evaluate the validity or strength of the whole argument?
We present Cornell eRulemaking Corpus -CDCP, an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. We annotated 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB) posted on www.regulationroom.org; the resulting dataset contains 4931 elementary unit and 1221 support relation annotations. It will be a valuable resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify ways in which a given argument can be improved with respect to its evaluability. Immediate applications include automatically ranking arguments based on their evaluability for a (crude) identification of read-worthy comments and providing real-time feedback to writers, specifying which types of support for which propositions can be added to construct better-formed arguments.
The remainder of this paper is organized as follows: We discuss related work (Section 2), provide an overview of the annotation scheme (Section 3), present an annotation study (Section 4) and describe the resulting dataset (Section 5).
Related Work
This paper presents a corpus for the purpose of mining and evaluating arguments in eRulemaking user comments. It is closely related to two areas of research: argument mining and argument quality assessment.
Argument Mining
Argument mining is a developing field of computational linguistics that aims at identifying argumentative structures in unstructured text. Extracting claims together with their respective premises allows us to go beyond opinion mining by considering the reasoning and rationale behind people's opinions (Peldszus and Stede, 2013; Lippi and Torroni, 2016). Argument mining systems build on theoretical models of argument, which define argumentative components and their relations in a variety of ways. Famous models include the Toulmin Model (Toulmin, 1958) and argument schemes (Walton et al., 2008). The Toulmin Model is a general model of practical argumentation that can be instantiated in many forms. The three major components of the model are claim, warrant, and data, where warrant explains how data supports the claim. One criticism, which in turn makes it challenging to build an argument mining system based on this model, is that the model leaves room for multiple interpretations. For example, according to Eemeren et al. (1987), warrant is indistinguishable from data. On the other hand, argument schemes capture specific patterns of argument that are in use; each argument scheme specifies specific premises for the given conclusion, as well as critical questions that can be used to examine the strength of the given argument (Walton, 1996; Blair, 2001). Having many specific premises, a subset of which may not be present in the text, makes it difficult for manual annotation and automatic classification. The sheer number of argument schemes also causes additional challenges in gathering enough examples for each scheme. In this work, we adopt a model uniquely designed to capture the evaluability of arguments, which is general enough to model diverse argumentative structures that appear in practical argumentation (Park et al., 2015).
Argument mining systems also differ in the domain, resulting in datasets consisting of newspaper articles, legal documents (Mochales and Moens, 2011), student essays (Stab and Gurevych, 2014), and eRulemaking user comments (Park and Cardie, 2014; Konat et al., 2016), to name a few. While ours is not the first eRulemaking dataset, the task is different; Park and Cardie (2014) targets elementary unit classification only, and Konat et al. (2016) focuses on identifying divisive issues between commenters by analyzing conflict relations found across multiple comments in a thread. In contrast, we examine support structures within a comment; our dataset contains both elementary unit and support relation annotation without cross-comment conflict annotation. Also, the user comments comprising our dataset are different from those in the aforementioned datasets.
Argument Quality Assessment
Measuring the quality of argument has long been a subject of discussion and research, leading to a variety of dimensions of quality (Toulmin, 1958;Perelman et al., 1969;van Eemeren and Grootendorst, 2004;Johnson and Blair, 2006;Wachsmuth et al., 2017). More recently, argument mining research is conducted with specific measures of quality depending on the domain and purpose, such as persuasiveness (Tan et al., 2016), strength (Persing and Ng, 2015), acceptability (Cabrio and Villata, 2012), and convincingness (Habernal and Gurevych, 2016). The measure of quality we are interested in is evaluability (Park et al., 2015). By examining arguments' evaluability, we aim to identify ways to improve them so that they can be better understood and evaluated. For example, we answer questions like, "Which propositions need additional reasons or evidence supporting them?" This is the type of constructive feedback that can help commenter improve their arguments, unlike quality measures that results in a single numeric score without specifying how an argument can be improved.
Annotation Scheme
The annotators annotated the elementary units and support relations defined in the argumentation model proposed by Park et al. (2015). In this section, we provide a brief overview of the model; please refer to the original paper for more details.
The goal of the model is to capture whether an argument consists of explicitly stated premises that allow readers to understand and evaluate the given argument. The model defines five types of elementary units that are prevalent in online comments, along with two types of support relations between the units.
Elementary Units
Proposition of Non-Experiential Fact (FACT) : This refers to an objective proposition "expressing or dealing with facts or conditions as perceived without distortion by personal feelings, prejudices, or interpretations." 1 By definition a FACT proposition has a truth value that can be verified with objective evidence. We restrict the notion of verifiability to pieces of evidence that may be available at the time the claim is made; predictions about future are considered unverifiable.
Here are examples from the dataset:
• Recently, courts have held that debt collectors can escape 1692i's venue provisions entirely by pursuing debt collection through arbitration instead.
• banks can simply add this provision to their Loan Sale Agreements.
• That process usually takes as much a 2 years or more.
Proposition of Experiential Fact (TESTIMONY) : This refers to an objective proposition about the author's personal state or experience. One major characteristic of this type of objective propositions, as opposed to the nonexperiential counterparts classified as FACT, is that it is often practically impossible to provide objective evidence in online commenting setting, in the form of URL or citation. That is, evidence for TESTIMONY is not publicly available in most cases. For example:
• Informing them that we wanted all debt collection to be written was also ignored.
• A neighbor who has since moved away has had her debts turned over to collection agencies.
• We receive repeated calls trying to get contact information, even though we request to be taken off their list.
Proposition of Value (VALUE) : This refers to a proposition containing value judgments without making specific
claims about what should be done (If so, then it is a POLICY proposition.). Because of the subjectivity of value judgments, a VALUE proposition cannot be proved directly with objective evidence; however, providing a reason as support is feasible and appropriate. For example:
• That would be a starting point that can be expanded on as the system is fine tuned.
• Admittedly, their system is much more complex and dives much deeper than would be required for the debt industry.
1 http://www.merriam-webster.com/
• However, the double penalty against the consumer is certainly unfair.
Proposition of Policy (POLICY) : This refers to a proposition proposing a specific course of action to be taken. It typically contains modal verbs like "should" and "ought to." Just like VALUE, a POLICY proposition cannot be directly proved with objective evidence, and a proper type of support is a logical reason from which the proposition can be inferred. For example:
• They should not be allowed to contact anyone (other than the debtor him/herself) more than once.
• I say there ought to be sanctions, monetary sanctions, against these credit reporting agencies for making these mistakes and their cavalier attitude.
• Set up a system where the consumer is on equal footing with the debt collectors.
Reference to a Resource (REFERENCE) : This refers to a reference to a source of objective evidence. In online comments, a REFERENCE is typically a citation or a URL of a published work from a renowned source. For example:
• http://files.consumerfinance.gov/f/201309 cfpb agencybrief 12-cv-04057.pdf
• http://www.myfico.com/CreditEducation/ImproveYour Score.aspx
• <a target=" blank"href="http://www.optoutprescreen. com">www.optoutprescreen.com</a>
Support Relations
Reason : An elementary unit X is a reason for a proposition Y (of type POLICY, VALUE, FACT, or TESTIMONY) if X provides rationale for Y. For example:
• Y: I urge the CFPB to include in a rule language interpreting 1692i as requiring debt collectors to proceed in court, not through largely-unregulated arbitral forums. X: As the NAF studies reflect, arbitration has not proven a satisfactory alternative.
Evidence : An elementary unit X is evidence for a proposition Y (of type POLICY, VALUE, FACT, or TESTIMONY) if it proves whether proposition Y is true or not. The possible types of evidence are limited to TESTIMONY or REFERENCE based on previous studies on what constitutes justified grounds (Toulmin and Janik, 1979; Hitchcock, 2005). For example:
• Y: At least in Illinois there is a Caller ID spoofing law. X: http://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=1355&ChapterID=24
Evaluability
An argument is evaluable if all propositions comprising the given argument is supported by an explicit premise of an appropriate type, as summarized in Table 1. The underlying assumption is that readers are able to understand the gist of an argument-and at least roughly evaluate its strength-as long as one premise of an appropriate type is explicitly stated for each proposition. 2
Once elementary units and support relations comprising an argument are identified, the evaluability of the given argument can be determined. This is done by comparing the appropriate types of support and the types of support present in the argument, if any. In the process, additional support that is necessary to make the given argument evaluable (e.g."A reason for proposition X needs to be provided.") can also be identified.
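A minimal sketch of this check is shown below, using the per-comment annotation format of Table 3; the APPROPRIATE mapping is an illustrative stand-in for the paper's Table 1, which is not reproduced in this text.

```python
# Flag propositions that still need support, given annotations in the
# per-comment format of Table 3. The APPROPRIATE mapping is an illustrative
# stand-in for the paper's Table 1 (not reproduced here).

APPROPRIATE = {
    "POLICY": {"reason"},
    "VALUE": {"reason"},
    "FACT": {"reason", "evidence"},
    "TESTIMONY": {"reason", "evidence"},
}

def missing_support(units):
    """units: list of dicts with 'id', 'type', 'reasons', 'evidence' fields."""
    report = {}
    for unit in units:
        needed = APPROPRIATE.get(unit["type"])
        if needed is None:           # e.g. REFERENCE units serve as support themselves
            continue
        present = set()
        if unit["reasons"]:
            present.add("reason")
        if unit["evidence"]:
            present.add("evidence")
        if not (present & needed):
            report[unit["id"]] = sorted(needed)
    return report

comment = [
    {"id": "A", "type": "POLICY", "reasons": ["B"], "evidence": []},
    {"id": "B", "type": "VALUE", "reasons": [], "evidence": []},
]
print(missing_support(comment))   # {'B': ['reason']}
```

The report produced this way directly yields the kind of feedback described above, e.g. "a reason for proposition B needs to be provided".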
Annotation Study
We annotated user comments on the Consumer Debt Collection Practices (CDCP) rule. The discussion regarding CDCP rule was hosted on www.regulationroom.org with a partnership with the CFPB. The goal was for the CFPB to hear about the first-hand experiences and concerns regarding debt collection practices. According to a voluntary user survey that asked the commenters to self-identify themselves, about 64% of the comments came from consumers, 22% from debt collectors, and the remainder from others, such as consumer advocates and counsellor organizations (Farina et al., 2017).
Each user comment was annotated by two annotators, who independently determined the types of elementary units and support relations among them using the GATE annotation tool (Cunningham et al., 2011). A third annotator manually resolved the conflicts to produce the final dataset.
An elementary unit is either a sentence or a clause; a sentence is split into smaller units if there are multiple independent clauses or an independent clause with a subordinate clause of interest, such as a because-clause. Non-argumentative portions of comments, such as greetings and names, were removed as elementary unit boundaries are determined in this way.
Inter-annotator agreement between the two annotators is measured with Krippendorff's α (Krippendorff, 1980). The disagreements in elementary unit type annotation mostly occurred between VALUE vs TESTIMONY and VALUE vs FACT. The former is the case when a testimony spans multiple propositions and a few of them are subjective opinions about the experience. The latter often happens with an elementary unit that contains both subjective and objective expressions, e.g. "Unfortunate, but yes they are allowed to deny due process and get away with it." In this case, annotators had to determine the commenter's main intention: is it to express the emotion or state the fact? Depending on the answer, the given elementary unit was either marked as VALUE or FACT.
(Allowing more granular boundaries for elementary units can resolve this type of disagreement; however, an undesirable effect of this is that automatic segmentation becomes more challenging.)
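For reference, nominal agreement of this kind can be computed, for example, with NLTK's AnnotationTask, as in the toy sketch below; this is not the authors' evaluation code and the labels are invented.

```python
from nltk.metrics.agreement import AnnotationTask

# (coder, item, label) triples for the two annotators; labels are toy examples.
triples = [
    ("a1", "unit1", "VALUE"),     ("a2", "unit1", "VALUE"),
    ("a1", "unit2", "FACT"),      ("a2", "unit2", "VALUE"),
    ("a1", "unit3", "TESTIMONY"), ("a2", "unit3", "TESTIMONY"),
]
task = AnnotationTask(data=triples)
print(task.alpha())  # Krippendorff's alpha with the default nominal distance
```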
Dataset
The resulting dataset, Cornell eRulemaking Corpus - CDCP, consists of 731 comments, 4931 elementary units, and 1221 support relations, as summarized in Table 2. About 45% of the elementary units are VALUE type, and most support relations are reasons. Table 3 describes the annotated information in this dataset. Figure 2 shows the types of supported elementary units and those of supporting elementary units. The percentage of supported elementary units decreases as the elementary unit's objectivity goes from the least objective (POLICY) to the most objective (REFERENCE). One reason is that it is easier to provide a reason as to why one thinks or feels something (POLICY and VALUE) than to justify factual propositions (FACT and TESTIMONY). Interestingly, even though both POLICY and VALUE are subjective, there is a notable difference in the support pattern; 51% of POLICY propositions are supported, whereas only 28% of VALUE propositions are supported. This means that when commenters propose a specific course of action to be taken, they are more likely to provide support for it. This is because POLICY propositions are often the central claims of the comments, so other propositions naturally support them. Also, unlike VALUE, a simple expression of one's thoughts and feelings, POLICY, a proposal to act in a certain way, is associated with persuasion, which benefits from explicitly stated reasoning.
A significant portion, roughly 75%, of support relation annotations are between adjacent elementary units. While commenters certainly tend to provide reasons immediately after the proposition to be supported, it is also easier for annotators to identify support relations in proximity. Thus, support relations in the wild may not be as skewed toward those between adjacent elementary units.
Conclusion
We have presented Cornell eRulemaking Corpus - CDCP, an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. The corpus consists of 731 user comments on the Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB) posted on www.regulationroom.org; the resulting dataset consists of 4931 elementary unit and 1221 support relation annotations. It will be a valuable resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify which additional information is necessary for readers to understand and evaluate a given argument. Future work includes: (1) construction of a larger corpus using the same or similar annotation scheme and (2) making use of the resources to train argument mining systems (Niculae et al., 2017) and subsequent applications, such as a commenting interface that provides real-time feedback to help commenters construct evaluable arguments. Domain adaptation is also desirable, since building an argument mining dataset for individual domains incurs a significant cost.
Table 2: Number of Elementary Units and Support Relations in the Dataset (731 comments)
Elementary Units: POLICY 815, VALUE 2182, FACT 785, TESTIMONY 1117, REFERENCE 32 (total 4931)
Support Relations: Reason 1174, Evidence 46 (total 1220)

Figure 2: Types of Elementary Units in Support Relations (%)

Table 3: Annotated Information: Each comment annotation consists of a list of elementary units in the given comment with fields described in this table.
Field | Description
ID | ID of the elementary unit
Text | Text of the elementary unit
Type | POLICY, VALUE, FACT, TESTIMONY or REFERENCE
Reasons | List of elementary unit IDs serving as reasons
Evidence | List of elementary unit IDs serving as evidence
Krippendorff's α is suitable for our purpose as it is compatible with various types of labeling, along with the ability to handle missing annotations.
Blair, J. A. (2001). Walton's argumentation schemes for presumptive reasoning: A critique and development. Argumentation, 15(4):365-379.
Cabrio, E. and Villata, S. (2012). Combining textual entailment and argumentation theory for supporting online debates interactions. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 208-212, Jeju Island, Korea, July. Association for Computational Linguistics.
CSFFR. (2009). Achieving the potential: the future of federal e-rulemaking. Technical report, Committee on the Status & Future of Federal e-Rulemaking/American Bar Association, Washington, DC.
Cunningham, H., Maynard, D., Bontcheva, K., Tablan, V., Aswani, N., Roberts, I., Gorrell, G., Funk, A., Roberts, A., Damljanovic, D., Heitz, T., Greenwood, M. A., Saggion, H., Petrak, J., Li, Y., and Peters, W. (2011). Text Processing with GATE (Version 6).
Farina, C. R. and Newhart, M. J. (2013). Rulemaking 2.0: Understanding and getting better public participation.
Farina, C. R., Blake, C. L., Newhart, M. J., and Nam, C. (2017). Digital support for enhanced democratic participation in US rulemaking. In C. Prins, et al., editors, Digital Democracy in a Globalized World, chapter 10. Edward Elgar Publishing.
Habernal, I. and Gurevych, I. (2016). Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Volume 1: Long Papers. Association for Computational Linguistics, August.
Hitchcock, D. (2005). Good reasoning on the Toulmin model. Argumentation, 19(3):373-391.
Johnson, R. and Blair, J. (2006). Logical Self-defense. Key titles in rhetoric, argumentation, and debate series. International Debate Education Association.
Konat, B., Lawrence, J., Park, J., Budzynska, K., and Reed, C. (2016). A corpus of argument networks: Using graph properties to analyse divisive issues. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA).
Krippendorff, K. (1980). Content Analysis: An Introduction to Its Methodology. Sage commtext series. Sage Publications.
Lippi, M. and Torroni, P. (2016). Argumentation mining: State of the art and emerging trends. ACM Trans. Internet Technol., 16(2):10:1-10:25, March.
Lubbers, J. (2012). A Guide to Federal Agency Rulemaking. ABA Section of Administrative Law and Regulatory Practice and Government and Public Sector Lawyers Division.
Mochales, R. and Moens, M.-F. (2011). Argumentation mining. Artif. Intell. Law, 19(1):1-22, March.
Niculae, V., Park, J., and Cardie, C. (2017). Argument mining with structured SVMs and RNNs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 985-995. Association for Computational Linguistics.
Park, J. and Cardie, C. (2014). Identifying appropriate support for propositions in online user comments. In Proceedings of the First Workshop on Argumentation Mining, pages 29-38, Baltimore, Maryland, June. Association for Computational Linguistics.
Park, J., Klingel, S., Cardie, C., Newhart, M., Farina, C., and Vallbé, J.-J. (2012). Facilitative moderation for online participation in eRulemaking. In Proceedings of the 13th Annual International Conference on Digital Government Research, pages 173-182. ACM.
Park, J., Blake, C., and Cardie, C. (2015). Toward machine-assisted participation in eRulemaking: An argumentation model of evaluability. In Proceedings of the 15th International Conference on Artificial Intelligence and Law, ICAIL '15, pages 206-210, New York, NY, USA. ACM.
Peldszus, A. and Stede, M. (2013). From argument diagrams to argumentation mining in texts: A survey. Int. J. Cogn. Inform. Nat. Intell., 7(1):1-31, January.
Perelman, C., Olbrechts-Tyteca, L., Wilkinson, J., and Weaver, P. (1969). The New Rhetoric. University of Notre Dame Press.
Persing, I. and Ng, V. (2015). Modeling argument strength in student essays. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 543-552.
Reed, C., Palau, R. M., Rowe, G., and Moens, M.-F. (2008). Language resources for studying argument. In LREC. European Language Resources Association.
Stab, C. and Gurevych, I. (2014). Annotating argument components and relations in persuasive essays. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1501-1510, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics.
Tan, C., Niculae, V., Danescu-Niculescu-Mizil, C., and Lee, L. (2016). Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, pages 613-624, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.
Toulmin, S. E., Rieke, R., and Janik, A. (1979). An Introduction to Reasoning. Macmillan Publishing Company.
Toulmin, S. E. (1958). The Uses of Argument. Cambridge University Press.
van Eemeren, F. and Grootendorst, R. (2004). A Systematic Theory of Argumentation: The Pragma-dialectical Approach. Cambridge University Press.
van Eemeren, F., Grootendorst, R., and Kruiger, T. (1987). Handbook of Argumentation Theory: A Critical Survey of Classical Backgrounds and Modern Studies. PDA Series. Foris Publications.
Wachsmuth, H., Naderi, N., Hou, Y., Bilu, Y., Prabhakaran, V., Thijm, T. A., Hirst, G., and Stein, B. (2017). Computational argumentation quality assessment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 176-187, Valencia, Spain, April. Association for Computational Linguistics.
Walton, D., Reed, C., and Macagno, F. (2008). Argumentation Schemes. Cambridge University Press.
Walton, D. (1996). Argumentation schemes for presumptive reasoning. Lawrence Erlbaum Associates. |
201,085 | Identifying Word Correspondences in Parallel Texts | [
14386564,
3166885
] | Identifying Word Correspondences in Parallel Texts
William A Gale gale@research.att.com
AT&T Bell Laboratories Murray Hill
07974N.J
Kenneth W Church
AT&T Bell Laboratories Murray Hill
07974N.J
Identifying Word Correspondences in Parallel Texts
Introduction
Researchers in both machine translation (e.g., Brown et al., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying parallel texts (also known as bilingual corpora), bodies of text such as the Canadian Hansards (parliamentary debates) which are available in multiple languages (such as French and English). Much of the current excitement surrounding parallel texts was initiated by Brown et al. (1990), who outline a self-organizing method for using these parallel texts to build a machine translation system. Brown et al. begin by aligning the parallel texts at the sentence level. In our experience, 90% of the English sentences match exactly one French sentence, but other possibilities, especially two sentences matching one or one matching two, are not uncommon. There has been quite a bit of recent work on sentence alignment, e.g., (Brown, Lai and Mercer, 1991), (Kay and Röscheisen, 1988), (Catizone, Russell, and Warwick, to appear); we use a method described in (Gale and Church, 1991) which makes use of the fact that the length of a text (in characters) is highly correlated (0.991) with the length of its translation. A probabilistic score is assigned to each proposed match, based on the lengths of the two regions and some simple assumptions about the distributions of these two lengths. This probabilistic score is used in a dynamic programming framework to find the maximum likelihood alignment of sentences.
After sentences have been aligned, the second step is to identify correspondences at the word level. That is, we would like to know which words in the English text correspond to which words in the French text. The identification of word level correspondences is the main topic of this paper.
We wish to distinguish the terms alignment and correspondence. The term alignment will be used when order constraints must be preserved and the term correspondence will be used when order constraints need not be preserved and crossing dependencies are permitted. We refer to the matching problem at the word level as a correspondence problem because it is important to model crossing dependencies (e.g., sales volume and volume des ventes). In contrast, we refer to the matching problem at the sentence level as an alignment problem because we believe that it is not necessary to model crossing dependencies at the sentence level, as they are quite rare and can be ignored for now.
Here is an example of our word correspondence program, given the following input English and French sentences:
English: we took the initiative in assessing and amending current legislation and policies to ensure that they reflect a broad interpretation of the charter.
French: nous avons pris l'initiative d'évaluer et de modifier des lois et des politiques en vigueur afin qu'elles correspondent à une interprétation généreuse de la charte.
The program produces the following correspondences: we/nous took/0 the/0 initiative/initiative in/0 assessing/évaluer and/et amending/modifier current/0 legislation/0 and/et policies/politiques to/à ensure/0 that/qu' they/elles reflect/0 a/une broad/0 interpretation/interprétation of/de the/la charter/charte ./.
In this example, 15 out of the 23 (65%) English words were matched with a French word (with to/à in error), and 8 of the English words were left unmatched (paired with "0"). Throughout this work, we have focused our attention on robust statistics that tend to avoid making hard decisions when there isn't much confidence. In other words, we favor methods with relatively high precision and possibly low recall. For now, we are more concerned with errors of commission than errors of omission. Based on a sample of 800 sentences, we estimate that our word matching procedure matches 61% of the English words with some French word, and about 95% of these pairs match the English word with the appropriate French word.
After word correspondences have been identified, it is possible to estimate a probabilistic transfer dictionary. The entry for "the" found in (Brown et al.) includes the estimates Prob(le | the) = .61 and Prob(la | the) = .18. Brown et al. show how this probabilistic transfer dictionary can be combined with a trigram grammar in order to produce a machine translation system. Since this paper is primarily concerned with the identification of word correspondences, we will not go into these other very interesting issues here.
Applications Beyond MT
As mentioned above, MT is not the only motivation for sentence alignment and word correspondence. Computational linguists (e.g., Klavans and Tzoukermann, 1990) have recently become interested in bilingual concordances. Table 1, for example, shows a bilingual concordance contrasting the uses of bank that are translated as banque with those that are translated as banc. Of course it is well known that sense disambiguation is important for many natural language applications (including MT as well as many others). In the past, this fact has been seen as a serious obstacle blocking progress in natural language research, since sense disambiguation is a very tricky unsolved problem, and it is unlikely that it will be solved in the near future.
However, we prefer to view these same facts in a more optimistic light. In many cases, the French text can be used to disambiguate the English text, so that the French can be used to generate a corpus of (partially) sense-disambiguated English text. Such a sense-disambiguated corpus would be a valuable resource for all kinds of natural language applications. In particular, the corpus could be used to develop and test sense-disambiguation algorithms. For example, if you have an algorithm that is intended to distinguish the "money" sense of bank from the "place" sense of bank, then you might apply your algorithm to all of the uses of bank in the English portion of the parallel corpus and use the French text to grade the results. That is, you would say that your program was correct if it identified a use of bank as a "money" sense and it was translated as banque, and you would say that the program was incorrect if the program identified the use as a "money" sense and it was translated as banc. Thus, the availability of the French text provides a valuable research opportunity for both monolingual and bilingual applications. The French text can be used to help clarify distinctions in the English text that may not be obvious to a dumb computer.
Using Word Correspondences Rather than Sentence Alignments
Most bilingual concordance programs, such as ISSCO's BCP program mentioned in footnote 1 of (Warwick and Russell, 1990) and a similar program mentioned on page 20 of (Klavans and Tzoukermann, 1990), are based on aligned sentences rather than word correspondences. Table 1 shows an example of such a sentence-based concordance program. These sentence-based programs require the user to supply the program with both an English and a French word (e.g., bank and banque). In contrast, a word-based concordance program is given just bank and finds the French translations by making use of the word correspondences.
The advantage of the word-based approach becomes important for complicated words like take, where it is difficult for users to generate many of the possible translations. take is often used in complex idiomatic expressions, and consequently, there are many uses of take that should not be translated with prendre. In fact, most uses of take are not translated with prendre (or any of its morphological variants). The word-based bilingual concordances show this fairly clearly. We find that only 23% of the uses of take are translated with a form of prendre, a figure that is fairly consistent with IBM's estimate of 28% (Brown, personal communication). The striking absence of prendre is consistent with the observation in the Cobuild dictionary (Sinclair et al., 1987, p. 1488) that "[t]he most frequent use of take is in expressions where it does not have a very distinct meaning of its own, but where most of the meaning is in ... the direct object."
Two Possible Problems with the EM Algorithm
This paper is primarily concerned with the task of identifying word correspondences. There is relatively little discussion of this topic in Brown et al. (1990), although a brief mention of the EM algorithm is made. We decided to look for an alternative estimation algorithm for two reasons.
First, their procedure appears to require a prohibitive amount of memory. We observed that they limited the sizes of the English and French vocabularies, V_E and V_F, respectively, to just 9000 words each. Having constrained the vocabularies in this way, there were a mere 81 million parameters to estimate, all of which could be squeezed into memory at the same time. However, if the two vocabularies are increased to a more realistic size of 10^6 words, then there are 10^12 parameters to estimate, and it is no longer practical to store all of them in memory. (Apparently, in some more recent unpublished work (Brown, personal communication), they have also found a way to scale up the size of the vocabulary.)
Secondly, we were concerned that their estimates might lack robustness (at least in some cases):
"This algorithm leads to a local maximum of the probability of the observed pairs as a function of the parameters of the model. There may be many such local maxima. The particular one at which we arrive will, in general, depend on the initial choice of parameters." (Brown et al.,p. 82) In particular, we looked at their estimates for the word hear, which is surprisingly often translated as bravo (espeeiaUy, Hear, hear? --~ Bravo?), though it is not clear just how common this is. Brown et al. reported that more than 99% of the uses of hear were translated with bravo, whereas we estimate the fraction to be much closer to 60% (which is fairly consistent with their more recent estimates (Brown, personal communication)). The fact that estimates can vary so widely from 99% to 60% indicates that there might be a serious problem with robustness. It became clear after more private discussions that our methods were coming up with substantially different probability estimates for quite a number of words. It is not clear that the maximum likelihood methods are robust enough to produce estimates that can be reliably replicated in other laboratories.
Contingency Tables
Because of the memory and robustness questions, we decided to explore an alternative to the EM algorithm based on simple two-by-two contingency tables. For a pair of words such as house and chambre, cell a counts the aligned regions containing both words, b the regions containing house but not chambre, c the regions containing chambre but not house, and d the regions containing neither (see Table 2). We can then measure the association between house and chambre by making use of any one of a number of association measures such as mutual information. φ², a χ²-like statistic, seems to be a particularly good choice because it makes good use of the off-diagonal cells b and c.
φ² = (ad - bc)² / ((a + b)(a + c)(b + d)(c + d)); φ² is bounded between 0 and 1. In this case, φ² is 0.62, a relatively high value, indicating the two words are strongly associated, and that they may be translations of one another. One can make this argument more rigorous by measuring the confidence that φ² is different from chance (zero). In this case, the variance of φ² is estimated to be 2.5×10⁻⁵ (see the section "Calculation of Variances"), and hence t = φ²/√var(φ²) = 0.62/√(2.5×10⁻⁵) = 123. With such a large t, we can very confidently reject the null hypothesis and assume that there is very likely to be an association between house and chambre.
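As a sanity check, the φ² statistic is easy to compute directly from the four cell counts. The short sketch below is our own code, not the original implementation; plugging in the counts from Tables 2 and 3 reproduces the values discussed in this section.

```python
def phi_squared(a, b, c, d):
    """phi^2 for a 2x2 contingency table.

    a: regions containing both words, b: first word only,
    c: second word only, d: neither word.
    """
    return (a * d - b * c) ** 2 / ((a + b) * (a + c) * (b + d) * (c + d))

# Counts from Table 2 (house/chambre) and Table 3 (house/communes).
print(phi_squared(31950, 12004, 4793, 848330))  # ~0.62
print(phi_squared(4974, 38980, 441, 852682))    # ~0.098
```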
One might contrast house/chambre with a near miss such as house/communes (see Table 3). Unfortunately, this pair is also significantly different from zero (t = 31) because there are many references in the Canadian Hansard to the English phrase House of Commons and its French equivalent Chambre des Communes. How do we know that house is more associated with chambre than with communes? Note that mutual information does not distinguish these two pairs. Recall that the mutual information I(x;y) is computed by
I(x;y) = log2 [ Prob(x,y) / (Prob(x) Prob(y)) ]
where Prob(x,y) = a/N, Prob(x) = (a + b)/N, and Prob(y) = (a + c)/N. If we plug these numbers into the formulas, we find that house and chambre actually have a lower mutual information value than house and communes: I(house;chambre) = 4.1 while I(house;communes) = 4.2.
Mutual information picks up the fact that there are strong associations in both cases. Unfortunately, it is not very good at deciding which association is stronger. Crucially, it does not make very good use of the off-diagonal cells b and c, which are often better estimated than cell a since the counts in b and c are often larger than those in a.
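The same point can be checked numerically. The small sketch below is ours, using the probabilities defined above and the counts from Tables 2 and 3; it gives roughly 4.1 bits for house/chambre and roughly 4.2 bits for house/communes, so mutual information alone would rank the near miss slightly higher.

```python
from math import log2

def mutual_information(a, b, c, d):
    """I(x;y) = log2(P(x,y) / (P(x) P(y))) with P(x,y)=a/N, P(x)=(a+b)/N, P(y)=(a+c)/N."""
    n = a + b + c + d
    return log2((a / n) / (((a + b) / n) * ((a + c) / n)))

print(mutual_information(31950, 12004, 4793, 848330))  # ~4.1, house/chambre
print(mutual_information(4974, 38980, 441, 852682))    # ~4.2, house/communes
```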
In this case, the crucial difference is that cell b is much smaller in Table 2 than in Table 3. φ² picks up this difference; Table 3 has a φ² of 0.098, significantly less than Table 2's φ² of 0.62:
t = (φ²(h,ch) - φ²(h,co)) / √(var(φ²(h,ch)) + var(φ²(h,co))) = (0.62 - 0.099) / √(2.5×10⁻⁵ + 9.9×10⁻⁶) = 88
Thus, we can very confidently say that house (h) is more associated with chambre (ch) than with communes (co).
Calculation of Variances
The estimate of var(~ 2) is very important to this argument. We use the following reasoning:
We take var(a) = a, var(b) = b, var(c) = c, and var(d) = a + b + c. A direct calculation of var(φ²) from these assumptions is valid when φ² is small; we call this estimate var_small(φ²).
As φ² approaches 1, var_small(φ²) decreases to 0, which makes that equation unsuitable as an estimate of the variance. We calculate a variance for this case by assuming that bc << ad, which implies that φ² ≈ 1 - (b + c)/a. With this assumption, we obtain var_large(φ²) = a⁻²(b + c)(1 + (b + c)/a).
We do not have an exact relation to specify when φ² is large and when it is small. Rather, we observe that each estimate produces a value that is small in its domain, so we estimate the variance of φ² by the minimum of the two cases: var(φ²) = min(var_small(φ²), var_large(φ²)).
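A sketch of the resulting significance test is given below. Only var_large follows the closed form given above; the small-φ² variances are entered as the values reported in the text (2.5×10⁻⁵ and 9.9×10⁻⁶) rather than computed, so the code is illustrative rather than a faithful reimplementation of the full variance calculation.

```python
from math import sqrt

def var_large(a, b, c):
    # Variance estimate for phi^2 close to 1, assuming bc << ad.
    return (b + c) * (1.0 + (b + c) / a) / (a ** 2)

def t_difference(phi2_hi, var_hi, phi2_lo, var_lo):
    # t statistic for the difference between two phi^2 estimates.
    return (phi2_hi - phi2_lo) / sqrt(var_hi + var_lo)

# Reported variances for house/chambre (2.5e-5) and house/communes (9.9e-6).
print(t_difference(0.62, 2.5e-5, 0.099, 9.9e-6))  # ~88
```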
Selecting Pairs
We have now seen how we could decide that house and chambre are more associated than house and communes. But why did we decide to look at these pairs of words and not some others? As we mentioned before, we probably can't afford to look at all V_E × V_F pairs unless we limit the vocabulary sizes down to something like the 9000-word limit in Brown et al. And even then, there would be 81 million pairs to consider. If the training corpus is not too large (e.g., 50,000 regions), then it is possible to consider all pairs of words that actually co-occur in at least one region (i.e., a ≠ 0). Unfortunately, with a training corpus of N = 890,000 regions, we have found that there are too many such pairs and it becomes necessary to be more selective (heuristic).
We have had fairly good success with a progressive deepening strategy. That is, select a small set of regions (e.g., 10,000) and use all of the training material to compute φ² for all pairs of words that appear in any of these 10,000 regions. Select the best pairs. That is, take a pair (x, y) if it has a φ² significantly better than any other pair of the form (x, z) or (w, y). This procedure would take house/chambre but not house/communes. Repeat this operation, using larger and larger samples of the training corpus to suggest possibly interesting pairs. On each iteration, remove pairs of words from the training corpus that have already been selected so that other alternatives can be identified. We have completed four passes of this algorithm, and selected more than a thousand pairs on each iteration. (Examples of morphologically related pairs and their counts: 2 278 accept/accepte; 0 1335 accept/accepter; 3 111 accept/acceptons; 1 165 acceptable/acceptables; 2 101 acceptable/inacceptable; 1 90 acceptance/acceptation; 1 596 accepted/accepté; 1 55 accepting/acceptant; 3 130 accepting/accepter; 0 62 accepts/accepte.) After a few iterations, it became clear that many of the pairs that were being selected were morphologically related to pairs that had already been selected on a previous iteration. A remarkably simple heuristic seemed to work fairly well to incorporate this observation. That is, assume that two pairs are morphologically related if both words start with the same first 5 characters. Then, select a pair if it is morphologically related to a pair that is already selected and it appears "significantly often" (in many more sentences than you would expect by chance) on any iteration. This very simple heuristic more than doubled the number of pairs that had been selected on the first four iterations, from 6419 to 13,466. As we will see in the next section, these 13 thousand pairs cover more than half of the words in the text. Again, the error rate for pairs selected by this procedure was low, less than two percent.
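The progressive deepening loop can be sketched roughly as below; the sample sizes, the data structures, and the significance test are simplified stand-ins for the procedure just described, not the original implementation.

```python
def select_pairs(regions, sample_sizes, is_significantly_best):
    """Progressive deepening: propose pairs from growing samples, keep the clear winners.

    regions: list of (english_words, french_words) aligned regions.
    sample_sizes: increasing sample sizes, e.g. [10_000, 50_000, 200_000].
    is_significantly_best: callable(x, y, candidates) implementing the phi^2 test.
    """
    selected = set()
    for n in sample_sizes:
        taken_en = {x for x, _ in selected}
        taken_fr = {y for _, y in selected}
        candidates = set()
        for eng, fra in regions[:n]:
            # Propose co-occurring pairs, ignoring words already accounted for.
            eng = [e for e in eng if e not in taken_en]
            fra = [f for f in fra if f not in taken_fr]
            candidates.update((e, f) for e in eng for f in fra)
        for x, y in candidates:
            # Keep (x, y) only if its phi^2 beats every rival (x, z) or (w, y).
            if is_significantly_best(x, y, candidates):
                selected.add((x, y))
    return selected
```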
Returning to the Sentence Context
It is now time to try to put these pairs back into their sentence context. Consider the pair of sentences mentioned previously.
English:
we took the initiative in assessing and amending current legislation and policies to ensure that they reflect a broad interpretation of the charter.
French:
nous avons pris l'initiative d'évaluer et de modifier des lois et des politiques en vigueur afin qu'elles correspondent à une interprétation généreuse de la charte.
The matching procedure attempts to match English and French words using the selected pairs. When there are several possibilities, the procedure uses a slope condition to select the best pair. The matching procedure uses a dynamic programming optimization to find the sequence of j values with the best score. A sequence of j values is scored with Σ_j log prob(match | slope_j). Using Bayes rule, prob(match | slope_j) is rewritten as prob(slope_j | match) prob(match). Both terms were estimated empirically.
The second term is determined by the fan-in, the number of possible matches that a particular j value might play a role in. In this example, most of the j values had a fan-in of 1. However, the two instances of et had a fan-in of 2 because they could match either of the two instances of and. The score is smaller for both of these uses of et because there is more uncertainty.
We considered three cases: the fan-in is 1, 2 or many.
The log prob(match) in each of these three cases is -0.05, -0.34 and -0.43, respectively.
The first term is also determined empirically. The score is maximized for a slope of 1; in this case, log prob(slope | match) is -0.46. The score falls off rapidly with larger or smaller slopes.
The dynamic programming optimization is also given the choice to match an English word to NULL. If the procedure elects this option, then a constant, log prob(NULL), is added to the score. This value is set so that the matching procedure will avoid making hard decisions when it isn't sure. For example, the 5th English word (in) could have been matched with the 16th French word (en), but it didn't do so because log prob(NULL) was more than the score of such a radical reordering. We have found that -5 is a good setting for log prob(NULL). If we set the value much higher, then the matching procedure attempts to reorder the text too much. If we set the value much lower, then the matching procedure does not attempt to reorder the text enough.
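The scoring terms described above can be sketched as in the following snippet; the constants are the ones quoted in the text, but the slope model and the surrounding dynamic programming loop are simplified stand-ins rather than the original procedure.

```python
LOG_PROB_NULL = -5.0                       # cost of matching an English word to NULL
LOG_PROB_MATCH = {1: -0.05, 2: -0.34}      # by fan-in; anything larger counts as "many"
LOG_PROB_MATCH_MANY = -0.43

def log_prob_slope(slope, peak=-0.46, falloff=2.0):
    # The score peaks at slope 1 (-0.46) and falls off rapidly for other slopes;
    # the quadratic fall-off here is only an illustrative stand-in.
    return peak - falloff * (slope - 1.0) ** 2

def step_score(prev_j, j, fan_in):
    """Score of extending the match sequence to French position j (None = NULL match)."""
    if j is None:
        return LOG_PROB_NULL
    slope = j - prev_j if prev_j is not None else 1
    return log_prob_slope(slope) + LOG_PROB_MATCH.get(fan_in, LOG_PROB_MATCH_MANY)
```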
This matching procedure works remarkably well. As mentioned above, based on a sample of 800 sentences, we estimate that the procedure matches 61% of the English words with some French word, and about 95% of these pairs match the English word with the appropriate French word. All but one of these errors of commission involved a function word, usually one surrounded on both sides by words that could not be matched.
Conclusions
We have been studying how to find corresponding words in parallel texts given aligned regions. We have introduced several novel techniques that make substantial progress toward this goal. The philosophy underlying all our techniques is to keep errors of commission low. Whatever words are matched by these robust techniques should almost always be correct. Then, at any stage, the words that are matched can be used confidently for further research.
The first technique we have introduced is the measurement of the association of pairs of words by φ², based on a two-by-two contingency table. This measure does better than mutual information at showing which pairs of words are translations, because it accounts for the cases in which one of the words occurs and the other does not. We apply this measure iteratively. Our caution is expressed by selecting at most one pair of words containing a given word on each iteration. The φ² measure for a selected pair must be significantly greater than the φ² measures for each of the words of the pair and any other suggested translation.
The iteration is accompanied by a progressive enlargement of possibly interesting pairs. We could not study all pairs of words, or even all occurring pairs of words. Rather, we take all the occurring pairs in a progressively enlarged sample of regions. This does propose the most frequently co-occurring pairs first. On each iteration we delete the pairs of words that have already been selected, thereby reducing the confusion among collocates. Our caution was expressed by hand checking the accuracy of selected pairs after each iteration. We chose techniques which could give 98 percent accuracy on the selected pairs. This has not been a blind automatic procedure, but one controlled at each step by human expertise.
When we observed that many of the pairs considered contained morphological variants of a pair selected, we allowed such pairs to be accepted if they also had a φ² significantly greater than chance.
Several of our tests acknowledge that any function, such as φ², of noisy data, such as frequencies, is itself a noisy measure. Therefore our caution is to require not just that one measure be greater than another, but that it be significantly greater. This calculation is made using an estimate of the variance of φ².
We then used the selected word pairs to suggest word correspondences within a given aligned region. The alignment was done by a dynamic programming technique with a parameter that controlled how certain we should be before accepting a specific pair of words as corresponding. We set the parameter to give results that are quite likely to be correct. Currently we suggest correspondences for about 60 percent of the words, and when we do suggest a correspondence we are correct in about 95 percent of cases. This is work in progress. We expect that in the future the coverage can be increased substantially above 60% while errors can be decreased somewhat from 5%. We believe that errors of omission are much less important than errors of commission and expect to continue choosing techniques accordingly.
Table 1: A Bilingual Concordance Based on Aligned Sentences
bank/banque ("money" sense):
... of finance (mr. wilson) and the governor of the bank of canada have frequently ... / ... des finances (m. wilson) et le gouverneur de la banque du canada ont fréquemment ...
... reduced by over 800 per cent in one week through bank action. SENT there was a ... / ... de 800 p. 100 en une semaine à cause d'une banque. SENT voilà un ...
bank/banc ("place" sense):
... such was the case in the georges bank issue which was settled ... / ... entre les états-unis et le canada à propos du banc de george. SENT c'est ...
... he said the nose and tail of the bank were surrendered by this government ... / ... gouvernement avait cédé les extrémités du banc. SENT en fait, lors des ...
Table 2: A Contingency Table (house and chambre)
              chambre    not chambre
  house        31,950         12,004
  not house     4,793        848,330
Table 3: A Near Miss (house and communes)
              communes   not communes
  house          4,974         38,980
  not house        441        852,682
Brown, P., Cocke, J., Della Pietra, S., Della Pietra, V., Jelinek, F., Lafferty, J., Mercer, R., and Roossin, P. (1990). A Statistical Approach to Machine Translation. Computational Linguistics, 16, pp. 79-85.
Brown, P., Lai, J., and Mercer, R. (1991). Aligning Sentences in Parallel Corpora. IBM report submitted to the 29th Annual Meeting of the Association for Computational Linguistics.
Catizone, R., Russell, G., and Warwick, S. (to appear). Deriving Translation Data from Bilingual Texts. In Zernik (ed.), Lexical Acquisition: Using On-line Resources to Build a Lexicon, Lawrence Erlbaum.
Church, K. (1988). A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. Second Conference on Applied Natural Language Processing, Austin, Texas.
Gale, W. and Church, K. (1990). A Program for Aligning Sentences in Bilingual Corpora. Unpublished ms., submitted to the 29th Annual Meeting of the Association for Computational Linguistics.
Kay, M. and Röscheisen, M. (1988). Text-Translation Alignment. Unpublished ms., Xerox Palo Alto Research Center.
Klavans, J. and Tzoukermann, E. (1990). The BICORD System. COLING-90, pp. 174-179.
Warwick, S. and Russell, G. (1990). Bilingual Concordancing and Bilingual Lexicography. Euralex 1990. |
|
21,718,123 | EMTC: Multilabel Corpus in Movie Domain for Emotion Analysis in Conversational Text | It has been shown that in text-based communication such as SMS and messenger applications, misinterpretation of a partner's emotions is quite common. In order to tackle this problem, we propose a new multilabel corpus named Emotional Movie Transcript Corpus (EMTC). Unlike most of the existing emotion corpora, which are collected from Twitter and use hashtag labels, our corpus includes conversations from movies, with more than 2.1 million utterances, partly annotated by ourselves and by independent annotators. Our intuition is that conversations from movies are closer to real-life settings and emotionally richer. We believe that a corpus like EMTC will greatly benefit the development and evaluation of emotion analysis systems and improve their ability to express and interpret emotions in text-based communication. | [
9168133,
12750543,
15590323,
2856630,
17531038,
6266911
] | EMTC: Multilabel Corpus in Movie Domain for Emotion Analysis in Conversational Text
Phan Duc-Anh phan.ducanh.oq3@is.naist.jp
Graduate school of Information Science
Computational Linguistics Laboratory
Nara Institute of Science and Technology
Japan
Yuji Matsumoto
Graduate school of Information Science
Computational Linguistics Laboratory
Nara Institute of Science and Technology
Japan
EMTC: Multilabel Corpus in Movie Domain for Emotion Analysis in Conversational Text
emotion corpus, movie transcript, text-based communication, emotion analysis, multilabel
It has been shown that in text-based communication such as SMS and messenger applications, misinterpretation of a partner's emotions is quite common. In order to tackle this problem, we propose a new multilabel corpus named Emotional Movie Transcript Corpus (EMTC). Unlike most of the existing emotion corpora, which are collected from Twitter and use hashtag labels, our corpus includes conversations from movies, with more than 2.1 million utterances, partly annotated by ourselves and by independent annotators. Our intuition is that conversations from movies are closer to real-life settings and emotionally richer. We believe that a corpus like EMTC will greatly benefit the development and evaluation of emotion analysis systems and improve their ability to express and interpret emotions in text-based communication.
Introduction
In recent years, we have experienced the rapid development of online communication with the help of mediated devices such as smartphones, tablets, and computers. People use talk-over-internet and video conferencing features for both daily and business tasks. Text-based methods like email or text messengers are still very convenient and indispensable to us because of their unique advantages: they do not require immediate responses and can be used for the sake of record-keeping. However, it has already been proved that in any online communication method, users experience more difficulties in interpreting and conveying emotions than in face-to-face communication, due to the limitation in communication modality. Furthermore, text-based methods are where these difficulties are encountered the most (Kruger et al., 2005; Arimoto and Okanoya, 2016). Therefore, we target our effort at building an emotion analysis system focusing on text data. The starting point is to develop an emotional corpus that contains conversational texts and is as close to real-life communication as possible.
Existing emotional text corpora are often collected from micro-blog platforms using a multiclass scheme, with one emotion per example (Liew et al., 2016). Most of them are automatically annotated by extracting hashtags rather than by human judgement (Dini and Bittar, 2016; Li et al., 2016). While text data from micro-blog platforms like Twitter are very convenient and easy to collect, the fact that they are limited in the number of characters (140 for a tweet) sets them apart from daily conversational text, and they therefore have limited use in real-life settings. On the other hand, the multiclass scheme has its own limitation: one input is associated with only one emotion. However, it has been observed in some research (Liew et al., 2016) that a multilabel scheme with no limit on the number of emotions per example is a better and more natural way of annotating emotion labels. We describe our efforts to construct and partly annotate the Emotional Movie Transcript Corpus (EMTC). Most of the corpus is unsupervised data. We annotated 10,000 utterances ourselves and use them for training. Finally, the testing data, which include 1,000 utterances, are annotated by 5 independent annotators. To our understanding, EMTC is the only emotional corpus that is annotated using a multilabel scheme and has conversational text instead of short text like tweets or news headlines (Strapparava and Mihalcea, 2007; Mohammad, 2012b). Moreover, EMTC provides the annotators with movie clips instead of just text to help them give better annotations. Our contributions are summarized as follows:
• We explain the multilabel annotation scheme following Plutchik's theory of emotions (Plutchik, 2001). We later conclude that our annotation scheme provides a much better inter-annotator agreement score than other corpora.
• We present and describe the characteristics of the conversational corpus and the statistics of the annotated data.
• We conduct supervised machine learning experiments to evaluate the emotion classification using our corpus and the word-embedding extracted from it.
Related Works
There have been numerous works on building emotion corpora. The first notable work is the ISEAR dataset (Scherer and Wallbott, 1994), which has more than 7,000 responses from participants. The participants are asked to describe situations in which they experienced certain emotions.
In our work, we use this dataset to extract collocation features for the manual feature extraction step described below. Another corpus is the SemEval-2007 Task 14: Affective Text dataset (Strapparava and Mihalcea, 2007), which consists of 1,250 news headlines with six Ekman emotion labels. More recent works are (Mohammad, 2012b; Liew et al., 2016; Dini and Bittar, 2016), where data are collected from micro-blog platforms and automatically annotated using hashtags, with or without human revision afterwards. The limitation of those corpora is that they only consist of short, independent pieces of text and are undoubtedly not close to real-life conversation. As a matter of fact, modeling emotions in a conversation is a difficult but rewarding task with a wide range of applications. A good system should consider every word in the conversation, the grammatical structure and syntactic variables such as negations, embedded sentences, and type of sentence (question, exclamation, command, or statement), the general context of the conversation, and each and every utterance in the conversation, especially since what is said in one utterance can have an impact on the emotions of the next (Collier, 2014). Perhaps because of this complicated nature of the problem, there is a lack of emotional conversation corpora. Another problem with the existing corpora is the annotation scheme: many works limit the emotion labels to a small number (Mohammad, 2012b; Wang et al., 2015) or only allow annotators to label one emotion per utterance (Yang et al., 2007; Hasegawa et al., 2013). As pointed out in much psychology research (Plutchik, 2001; Russell, 2003), emotions are not mutually exclusive. In fact, in many cases, people may experience a mixture of various emotions at the same time (Choe et al., 2013). Therefore, the corpus for any emotion analysis task should be multilabel. Limiting the number of emotion labels may narrow down the problem but can cause trouble for the annotators in providing correct judgements when the emotions in an example are sophisticated or expressed implicitly. In our work, we employ Plutchik's theory of emotions and extend the set of labels to a total of 48 labels to provide more freedom to the annotators. The extension and Plutchik's theory will be explained in more detail in Section 3., where we present the construction of the corpus. Section 4. investigates the characteristics of our newly built corpus and Section 5. discusses the experiments and evaluation of the corpus. Lastly, Section 6. gives conclusions and future work.
Methodology
Imdb quotes dataset
In order to mimic real-life conversation settings, we rely on the Imdb datasets (available from http://www.imdb.com/interfaces), in particular the movie quotes dataset. This dataset includes in total 2,107,863 utterances (turns in conversation) from 117,425 movies and TV series of all genres, such as thrillers, action, and romance. To our assumption, movie conversations should be close to real-life settings and emotionally rich. We can also easily eliminate the low inter-annotator agreement score problem that is often encountered in other corpora (Strapparava and Mihalcea, 2007; Dini and Bittar, 2016) by providing the annotators with the clips from the movies in addition to the transcripts (Figure 1a). At first, we also wanted to measure the annotators' judgement of emotion intensity, hence the bar measurement. However, the collected numbers are very diverse and unreliable. This is due to disagreement among the annotators and their different interpretations of emotion intensity during the annotation sessions. There is also a concern that different cultures will give different interpretations of the emotion expressions in the provided clips. However, nowadays, as people from different cultures are more exposed to and have more opportunity to watch American movies, they steadily learn how to interpret the emotions from other cultures better (Hareli et al., 2015; Lim, 2016).
Plutchik's theory of emotions
The reason for most research to limit the number of emotion categories is to obtain a better inter-annotator agreement score: the more categories are allowed, the lower the score becomes. However, by limiting the number of categories, they also limit the freedom of the annotators to give accurate judgements, because emotions are sophisticated and the basic emotions can hardly cover all cases. In our work, we found a way to avoid this trade-off: Plutchik's theory of emotions. According to Plutchik, there are eight primary emotions grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus disgust; and surprise versus anticipation. Some emotions are similar to the primary ones but differ in intensity (Table 1). Some primary emotions can be mixed to form more complex emotions (Table 2). Implementing the theory, we allow the annotators to use the full set of 48 emotion labels from the two tables, and the system automatically decomposes the annotated labels into primary emotions later (Figure 1).
Table 1: Primary emotions and their higher-intensity counterparts (e.g., ecstasy for joy, admiration for trust)
Annotation Scheme
To produce labeled data, annotators are asked to watch the corresponding movies with subtitles ( Figure 1a) and follow the annotation scheme shown below:
• One utterance may hold zero, one or more emotions at the same time. In case an utterance holds no emotion, it should be annotated with "None." The intensity of emotions is also considered in the labeling phase (Figure 1).
• The annotators can choose appropriate emotion labels from the list of 48 emotions in tables 1 and 2. The system will decompose the dyads into primary emotions automatically.
• The annotators need to assign the whole utterance, which may have two or more sentences, a set of all emotions expressed inside it. There may be cases where emotions that conflict according to Plutchik's theory appear simultaneously in the same utterance, as in the last example of subfigure 1b.
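The automatic decomposition of dyads into primary emotions mentioned above can be sketched as a simple lookup, as below; only a handful of Plutchik dyads are shown, and the exact dyad inventory used in the annotation tool is assumed rather than taken from the paper.

```python
# Illustrative subset of Plutchik dyads and their primary-emotion decompositions.
DYADS = {
    "love": {"joy", "trust"},
    "optimism": {"anticipation", "joy"},
    "awe": {"fear", "surprise"},
    "remorse": {"sadness", "disgust"},
}
PRIMARY = {"joy", "trust", "fear", "surprise", "sadness", "disgust", "anger", "anticipation"}

def decompose(labels):
    """Map annotated labels (primary emotions or dyads) to a set of primary emotions."""
    primaries = set()
    for label in labels:
        if label in PRIMARY:
            primaries.add(label)
        elif label in DYADS:
            primaries.update(DYADS[label])
    return primaries

print(decompose(["love", "surprise"]))  # {'joy', 'trust', 'surprise'}
```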
Characteristics, the inter-annotators agreement and the word-embedding of the corpus
This corpus includes in total 2,107,863 utterances with 26 million words, of which 181,276 are unique terms. As mentioned in the sections above, we could only annotate the corpus partly. There are 10,000 utterances annotated by the authors ourselves as the training data; the average number of emotion labels per utterance is 1.68. The testing data are reviewed by 5 independent annotators to form gold-standard data (with a majority rule) of 1,000 utterances; the average number of labels per utterance is 1.41. The reason for the two different dataset sizes is that the annotation sessions are expensive and time-consuming: we have to provide the annotators with clips cut from real movies and match the text from the corpus to the correct scenes in the full movies. We report the inter-annotator agreement score on our testing data in Table 3, where the performance of each annotator is compared against the gold-standard data as ground truth. We believe that our choice of a movie corpus and the decision to provide movie clips to support the annotation process play an important role here.
Word-embedding of the corpus
A word embedding is a multi-dimensional vector representation of every word in the corpus. It can serve as a simple yet effective input feature for many machine learning methods. In this research, we follow this approach and create the embedding with 100 dimensions using word2vec (Řehůřek and Sojka, 2010). Table 4 shows the top 5 most similar terms to the primary emotion words and some of the dyads.
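The embedding step can be reproduced with the gensim toolkit cited above (Řehůřek and Sojka, 2010); apart from the 100 dimensions, the hyper-parameters below are assumptions, since the paper does not report them, and the variable holding the tokenized utterances is hypothetical.

```python
from gensim.models import Word2Vec

# corpus_utterances: assumed list of tokenized utterances, e.g. [["i", "am", "so", "happy"], ...]
model = Word2Vec(
    sentences=corpus_utterances,
    vector_size=100,  # 100-dimensional vectors as in the paper ("size" in older gensim versions)
    window=5,         # assumed context window
    min_count=5,      # assumed frequency cut-off
    workers=4,
)
print(model.wv.most_similar("joy", topn=5))
```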
Interesting points can be observed from the table.
Experiments and Evaluation
Evaluation of extracted word-embedding
In order to test the practicality of our extracted word embedding, we run an experiment on our corpus comparing two approaches: one uses manual feature selection and WordNet-Affect (Strapparava et al., 2004), and the other uses the word embedding for automatic feature extraction.
Feature selection approach
Most research agrees that emotion words and phrases are the most obvious clues for identifying emotions (Mohammad, 2012a; Strapparava and Mihalcea, 2007). Humans have developed language to fit their needs of expressing ideas and feelings; therefore, when describing our emotions, we tend to use certain specific words. By picking up on these words, we get a general idea about the emotional direction of the examined text. In this approach, our first set of features is the basic emotion tendency, which expresses how an input relates to the 8 basic emotions. WordNet-Affect is employed to interpret the emotion tendency of each word in an utterance. If one emotion exists in one word of the input, then the corresponding tendency feature is set to 1, and to 0 otherwise. However, solely relying on such emotion words would cause problems for an emotion detection system. We should also consider the effect of negation words and phrases: simply by adding a negation word, we reverse the emotional state of the text. The sentence "You are not bad at all!" indicates a strong feeling of approval instead of the usual negative feeling from the word bad. Moreover, the context of the input also provides valuable information, especially in conversations. Therefore, we then define a second set of features that captures such contextual traits. In the end, we have a list of manually selected features as follows:
1. The sum vector of the current input, which suggests the local tendency.
2. The sum vector of all the words in the lexicon that appear in the conversation, which provides the context of the conversation.
3. The sum vector of the previous utterance in the conversation which also provides the context of previous exchange (of what triggered the current emotion).
4. The polarity (negative/ positive) score of the sentence.
5. Features such as: the length, whether it is a question, whether it is an exclamatory sentence, and whether a negation word is present.
6. Collocation features: we mine the ISEAR dataset for phrases that often appear in a specific emotional situation. If the input includes one of these phrases, we set the binary flag of the corresponding feature to 1.
The structure of the network is shown in Figure 2: an input layer of manually selected features, two hidden layers, and a threshold multilabel output layer.
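A minimal sketch of such a network with a thresholded multilabel output, written in PyTorch, is given below; the hidden-layer sizes and the 0.5 decision threshold are our assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class ThresholdMultilabelNet(nn.Module):
    """Feature vector -> two hidden layers -> 8 sigmoid outputs, thresholded per label."""
    def __init__(self, n_features, n_labels=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_labels),
        )

    def forward(self, x):
        return self.net(x)  # raw logits; train with nn.BCEWithLogitsLoss

    def predict(self, x, threshold=0.5):
        return (torch.sigmoid(self.net(x)) >= threshold).int()
```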
Word-embedding Network: text to vector
We consider a bag-of-features approach to transform the raw input text into vector form: the representation of a piece of text is the sum vector of all lexical items inside it. Because our goal is to predict the emotion labels for each utterance in a conversation, we also vectorize the previous utterance and the entire conversation to capture the contextual information. As a result, the vector representation of an utterance is a 300-dimensional vector, the concatenation of the vector of the utterance itself and the above-mentioned contextual information; this representation is then fed to the input layer of the neural network in Figure 3. The two networks are both tested on our annotated corpus. We report the performance of each network and compare our methods to another method and corpus in the next section.
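A sketch of this vectorization is given below; `embedding` stands for the 100-dimensional word vectors from the previous section, and the handling of out-of-vocabulary tokens is our own simplification.

```python
import numpy as np

DIM = 100  # dimensionality of the word embedding

def sum_vector(tokens, embedding):
    """Bag-of-features representation: sum of the word vectors of a token list."""
    vec = np.zeros(DIM)
    for tok in tokens:
        if tok in embedding:      # skip out-of-vocabulary tokens
            vec += embedding[tok]
    return vec

def utterance_vector(current, previous, conversation, embedding):
    """300-dim input: current utterance, previous utterance, and the whole conversation."""
    return np.concatenate([
        sum_vector(current, embedding),
        sum_vector(previous, embedding),
        sum_vector(conversation, embedding),
    ])
```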
Evaluation of the corpus
To evaluate, we use the two previously mentioned networks: the manual feature selection network (MFSNet) and the word-embedding network (WENet). We evaluate the results against the gold-standard test data using two major measurements in multilabel learning: the hamming score (i.e., accuracy in multilabel classification) and the multilabel F1-score (Table 5).
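The two measures can be computed as in the sketch below; it follows the usual example-based definitions of multilabel accuracy (intersection over union of label sets) and multilabel F1, which we assume are the variants reported here.

```python
def hamming_score(y_true, y_pred):
    """Multilabel accuracy: mean |intersection| / |union| of the label sets per example."""
    scores = []
    for true, pred in zip(y_true, y_pred):
        true, pred = set(true), set(pred)
        union = true | pred
        scores.append(len(true & pred) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

def multilabel_f1(y_true, y_pred):
    """Example-based F1: mean 2|intersection| / (|true| + |pred|) per example."""
    scores = []
    for true, pred in zip(y_true, y_pred):
        true, pred = set(true), set(pred)
        denom = len(true) + len(pred)
        scores.append(2 * len(true & pred) / denom if denom else 1.0)
    return sum(scores) / len(scores)
```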
The most important baseline is the average agreement score of the 5 human annotators on our corpus. We also want to compare our corpus to the existing Twitter Emotion Corpus (TEC) (Mohammad, 2012b) in terms of agreement score and system performance. The Twitter Emotion Corpus has tweets with emotion word hashtags. Similar to our work of creating word embeddings, TEC was used to create the NRC Hashtag Emotion Lexicon (Mohammad, 2012a).
Conclusion
In this paper, we present our Emotion Movie Transcript Corpus (EMTC), developed from the IMDb quotes dataset. EMTC consists of conversational text extracted from movies and, as a result, is close to real-life settings and very practical for emotion analysis tasks. The corpus is partly annotated using our multilabel scheme, and the annotators are provided with the corresponding movie clips to ensure the reliability of the corpus's inter-annotator agreement. We also conduct experiments on two networks: MFSNet, which uses manual feature selection, and WENet, which uses word embeddings to extract a bag of features from the input for supervised learning. The statistics and experimental results show that our extracted word embeddings and the corpus are reliable, and that even a very simple supervised method like WENet can perform fairly well using only the bag of features from the embeddings.
We would like to investigate the correlation among annotated labels and to expand the size of the test data of our corpus using the same annotation scheme in the future. After that, we will focus on building an emotion lexicon from the word embeddings extracted from EMTC.
Figure 1: (a) UI of the annotating website. Users can choose the appropriate emotions by adjusting the confidence bars or by typing the emotions or dyads into the text box. The dyads are then decomposed automatically into primary emotions and the bars are readjusted. (b) Examples of annotated transcripts from the movie Brave Heart (1995) - annotating scheme of the testing data. Each utterance is annotated with primary emotions.
Figure 2: Structure of the manual feature selection network (MFSNet)
Figure 3: Structure of the word-embedding network
Table 2: Dyads - Combinations of emotions: two primary emotions can blend together to form another complex one
Table 3: Inter-annotator agreement score with gold standard data as ground truth. From the table, it can be concluded that our corpus, even when annotated using a multilabel scheme, yields a better agreement score than the multiclass Twitter Emotion Corpus (Mohammad, 2012b) (average F1-score is 43.7).
Table 4: Top similar words to primary emotions and some dyads
Corpus   Baselines             Hamming score   F1-score
EMTC     Human annotators          43.2          62.6
         WENet                     39.1          53.9
         MSFNet                    35.1          41.4
TEC      Human annotators            -           43.7
         Binary Classifiers          -           42.2

Table 5: Corpus evaluation
Acknowledgements
This research was supported by JST CREST Grant Number JPMJCR1513, Japan. We deeply appreciate the support of colleagues from the Computational Linguistics Lab and of all the annotators who participated in this research, for their valuable effort and patience.
Bibliographical References
Arimoto, Y. and Okanoya, K. (2016). Comparison of emotional understanding in modality-controlled environments using multimodal online emotional communication corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA).
Choe, W., Chun, H.-S., Noh, J., Lee, S.-D., and Zhang, B.-T. (2013). Estimating multiple evoked emotions from videos. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 35.
Collier, G. (2014). Emotional expression. Psychology Press.
Dini, L. and Bittar, A. (2016). Emotion analysis on twitter: The hidden challenge. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA).
Hareli, S., Kafetsios, K., and Hess, U. (2015). A cross-cultural study on emotion expression and the learning of social norms. Frontiers in Psychology, 6:1501.
Hasegawa, T., Kaji, N., Yoshinaga, N., and Toyoda, M. (2013). Predicting and eliciting addressee's emotion in online dialogue. In ACL (1), pages 964-972.
Kruger, J., Epley, N., Parker, J., and Ng, Z.-W. (2005). Egocentrism over e-mail: Can we communicate as well as we think? Journal of Personality and Social Psychology, 89(6):925.
Li, M., Long, Y., Qin, L., and Li, W. (2016). Emotion corpus construction based on selection from hashtags. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA).
Liew, J. S. Y., Turtle, H. R., and Liddy, E. D. (2016). EmoTweet-28: A fine-grained emotion corpus for sentiment analysis. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA).
Lim, N. (2016). Cultural differences in emotion: differences in emotional arousal level between the East and the West. Integrative Medicine Research, 5(2):105-109.
Mohammad, S. (2012a). Portable features for classifying emotional text. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 587-591. Association for Computational Linguistics.
Mohammad, S. M. (2012b). #Emotional tweets. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (Volume 1: Proceedings of the main conference and the shared task; Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation), pages 246-255. Association for Computational Linguistics.
Plutchik, R. (2001). The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist, 89(4):344-350.
Řehůřek, R. and Sojka, P. (2010). Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA. http://is.muni.cz/publication/884893/en.
Russell, J. A. (2003). Core affect and the psychological construction of emotion. Psychological Review, 110(1):145.
Scherer, K. R. and Wallbott, H. G. (1994). Evidence for universality and cultural variation of differential emotion response patterning. Journal of Personality and Social Psychology, 66(2):310.
Strapparava, C. and Mihalcea, R. (2007). SemEval-2007 task 14: Affective text. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 70-74. Association for Computational Linguistics.
Strapparava, C., Valitutti, A., et al. (2004). WordNet-Affect: an affective extension of WordNet. Citeseer.
Wang, Z., Lee, S., Li, S., and Zhou, G. (2015). Emotion detection in code-switching texts via bilingual and sentimental information. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 763-768, Beijing, China, July. Association for Computational Linguistics.
Yang, C., Lin, K. H.-Y., and Chen, H.-H. (2007). Emotion classification using web blog corpora. In Web Intelligence, IEEE/WIC/ACM International Conference on, pages 275-278. IEEE. |
234,345,291 | [] | A Metric for Lexical Complexity in Malayalam
NLPAI. Copyright NLPAI, December 2019.
D M Sharma
P Bhattacharyya
R Sangal
Richard Shallam
A Metric for Lexical Complexity in Malayalam
Proc. of the 16th Intl. Conference on Natural Language Processing
Hyderabad, India, NLPAI, December 2019.
Richard Shallam (Independent Researcher), Ashwini Vaidya (avaidya@hss.iitd.ac.in), IIT Delhi
Keywords: lexical processing, visual word recognition, lexical complexity, Dravidian languages
This paper proposes a metric to quantify lexical complexity in Malayalam. The metric utilizes word frequency, orthography and morphology as the three factors affecting visual word recognition in Malayalam. Malayalam differs from other Indian languages due to its agglutinative morphology and orthography, which are incorporated into our model. The predictions made by our model are then evaluated against reaction times in a lexical decision task. We find that reaction times are predicted by frequency, morphological complexity and script complexity. We also explore the interactions between morphological complexity with frequency and script in our results. To the best of our knowledge, this is the first study on lexical complexity in Malayalam.
Introduction
The task of visual word recognition is related to language processing at the level of a word/lexical item. A word can be analyzed at several linguistic levels, and the word recognition task helps us understand the role of these levels in relation to processing, memory and attention. In psycholinguistics, previous work on this topic focuses on understanding the individual variables that affect the lexical processing of words. If we can quantify the influence of variables ranging from orthographic features to semantic factors on the cognitive processing of words, it would help us in understanding the critical factors underlying visual word recognition (and pattern recognition, more generally). The resulting model of word recognition can be evaluated against human judgements.
Models of word recognition are especially relevant for eye-tracking studies, where they have been extensively explored (Rayner and Duffy, 1986). Word recognition models have also been used to understand reading disabilities such as phonological and surface dyslexia (Balota et al., 2006). For these studies, it is crucial to tease apart the effect of various factors that affect the task of reading. Previous research has shown that the eye gaze duration is affected by frequency, orthography, morphology and phonology, among others. Apart from these studies, an understanding of lexical complexity is also an interesting topic for study on its own.
In this paper, we explore the case of Malayalam and in particular examine three factors that could predict word complexity in the language: frequency, orthography and morphology. The role of variables that determine word recognition in Malayalam has not been explored, as it has been for Hindi (Husain et al., 2015;Verma et al., 2018). Quantifying these factors in a model of lexical complexity can help us in developing norms that are useful in areas such as reading studies and word generation for lexical decision tasks. Further, this would contribute towards cross-linguistic comparison of these factors from a different language family. To the best of our knowledge, this is the first work that examines lexical complexity in Malayalam.
Lexical Complexity
The task of visual word recognition involves the cognitive processing of visual information and comparing it with a particular internal mental representation of a word. This representation itself may be at the graphemic, phonemic, morphemic and lexical semantic level, all of which have been shown to affect word recognition (Balota et al., 2006). In the sections that follow, we describe the three factors that are included in our study.
Word Frequency
The effect of word frequency is robust and has been well studied across word recognition tasks (Balota et al., 2006). High frequency words tend to be recognized faster than low frequency words. In eye tracking studies high frequency words have lower gaze duration and fixation measures. We would expect that frequency would have a similar effect on the Malayalam data, where high frequency would contribute towards a lower lexical complexity.
Morphology
A word may be composed of a single morpheme, e.g., boy, or more than one, e.g., funnily: funny + ly. The role of morphology in word recognition is at a sub-lexical level. Morphology as a measure is particularly relevant for an agglutinative language such as Malayalam, which also exhibits productive word compounding. For instance, just the word മരം (mara) "tree" has a number of morphological forms, such as:
മരത്തിൽ (marattil) - in the tree
മരത്തിന്റെ (marattinṟe) - of the tree
മരങ്ങൾക്കിടയിലൂടെ (maraṅṅaḷkkiṭayilūṭe) - through the trees
മരക്കൊമ്പുകൾ (marakkeāmpukaḷ) - tree branches
Early studies that looked at the effect of morphology on lexical access have suggested that polymorphemic words (i.e., words consisting of more than one morpheme) are decomposed into their component parts during online processing. This process would find the root first (e.g., funny) and, on finding it, proceed to search stored affix-stem combinations until funnily is retrieved (Taft and Forster, 1975). In a morphologically rich language such as Malayalam, we would expect this to be an important factor in lexical processing.
Orthography
The visual processing of words involves processing at the orthographic level as well. This implies that the writing system of a language will influence recognition. A writing system, whether alpha-syllabic, logographic or alphabetic, has been shown to influence reading times (Katz and Frost, 1992). Sub-lexical properties such as letter features and their interactions with the words themselves can also influence word complexity, which needs to be accounted for in the model.
Method
In order to compute the lexical complexity metric, token frequency, morphology and orthography were included as our variables. Below, the methods for computing the values for each of these variables are discussed.
Corpus
In order to compute our metric for Malayalam, we first obtained a corpus from the Leipzig Corpora Collection containing 300,000 sentences from Malayalam Wikipedia articles and 100,000 sentences from Malayalam news crawl (Goldhahn et al., 2012). The corpus was then preprocessed by removing punctuation and special characters, and then tokenized using whitespace. The text was also normalized to remove inconsistencies in spelling using the Indic NLP Library 1 and this resulted in 4,711,219 tokens and 762,858 unique types.
Word Frequency Metric
The corpus was used to collect counts for each word; these counts were then scaled between 0 and 1 and inverted, such that the most frequent tokens have a value closer to 0 and the less frequent tokens have a value approaching 1. This score indicates the relative frequency of each word in this corpus and reflects the idea that highly frequent words are much easier to process than those that have lower frequency.
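A minimal sketch of this computation is given below (the scaling is a plain min-max normalization over the observed counts).

```python
from collections import Counter

def frequency_scores(tokens):
    """Inverted, min-max scaled counts: frequent words -> near 0, rare words -> near 1."""
    counts = Counter(tokens)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1
    return {w: 1.0 - (c - lo) / span for w, c in counts.items()}
```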
Morphology Metric
Our morphology metric required us to obtain information about the root and the morphological affixes for a given word. Given the rich morphology and compounding processes in the language, we had to make use of a two-step process to compute our scores.
First, SandhiSplitter (Devadath et al., 2014) was used to split tokens that are compound words into their constituent words. For example, consider the compound word കാരണമായിരിക്കണം (kāraṇamāyirikkaṇaṁ):
കാരണമായിരിക്കണം ⇒ കാരണം + ആയിരിക്കണം
kāraṇamāyirikkaṇaṁ ⇒ kāraṇaṁ + āyirikkaṇam
"must be the reason" ⇒ "reason" + "must be"
As a second step, these results were passed through IndicStemmer 2, a rule-based stemmer for Malayalam, which further decomposed the words into stems and affixes. As an example, the word ലേഖനങ്ങളുടെ (lēkhanaṅṅaḷuṭe), meaning "of articles", is decomposed into the stem ലേഖനം (lēkhanam), meaning "article", with the suffix -ങ്ങൾ (ṅṅaḷ) indicating plural and -ുടെ (uṭe) indicating the genitive case. In our metric we only considered suffixes, as Malayalam morphology almost always involves suffixes added to the end of the stem.
After this two-step process, we are able to obtain the stems and suffixes for a given word.
Morpheme Count
By simply summing the number of stems and suffixes, the total number of morphemes contained in each word is computed. For example, the word സമ്പത്സമൃദ്ധിയും (sampatsamr̥d'dhiyum), meaning "prosperity", is a compound word split into the constituent words സമ്പത്ത് (sampatt), meaning "richness", and സമൃദ്ധിയും (samr̥d'dhiyum), meaning "and plentiful". സമൃദ്ധിയും (samr̥d'dhiyum) is further stemmed into the stem സമൃദ്ധി (samr̥d'dhi), meaning "plentiful", and the suffix -ും (um), meaning "and". സമ്പത്ത് (sampatt) is a root word. Thus, the number of morphemes in this case is three, counting the two stems and one suffix.
Based on this pre-processing, we then calculate the total number of morphemes for each whole word and then scale this number between 0 and 1 to give a morpheme score. We note that there could be several different ways to compute the morpheme score, as affixes themselves are not all alike. In this preliminary study, it was not immediately apparent how the differing costs for various affixes could be calculated. Additionally, fine-grained information regarding the morphological properties of the affixes (e.g. whether they were inflectional or derivational) was not easily obtained with existing tools and resources. In future work, we plan to explore this possibility by enhancing the morphological analyzer's output.

2 https://github.com/libindic/indicstemmer
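A sketch of the morpheme score is given below; split_compound and stem are placeholders for the SandhiSplitter and IndicStemmer steps described above, and their real interfaces differ from these simplified signatures.

```python
def morpheme_count(word, split_compound, stem):
    """Number of stems plus suffixes after compound splitting and stemming."""
    morphemes = []
    for part in split_compound(word):     # compound word -> constituent words
        root, suffixes = stem(part)       # constituent word -> (stem, suffix list)
        morphemes.append(root)
        morphemes.extend(suffixes)
    return len(morphemes)

def morpheme_scores(words, split_compound, stem):
    """Morpheme counts scaled linearly to the interval [0, 1]."""
    counts = {w: morpheme_count(w, split_compound, stem) for w in words}
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1
    return {w: (c - lo) / span for w, c in counts.items()}
```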
Orthography Metric
Malayalam is an alphasyllabic writing system that has its source in the Vatteluttu alphabet from the 9th century. Its modern alphabets have been borrowed from the Grantha alphabet. It consists of 15 vowels and 36 consonant letters.
We devised a script score based on the complexity of the script in the following three ways:
Mismatch in Spoken and Visual Order
In the alpha-syllabic script of Malayalam, vowels may either appear as letters at the beginning of a word or as diacritics. Consonants themselves are understood to have an inherent schwa, which is not separately represented. A diacritic will appear either to the left or to the right of the consonant it modifies. If it appears to the left, there will be a discrepancy between the phonemic and the orthographic order, as the vowel will always be pronounced after the consonant, but appears before the consonant in the text. For example:

ക + ◌െ = കെ (ka + e = ke)

Here the vowel violates the order in which it is spoken. Similarly: ക + ◌േ = കേ (ka + ē = kē), as seen in കേൾക്കുക (kēḷkkuka), meaning "hear". Such inconsistencies in spoken and visual order have been shown to incur a cost in Hindi word recognition (which is also an alpha-syllabic script) (Vaid and Gupta, 2002).
In order to capture the lexical processing cost for such a discrepancy, we give a penalty of 1 every time it occurs in the word.
Diacritic Appearing Above or Below
In Malayalam, a diacritic may also appear above or below a consonant. In such a case, we give a penalty of 0.5 to the word. For example, the symbol ◌്, also known as virama, is used to replace the inherent schwa sound of a consonant with ŭ, as in ക + ◌് = ക് (ka + virama = kŭ).
Ligatures and Consonant Clusters
A penalty of one is assigned for every two letters that form a composite glyph. For example: മന്ത്രി (mantri) = മന് + ത്രി (man + tri), where the new composite glyph is ന്ത്ര (ntra). With the above complexity rules in place, the total penalty cost for each whole word is calculated. The total penalty for each word is then scaled linearly to between 0 and 1 to give us an orthographic score.
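The penalty assignment can be approximated as below. The character sets are illustrative and incomplete (only the left-attaching vowel signs e, ē, ai and the virama are listed), and treating every word-internal virama as signalling a composite glyph is a rough simplification of the rules above.

```python
PRE_BASE_VOWEL_SIGNS = {"\u0d46", "\u0d47", "\u0d48"}  # vowel signs written left of the consonant
VIRAMA = "\u0d4d"
ABOVE_BELOW_SIGNS = {VIRAMA}                           # extend with further above/below signs

def script_penalty(word):
    penalty = 0.0
    for i, ch in enumerate(word):
        if ch in PRE_BASE_VOWEL_SIGNS:
            penalty += 1.0    # mismatch between spoken and visual order
        if ch in ABOVE_BELOW_SIGNS:
            penalty += 0.5    # diacritic placed above or below the consonant
        if ch == VIRAMA and 0 < i < len(word) - 1:
            penalty += 1.0    # consonant cluster forming a composite glyph
    return penalty            # scaled to [0, 1] over the corpus, as for the other scores
```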
Evaluation of the Complexity Metric
In order to evaluate our lexical complexity metric, we used a lexical decision task paradigm to collect reaction times for a sample of Malayalam words. More complex words should result in longer reaction times, and vice versa. This would help us evaluate whether our lexical complexity model can predict reaction times for the given set of words. We used a well-understood experimental paradigm in the form of a lexical decision task: in such a setup, a participant sees a word stimulus on a screen which they have to classify as either a word or a non-word using a button press. The response time (RT) is calculated from the point the word appears on the screen to the point where the participant presses the response button.
Materials
Our task consisted of a balanced set of 50 Malayalam words and 50 pseudowords. Pseudowords follow the phonotactics of the language, but have no lexical meaning (i.e., they are not legitimate words). In order to select words for the task, two sets of 25 words were randomly sampled from the unique tokens obtained from the Leipzig Corpus. The first set was randomly sampled from words with a frequency score in the range of 0.1 to 0.4 to obtain high frequency words as calculated by the metric. The second set was chosen similarly but with a frequency score in the range of 0.7 to 0.9 to yield low frequency words. If a sampled word turned out to be an English word written in Malayalam or happened to be a proper noun, it was replaced with another until both sets had 25 words each.

Figure 1: Stimulus word shown for 2500 ms. The first word is a proper Malayalam word ("vivaraṅṅaḷ", meaning "information"), hence the correct response is to press the 'a' key. The second word is a non-word (vamittam) and therefore the correct response is to press the 'l' key.
The pseudowords were constructed in keeping with the phonotactics of Malayalam. Both the pseudowords and the valid words were constrained in length between 6 and 14 characters. Note that we do not take into consideration the reaction times for the pseudowords; they are simply distractors for the participants.
Participants
Participants included 38 students from S.N. College, Kerala, who volunteered for the study. Participants included 20 females and 18 males between the ages of 18 and 23 (mean age of 19.7). All participants were native speakers of Malayalam and had formal education in Malayalam up to grade 10.
Procedure
Participants were tested individually on a computer running the lexical decision task on the JsPsych stimulus presentation software (De Leeuw, 2015). Each participant was asked to press either the 'a' key or the 'l' key for word and non-word respectively. The order of words and pseudowords was randomized for each participant. Participants were instructed to read the word presented and respond with the appropriate button press. Each trial consisted of a word that was presented for 2500ms. A fixation cross was placed in the center for 1600ms between each trial. The first 10 trials were practice trials from a word set different from the study. This enabled participants to get familiarized with the task.
Results
The trials belonging to those who scored below 70% in word-non-word accuracy were excluded, which brought the number of participants to 35.
We fit a linear model using the lm function in R. Log reaction times were used with frequency, script and morph as the covariates. Figure 2 shows that the three variables are not highly correlated in our test set. Table 1 shows the results of the regression analysis. The main inference we can draw from the results is that the variables Script, Morphology and Frequency have a significant effect (all p-values < 0.05) on reaction times (RTs), such that a high cost of script, morph and frequency leads to higher RTs.
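The analysis uses R's lm(); an equivalent formula-based fit can be sketched in Python with statsmodels, where the file and column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical trial-level file with columns rt, script, freq, morph
df = pd.read_csv("lexical_decision_trials.csv")
df["log_rt"] = np.log(df["rt"])

# log reaction times with the three costs as covariates, including all interactions
# (mirrors R's lm(log_rt ~ script * freq * morph))
model = smf.ols("log_rt ~ script * freq * morph", data=df).fit()
print(model.summary())
```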
In addition, the results also indicate a marginal interaction between Script and Morphology (p=0.06), such that an increase in the script complexity leads to larger increases in RTs for morphologically simpler words (Cost < 0.9) compared to morphologically complex words (Cost > 0.9) (see Figure 3). There is also a marginal interaction between Morphology and Frequency (p=0.08), such that an increase in the frequency cost leads to higher reaction times in morphologically complex words as compared to morphologically simpler words (see Figure 4).
Discussion
Our results replicate the robust effects of frequency on lexical processing in Malayalam.
As frequency is a known predictor of reaction times, we expected to find a significant effect for frequency, but we particularly wanted to understand the effect of morphology and orthography on word recognition in Malayalam. Orthographic complexity as captured by diacritic placement and ligatures also has a significant effect on lexical processing. Similarly, we also find an effect for morphological complexity in terms of the number of morphemes in a word.
The interactions in our model point to an interesting relationship between high frequency words and morphological complexity. It appears that the effect of frequency cost becomes more pronounced in more complex words. In other words, low frequency words lead to higher reaction times particularly when they are morphologically complex. Perhaps this is because the cost of lexical decomposition is higher in these words. On the other hand, the effect size of script is weaker and becomes visible only when the word is morphologically simple. When the word is morphologically complex, this effect is not very apparent.
This work points to many interesting future avenues for exploring lexical complexity in an agglutinative language like Malayalam. Particularly, the effect of morphological complexity on factors like frequency need to be explored more thoroughly. In the future, we plan to carry out experiments with a larger set of items for the lexical decision task, as this was a preliminary study. We also plan to experiment with other measures of morphological complexity that take into account information about the type as well as the number of morphemes.
Figure 2: Heat plot showing correlation between the three variables in our test data
Figure 3: Interaction between Morphological Complexity and Script Complexity
Figure 4: Interaction between Morphological Complexity and Frequency Cost. Note that a low Frequency Cost corresponds to a high Frequency Count for a word
                    Estimate   Std. Error   t-value   p-value
(Intercept)             4.30        0.679      6.35    0
Script                  9.157       3.76       2.43    0.015 *
Freq                    2.87        0.96       2.97    0.003 **
Morph                   1.91        0.71       2.67    0.007 **
Script:Freq            -3.171       5.77      -0.55    0.58
Script:Morph           -7.64        4.1       -1.873   0.06 .
Freq:Morph             -1.79        1.03      -1.743   0.08 .
Script:Freq:Morph       0.28        6.31       0.045   0.96

Table 1: Results for all three variables and their interactions. Script and Morphological Complexity as well as Frequency and Morphological Complexity show a significant interaction.
1 https://anoopkunchukuttan.github.io/indic_nlp_library/
Balota, D. A., Yap, M. J., and Cortese, M. J. (2006). Visual word recognition: The journey from features to meaning (a travel update). In Handbook of Psycholinguistics, pages 285-375. Elsevier.
De Leeuw, J. R. (2015). jsPsych: A JavaScript library for creating behavioral experiments in a web browser. Behavior Research Methods, 47(1):1-12.
Devadath, V. V., Kurisinkel, L. J., Sharma, D. M., and Varma, V. (2014). A sandhi splitter for Malayalam. In Proceedings of the 11th International Conference on Natural Language Processing, pages 156-161.
Goldhahn, D., Eckart, T., and Quasthoff, U. (2012). Building large monolingual dictionaries at the Leipzig Corpora Collection: From 100 to 200 languages. In LREC, volume 29, pages 31-43.
Husain, S., Vasishth, S., and Srinivasan, N. (2015). Integration and prediction difficulty in Hindi sentence comprehension: Evidence from an eye-tracking corpus. Journal of Eye Movement Research, 8(2):1-12.
Katz, L. and Frost, R. (1992). The reading process is different for different orthographies: The orthographic depth hypothesis. In Advances in Psychology, volume 94, pages 67-84. Elsevier.
Rayner, K. and Duffy, S. A. (1986). Lexical complexity and fixation times in reading: Effects of word frequency, verb complexity, and lexical ambiguity. Memory & Cognition, 14(3):191-201.
Taft, M. and Forster, K. I. (1975). Lexical storage and retrieval of prefixed words. Journal of Verbal Learning and Verbal Behavior, 14(6):638-647.
Vaid, J. and Gupta, A. (2002). Exploring word recognition in a semi-alphabetic script: The case of Devanagari. Brain and Language, 81(1-3):679-690.
Verma, A., Sikarwar, V., Yadav, H., Ranjith, J., and Kumar, P. (2018). Shabd: A psycholinguistics database for Hindi words. In Proceedings of ACCS 2018. |
||
7,549,275 | Low-cost Customized Speech Corpus Creation for Speech Technology Applications | Speech technology applications, such as speech recognition, speech synthesis, and speech dialog systems, often require corpora based on highly customized specifications. Existing corpora available to the community, such as TIMIT and other corpora distributed by LDC and ELDA, do not always meet the requirements of such applications. In such cases, the developers need to create their own corpora. The creation of a highly customized speech corpus, however, could be a very expensive and time-consuming task, especially for small organizations. It requires multidisciplinary expertise in linguistics, management and engineering as it involves subtasks such as the corpus design, human subject recruitment, recording, quality assurance, and in some cases, segmentation, transcription and annotation. This paper describes LDC's recent involvement in the creation of a low-cost yet highly-customized speech corpus for a commercial organization under a novel data creation and licensing model, which benefits both the particular data requester and the general linguistic data user community. | [
18349465,
46057261
] | Low-cost Customized Speech Corpus Creation for Speech Technology Applications
Kazuaki Maeda maeda@ldc.upenn.edu
Linguistic Data Consortium University of Pennsylvania
3600 Market St., Suite 810, Philadelphia, PA 19104, U.S.A.
Christopher Cieri ccieri@ldc.upenn.edu
Linguistic Data Consortium University of Pennsylvania
3600 Market St., Suite 810, Philadelphia, PA 19104, U.S.A.
Kevin Walker walkerk@ldc.upenn.edu
Linguistic Data Consortium University of Pennsylvania
3600 Market St., Suite 810, Philadelphia, PA 19104, U.S.A.
Low-cost Customized Speech Corpus Creation for Speech Technology Applications
Speech technology applications, such as speech recognition, speech synthesis, and speech dialog systems, often require corpora based on highly customized specifications. Existing corpora available to the community, such as TIMIT and other corpora distributed by LDC and ELDA, do not always meet the requirements of such applications. In such cases, the developers need to create their own corpora. The creation of a highly customized speech corpus, however, could be a very expensive and time-consuming task, especially for small organizations. It requires multidisciplinary expertise in linguistics, management and engineering as it involves subtasks such as the corpus design, human subject recruitment, recording, quality assurance, and in some cases, segmentation, transcription and annotation. This paper describes LDC's recent involvement in the creation of a low-cost yet highly-customized speech corpus for a commercial organization under a novel data creation and licensing model, which benefits both the particular data requester and the general linguistic data user community.
Introduction
The Linguistic Data Consortium (LDC) is a non-profit organization, whose primary mission is to support education, research and technology development in languagerelated disciplines. LDC creates and disseminates linguistic resources for this mission; it does not create linguistics resources to benefit a single organization. There have been, however, strong interests from commercial and noncommercial organizations to subcontract LDC to create customized speech corpora. In response to such data requests from organizations while meeting its mission goals, LDC has created a "delayed release" model of data creation and licensing. Under this model, the data requester (also the sponsor) subcontracts LDC to create a speech corpus meeting their specifications. The data requester funds the creation of the corpus, and in return, benefits from a lead time of typically eighteen months, in which the data requester has the exclusive rights to use the data. The corpus is customized to their needs, and the effort of communicating needs to an outside group generally has a clarifying effect; in the process, they may learn of approaches or technologies they had not considered. After the lead time, which begins when the corpus is delivered, LDC releases the corpus to LDC members and non-members at a significantly reduced cost.
Speech Controlled Computing Corpus
The Speech Controlled Computing (SCC) corpus was the first corpus created at LDC under this model. It was developed for limited-vocabulary speech recognition applications targeting a wide variety of American English speakers. The data requester's idea was to have a set of recordings of isolated words and short phrases of the kind one would use to control household appliances, recorded so as to be representative of most American speakers. LDC and the data requester agreed to have a pool of speakers that represented each of four regional groups, three age groups and two gender groups. To meet this first challenge, we conducted a recruitment effort to meet these demographic requirements. In order to ensure that recordings were consistent and of the highest possible quality, all recordings were to be done in a recording booth in the LDC suite. This limited the pool of possible participants to Philadelphia-region residents. Another challenge we faced was the paucity of male subjects willing to participate, particularly in the mid and high age groups. One of the most effective solutions to this problem was to recruit subjects who had participated in our previous studies, such as Fisher and Mixer. LDC keeps a database of speakers who participated in past projects. LDC's human subject recruitment team contacted possible participants living in the Philadelphia region for each of the 24 demographic groups shown in Table 1. The recordings were performed in LDC's sound-attenuating recording room. Prior to the production recordings, we conducted a series of pilot recordings to test whether our recording method met the requirements set by the data requester. A number of microphones, including headset-mounted microphones and stand-mounted microphones, were tested, and sample recordings were sent to the data requester to determine the best recording method for them. In order to facilitate efficient recordings, LDC developed an infrastructure to control and monitor recording sessions, the hardware components of which include hard disk-based digital recording with back-up recordings to DAT tapes. A software-based prompter was created to display the word list in a randomized order to speakers, and to control the progress of the recordings. An assistant recording engineer monitored the recording sessions outside of the recording booth. The digitized sound files were initially segmented into individual tokens using an automatic acoustic segmentation program. As the corpus specifications required all tokens to be manually reviewed and the segmentation to be corrected, we created a specialized annotation tool for these purposes. The resulting auditing and segmentation tool utilizes the Annotation Graph Toolkit (AGTK), Snack and WaveSurfer (Sjölander and Beskow, 2000). The WaveSurfer module provides the ability to play back audio, display waveforms and compute spectrograms. The spectrograms provide the auditor/segmenter with an effective means to judge what is spoken and where each utterance begins and ends. Our experience demonstrates that LDC annotators, even those who are not necessarily trained phoneticians, were able to recognize the key features of spectrograms after a short training period. However, student workers with a strong interest in languages and/or linguistics were particularly well suited for this task.
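As an illustration of the automatic first-pass segmentation, a generic energy-threshold approach is sketched below; this is not the actual program used at LDC, and the frame length and threshold are arbitrary.

```python
import numpy as np

def segment_by_energy(samples, rate, frame_ms=10, threshold=0.02):
    """Group frames whose RMS energy exceeds a threshold into candidate tokens.

    samples: 1-D float numpy array; returns (start, end) times in seconds."""
    frame = int(rate * frame_ms / 1000)
    n = len(samples) // frame
    rms = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n)])
    voiced = rms > threshold
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segments.append((start * frame / rate, i * frame / rate))
            start = None
    if start is not None:
        segments.append((start * frame / rate, n * frame / rate))
    return segments
```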
Delayed Release Licensing Model
The delayed release model used for the SCC corpus benefits both the individual data requester and the international user community of linguistic resources. In this model the data requester, who is also the sponsor of the project, receives a lead time of typically 18 months. During this time, the data requester has the exclusive rights to use the data. After this time, the data set is published as an LDC general publication to LDC members and non-members. This is documented in the statement of work, and is agreed upon between the data requester and LDC at the time of the contract. The data set released to the LDC members and nonmembers is essentially identical to what is delivered to the data requester. In the case of the SCC corpus, only the following changes were made to the general publication:
• File names used to identify the names of the subjects were changed to anonymous subject names.
• Mentions of sponsor names were removed from the documentation.
• The regions of silence before and after each utterance were extended from 10 ms to 100 ms (a sketch of such padding is given below).

Figure 1 illustrates the delayed release licensing model.
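A sketch of the silence-padding step follows; reading and writing via the soundfile package is one possible choice, and the 90 ms padding assumes the original files already carry 10 ms margins.

```python
import numpy as np
import soundfile as sf

def repad(in_path, out_path, pad_ms=90):
    """Extend the leading and trailing silence by pad_ms milliseconds of zeros."""
    samples, rate = sf.read(in_path)
    pad = np.zeros((int(rate * pad_ms / 1000),) + samples.shape[1:], dtype=samples.dtype)
    sf.write(out_path, np.concatenate([pad, samples, pad]), rate)
```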
Carpooling Philosophy
The carpooling philosophy here refers to the grouping of potential data requesters/sponsors in order to reduce the cost of customized speech corpus creation. Many speech corpus creation efforts are similar in terms of speaker and recording requirements. For example, if three projects require 50 native speakers of English to be recorded in a sound booth, it is much more cost and time efficient for us to record each subject for all of these projects at the same time, than to have each speaker come in three times. Similarly the corpus design for multiple projects may be very similar so that we can merge multiple data sets into one. This will also significantly reduce the cost for the data requesters.
Solidifying In-house Infrastructure and Knowledge Base
Overview
In order for this venture to be successful, LDC will need to solidify its expertise in customized speech corpus creation. LDC's existing infrastructure and knowledge bases will need constant enhancement and improvement. In the following sections, we discuss not only our current areas of expertise in creating unique speech corpora, but also our plans for strengthening our approaches and methods in this effort.
Corpus Design
The first step in creating a customized speech corpus is to design the corpus (Gibbon et al., 1997). This should be discussed throughly between the data requester and LDC. While the data requester may or may not be an expert in corpus design methodologies, LDC employs experts in various fields of Linguistics, including Sociolinguistics and Phonetics, as well as experts in Speech Technologies, and can assist the date requester to define the corpus needs.
Human Subject Recruitment
In the past, the LDC has recruited human subjects to participate in both telephone conversation recording sessions and in-house recording sessions. The current Mixer study records native speakers of various languages 1 (Cieri et al., 2006). The participants in these studies are likely to be interested in participating similar projects. We keep a database of our past participants and inquirers, and expect this database to grow. In addition, LDC, as part of the University of Pennsylvania, has access to student populations from various parts of the country and the world.
Recording Infrastructure and Methodologies
LDC offers an extensive telephone speech collection system, as well as facilities to record subjects in-house. In-house recordings are made in a sound-attenuated recording booth. The recording booth is equipped with stand-mounted microphones and headset-mounted microphones as well as a monitor display, which may be used to show software-controlled prompts. The recorded sounds are sent to the monitoring station outside of the booth. The recording and digitization can be made to both a DAT tape and a hard disk drive. LDC has a multi-channel digital recording system that can be used to record multiple speakers simultaneously, or to record a single speaker with multiple microphones.
The current setup requires a recording assistant to monitor the recording level and the speaker's pronunciation. If a particular word or phrase needs to be repeated, the recording assistant operates the prompting software to show the prompt for the same word or phrase again. Some of these tasks, such as monitoring the recording level could be automated. We plan to create an automated real-time quality control system that checks for recording problems.
Auditing and Annotation
LDC has expertise in transcription and annotation of spoken corpora by maintaining a well-training staff of transcribers, annotators and experts in these fields. In addition, LDC has in-house software developers who are well experienced in designing and developing customized annotation tools (Maeda et al., 2006;Bird et al., 2002), such as the auditing and segmentation tool used in the creation of the SCC corpus. The tool displays a spectrogram which allows the auditor to identify words and word boundaries visually, as shown in Figure 2.
Financial Considerations
Recruitment, recording methodologies and annotation are only part of the picture of speech corpus design. Additionally, LDC must estimate the cost required for a customized speech corpus creation effort. If the cost estimate is too low, LDC will lose money and the time of the staff members involved. If the cost estimate is too high, then it may be difficult to attract subcontractors to LDC. It is also important for LDC to compensate the subjects at the right rate. We consider these subjects to be an extremely valuable resource; we hope that the subjects will come back and participate in future studies. Both undercompensation and overcompensation hurt us in the long run. The experience from the SCC corpus creation, as well as from studies such as Mixer and Fisher, gave us some good ideas about these aspects of speech corpus creation. We will analyze the financial aspects of each study at various stages, and will incorporate the results into our knowledge base.

1 http://mixer.ldc.upenn.edu
Commercial Sponsorship and Other Possibilities
The delayed release licensing model allows LDC to create speech corpora for commercial organizations. This model may also be an attractive option for research and academic organizations who need to create speech corpora for their research. The lead time allows the researchers use the data exclusively and publish the results. The researchers then can cite the LDC data publication as the data used in their study, allowing the readers to access their data.
Conclusion
Speech corpus creation under the new delayed release licensing model presents a number of attractive advantages to the data requester, LDC and the linguistic data user community. The data requester indeed receives the customized corpus they require, utilizing LDC's multidisciplinary expertise in linguistic data creation. LDC, on the other hand, seizes the opportunity to strengthen its expertise, and in some cases to enhance its infrastructure and to add to its subject database, all of which benefit its user community, including future commercial users. The linguistic data user community, speech technology researchers and linguistic researchers alike, are able to access the data through LDC publications after the lead time has passed. LDC has created speech corpora of various types, including telephone conversation recordings, meeting recordings, multimodal recordings and other specialized corpora, such as the Emotional Prosody speech corpus, in which actors and actresses simulated various types of emotional speech. We expect that the new data creation model will provide LDC with new opportunities to utilize our expertise in creating various speech corpora.
Figure 1: Delayed Release Licensing Model
Region     Male                Female
           Young  Mid  Old     Young  Mid  Old
North        7     6    3        6     6    5
South        5     5    3        7     3    6
Midland      9     8    6        8     7    5
West         5     3    1        5     3    2

Table 1: Speakers in the SCC corpus
Table 2 shows a partial list of the words recorded for the SCC corpus.

alarm     answer    answer arm    back
balance   bass      brake         call
camera    cancel    CD            channel   clear
close     computer  control       cook

Table 2: A partial word list for SCC
Figure 2: SCC Auditing and Segmenting Tool

Bird, S., Maeda, K., Ma, X., Lee, H., Randall, B., and Zayat, S. (2002). TableTrans, MultiTrans, InterTrans and TreeTrans: Diverse tools built on the Annotation Graph Toolkit. In Proceedings of the Third International Conference on Language Resources and Evaluation.
Cieri, C., Andrews, W., Campbell, J. P., Doddington, G., Godfrey, J., Huang, S., Liberman, M., Martin, A., Nakasone, H., Przybocki, M., and Walker, K. (2006). The Mixer and transcript reading corpora: Resources for multilingual, crosschannel speaker recognition research. In Proceedings of the Fifth International Conference on Language Resources and Evaluation.
Gibbon, D., Moore, R., and Winski, R., editors (1997). Handbook of Standards and Resources for Spoken Language Systems: Spoken Language System and Corpus Design, volume 1. Mouton de Gruyter.
Maeda, K., Lee, H., Medero, J., and Strassel, S. (2006). A new phase in annotation tool development at the Linguistic Data Consortium: The evolution of the Annotation Graph Toolkit. In Proceedings of the Fifth International Conference on Language Resources and Evaluation.
Sjölander, K. and Beskow, J. (2000). WaveSurfer - an open source speech tool. In Proceedings of the 6th International Conference on Spoken Language Processing. http://www.speech.kth.se/wavesurfer/. |
213,884,007 | [] | Arguments and adjuncts
Friday 30th August 2019
Adam Przepiórkowski
University of Warsaw / Polish Academy of Sciences
University of Oxford
Arguments and adjuncts
Friday 30th August 2019. SyntaxFest 2019, 26-30 August, Paris. Invited Talk.
Linguists agree that the phrase "two hours" is an argument in "John only lost two hours" but an adjunct in "John only slept two hours", and similarly for "well" in "John behaved well" (an argument) and "John played well" (an adjunct). While the argument/adjunct distinction is hardwired in major linguistic theories, Universal Dependencies eschews this dichotomy and replaces it with the core/non-core distinction. The aim of this talk is to add support to the UD approach by critically examinining the argument/adjunct distinction. I will suggest that not much progress has been made during the last 60 years, since Tesnière used three pairwise-incompatible criteria to distinguish arguments from adjuncts. This justifies doubts about the linguistic reality of this purported dichotomy. But -given that this distinction is built into the internal machinery and/or resulting representations of perhaps all popular linguistic theories -what would a linguistic theory not making such an argument-adjunct distinction look like? I will briefly sketch the main components of such an approach, based on ideas from diverse corners of linguistic and lexicographic theory and practice.
Short bio
Adam Przepiórkowski is a full professor at the University of Warsaw (Institute of Philosophy) and at the Polish Academy of Sciences (Institute of Computer Science). As a computational and corpus linguist, he has led NLP projects resulting in the development of various tools and resources for Polish, including the National Corpus of Polish and tools for its manual and automatic annotation, and has worked on topics ranging from deep and shallow syntactic parsing to corpus search engines and valency dictionaries. As a theoretical linguist, he has worked on the syntax and morphosyntax of Polish (within Head-driven Phrase Structure Grammar and within Lexical-Functional Grammar), on dependency representations of various syntactic phenomena (within Universal Dependencies), and on the semantics of negation, coordination and adverbial modification (at different periods, within Glue Semantics, Situation Semantics and Truthmaker Semantics). He is currently a visiting scholar at the University of Oxford. |
||
9,225,825 | Software Requirements: A new Domain for Semantic Parsers | Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements. | [] | Software Requirements: A new Domain for Semantic Parsers
June 26
Michael Roth† (mroth@inf.ed.ac.uk), Themistoklis Diamantopoulos‡, Ewan Klein†, Andreas Symeonidis‡
†ILCC, School of Informatics, University of Edinburgh
‡Electrical and Computer Engineering Department, Aristotle University of Thessaloniki
Software Requirements: A new Domain for Semantic Parsers
Proceedings of the ACL 2014 Workshop on Semantic Parsing
Baltimore, Maryland, USA, June 26 2014
Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.
Introduction
During the process of software development, developers and customers typically discuss and agree on requirements that specify the functionality of a system that is being developed. 1 Such requirements play a crucial role in the development lifecycle, as they form the basis for actual implementations, corresponding work plans, cost estimations and follow-up directives (van Lamsweerde, 2009). In general, software requirements can be expressed in various different ways, including the use of UML diagrams and storyboards. Most commonly, however, expectations are expressed in natural language (Mich et al., 2004), as shown in Example (1):
(1) A user should be able to login to his account.

While requirements expressed in natural language have the advantage of being intelligible to both clients and developers, they can of course also be ambiguous, vague and incomplete. Although formal languages could be used as an alternative that eliminates some of these problems, customers are rarely equipped with the mathematical and technical expertise for understanding highly formalised requirements. To benefit from the advantages of both natural language and formal representations, we propose to induce the latter automatically from text in a semantic parsing task. Given the software requirement in Example (1), for instance, we would like to construct a representation that explicitly specifies the types of the entities involved (e.g., object(account)) and that captures explicit and inferable relationships among them (e.g., owns(user, account)). We expect such formal representations to be helpful in detecting errors at an early stage of the development process (e.g., via logical inference and verification tools), thus avoiding the costs of finding and fixing problems at a later and hence more expensive stage (Boehm and Basili, 2001).

Footnote 1: Although software engineering can also involve non-functional requirements, which describe general quality criteria of a system, this paper is only concerned with functional requirements, i.e., requirements that specify the behavior of a system.
Given the benefits of formal representations, we believe that software requirements constitute a useful application domain for semantic parsers. Requirement texts naturally occur in the real world and appropriate data sets can thus be constructed without setting up artificial tasks to collect them. Parsing requirements of different software projects also poses interesting challenges as texts exhibit a considerable amount of lexical variety, while frequently also containing more than one relation per sentence.
Related Work
A range of methods have been proposed in previous work to (semi-)automatically process requirements written in plain, natural language text and map them to formal representations. To the best of our knowledge, Abbott (1983) was the first to introduce a technique for extracting data types, variables and operators from informal texts describing a problem. The proposed method follows a simple rule-based setup, in which common nouns are identified as data types, proper nouns as objects and verbs as operators between them. Booch (1986) described a method of similar complexity that extends Abbott's approach to object-oriented development. Saeki et al. (1989) implemented a first prototype that automatically constructs object-oriented models from informal requirements. As proposed by Abbott and Booch, the system is based on automatically extracted nouns and verbs. Although Saeki et al. found resulting object diagrams of reasonable quality, they concluded that human intervention was still necessary to distinguish between words that are relevant for the model and irrelevant nouns and verbs. Nanduri and Rugaber (1995) proposed to further automate object-oriented analysis of requirement texts by applying a syntactic parser and a set of post-processing rules. In a similar setting, Mich (1996) employed a full NLP pipeline that contains a semantic analysis module, thus omitting the need for additional post-processing rules. More recent approaches include those by Harmain and Gaizauskas (2003) and Kof (2004), who relied on a combination of NLP components and human interaction. Whereas most approaches in previous work aim to derive class diagrams, Ghosh et al. (2014) proposed a pipeline architecture that converts syntactic parses to logical expressions via a set of heuristic post-processing rules.
Despite this seemingly long tradition, previous methods for processing software requirements have tended to depend on domain-specific heuristics and knowledge bases or have required additional user intervention. In contrast, we propose to utilize annotated data to learn how to perform semantic parsing of requirements automatically.
Data Set
Given our conviction that mapping natural language software requirements to formal representations provides an attractive challenge for semantic parsing research, we believe that there is a more general benefit in building a corpus of annotated requirements. One immediate obstacle is that software requirements can drastically differ in quality, style and granularity. To cover a range of possible differences, we asked lecturers from several universities to provide requirement documents written by students. We received requirement documents on student projects from various domains, including embedded systems, virtual reality and web applications. 2 From these documents, we extracted lists of requirements, each of which is expressed within a single sentence. We additionally collected single sentence requirements within the S-CASE project, describing industrial prototypes of cloud-based web services. 3 Table 1 gives an overview of the quantity of requirements collected. We observe that the number of requirements received for student projects is much higher. The token counts reveal however that requirements written for industrial prototypes are longer on average (16.6 vs. 11.6 words). This observation might be related to the fact that students in software engineering classes are often provided with explicit guidelines on how to concisely express requirements in natural language. As a consequence, we also find their requirement texts to be more regimented and stylised than those written by senior software engineers. Examples (2) and (3) show examples of a student-written and developer-written requirement, respectively.
(2) The user must be able to vote on polls.
(3) For each user contact, back-end must perform a check to determine whether the contact is a registered user or not.
In comparison to two extant data sets, namely GeoQuery880 (Tang, 2003) and Free917 (Cai and Yates, 2013), we find that our collection is still relatively small in terms of example sentences. The difference in total number of tokens is not as crucial, however, given that sentences in our data set are much longer on average. We further observe that the token/type ratio in our texts lies somewhere between ratios reported in previous work. Based on the observed lexical variety and average sentence length, we expect our texts to be challenging but not too difficult to parse using existing methods.
Modeling Requirements Conceptually
Different representations have been proposed for modeling requirements in previous work: whereas early work focused on deriving simple class diagrams, more recent approaches suggest representing requirements via logical forms (cf. Section 2). In this paper, we propose to model requirements using a formal ontology that captures general concepts from different application domains. Our proposed ontology covers the same properties as earlier work and provides a means to represent requirements in logical form. In practice, such logical forms can be induced by semantic parsers and in subsequent steps be utilized for automatic inference. The class hierarchy of our ontology is shown in Figure 1. At the highest level of the class hierarchy, we distinguish between "things" (ThingType) and "operations" (OperationType).
ThingType
We define the following subclasses of ThingType:
• A Participant is a thing that is involved in an operation. We further subdivide Participants into Actors, which can be users of a system or the system itself, and Objects.
• A Property is an attribute of an Object or a characteristic of an OperationType.
OperationType
We further divide operations into the following subclasses:
• An Action describes an operation that is performed by an Actor on one or several Object(s).
• A State is an operation that describes the status of an Actor.
• Ownership is used to model operations that express possession.
• Emergence represents operations that undergo passive transformation.
Relations
In addition to the class hierarchy, we define a set of relations between classes, which describe and constrain how different operations and things can interact with each other. On the level of OperationType, every operation can be assigned one Actor via the relations HAS ACTOR or HAS OWNER, respectively. Objects can participate in Actions, States and Ownerships via the relations ACTS ON, HAS STATE and OWNS, respectively. Every instance of OperationType and Object can further have an arbitrary number of properties assigned to it via the relation HAS PROPERTY.
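To make the hierarchy and the relation signatures concrete, the following is a minimal illustrative sketch in Python. The class and relation names follow the ontology above; the concrete encoding (dictionaries, the check function, underscore spellings such as HAS_ACTOR) is our own assumption, not the authors' annotation tooling.

```python
# Illustrative encoding of the requirements ontology (assumed, not the authors' code).
CLASS_HIERARCHY = {
    "ThingType": None, "Participant": "ThingType", "Actor": "Participant",
    "Object": "Participant", "Property": "ThingType",
    "OperationType": None, "Action": "OperationType", "State": "OperationType",
    "Ownership": "OperationType", "Emergence": "OperationType",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or inherits from it in the hierarchy."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = CLASS_HIERARCHY[cls]
    return False

# Relation signatures: allowed subject classes and the required object class.
RELATIONS = {
    "HAS_ACTOR":    (("OperationType",), "Actor"),
    "HAS_OWNER":    (("Ownership",), "Actor"),
    "ACTS_ON":      (("Action",), "Object"),
    "HAS_STATE":    (("State",), "Object"),
    "OWNS":         (("Ownership",), "Object"),
    "HAS_PROPERTY": (("OperationType", "Object"), "Property"),
}

def check(instances, triples):
    """Report violations of the relation signatures and of the constraint that
    every operation is assigned at most one Actor (HAS_ACTOR / HAS_OWNER)."""
    errors = []
    for rel, subj, obj in triples:
        domains, rng = RELATIONS[rel]
        if not any(is_a(instances[subj], d) for d in domains):
            errors.append(f"{rel}({subj},{obj}): subject is not a {'/'.join(domains)}")
        if not is_a(instances[obj], rng):
            errors.append(f"{rel}({subj},{obj}): object is not a {rng}")
    for op, cls in instances.items():
        if is_a(cls, "OperationType"):
            actors = [t for t in triples if t[0] in ("HAS_ACTOR", "HAS_OWNER") and t[1] == op]
            if len(actors) > 1:
                errors.append(f"{op}: more than one actor/owner")
    return errors

# Example (1): "A user should be able to login to his account."
instances = {"user": "Actor", "login": "Action", "account": "Object", "o1": "Ownership"}
triples = [("HAS_ACTOR", "login", "user"), ("ACTS_ON", "login", "account"),
           ("HAS_OWNER", "o1", "user"), ("OWNS", "o1", "account")]
assert check(instances, triples) == []
```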
Annotation Process
In preliminary annotation experiments, we found that class diagrams may be too simple to represent requirements conceptually. Logical forms, on the other hand, can be difficult to use for annotators without sufficient background knowledge. To keep the same level of expressiveness as logical forms and the simplicity of object-oriented annotations, we propose a multi-step annotation scheme, in which decisions in one iteration are further refined in later iterations.
By adopting the class hierarchy introduced in Section 4, we can naturally divide each annotation iteration according to a level in the ontology. This means that in the first iteration, we ask annotators to simply mark all instances of ThingType and OperationType that are explicitly expressed in a given requirement. We then resolve conflicting annotations and present the resulting instances from the first level to annotators for the next iteration. In each iteration, we add one layer of sophistication from the class hierarchy, resulting in step-wise refinements. In the final iteration, we add relations between instances of concepts, including implicit but inferable cases. An illustration of the overall annotation process, based on Example (1), is depicted in Figure 2. The last iteration in this example involves the addition of an Ownership instance that is indicated (by the phrase "his account") but not explicitly realized in text. Although identifying and annotating such instances can be more challenging than the previous annotation steps, we can directly populate our ontology at this stage (e.g., via conversion to RDF tuples) and run verification tools to check whether they are consistent with the annotation schema.
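For instance, the final annotation layer could be exported as RDF-style triples and fed to a verification step. A minimal sketch, assuming the same instance/relation encoding as in the hierarchy sketch above (the URI base and predicate spellings are purely illustrative):

```python
def to_triples(instances, relations, base="http://example.org/req#"):
    """Turn one annotated requirement (class assignments plus relation instances)
    into subject-predicate-object triples."""
    triples = [(base + inst, "rdf:type", base + cls) for inst, cls in instances.items()]
    triples += [(base + subj, base + rel, base + obj) for rel, subj, obj in relations]
    return triples

# Example (1), including the Ownership instance inferred in the last iteration.
instances = {"user": "Actor", "login": "Action", "account": "Object", "o1": "Ownership"}
relations = [("HAS_ACTOR", "login", "user"), ("ACTS_ON", "login", "account"),
             ("HAS_OWNER", "o1", "user"), ("OWNS", "o1", "account")]
for triple in to_triples(instances, relations):
    print(triple)
```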
Discussion
The annotation scheme introduced in Section 4 is designed with the goal of covering a wide range of different application domains. Although this means that many of the more fine-grained distinctions within a domain are not considered here, we believe that the scheme already provides sufficient information for a range of tasks. By storing processed requirements in a relational database, for example, they can be retrieved using structured queries and utilized for probabilistic inference.
Given the hierarchical structure of our annotation process, as defined in Section 5, it is possible to extend existing annotations with additional levels of granularity provided by domain ontologies. As an example, we have defined a domain ontology for web services, which contains subclasses of Action to further distinguish between the HTTP methods get, put, post and delete. Similar extensions can be defined for other domains.
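As a sketch of such an extension, the HTTP-method subclasses can simply be added on top of the illustrative hierarchy given after Section 4 above (again an assumption about the encoding, not the authors' implementation):

```python
# Web-service domain extension: HTTP-method subclasses of Action.
CLASS_HIERARCHY.update({
    "GetAction": "Action", "PutAction": "Action",
    "PostAction": "Action", "DeleteAction": "Action",
})

# A GetAction is still an Action, so the generic relation signatures keep working.
instances = {"backend": "Actor", "retrieve": "GetAction", "contact": "Object"}
triples = [("HAS_ACTOR", "retrieve", "backend"), ("ACTS_ON", "retrieve", "contact")]
assert check(instances, triples) == []
```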
Regarding the task of semantic parsing itself, we are currently in the process of annotating several hundreds of instances of requirements (cf. Section 3) following the proposed ontology. We will release an initial version of this data set at the Semantic Parsing workshop. The initial release will serve as a basis for training and evaluating parsers in this domain, for which we are also planning to collect more examples throughout the year. We believe that requirements form an interesting domain for the parsing community as the texts involve a fair amount of variation and challenging semantic phenomena (such as inferable relations), while also serving a practical and valuable purpose.
Figure 1: Class hierarchy of our conceptual ontology for modeling software requirements.

Footnote 2: The majority of collected requirements are from a software development course organized jointly by several European universities, cf. http://www.fer.unizg.hr/rasip/dsd
Footnote 3: http://www.scasefp7.eu/
Figure 2: Annotation process: instances are marked in text (dashed), class assignments are refined (dotted), and relations are added (solid).
Table 1: Statistics on our requirements collection and existing semantic parsing data sets.

                        #sentences  #tokens  #types
student projects               270     3130     604
industrial prototypes           55      927     286
Our dataset (total)            325     4057     765
GEOQUERY880                    880     6656     279
FREE917                        917     6769    2035
Table 2: Example requirements from different domains and logical forms derived from annotations.

Requirement: A user that is logged in to his account must be able to update his password.
Logical form: Actor(user) ∧ Action(login) ∧ Action(update) ∧ Object(account) ∧ Object(password) ∧ HAS ACTOR(login,user) ∧ HAS ACTOR(update,user) ∧ ACTS ON(login,account) ∧ ACTS ON(update,password) ∧ Ownership(o1) ∧ Ownership(o2) ∧ HAS OWNER(o1,user) ∧ HAS OWNER(o2,user) ∧ OWNS(o1,account) ∧ OWNS(o2,password)

Requirement: The system must be able to forward and rewind a playing program.
Logical form: Actor(system) ∧ Action(forward) ∧ Action(rewind) ∧ Object(program) ∧ HAS ACTOR(forward,system) ∧ HAS ACTOR(rewind,system) ∧ ACTS ON(forward,program) ∧ ACTS ON(rewind,program) ∧ Property(playing) ∧ HAS PROPERTY(program,playing)

Figure 2 (diagram content): annotation of the example "A user should be able to login to his account", with the instances ThingType/Participant/Actor (user), OperationType/Action (login) and ThingType/Participant/Object (account), the relations HAS ACTOR and ACTS ON, plus an implicit Ownership linked via HAS OWNER and OWNS.
Acknowledgements
Parts of this work have been supported by the FP7 Collaborative Project S-CASE (Grant Agreement No 610717), funded by the European Commission. We thank our project partners for data support and useful discussions on the proposed ontology.
Russell J. Abbott. 1983. Program design by informal English descriptions. Communications of the ACM, 26(11):882-894.
Barry Boehm and Victor R. Basili. 2001. Software defect reduction top 10 list. Computer, 34:135-137.
Grady Booch. 1986. Object-oriented development. IEEE Transactions on Software Engineering, (2):211-221.
Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 423-433, Sofia, Bulgaria, August.
Shalini Ghosh, Daniel Elenius, Wenchao Li, Patrick Lincoln, Natarajan Shankar, and Wilfried Steiner. 2014. Automatically extracting requirements specifications from natural language. arXiv preprint arXiv:1403.3142.
H. M. Harmain and Robert Gaizauskas. 2003. CM-Builder: A natural language-based CASE tool for object-oriented analysis. Automated Software Engineering, 10(2):157-181.
Leonid Kof. 2004. Natural language processing for requirements engineering: Applicability to large requirements documents. In 19th International Conference on Automated Software Engineering, Workshop Proceedings.
Luisa Mich, Mariangela Franch, and Pierluigi Novi Inverardi. 2004. Market research for requirements analysis using linguistic tools. Requirements Engineering, 9(1):40-56.
Luisa Mich. 1996. NL-OOPS: From natural language to object oriented requirements using the natural language processing system LOLITA. Natural Language Engineering, 2(2):161-187.
Sastry Nanduri and Spencer Rugaber. 1995. Requirements validation via automated natural language parsing. In Proceedings of the Twenty-Eighth Hawaii International Conference on System Sciences, volume 3, pages 362-368.
Motoshi Saeki, Hisayuki Horai, and Hajime Enomoto. 1989. Software development process from natural language specification. In Proceedings of the 11th International Conference on Software Engineering, pages 64-73.
Lappoon R. Tang. 2003. Integrating Top-down and Bottom-up Approaches in Inductive Logic Programming: Applications in Natural Language Processing and Relational Data Mining. Ph.D. thesis, Department of Computer Sciences, University of Texas, Austin, Texas, USA, August.
Axel van Lamsweerde. 2009. Requirements Engineering: From System Goals to UML Models to Software Specifications. Wiley. |
18,427,149 | A constraint driven metagrammar | We present an operational framework allowing to express a large scale Tree Adjoining Grammar (TAG) by using higher level operational constraints on tree descriptions. These constraints first meant to guarantee the well formedness of the grammatical units may also be viewed as a way to put model theoretic syntax at work through an efficient offline grammatical compilation process. Our strategy preserves TAG formal properties, hence ensures a reasonable processing efficiency. | [
232021544,
2549745
] | A constraint driven metagrammar
Association for Computational Linguistics, July 2006.
Joseph Le Roux leroux@loria.fr
LORIA, Institut National Polytechnique de Lorraine, 615 Rue du Jardin Botanique, 54600 Villers-lès-Nancy, France
Benoît Crabbé bcrabbe@inf.ed.ac.uk
HCRC / ICCS, University of Edinburgh, 2 Buccleuch Place, EH8 9LW Edinburgh, Scotland
Yannick Parmentier parmenti@loria.fr
INRIA / LORIA, Université Henri Poincaré, 615 Rue du Jardin Botanique, 54600 Villers-lès-Nancy, France
A constraint driven metagrammar
Proceedings of the 8th International Workshop on Tree Adjoining Grammar and Related Formalisms
Sydney, Association for Computational Linguistics, July 2006.
We present an operational framework allowing to express a large scale Tree Adjoining Grammar (TAG) by using higher level operational constraints on tree descriptions. These constraints first meant to guarantee the well formedness of the grammatical units may also be viewed as a way to put model theoretic syntax at work through an efficient offline grammatical compilation process. Our strategy preserves TAG formal properties, hence ensures a reasonable processing efficiency.
Introduction
This paper is concerned with the semi-automatic development of real-scale grammars. For natural language syntax, lexicalised TAGs are made of thousands of trees, carrying an extreme structural redundancy. Their development and their maintenance are known to be cumbersome as the size of the grammar grows significantly.
To counter the lack of generalisations inherent to strong lexicalisation, various proposals for semi-automatic grammar development have been carried out: lexical rules or meta-rules (Becker, 2000) and metagrammars: (Candito, 1999;Gaiffe et al., 2002;Xia, 2001). The aim of these frameworks is twofold: expressing general facts about the grammar of a language and factorising the information to avoid redundancy.
The metagrammar path adopts a different perspective from the lexical rule based grammar development: instead of describing how a derived tree is different from a canonical one, grammatical description mainly consists of combining fragmentary tree descriptions or building blocks.
The paper is structured as follows. We start in section 2 by providing motivations and background information on the framework we are using. Section 3 shows that the metagrammar framework may be viewed as an offline system allowing to express high level well-formedness constraints on elementary grammatical structures while preserving TAG computational and formal properties. Section 4 shows how to implement efficiently this constraint-based approach with logic programming techniques and finally section 5 provides an idea of the performance of the implemented system.
eXtensible MetaGrammar (XMG)
In contrast to other metagrammatical frameworks, XMG uses an expressive though simple language, enabling a monotonic description of a real-scale grammar. Monotonicity is important because it means that the order of application of the different operations does not matter; this order dependence is the major drawback of lexical-rule systems. Moreover, (Crabbé, 2005b) shows that it is sufficiently expressive to implement conveniently a core TAG for French.
XMG allows the grammar writer to manipulate tree descriptions through a control language. The intuition behind this is that a metagrammatical language needs to provide means to describe syntactic information along two methodological axes (Crabbé, 2005b): structure sharing and alternatives. Structure sharing is the axis dedicated to expressing factorisation in the grammar, whereas alternatives express regular alternation relationships such as alternatives between the representation of a canonical nominal subject and its interrogative representation, or between an active and a passive verb form 1 .
Building on this intuition the XMG language allows the user to name partial tree descriptions within classes. The name of the class can be manipulated afterwards. For instance the following tree descriptions on the right of the arrow are associated with the names stated on the left of the arrow 2 :
(1) a. CanonicalSubject → [S N↓ V]
    b. RelativisedSubject → [N N* [S N↓ V]]
    c. VerbalForm → [S V]
(Trees are rendered here as bracketed lists [Mother Daughter1 ... DaughterN].)
Naming is the main device that allows the grammar writer to express and to take advantage of the structure sharing axis mentioned above. Indeed, class names can be reused in other descriptions. Thus names can also be used to describe alternatives. To express, in our simplified example, that a Subject is an abstract way to name a RelativisedSubject or a CanonicalSubject, we use a choice operator (∨) as illustrated below:
(2) Subject → CanonicalSubject ∨ RelativisedSubject

Disjunction (non-deterministic choice) is the device provided by the language to express the methodological axis of alternatives. Finally, names can be given to class combinations. To express the composition of two tree descriptions in the language, we use the ∧ operator.

Footnote 1: The passive is a semi-regular alternation; many transitive verbs do not passivise. Our system presupposes a classical architecture for the computational representation of Tree Adjoining Grammars such as XTAG, where means to express such exceptions during the anchoring process are well-known. In what follows, we therefore consider only tree templates (or tree schemata) as our working units. Finally, the trees depicted in this paper take their inspiration from the grammar described by (Abeillé, 2002).

Footnote 2: To represent the tree descriptions mentioned in this paper, we use a graphical notation. Immediate dominance is depicted with a straight line and precedence follows the graphical order. Note that nodes are decorated with their labels only, ignoring the names of the variables denoting them. Note also that we use only the reflexive transitive closure of precedence between sibling nodes, and it is explicitly stated with the symbol ≺*.
Thus we can say that an IntransitiveVerb is made by the composition of a Subject and a VerbalForm as follows:
(3) IntransitiveVerb → Subject ∧ VerbalForm

Given these 3 primitives, the control language is naturally interpreted as a context-free grammar whose terminals are tree descriptions and where our composition plays the role of concatenation. This abstract grammar or metagrammar is further restricted to be non-recursive in order to ensure that the generated TAG is finite.
Provided the axiom IntransitiveVerb, an interpreter for this language generates non-deterministically all the sentences of the grammar underlying a grammatical description. Thus in our current example the two sentences generated are those depicted on the left-hand side of the arrows in Figure 1. On the right-hand side of the arrow is depicted the result of the composition of the tree descriptions.
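Read this way, enumerating the metagrammar amounts to expanding a small non-recursive grammar. A minimal Python sketch of that enumeration (the class definitions mirror examples (1)-(3); the code is our own illustration, not XMG's interpreter):

```python
from itertools import product

# Metagrammar classes: a terminal is a (named) tree description; a class body
# is a disjunction of conjunctions of class names or terminals.
CLASSES = {
    "Subject":          [["CanonicalSubject"], ["RelativisedSubject"]],  # disjunction
    "IntransitiveVerb": [["Subject", "VerbalForm"]],                     # conjunction
}
TERMINALS = {"CanonicalSubject", "RelativisedSubject", "VerbalForm"}

def expand(symbol):
    """Yield every combination of terminal tree descriptions derivable from `symbol`."""
    if symbol in TERMINALS:
        yield (symbol,)
        return
    for alternative in CLASSES[symbol]:                                   # choice (∨)
        for parts in product(*(list(expand(s)) for s in alternative)):    # composition (∧)
            yield tuple(x for part in parts for x in part)

print(list(expand("IntransitiveVerb")))
# [('CanonicalSubject', 'VerbalForm'), ('RelativisedSubject', 'VerbalForm')]
```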
It remains to make clear what is actually this composition. The grammatical classes may contain information on tree descriptions and/or express composition of descriptions stated in other classes. Tree descriptions take their inspiration from the logic described in (Rogers and Vijay-Shanker, 1994). Its syntax is the following:
Description ::= x → y | x → * y | x ≺ y | x ≺ * y | x[f :E]
where x, y are node variables, → the dominance relation, ≺ the precedence relation, * denoting the reflexive transitive closure of a relation. The last line associates x with a feature f whose value is the result of evaluating expression E. Tree descriptions are interpreted as finite linear ordered trees being the minimal models of the description.
Using tree descriptions, the above-mentioned operation of tree "composition" breaks down to a conjunction of formulas where the variables of each conjunct are, in a first approximation, renamed to avoid name collisions. Renaming is a crucial difference with previous approaches to metagrammar (Candito, 1999; Xia, 2001) where the user had to manage explicitly a "global namespace". Here specific attention is given to namespace management, because this was a bottleneck for real-scale grammar design. More precisely, each class has its own namespace of identifiers, and namespace merging can be triggered when a class combination occurs. This merging relies on a fine-grained import/export mechanism.

Figure 1: Interpretation of a grammatical description. (The figure shows two compositions: the CanonicalSubject fragment [S N↓ V] "Le garçon..." / "The boy..." conjoined with the VerbalForm fragment [S V] "dort" / "sleeps" yields the tree for "Le garçon dort" / "The boy sleeps"; the RelativisedSubject fragment [N N* [S N↓ V]] "(Le garçon) qui..." / "(The boy) who..." conjoined with the same VerbalForm yields the tree for "Le garçon qui dort" / "The boy who sleeps".)
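A small sketch of this composition-with-renaming step, with tree descriptions represented as sets of literals (the particular literals chosen for the two fragments are our reading of examples (1a) and (1c); the encoding is illustrative, not XMG's internal representation):

```python
import itertools

# A tree description as a set of literals over node variables:
# ("dom", x, y) for x → y, ("prec*", x, y) for x ≺* y, ("label", x, cat).
CANONICAL_SUBJECT = {("label", "r", "S"), ("label", "n", "N↓"), ("label", "v", "V"),
                     ("dom", "r", "n"), ("dom", "r", "v"), ("prec*", "n", "v")}
VERBAL_FORM = {("label", "r", "S"), ("label", "v", "V"), ("dom", "r", "v")}

_fresh = itertools.count()

def instantiate(description):
    """Rename the node variables of a class instance apart: each use of a class
    gets its own namespace, so reusing a class twice causes no name clash."""
    suffix = f"_{next(_fresh)}"
    def rn(v):
        return v + suffix
    return {(op, rn(a), b if op == "label" else rn(b)) for (op, a, b) in description}

def conjoin(*descriptions):
    """Composition (∧): the union of the renamed literals. Which variables end up
    denoting the same node is decided later, by the description solver."""
    return set().union(*(instantiate(d) for d in descriptions))

intransitive = conjoin(CANONICAL_SUBJECT, VERBAL_FORM)
print(len(intransitive), "literals")
```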
In addition to conjunction and disjunction, XMG is augmented with syntactic sugar to offer some of the features other metagrammatical formalisms propose. For instance, inheritance of classes is not built-in in the core language but is realised through conjunction and namespace import. Of course, this restricts users to monotonic inheritance (specialisation) but it seems to be sufficient for most linguists.
Constraining admissible structures
XMG has been tested against the development of a large scale French Grammar (Crabbé, 2005a). To ease practical grammatical development we have added several augmentations to the common tree description language presented so far in order to further restrict the class of admissible structures generated by the metagrammar.
Further constraining the structures generated by a grammar is a common practice in computational linguistics. For instance a Lexical Functional Grammar (Bresnan and Kaplan, 1982) further restricts the structures generated by the grammar by means of functional uniqueness and functional completeness principles. These constraints further restrict the class of admissible structures generated by an LFG grammar to verify valency conditions.
For TAG and in a theoretical context, (Frank, 2002) states a set of such well-formedness principles that contribute to formulating a TAG theory within a minimalist framework. In what follows we describe operational constraints of this kind that further restrict the admissibility of the structures generated by the metagrammar. By contrast with the principles stated by (Frank, 2002), we do not make any theoretical claim; instead we are stating operational constraints that have been found useful in practical grammar development.
However as already noted by (Frank, 2002) and by opposition to an LFG framework where constraints apply to the syntactic structure of a sentence as a whole, we formulate here constraints on the well-formedness of TAG elementary trees. In other words these constraints apply to units that define themselves their own global domain of locality. In this case, it means that we can safely ignore locality issues while formulating our constraints. This is theoretically weaker than formulating constraints on the whole sentential structure but this framework allows us to generate common TAG units, preserving the formal and computational properties of TAG.
We formulate this constraint driven framework by specifying conditions on model admissibility. Methodologically the constraints used in the development of the French TAG can be classified in four categories: formal constraints, operational constraints, language dependent constraints and theoretical principles.
First, the formal constraints are those constraining the trees generated by the model builder to be regular TAG trees. These constraints require the trees to be linearly ordered trees with appropriate decorations: each node has a category label, leaf nodes are either terminal, foot or substitution, there is at most one foot node, the category of the foot node is identical to that of the root node, and each tree has at least one leaf node which is an anchor.
It is worth noting here that using a different set of formal constraints may change the target formalism. Indeed, XMG provides a different set of formal constraints (not detailed here) that allows the generation of elementary units for another formalism, namely Interaction Grammars.
The second kind of constraint is a single operational constraint dubbed the colouration constraint. We found it convenient in the course of grammar development. It consists of associating colour-based polarities to the nodes to ensure a proper combination of the fragmentary tree descriptions stated within classes. Since in our framework descriptions stated in two different classes are renamed before being conjoined, given a formula being the conjunction of the two following tree descriptions :
(4) [X W Z] and [X Z Y] (i.e., a fragment where X dominates W and Z, and a fragment where X dominates Z and Y)
both the following trees are valid models of that formula:
(5) (a) [X W Z Y] (b) [X W Z Z Y]
In the context of grammar development, however, only (a) is regarded as a desired model. To rule out (b) (Candito, 1999;Xia, 2001) use a naming convention that can be viewed as follows 4 : they assign a name to every node of the tree description. Both further constrain model admissibility by enforcing the identity of the interpretation of two variables associated to the same name. Thus the description stated in their systems can be exemplified as follows:
(6) [X_a W_b Z_c] and [X_a Z_c Y_d]
Though solving the initial formal problem, this design choice creates two additional complications:
(1) it constrains the grammar writer to manually manage a global naming scheme, entailing obvious problems as the size of the grammatical description grows, and (2) it prevents the user from reusing the same class several times in a composition. This case is a real issue in the context of grammatical development, since a grammar writer willing to describe a ditransitive context with two prepositional phrases cannot reuse a fragment describing such a PP twice, because the naming constraint will identify the two occurrences.
To solve these problems we use a colouration constraint. This constraint associates unary properties, colours, with every node of the descriptions. A colour is taken among the set red (•R), black (•B), white (•W). A valid model is a model in which every node is coloured either in red or black. Two variables in the description interpreted by the same node have their colours merged following the table given in Figure 2. The table indicates the resulting colour after a merge. The ⊥ symbol indicates that these two colours cannot be merged and hence two nodes labelled with these colours cannot be merged. Note that the table is designed to ensure that merging is not a procedural operation.
        •B   •R   •W   ⊥
  •B    ⊥    ⊥    •B   ⊥
  •R    ⊥    ⊥    ⊥    ⊥
  •W    •B   ⊥    •W   ⊥
  ⊥     ⊥    ⊥    ⊥    ⊥
The idea behind colouration is that of saturating the tree description. The colour white represents non-saturation, or the need of a node to be combined with a resource, represented by the colour black. Black nodes need not necessarily be combined with other nodes. Red is the colour used to label nodes that cannot be merged with any other node. A sample tree description with coloured nodes is as follows:
(7) [X•B W•R Z•B] and [X•W Z•W Y•R]
Colours contribute to rule out the (b) case and relieve the grammar writer of the burden of manually managing a "global namespace".
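The colour table translates directly into a small merge function; a sketch (this mirrors the identification rules of Figure 2, not XMG's actual code):

```python
BOT = "⊥"  # merge failure

def merge_colours(c1, c2):
    """Colour identification rules: white is the only colour that can be merged,
    either with black (the result is black) or with another white node."""
    pair = {c1, c2}
    if pair == {"white"}:
        return "white"
    if pair == {"white", "black"}:
        return "black"
    return BOT  # black+black, red+anything, and anything involving ⊥ all fail

def valid_final_colour(c):
    """A model is saturated iff every node ends up red or black."""
    return c in ("red", "black")

assert merge_colours("white", "black") == "black"
assert merge_colours("black", "black") == BOT
assert merge_colours("red", "white") == BOT
```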
The third category of constraints is language-dependent constraints. In the case of French, such constraints are clitic ordering, island constraints, etc. We illustrate these constraints with clitic ordering in French. In French, clitics are non-tonic particles with two specific properties already identified by (Perlmutter, 1970): first, they appear in front of the verb in a fixed order according to their rank (8a-8b), and second, two different clitics in front of the verb cannot have the same rank (8c). For instance the clitics le, la have the rank 3 and lui the rank 4. (Examples (8a-c) are given with the caption of Figure 3 below.)
Figure 3 (fragments): [S N↓ V'], [V' Cl↓3 V], [V' Cl↓4 V] and [S V' V], composed (⇒) into a single tree whose V' node carries the clitic slots.

In the French grammar of (Crabbé, 2005a), trees with clitics are generated with the fragments illustrated on the left of the arrow in Figure 3. As illustrated on the right of the arrow, the composition may generate ill-formed trees. To rule them out we formulate a clitic ordering constraint. Each variable labelled with a clitic category is also labelled with a property, an integer representing its rank. The constraint stipulates that sibling nodes labelled with a rank have to be linearly ordered according to the order defined over integers.
Overall, language-dependent constraints handle cases where the information independently specified in different fragments may interact. These interactions are a counterpart in a metagrammar to the interactions between independently described lexical rules in a lexical-rule-based system. Assuming independent lexical rules moving canonical arguments (NP or PP) to their clitic position, lexical rules fall short of capturing the relative ordering among clitics 6 .
A fourth category of constraints, not implemented in our system so far, are obviously the language-independent principles defining the theory underlying the grammar. Such constraints could involve for instance a Principle of Predicate Argument Coocurrency (PPAC) or even the set of minimalist principles described by (Frank, 2002).
Efficient implementation

We describe now the implementation of our metagrammatical framework. In particular, we will focus on the implementation of the constraints discussed above within XMG.
As mentioned above, a metagrammar corresponds to a reduced description of the grammar. In our case, this description consists of tree fragments combined either conjunctively or disjunctively. These combinations are expressed using a language close to the Definite Clause Grammar formalism (Pereira and Warren, 1980), except that partial tree descriptions are used as terminal symbols. In this context, a metagrammar can be reduced to a logic program whose execution will lead to the computation of the trees of the grammar.
To perform this execution, a compiler for our metagrammatical language has been implemented. This compilation is a 3-step process as shown in Figure 4.
First, the metagrammar is compiled into instructions for a specific virtual machine inspired by Warren's Abstract Machine (Ait-Kaci, 1991). These instructions correspond to the unfolding of the relations 7 contained in the tree descriptions of the metagrammar.
Then, the virtual machine performs unifications of structures meant to refer to corresponding information within fragments (e.g. two nodes, two feature structures, etc.). Note that XMG's virtual machine uses the structure-sharing technique for memory management, i.e. data are represented by a pair of a pattern and an environment in which to interpret it. The consequences are that (a) we save memory when compiling the metagrammar, and (b) we have to perform pointer dereferencing during unification. Even if the latter is time-consuming, it remains more efficient than structure copying, as we may have to deal with a large number of tree descriptions.
Eventually, as a result of this instruction processing by the virtual machine, we obtain potentially total tree descriptions, which have to be solved in order to produce the expected TAG. Now, we will introduce XMG's tree description solver and show that it is naturally designed to process efficiently the higher-level constraints mentioned above. In particular, we will see that the description solver has been designed to be easily extended with additional parametric admissibility constraints.
Tree descriptions solving
To find the minimal models corresponding to the total tree descriptions obtained by accumulating fragmentary tree descriptions, we use a tree description solver. This solver has been developed in the Constraint Programming paradigm using the constraint satisfaction approach of (Duchier and Niehren, 2000). The idea is to translate relations between node variables into constraints over sets of integers.
Basically, we refer to a node of the input description in terms of the nodes being equal to it, above it, below it, or on its sides (see Figure 5). More precisely, we associate each node of the description with an integer; our reference to a node then corresponds to a tuple containing sets of nodes (i.e. sets of integers).

As a first approximation, let us imagine that we refer to a node x in a model by means of a 5-tuple N^i_x = (Eq, Up, Down, Left, Right), where i is an integer associated with x and Eq (respectively Up, Down, Left, Right) denotes the set of nodes (i.e. integers) in the description which are equal to (respectively above, below, to the left of, and to the right of) x.

Then we can convert the relations between nodes of our description language into constraints on sets of integers. For instance, consider two nodes x and y of the description, and assume we associate x with the integer i and y with j; we can then translate the dominance relation x → y in the following way:
N^i_x → N^j_y ≡ [ N^i_x.EqUp ⊆ N^j_y.Up ∧ N^i_x.Down ⊇ N^j_y.EqDown ∧ N^i_x.Left ⊆ N^j_y.Left ∧ N^i_x.Right ⊆ N^j_y.Right ]

This means that if the node x strictly dominates y in the input description, then (i) the set of nodes that are above or equal to x in a valid model is included in the set of those that are strictly above y, (ii) the dual holds for the nodes that are below, (iii) the set of nodes that are on the left of x is included in the set of those that are on the left of y, and (iv) similarly for the right part.
Once the constraint framework is settled, we can search for the solutions to our problem, i.e. the variable assignments for each of the sets of integers used to refer to the nodes of the input description. This search is performed by associating with each pair of nodes (x, y) of the input description a choice variable denoting the mutually exclusive relations between these two nodes (either x equals y, x dominates y, y dominates x, x precedes y, or y precedes x). Then we use a search strategy to explore the consistent assignments to these choice variables (and the associated assignments for the sets of integers referring to nodes). Note that the strategy used in XMG is a first-fail strategy, which leads to very good results (see section 5 below). The implementation of this solver has been done using the constraint programming support of the Mozart Programming System (The Oz-Mozart Board, 2005).
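To make the encoding concrete, here is a small sketch that checks the dominance translation against a candidate assignment, using plain Python sets rather than a real constraint solver such as Mozart/Oz (the field names follow the 5-tuple above; the code is illustrative only):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    i: int                      # the integer associated with this node variable
    Eq: set = field(default_factory=set)
    Up: set = field(default_factory=set)
    Down: set = field(default_factory=set)
    Left: set = field(default_factory=set)
    Right: set = field(default_factory=set)

    @property
    def EqUp(self):   return self.Eq | self.Up
    @property
    def EqDown(self): return self.Eq | self.Down

def dominance_holds(x: Node, y: Node) -> bool:
    """Translation of x → y into constraints over sets of integers."""
    return (x.EqUp <= y.Up and
            x.Down >= y.EqDown and
            x.Left <= y.Left and
            x.Right <= y.Right)

# Tiny model: node 1 is the root and dominates node 2 (a leaf).
root = Node(i=1, Eq={1}, Down={2})
leaf = Node(i=2, Eq={2}, Up={1})
assert dominance_holds(root, leaf)
assert not dominance_holds(leaf, root)
```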
Extension to higher-level constraints solving
An important feature of our approach is that this system of constraints over integer sets can be extended so that we not only ensure tree well-formedness of the output trees, but also the respect of linguistic properties such as the uniqueness of clitics in French, etc. The idea is that if we extend our node representation adequately, we can find additional constraints that reflect the syntactic constraints we want to express.
Clitic uniqueness
For instance, let us consider the clitic uniqueness constraint introduced above. We want to express the fact that in a valid model φ, there is only one node having a given property p (i.e. a parameter of the constraint, here the category clitic 13 ). This can be done by introducing, for each node x of the description, a boolean variable p_x indicating whether the node denoting x in the model has this property or not. Then, if we call V^φ_p the set of integers referring to nodes having the property p in a model, we have:
p_x ≡ (N^i_x.Eq ∩ V^φ_p) ≠ ∅
Finally, if we represent the true value with the integer 1 and false with 0, we can sum the p x for each x in the model. When this sum gets greater than 1, we can consider that we are not building a valid model.
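In a reified-constraint style this is just a sum of 0/1 indicators; a plain-Python sketch over a fixed candidate model (a real implementation posts this as a finite-domain constraint, which we do not reproduce here):

```python
def unique_property(nodes, V_p):
    """nodes: mapping from model node id to its Eq set in the candidate model;
    V_p: the integers referring to description nodes carrying property p
    (e.g. a given clitic rank). True iff at most one model node carries p."""
    p_x = [1 if (eq & V_p) else 0 for eq in nodes.values()]
    return sum(p_x) <= 1

# Two nodes of the same clitic rank surviving in the model: ruled out.
model = {1: {1}, 2: {2}, 3: {3}}
assert not unique_property(model, V_p={2, 3})
assert unique_property(model, V_p={2})
```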
Colouration constraint
Another example of the constraints introduced in section 3 is colouration. Colouration represents operational constraints whose effect is to control tree fragment combination. The idea is to label nodes with a colour between red, black and white. Then, during description solving, nodes are identified according to the rules given previously (see Figure 2). That is, red nodes are not identified with any other node, and white nodes can be identified with a black one. Black nodes are not identified with each other. A valid model in this context is a saturated tree, i.e. where nodes are either black (possibly resulting from identifications) or red. In other words, for every node in the model, there is at most one red or black node with which it has been identified. The implementation of such a constraint is done in the following way. First, the tuples representing nodes are extended by adding an integer field RB referring to the red or black node with which the node has been identified. Then, considering the following sets of integers: V_R, V_B, V_W, respectively containing the integers referring to red, black and white nodes in the input description, the following constraints hold:
(a) x ∈ V_R ⇒ N^i_x.RB = i ∧ N^i_x.Eq = {i}
(b) x ∈ V_B ⇒ N^i_x.RB = i
(c) x ∈ V_W ⇒ N^i_x.RB ∈ V^φ_B

where V^φ_B represents the black nodes in a model, i.e. V^φ_B = V^φ ∩ V_B.
(a) expresses the fact that for red nodes, N^i_x.RB is the integer i associated with x itself, and N^i_x.Eq is a set containing only i. (b) means that for black nodes, N^i_x.RB is also the integer i denoting x itself, but we cannot say anything about N^i_x.Eq. Finally, (c) means that white nodes have to be identified with a black one.
Thus, we have seen that Constraint Programming offers an efficient and relatively natural way of representing syntactic constraints, as "all" that has to be done is to find an adequate node representation in terms of sets of nodes, then declare the constraints associated with these sets, and finally use a search strategy to compute the solutions.
Some features
There are two points worth considering here: (i) the usability of the formalism to describe a real scale grammar with a high factorisation, and (ii) the efficiency of the implementation in terms of time and memory use.
Concerning the first point, XMG has been used successfully to compute a TAG having more than 6,000 trees from a description containing 293 classes 14 . Moreover, this description has been designed relatively quickly as the description language is intuitive as advocated in (Crabbé, 2005a).
Concerning the efficiency of the system, the compilation of this TAG with more than 6,000 trees takes about 15 min with a P4 processor 2.6 GHz and 1 GB RAM. Note that compared with the compilation time of previous approaches (Candito, 1999;Gaiffe et al., 2002) (with the latter, a TAG of 3,000 trees was compiled in about an hour), these results are quite encouraging.
Finally, XMG is released under the terms of the GPL-like CeCILL license 15 and can be freely downloaded at http://sourcesup.cru.fr/xmg.
Conclusion
Unlike previous approaches, the description language implemented by XMG is fully declarative, hence allowing the reuse of efficient techniques borrowed from Logic Programming. The system has been used successfully to produce a core TAG (Crabbé, 2005b) and an Interaction Grammar (Perrier, 2003) for French, along with a core French TAG augmented with semantics (Gardent, 2006). This paper shows that the metagrammar can be used to put model theoretic syntax at work while preserving reasonably efficient processing properties. The strategy used here builds on constraining offline a TAG whose units are elementary trees. The other option is to formulate constraints applied online, in the course of parsing, applying to the whole syntactic structure. In a dependency framework, XDG followed this path (Debusmann et al., 2004); however, it remains unknown to us whether this approach remains computationally tractable for parsing with real-scale grammars.
Figure 2: Colour identification rules.

Figure 3: Clitic ordering.
(8) a. Jean le3 lui4 donne ("John gives it to him")
    b. *Jean lui4 le3 donne (*"John gives to him it")
    c. *Jean le3 la3 donne (*"John gives it it")

Figure 4: Metagrammar compilation.

Figure 5: Node representation.

Footnote 3: Understood as compositions of tree fragments.
Footnote 4: They actually use a different formal representation that does not affect the present discussion.
Footnote 5: Colours are omitted.
Footnote 6: This observation was already made by (Perlmutter, 1970) in a generative grammar framework where clitics were assumed to be moved by transformations.
Footnote 7: These relations are either dominance or precedence between node variables, or their reflexive transitive closure, or the labelling of node variables with feature structures.
Footnote 8: I.e. integers.
Footnote 9: N.EqUp corresponds to the disjoint union of N.Eq and N.Up; similarly, N.EqDown corresponds to the disjoint union of N.Eq and N.Down.
Footnote 10: One should read "the node denoted by the variable x".
Footnote 12: More information about the use of such choice variables is given in (Duchier, 1999).
Footnote 13: In fact, the uniqueness concerns the rank of the clitics, see (Crabbé, 2005b), §9.6.3.
Footnote 14: I.e. tree fragments or conjunction / disjunction of fragments.
Footnote 15: More information about this license at http://www.cecill.info/index.en.html.
A. Abeillé. 2002. Une grammaire électronique du français. CNRS Editions, Paris.
H. Ait-Kaci. 1991. Warren's abstract machine: A tutorial reconstruction. In K. Furukawa, editor, Proc. of the Eighth International Conference of Logic Programming. MIT Press, Cambridge, MA.
T. Becker. 2000. Patterns in metarules. In A. Abeillé and O. Rambow, editors, Tree Adjoining Grammars: formal, computational and linguistic aspects. CSLI Publications, Stanford.
Joan Bresnan and Ronald M. Kaplan. 1982. The Mental Representation of Grammatical Relations. The MIT Press, Cambridge, MA.
M. H. Candito. 1999. Représentation modulaire et paramétrable de grammaires électroniques lexicalisées : application au français et à l'italien. Ph.D. thesis, Université Paris 7.
B. Crabbé. 2005a. Grammatical development with XMG. In Proceedings of the Fifth International Conference on Logical Aspects of Computational Linguistics (LACL05).
B. Crabbé. 2005b. Représentation informatique de grammaires fortement lexicalisées : application à la grammaire d'arbres adjoints. Ph.D. thesis, Université Nancy 2.
R. Debusmann, D. Duchier, and G.-J. M. Kruijff. 2004. Extensible dependency grammar: A new methodology. In Proceedings of the COLING 2004 Workshop on Recent Advances in Dependency Grammar, Geneva/SUI.
D. Duchier and J. Niehren. 2000. Dominance constraints with set operators. In Proceedings of CL2000, volume 1861 of Lecture Notes in Computer Science, pages 326-341. Springer.
D. Duchier, J. Le Roux, and Y. Parmentier. 2004. The Metagrammar Compiler: An NLP Application with a Multiparadigm Architecture. In 2nd International Mozart/Oz Conference (MOZ'2004), Charleroi.
D. Duchier. 1999. Set constraints in computational linguistics - solving tree descriptions. In Workshop on Declarative Programming with Sets (DPS'99), Paris, pp. 91-98.
Robert Frank. 2002. Phrase Structure Composition and Syntactic Dependencies. MIT Press, Boston.
B. Gaiffe, B. Crabbé, and A. Roussanaly. 2002. A new metagrammar compiler. In Proceedings of TAG+6, Venice.
C. Gardent. 2006. Intégration d'une dimension sémantique dans les grammaires d'arbres adjoints. In Actes de la 13ème édition de la conférence sur le TALN (TALN 2006).
F. Pereira and D. Warren. 1980. Definite clause grammars for language analysis - a survey of the formalism and a comparison to augmented transition networks. Artificial Intelligence, 13:231-278.
David Perlmutter. 1970. Surface structure constraints in syntax. Linguistic Inquiry, 1:187-255.
Guy Perrier. 2003. Les grammaires d'interaction. HDR en informatique, Université Nancy 2.
J. Rogers and K. Vijay-Shanker. 1994. Obtaining trees from their descriptions: An application to tree-adjoining grammars. Computational Intelligence, 10:401-421.
The Oz-Mozart Board. 2005. The Oz-Mozart Programming System. http://www.mozart-oz.org.
Fei Xia. 2001. Automatic Grammar Generation from two Different Perspectives. Ph.D. thesis, University of Pennsylvania. |
9,035,655 | YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer | In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications. | [
16998656,
9792162,
34845892,
10887722,
15119437,
2561041,
7164502,
472215,
13222584,
5219389
] | YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer
December 11-17 2016
Salam Khalifa salamkhalifa@nyu.edu
Nasser Zalmout nasser.zalmout@nyu.edu
Nizar Habash nizar.habash@nyu.edu
YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations
COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, Osaka, Japan, December 11-17 2016. Computational Approaches to Modeling Language Lab, New York University Abu Dhabi, UAE
In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.
Introduction
The Arabic language poses many challenges for Natural Language Processing (NLP). First, Arabic is morphologically rich, having a large number of inflections per lemma. Secondly, Arabic is orthographically ambiguous, having about 12 full morphological analyses per word on average. Finally, Arabic has a number of linguistic varieties, among which Modern Standard Arabic (MSA) is the official primary written standard with numerous resources, while the other varieties are the unofficial, primarily spoken Dialects of Arabic (DA). For more on Arabic NLP, see (Habash, 2010). Figure 1 presents an example that showcases the aspect of morphological ambiguity which is shared across all varieties of Arabic. 1 Previous efforts on morphological analysis and disambiguation have led to the creation of a number of state-of-the-art tools with high accuracy, such as MADAMIRA (Pasha et al., 2014). MADAMIRA produces a rich output (diacritization, tokenization, part-of-speech (POS), lemmatization, gloss, and all inflected features), but it is slow. Other systems such as FARASA are very fast but focus on specific types of output with high quality performance (tokenization). Clearly, there is always a tradeoff between speed, quality and richness. Our system, YAMAMA (Yet Another Multi-Dialect Arabic Morphological Analyzer; Arabic 'Barbary Dove'), is an alternative to MADAMIRA and FARASA: it offers a faster performance than MADAMIRA but with all of MADAMIRA's rich output, at a reasonable tradeoff of quality that varies depending on the specific feature. 2
Related Work
There has been a considerable amount of work on MSA and DA morphological analysis, disambiguation, POS tagging, tokenization, lemmatization and diacritization. One of the most notable efforts is MADAMIRA (Pasha et al., 2014). MADAMIRA produces a rich feature set for each word, containing more than 14 morphological and lexical features. It also provides different tokenization schemes. Additionally, MADAMIRA has two modes for MSA and Egyptian Arabic (EGY). MADAMIRA, however, is relatively slow (420 words/sec in stand-alone mode, and 1,013 words/sec in server mode), especially for NLP tasks where speed may be critical. Recently, Darwish and Mubarak (2016) and Abdelali et al. (2016) presented a new Arabic segmenter, FARASA. They reported much faster running times than MADAMIRA with similar accuracy on tokenization. FARASA produces word segmentations only, as opposed to MADAMIRA's richer output, and it currently does not handle any Arabic dialect. Our system, YAMAMA, uses some components from MADAMIRA, in particular the morphological analyzers (out-of-context readings), but has its own disambiguation models. This allows YAMAMA to maintain the richness of MADAMIRA, but increase the speed. The disambiguation modeling components are inspired by FARASA's design. In this paper, we compare to both systems in terms of quality and speed.
YAMAMA
Motivation We were motivated by the FARASA approach (Darwish and Mubarak, 2016; Abdelali et al., 2016). FARASA achieves very high tokenization accuracy at a very high speed by not using any context. It relies on simple probabilistic models of stems, prefixes, suffixes and their combinations. While this approach will be limiting for complex tasks such as POS tagging, it is sufficient for tokenization, particularly when it comes to specific applications such as machine translation (MT) and information retrieval (IR) (Abdelali et al., 2016). Our goal for YAMAMA is to create a system that combines the rich output of MADAMIRA with fast and simple out-of-context analysis selection comparable to FARASA's approach. For in-vocabulary words, YAMAMA uses a pre-computed maximum likelihood model to assign an analysis to every word. For out-of-vocabulary words, YAMAMA ranks all of the analyses for such words using two unigram language models of the lemma and the Buckwalter POS tag. In both cases, YAMAMA reduces the text to types and makes decisions in type space, thus benefiting from the low type to token ratio. 3
Datasets For the training and development of our system, we used the same settings as those used for MADAMIRA. For MSA, we used the Penn Arabic Treebank (PATB parts 1, 2 and 3) (Maamouri et al., 2004), and for EGY, the ARZ Treebank (Maamouri et al., 2014). We followed the data splits recommended by Diab et al. (2013) for both treebanks.
Maximum Likelihood Model
We created the maximum likelihood model based on the ATB Train dataset by selecting the most frequent analysis for each word token in the dataset. The selected analyses are then stored in a dictionary that is loaded once the system starts running. The analyses include all the morphological and lexical features as in MADAMIRA.
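A minimal sketch of how such a maximum-likelihood dictionary could be built is shown below. The data structures and the toy example are illustrative assumptions, not YAMAMA's actual code; a full entry would bundle all of MADAMIRA's morphological and lexical features rather than the single string used here.

```python
from collections import Counter, defaultdict

def build_mle_dictionary(training_pairs):
    """Pick the most frequent full analysis for every word type.

    `training_pairs` is assumed to be an iterable of (word, analysis)
    tuples taken from the annotated training corpus; `analysis` stands
    in for the full feature bundle (lemma, POS, diacritization, ...).
    """
    counts = defaultdict(Counter)
    for word, analysis in training_pairs:
        counts[word][analysis] += 1
    # Keep only the single most likely analysis per word type.
    return {word: analyses.most_common(1)[0][0]
            for word, analyses in counts.items()}

# Toy example (not real Arabic analyses):
pairs = [("byn", "PREP/bayn"), ("byn", "PREP/bayn"), ("byn", "NOUN_PROP/biyn")]
mle = build_mle_dictionary(pairs)
print(mle["byn"])  # -> "PREP/bayn"
```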
Analysis and Disambiguation
For the OOV words, we run a morphological analyzer (the same analyzer used in MADAMIRA). For MSA we used the SAMA database (Graff et al., 2009), and for EGY we used the CALIMA ARZ database (Habash et al., 2012). The analyses of each word are ranked using the multiplication of their lemma probability and their semi-lexicalized Buckwalter tag probability. Both probabilities are estimated using the training data. The highest ranking analysis is selected, and the word and analysis are added to the loaded analysis dictionary.
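The ranking step can be sketched as follows. The function name, the field names (`lemma`, `bw_tag`) and the backoff floor for unseen events are our own illustrative choices; the paper only states that the two unigram probabilities are multiplied and the highest-scoring analysis is kept.

```python
def rank_oov_analyses(analyses, lemma_prob, tag_prob, floor=1e-9):
    """Pick the best analysis for an out-of-vocabulary word.

    `analyses` is assumed to be a list of dicts with at least the keys
    "lemma" and "bw_tag"; `lemma_prob` and `tag_prob` are unigram
    probability tables estimated from the training data.  Unseen lemmas
    or tags fall back to a small floor probability.
    """
    def score(a):
        return (lemma_prob.get(a["lemma"], floor) *
                tag_prob.get(a["bw_tag"], floor))
    return max(analyses, key=score)

# Toy example: the preposition reading wins because both its lemma and
# its tag are frequent under the (hypothetical) training counts.
analyses = [{"lemma": "bayn_1", "bw_tag": "PREP"},
            {"lemma": "biyn_1", "bw_tag": "NOUN_PROP"}]
best = rank_oov_analyses(analyses,
                         lemma_prob={"bayn_1": 0.01, "biyn_1": 0.0001},
                         tag_prob={"PREP": 0.05, "NOUN_PROP": 0.02})
print(best["bw_tag"])  # -> "PREP"
```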
Tokenization YAMAMA currently produces a detailed segmentation consisting of the undiacritized morphemes from the Buckwalter tag analysis (BWTagTok). For the analysis dictionary, the BWTagTok segmentation is generated for each word ahead of time, whereas for OOV words the segmentation is generated after disambiguation.
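A rough illustration of deriving an undiacritized, morpheme-separated string from a Buckwalter-style analysis is given below. The input format and the set of diacritic characters are simplified assumptions made for this sketch; the real BWTagTok procedure works off the full Buckwalter tag produced by the analyzer.

```python
BW_DIACRITICS = set("aiuo~FKN")  # short vowels, sukun, shadda, tanween (simplified)

def bw_tag_tok(analysis):
    """Derive an undiacritized BWTagTok-style segmentation.

    `analysis` is assumed to look like
    "Al/DET+majomuwE/NOUN+ap/NSUFF_FEM_SG+i/CASE_DEF_GEN": morphemes
    separated by '+', each followed by '/' and its tag.  The exact tag
    inventory and transliteration details are simplified here.
    """
    segments = []
    for morpheme in analysis.split("+"):
        form = morpheme.split("/", 1)[0]
        undiac = "".join(ch for ch in form if ch not in BW_DIACRITICS)
        if undiac:  # purely diacritic morphemes (e.g. case vowels) disappear
            segments.append(undiac)
    return "+".join(segments)

print(bw_tag_tok("Al/DET+majomuwE/NOUN+ap/NSUFF_FEM_SG+i/CASE_DEF_GEN"))
# -> "Al+mjmwE+p", matching the BWTagTok example discussed in the results
```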
Output Generation Although all analyses are determined in type space, the output has to be generated in token space. YAMAMA's output is in the same format as MADAMIRA's.
Evaluation
We present two sets of experiments next. The first set targets accuracy and speed, and the second set targets machine translation quality. In both sets, we compare YAMAMA 4 to both MADAMIRA 5 and FARASA 6, when possible.
Accuracy and Speed Evaluation
Experimental Setup While MADAMIRA and YAMAMA share similar output, they are different from FARASA. To allow us to compare them, we conducted three experiments. First, we compared MADAMIRA and YAMAMA in terms of accuracy of their rich output. Second, we compared all systems in terms of accuracy of the specific tokenization output of FARASA. Finally, we compared all three systems in terms of speed on a very large corpus. We also report speeds in the first two experiments, although the test sets are relatively small. For the accuracy evaluation we used the test sets recommended by Diab et al. (2013) of the Penn Arabic Treebank (PATB parts 1, 2 and 3) (Maamouri et al., 2004) (for MSA) and the ARZ Treebank (Maamouri et al., 2014) (for EGY).
Results First, in Table 1, we compare YAMAMA to MADAMIRA in terms of accuracy over the (a) Buckwalter POS tag segmentation, which is an undiacritized segmentation based on the morphemes in the Buckwalter analysis, (b) Lemma, (c) POS, (d) Diacritization, and (e) ALL features. We also report the time the systems took to complete the task.
To give an example of the various features evaluated, the word Almjmwςh 'collection/group' may have a correct analysis with the BWTagTok Al+mjmwE+p, the lemma majomuwEap, the POS noun, and the diacritization AlmajomuwEapi. The ALL condition would include all of these in addition to prc3:0 prc2:0 prc1:0 prc0:Al_det per:na asp:na vox:na mod:na gen:f num:s stt:d cas:g enc0:0. For MSA, YAMAMA performs very closely to MADAMIRA except for DIAC, which explains the drop in ALL. However, in EGY, YAMAMA beats MADAMIRA in almost all aspects, except for POS. YAMAMA is four times faster than MADAMIRA in the MSA setting, and two times faster in EGY, due to the large size of the CALIMA ARZ database, which is three times larger than SAMA and hence takes more time to load. Also, the speed of YAMAMA is sensitive to the ratio of OOV types, for which it uses the morphological analyzer.
Second, in Table 2 we compare MADAMIRA and YAMAMA to FARASA in terms of FARASA's tokenization scheme (FarasaTok), which is similar but not exactly the same as the BWTagTok. We automatically converted the MADAMIRA and YAMAMA outputs as well as the MSA and EGY test sets to FarasaTok to be able to compare in the same tokenization space. We also report on an Alif, Ya and Ta-Marbuta normalized version of FarasaTok (FarasaTokNorm) for all test conditions. In addition to the test sets reported on earlier, we add the MSA WikiNews test set that was reported on in the FARASA evaluation. Across all conditions, YAMAMA and MADAMIRA behave very similarly. In MSA WikiNews, all three systems behave similarly. However, as would be expected, YAMAMA and MADAMIRA beat FARASA on the EGY set by a large margin. YAMAMA and MADAMIRA also have higher performance than FARASA on the MSA set. In terms of speed, YAMAMA outperforms in all modes except for EGY. The speeds of YAMAMA are competitive with FARASA except for EGY, for the reasons mentioned earlier.
Finally, we ran all systems through a large dataset of 7.5 million words from Gigaword (Parker et al., 2009). The reported running times for MADAMIRA (standalone mode), YAMAMA and FARASA are 2,305s, 398s and 99s, respectively. YAMAMA is five times faster than MADAMIRA and FARASA is four times faster than YAMAMA. (Koehn et al., 2007) with default parameters to develop the Statistical Machine Translation (SMT) systems. For alignment, we used GIZA++ (Och and Ney, 2003). And for language modeling, we used KenLM (Heafield et al., 2013) to build a 5-gram language model. We evaluate using BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005). We apply statistical significance tests using the paired bootstrapped resampling method (Koehn, 2004). We used the Arabic-English parallel component of the UN Corpus (Eisele and Chen, 2010), with about 9 million lines for the English language model (∼286 million words), 200 thousand parallel lines for training (∼5 million words), 2000 lines for tuning, and 3000 lines for testing. The English content was tokenized using the default English tokenizer at Moses, and the Arabic texts were tokenized through YAMAMA, MADAMIRA and FARASA into the same Arabic Treebank tokenization scheme. For YAMAMA, we used the TOKAN tool (Habash et al., 2009) to do the tokenization. The Arabic dataset we used had English text segments covering UN resolutions numbers and named entities; so we applied Moses' English whitespace tokenization scripts on the Arabic files in advance of the Arabic tokenization for the three systems to be of a better match to the English reference.
Machine Translation Evaluation
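A sketch of paired bootstrap resampling in the spirit of Koehn (2004) is shown below. It assumes per-sentence quality scores for both systems, which is a simplification: corpus-level BLEU is not a plain average of sentence scores, so a faithful implementation would recompute the corpus metric on every resampled set.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):
    """Fraction of resampled test sets on which system A beats system B.

    `scores_a` and `scores_b` are assumed to be per-sentence quality
    scores of the two systems on the same test set, aligned by index.
    """
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n, wins = len(scores_a), 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]   # sample sentences with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples

# If A wins on, say, more than 95% of resamples, the improvement is
# conventionally reported as significant at p < 0.05.
print(paired_bootstrap([0.30, 0.52, 0.41, 0.25], [0.28, 0.50, 0.33, 0.27]))
```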
Results and Analysis
The results of the SMT experiments are presented in Table 3, with YAMAMA and MADAMIRA showing a statistically significant performance improvement relative to FARASA. For a better understanding of the results, we analyzed the output files and observed that FARASA transliterates English words with Arabic letters and deletes the vowels, most likely the result of an internal minor transliteration error. This behavior is problematic for SMT, as Moses would pass such English Out-of-Vocabulary (OOV) words in Arabic letters. To facilitate a better comparison ignoring the effect of different OOV handling, we performed additional SMT experiments that drop the OOV words from all three systems' output. Results are also in Table 3, with YAMAMA outperforming the other two systems slightly but with statistical significance. MADAMIRA and FARASA performed closely, with a statistically insignificant difference. As a general observation, we conclude that the variations among the different systems don't have a profound impact on the SMT quality. 7
Conclusions and Future Work
We presented YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. YAMAMA is almost five times faster than MADAMIRA, with slightly lower quality. YAMAMA outputs a rich representation which allows for a wider spectrum of use, transcending other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications. There is yet much room for enhancing the speed and the quality of YAMAMA, which we plan to investigate.
Figure 1: Possible analyses produced by the morphological analyzer of the word byn. The correct analysis is highlighted in gray.
Example: hl synjH byn Âflyk fy dwr bAtmAn 'Will Ben Affleck be a good Batman?'

POS | Diac | Gloss
PV+PVSUFF_SUBJ:3MS | bay∼ana | He demonstrated
PV+PVSUFF_SUBJ:3FP | bay∼an∼a | They demonstrated
NOUN_PROP | biyn | Ben
ADJ | bay∼in | Clear
PREP | bayn | Between, among
NOUN_PROP | bi+yan | with a Yen
... 15 more analyses ...
Table 1: Evaluation results for MADAMIRA (MDMR) and YAMAMA (YMM) on the two tests MSA (ATB) and EGY (ARZ-ALL) using a number of morphological features: Buckwalter POS Tag tokenization (BWTagTok), Lemma (LEX), Part-of-Speech (POS), Diacritization (DIAC) and all features together (ALL). We also report running time.

          | MDMR MSA | MDMR EGY | YMM MSA | YMM EGY
BWTagTok  | 98.5     | 93.8     | 98.4    | 94.0
LEX       | 96.8     | 87.5     | 96.1    | 87.8
POS       | 96.8     | 92.5     | 96.1    | 91.9
DIAC      | 88.0     | 83.6     | 81.0    | 85.3
ALL       | 86.0     | 78.4     | 78.8    | 79.3
Time (s)  | 57.7     | 51.2     | 15.4    | 31.1
Table 2: Evaluation results for MADAMIRA (MDMR), YAMAMA (YMM) and FARASA (FRS) on the three tests MSA (ATB), EGY (ARZ-ALL) and MSA-Wiki (WikiNews) using the FARASA tokenization scheme in basic (FarasaTok) and normalized forms (FarasaTok Norm). We also report running time.
Table 3: Machine translation results
1 Arabic transliteration is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007). 2 To obtain YAMAMA (Version 1.0), go to http://camel.abudhabi.nyu.edu/resources/.
3 In a text of 80 words, the type to token ratio is 89%, whereas in a text of 8M words, the type to token ratio is only 3.7%.
4 YAMAMA: Version 1.0. 5 MADAMIRA: Released on May 16, 2016, version 2.1. 6 FARASA: Downloaded on May 27, 2016.
7 We would like to thank the Farasa team, specifically Kareem Darwish, Hamdy Mubarak, and Ahmed Abdelali, for helpful conversations. We have provided them with feedback and they have since released an updated version of Farasa.
Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for Arabic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11-16, San Diego, California.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization.
Kareem Darwish and Hamdy Mubarak. 2016. Farasa: A new fast and accurate Arabic word segmenter. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016).
Mona Diab, Nizar Habash, Owen Rambow, and Ryan Roth. 2013. LDC Arabic treebanks and associated corpora: Data divisions manual. arXiv preprint arXiv:1309.5652.
Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from United Nation documents. In Proceedings of the Language Resources and Evaluation Conference (LREC), pages 2868-2872.
David Graff, Mohamed Maamouri, Basma Bouziri, Sondos Krouna, Seth Kulick, and Tim Buckwalter. 2009. Standard Arabic Morphological Analyzer (SAMA) Version 3.1. Linguistic Data Consortium LDC2009E73.
Nizar Habash, Abdelhadi Soudi, and Tim Buckwalter. 2007. On Arabic Transliteration. In A. van den Bosch and A. Soudi, editors, Arabic Computational Morphology: Knowledge-based and Empirical Methods. Springer.
Nizar Habash, Owen Rambow, and Ryan Roth. 2009. MADA+TOKAN: A toolkit for Arabic tokenization, diacritization, morphological disambiguation, POS tagging, stemming and lemmatization. In Khalid Choukri and Bente Maegaard, editors, Proceedings of the Second International Conference on Arabic Language Resources and Tools. The MEDAR Consortium, April.
Nizar Habash, Ramy Eskander, and Abdelati Hawwari. 2012. A Morphological Analyzer for Egyptian Arabic. In Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology, pages 1-9, Montréal, Canada.
Nizar Y. Habash. 2010. Introduction to Arabic natural language processing, volume 3. Morgan & Claypool Publishers.
Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690-696, Sofia, Bulgaria, August.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Christopher Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Christopher Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July. Association for Computational Linguistics.
Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In NEMLAR Conference on Arabic Language Resources and Tools, pages 102-109, Cairo, Egypt.
Mohamed Maamouri, Ann Bies, Seth Kulick, Michael Ciul, Nizar Habash, and Ramy Eskander. 2014. Developing an Egyptian Arabic Treebank: Impact of Dialectal Morphology on Annotation and Tool Development. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014). European Language Resources Association (ELRA).
Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-52.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, PA.
Robert Parker, David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2009. Arabic Gigaword Fourth Edition. LDC catalog number LDC2009T30, ISBN 1-58563-532-4.
Arfath Pasha, Mohamed Al-Badrashiny, Ahmed El Kholy, Ramy Eskander, Mona Diab, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A Fast, Comprehensive Tool for Morphological Analysis and Disambiguation of Arabic. In Proceedings of LREC, Reykjavik, Iceland. |
8,774,560 | Applying Discourse Analysis and Data Mining Methods to Spoken OSCE Assessments | This paper looks at the transcribed data of patient-doctor consultations in an examination setting. The doctors are internationally qualified and enrolled in a bridging course as preparation for their Australian Medical Council examination. In this study, we attempt to ascertain if there are measurable linguistic features of the consultations, and to investigate whether there is any relevant information about the communicative styles of the qualifying doctors that may predict satisfactory or non-satisfactory examination outcomes. We have taken a discourse analysis approach in this study, where the core unit of analysis is a 'turn'. We approach this problem as a binary classification task and employ data mining methods to see whether the application of which to richly annotated dialogues can produce a system with an adequate predictive capacity. | [
18697469
] | Applying Discourse Analysis and Data Mining Methods to Spoken OSCE Assessments
Coling 2008, Manchester, August 2008
Meladel Mistica mmistica@csse.unimelb.edu.au
School of Languages, Cultures and Linguistics
The University of Melbourne CSSE
Monash University
Timothy Baldwin
School of Languages, Cultures and Linguistics
The University of Melbourne CSSE
Monash University
Marisa Cordella marisa.cordella@arts.monash.edu.au
School of Languages, Cultures and Linguistics
The University of Melbourne CSSE
Monash University
Simon Musgrave simon.musgrave@arts.monash.edu.au
School of Languages, Cultures and Linguistics
The University of Melbourne CSSE
Monash University
Applying Discourse Analysis and Data Mining Methods to Spoken OSCE Assessments
Proceedings of the 22nd International Conference on Computational Linguistics
The 22nd International Conference on Computational Linguistics (Coling 2008), Manchester, August 2008
This paper looks at the transcribed data of patient-doctor consultations in an examination setting. The doctors are internationally qualified and enrolled in a bridging course as preparation for their Australian Medical Council examination. In this study, we attempt to ascertain if there are measurable linguistic features of the consultations, and to investigate whether there is any relevant information about the communicative styles of the qualifying doctors that may predict satisfactory or non-satisfactory examination outcomes. We have taken a discourse analysis approach in this study, where the core unit of analysis is a 'turn'. We approach this problem as a binary classification task and employ data mining methods to see whether the application of which to richly annotated dialogues can produce a system with an adequate predictive capacity.
Introduction
This paper describes our experimentation with applying data mining methods to transcribed doctorpatient consultations. It is in essence a discovery project: we apply methods to a field and task that is not ordinarily associated with such approaches in order to ascertain whether this could make for a tractable learning task.
The task involves the extraction of discourse features from doctor-patient consultations performed by international medical graduates (IMGs), and what is known as 'simulated patients' (Vu et al., 1994), respectively. The IMGs are enrolled in a bridging course in Melbourne as preparation for their Australian Medical Council (AMC) examination, the successful completion of which is one of the pre-requisites to becoming a fully accredited practitioner in Australia. This partially replicates the AMC examination by studying in detail how IMGs perform two objective structured clinical examinations (OSCEs). See Section 2 for full details of the examination environment and participants involved.
The main questions raised when initiating this study were:
• How objective is the testing?
• What is the importance placed on language skills in OSCE environments?
• What makes for a successful OSCE?
In this research, we aim to build classifiers that make reasonable predictions of the data being tested, and possibly point us in the right direction with respect to the questions above. From the classifiers we build, we also hope to ascertain which of our features best predict a successful examination.
We organise the paper as follows. In Section 2, we briefly describe the examination environment and process, the marking scheme, and the participants involved in the testing of the IMGs. We also outline some of the issues that have arisen with regard to the current methods of IMG testing. In Section 3, we present details of the data used. Section 4 describes the features we develop for the task and discusses the reasoning behind the selection of features from a discourse analysis perspective. Section 5 discusses the results of the experiments, with further examination of the data. The last two sections, Sections 6 and 7, comprise a discussion of the results and concluding remarks.
Background
With Western nations becoming increasingly reliant on medical professionals trained overseas, there is, in turn, a growing need to develop a reliable means of objectively assessing IMGs. The shortage of medical doctors is a worldwide phenomenon currently affecting many Western societies such as the UK, Canada, US and New Zealand, which compete for the best medical practitioners available around the world. Australia is not immune to this global phenomenon, and in the last two decades the shortage of local medical practitioners in Australia has worsened (Birrell et al., 2004). Challenges to the healthcare system in the country are particularly evident in the areas of providing medical care for a growing elderly population and of servicing rural areas, where locally trained doctors do not feel particularly attracted to practise medicine (Han et al., 2006). Currently 35% of the rural medical workforce and 20% of the total national medical workforce consist of IMGs (Flynn, 2006). These figures may increase even further in some regions (Spike, 2006), as preparation of fully educated and trained local medical graduates takes up to thirteen years to complete.
There is considerable disparity among IMGs in their background training, clinical skills, understanding of the health system and communication skills (McGrath, 2004). In order to be registered to practice in Australia, IMGs must successfully complete the Australian Medical Council examinations and a period of supervised training. The medical knowledge of IMGs is assessed in two ways: by multiple choice examinations and by clinical examinations. This second form of examination consists of a series of simulated medical consultations in which a role-player takes the part of the patient, and the IMG's professional knowledge, lay-cultural knowledge, socio-cultural assumptions, institutional norms, and values and personal experiences are all in full display during the unfolding of the medical event (Roberts et al., 2003). Whenever cultural factors are not shared with their patients, the interpretative schema and therefore the comprehension of speech are affected by this lack of commonality in the participants' inferences and contextual cues (Gumperz, 1999).
Such effects are likely to cause miscommunication in medical visits and have a potential negative effect on patients' satisfaction in the consultation. Identification of the communication difficulties faced by IMGs can therefore inform modifications to the training provided to IMGs when they prepare for the Australian Medical Council examinations, as well as suggesting more nuanced and targeted procedures for assessing communicative skills within those examinations, all with the goal of working toward a better equipped medical workforce for the future. The use of automated analytic procedures to try to establish objective criteria for communicative success is an important step in this process.
Assessing language knowledge and competence quantitatively is not a novel concept in second language learning assessment. However, the application of data mining methods to automatically assess language proficiency in a discourse setting is novel. Levow et al. (1999) propose an architecture to automatically assess language proficiency. In their paper, they propose an architecture that employs data mining methods, but do not build classifiers over their spoken data to test this proposal. A closely related line of research is on the automatic classification of discourse elements to assess the quality of a written genre (Burstein et al., 2001). Like this work, it focuses on extracting features from the discourse as a whole. But unlike this study, the authors extract high level features, such as rhetorical structure, of written discourse. The study we present in this paper is rather unique in its approach to language assessment.
Data
The data is taken from transcribed recordings of examinations from students enrolled in a bridging course at Box Hill Hospital in Melbourne, Australia. Each candidate was video-recorded enacting medical consultation scenarios with what is known as a standardised or simulated patient (SP). This method of testing is known as an objective structured clinical examination (OSCE), which is an emulation of a doctor-patient consultation, much like a role-play setting.
In this study, the role of the patient (SP) is enacted by a qualified doctor who follows a script, and has well-defined ailment(s) and accompanying concerns. Even though the SP assumes the same ailment and disposition with all the candidates, the interaction between the candidate and the SP is uncued and free-form. They simply present the information in a standardised manner across all candidates and perform the role of the patient as felicitously as possible.
For this set of examinations there are 2 types of OSCE stations referred to as STD (sexually transmitted disease -genital herpes) and BC (bowel cancer). The SP for the STD station is played by a female doctor. The patient she plays has genital herpes and is concerned about how this will affect her chances of falling pregnant, and how this condition may also affect her baby. The SP for the BC station is played by an older male. A tumour is discovered in his bowel lining and he is reluctant to undergo any treatment because one of his good friends suffered a similar condition and his quality of life was severely diminished.
Even though the consultation is free to be negotiated between doctor (candidate) and patient (simulated patient), each of the OSCEs cannot exceed 8 minutes, and is terminated by the examiner if it does so.
Transcription
The recordings are transcribed in ELAN, a multimedia annotation tool developed at the Max Planck Institute, to help encode low-level linguistic features such as overlapping and timing information. The information and features extracted from the discourse are largely based on a 'turn'.
Here we consider a turn as being normally dominated by one speaker. It can be made up of multiple intonation units. When there is backchannelling, overlapping, or any interruption by the other participant, then the turn is encoded as ending at the end of the interrupted intonation unit. Otherwise, transition pauses commonly signal turn changes, unless latching occurs.
Given that the OSCE setting aims to emulate as close as possible a real medical consultation, this interaction, like all uncued spoken dialogues, also has evidence of complicated turn-taking negotiations, disfluent and unintelligible speech, interrupted speech, challenges for the floor, and the like, all of which must be encoded and noted in ELAN. Transcribing such data is not a trivial matter. In addition, transcribing the data in order to extract these features is also a demanding task in itself, which makes creating data for such tasks an involved process.
Disfluencies and repairs are encoded in a limited way, only by way of marking up truncated or unfinished words. We also do not take a fine-grained approach in encoding delaying strategies (Clark et al., 2002), that is we do not differentiate whether the uh or ah encoded represents lexical search, a wish to hold the floor, a wish to give up the floor or buying time to construct what to say next.
OSCE scoring
In an OSCE setting, candidates are given an overall pass or fail rating for each station by an OSCE examiner observing the interaction. This overall evaluation can be based on a number of performance criteria which test the candidates' medical, clinical and communication skills (Grand'Maison et al., 1992). The OSCE marking scheme used for this study consists of 5 assessable categories, as follows:
APPROACH: the ability of the candidate to communicate with the patient;
HISTORY: the ability of the candidate to collect medical history;
INTERPRETATION: how well does the candidate interpret his or her investigation in order to formulate an appropriate diagnosis;
MANAGEMENT: how well does the candidate formulate a management plan for the diagnosis; COUNSELLING: is the candidate able to give appropriate counselling to the patient.
The first category tests language knowledge and competency both at the lexical and discourse level, while the remaining four categories test medical knowledge and clinical competency.
Feature Engineering
We extracted a total of 38 features from the transcribed data. Some of these features are based on what is marked up according to the transcription scheme, while others are based on timing information or lexical information as encoded in ELAN. These include features such as signals for delaying speaking or hesitation (Clark et al., 2002), features of conversational dominance (Itakura, 2000), the manner in which turn-taking is negotiated (Sacks et al., 1974), temporal features such as pausing (ten Bosch et al., 2005), as well as our own features, which include 'lexical introduction' and 'lexical repeat'. In encoding features of conversational dominance, we focus on participatory dominance (Itakura, 2000), which looks at which speaker contributes most to the dialogue in terms of content.
Lexical introduction refers to a non-stop word that is introduced by the doctor (IMG) or the patient (SP), while lexical repeat encodes how many times a word introduced by the other interlocutor is repeated by the speaker.
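The two word-based features can be illustrated with the following sketch. The turn representation, speaker labels and toy stopword list are assumptions made for the example; the real transcripts additionally encode overlaps, pauses and truncated words.

```python
def lexical_features(turns, stopwords):
    """Count 'lexical introduction' and 'lexical repeat' events per speaker.

    `turns` is assumed to be a list of (speaker, tokens) pairs in
    temporal order, with speaker either "doctor" (IMG) or "patient" (SP).
    """
    seen = {"doctor": set(), "patient": set()}
    intro = {"doctor": 0, "patient": 0}
    repeat = {"doctor": 0, "patient": 0}
    for speaker, tokens in turns:
        other = "patient" if speaker == "doctor" else "doctor"
        for tok in tokens:
            tok = tok.lower()
            if tok in stopwords:
                continue
            if tok in seen[other]:
                repeat[speaker] += 1   # non-stop word first used by the other party
            elif tok not in seen[speaker]:
                intro[speaker] += 1    # new content word for this speaker
            seen[speaker].add(tok)
    return intro, repeat

turns = [("doctor", ["how", "is", "the", "rash"]),
         ("patient", ["the", "rash", "is", "itchy"])]
print(lexical_features(turns, stopwords={"how", "is", "the"}))
```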
Almost all of the features developed are continuous, based on timing information or word counts. The only binary feature used encodes whether the doctor initiates the consultation or not.
As mentioned in the previous section, the features developed were largely based on turns. This is to capture, along with other features such as overlapping and pauses, the interactional aspect of the communication. For example, conversational cooperation and speaker reassurance can be captured with these features. Another aspect to the development of these features, particularly for the lexical-based features, is whether the IMG has a suitable vocabulary and whether they employ it appropriately in the interaction.
We arrive at 11 feature sets from which we build our classifiers, as described in Table 1.
Not all features are exclusive to any one feature set, that is, it is possible for a single feature to belong to a number of feature sets.
The sets were designed to isolate possible characteristics of not only the discourse as a whole, but how the participants negotiated their interaction. These features sets were developed from observing each of the consultations with the expectation that these were salient and determining features of a successful examination.
Experiments
There was a total of 11 OSCE candidates, all of whom performed an STD and a BC station, giving us in total 22 instances for this binary classification task to predict a pass or fail examination result. Of the 22 instances, we had 5 failures and 17 passes. Given the small number of instances, we maximised our dataset by employing 10-fold stratified cross-validation, as well as leave-one-out cross-validation which uses all but one instance in training and the held-out instance for testing. The baseline system we use for comparison is zero-R, or majority vote. For our supervised classifier, we employ a lazy learner in the form of the IB1 algorithm implemented in WEKA.
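A sketch of this setup using scikit-learn is given below. The original study used WEKA; `KNeighborsClassifier` with k=1 is close in spirit to IB1 but not identical, and the random feature matrix merely stands in for the real 22 x 38 feature table.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# X: one row of discourse features per OSCE consultation; y: 1 = pass, 0 = fail.
rng = np.random.RandomState(0)
X = rng.rand(22, 38)                      # placeholder for the real features
y = np.array([1] * 17 + [0] * 5)

baseline = DummyClassifier(strategy="most_frequent")   # zero-R / majority vote
knn = KNeighborsClassifier(n_neighbors=1)              # a 1-NN lazy learner

for name, clf in [("baseline", baseline), ("1-NN", knn)]:
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(name, round(acc, 3))
```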
Results for Feature Sets
Our initial classifiers held some promise. The classifier built from all of the features was equivalent to the baseline system, and the combination of the word-based features surpassed the baseline's results, as shown in Table 2.
To evaluate our system, we employ simple classification accuracy, in addition to precision, recall and F-score. Classification accuracy is the proportion of correct predictions by the classifier, irrespective of class. Precision gauges how successful the pass predictions of a given classifier are, while recall gives us an indication of how successful a given classifier is at identifying the candidates who actually passed. Finally, F-score is a composite of precision and recall, and gives us an overall performance rating relative to passed candidates.
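Treating 'pass' as the positive class, the reported figures correspond to the usual definitions below (TP, FP, TN, FN are true/false positives and negatives); the F-score is assumed here to be the balanced (F1) variant, which the text does not state explicitly.

```latex
\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
\]
```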
The least successful classifier was built on the feature set based on timing, which contains information such as the overall length of the dialogue, the overall length of transition pauses, in-turn pauses and other time-based features. This was most surprising because as a general observation, candidates who allowed extended pauses and uncomfortable silences were those who seemed to perform poorly, and those who did not leave too many silences, and could maintain the flow of the dialogue, seemed to perform well. Given the small number of training instances each classifier is based on, these first results were somewhat encouraging. With respect to the baseline, the overall performance of two of the systems equalled or surpassed the baseline in terms of F-score. Most of the classifiers performed well in terms of precision but less well in terms of recall, i.e. when the classifiers predicted a pass they were generally correct, but there were significant numbers of candidates who were predicted to have failed but passed in practice.
Data Introspection Retrospectively
Although the results show promise, it was expected that more of the feature sets would return more favourable results. The possible reasons why the time-based features, and many of the other feature sets developed, did not perform as well as expected may have been because the features used in building the classifiers could have been combined in a better way, or because the data itself had too many anomalies or was too disparate. We would expect that extra data could iron out such anomalies, but developing additional data is expensive and more recordings are not always available. The advantage of having a small dataset is that we are able to do fine-grained annotation of the data, but the obvious disadvantage is that we cannot easily generate extra amounts of training data.
One very noticeable feature of the OSCE stations was that the STD SP had a very different communicative style to that of the the BC SP. Based on this observation we conducted tests given the hypothesis that the possible bias in the data could have stemmed from having two very different testing approaches from the two SPs. In general, the BC SP was more leading and in a sense more forgiving with the candidates. In contrast to this, the STD SP tended to be more felicitous in her role as a patient, allowing awkward silences and not prompting the candidates for further exploration.
We conduct the Mann-Whitney test, a rank sum test, over the data in order to diagnose whether the poor results were due to the distribution of the data or whether the classifiers built with the selected features were simply poor predictors. The Mann-Whitney test ascertains whether there is a difference in the population mean of the two samples given, without making any assumptions about the distribution of the data.
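A sketch of this test using SciPy is shown below. The z-score is derived from the U statistic with the standard normal approximation and no tie correction, so it may differ slightly from the values reported in Tables 3 and 4; the example scores are invented for illustration only.

```python
from math import sqrt
from scipy.stats import mannwhitneyu

def mann_whitney_z(sample_a, sample_b):
    """Mann-Whitney U test plus a normal-approximation z-score."""
    u, p = mannwhitneyu(sample_a, sample_b, alternative="two-sided")
    n1, n2 = len(sample_a), len(sample_b)
    mu = n1 * n2 / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mu) / sigma, p

# e.g. hypothetical APPROACH marks for the BC stations vs. the STD stations
z, p = mann_whitney_z([4, 5, 4, 5, 3, 4, 5, 4, 5, 4, 4],
                      [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 3])
print(round(z, 2), round(p, 3))
```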
We sub-sample the data in two ways in examining its homogeneity: (a) FAIL juxtaposed with PASS candidates; and (b) BC juxtaposed with STD stations. Test (a) essentially tests which examinable category contributes the most to a pass or fail outcome, whilst test (b) examines whether there is an inherent difference in the way the testing was conducted between the BC and STD stations.
Table 4: Mann-Whitney z-score for failed and passed samples
BC vs. STD
We use the ranking from the 5 assessable categories outlined in Section 3 and obtain the Mann-Whitney z-score for each category. The z-score gives us an indication of how disparate the two separated datasets, BC and STD, are. The further away from 0 the z-score is, the greater the evidence that BC and STD data are not from the same population, and should be treated as such. The results of this test, as seen in Table 3, show that these two groups differ quite markedly: the candidates were consistently marked differently for all assessable categories except HISTORY. This is a striking peculiarity because each candidate was tested in both the STD and BC stations.
Based on the above, we can posit that the distinct testing styles of the STD and BC SPs were the reason for our original lacklustre results, and that the two data samples need to be treated separately for the classifiers to perform consistently.
FAIL vs. PASS
In addition to the BC vs. STD test, we also test how the failing candidates differ from the passing candidates across the evaluation criteria.
The main idea behind this test is to see which of the assessable categories contributed the most in the overall outcome of the examination. For this test, we would not expect the absolute z-score of any of the assessment components to exceed the absolute z-score of the OVERALL category given that it is the cumulative scores of all categories.
The results in Table 4 suggest that APPROACH correlates most highly with the pass/fail divide in the OSCE assessments, followed by HISTORY, then INTERPRETATION and MANAGEMENT, and finally COUNSELLING. Recall that APPROACH is the component that assesses language and communication skills. In particular, it assesses the style and appropriateness of the way candidates convey information, from lexical choice to displaying empathy through communication style. Given that APPROACH correlates most strongly with the assessment result, the decision to focus our feature engineering efforts on linguistic aspects of the doctor-patient interaction would appear justified.
Results for STD & BC Data
Given the results from the Mann-Whitney tests reported in the previous section, we separate the data into two lots: those from the STD station, and those from the BC station.
Even though there were very few instances in the original dataset, we aim to see in these experiments whether this separation improves the performance of the classifiers. We build classifiers over each dataset using the same features as before.
The results of the tests performed over the separated datasets, as shown in Table 5, show a big improvement over the baseline for STD, while the BC dataset is more problematic.
In the STD group, we see that four feature sets, all, turns, wordBased and patient equal or surpass the baseline F-score.
In contrast to this, upon examination of the performance of the classifiers built over the BC dataset, we do not observe any improvements over the baseline and the results are markedly worse than those for the combined dataset. Having said this, when we combine the outputs of the two component classifiers, the F-score for all features is 0.882, an improvement over the original combined system.
Discussion
The OSCE assessment does not merely examine the language skills of the candidates, but it also assesses the efficacy of their communication skills in conveying correct and accurate medical information within a clinical setting. It can be seen from Table 4 that there is a high correlation between the overall pass or fail and the assessable category APPROACH.
The examiners' subjectivity of overall performance is minimised by the highly structured examination setup and well-defined assessment criteria. However, as shown in Table 3, the communicative style of the SP is a contributing factor to the perception of successful clinical and communication skills. The Mann-Whitney tests suggest that an SP's approach and their apparent satisfaction during the clinical encounter can affect the judgement of the examiner.
Additional inspection of the data revealed that the assessment criteria which focused on language and communication skills correlated highly with an overall pass grade, moreso than the other criteria. This seems to suggest that more emphasis should be placed on language skills and communication style in the assessment of the candidates.
Assessing language competency is no trivial matter, and capturing the linguistic features of dialogues in an attempt to define competence, as we have done here, is a demanding task in itself. Although many of our features were focused on turntaking, speaker response and interaction, we did not develop features that encompass the information structure of the communicative event.
It is assumed that miscommunication between non-native and native speakers of a language is due to a lack of language knowledge pertaining to syntax, morphology or lexical semantics. However, many of these communication difficulties arise not because of this lack of grammatical knowledge, but through a difference in discourse styles or information structure as governed by different cultures (Wiberg, 2003; Li, 1999).
Given that the word-based feature sets were the most successful predictors of an OSCE outcome, future work of this kind could make use of medical-based lexicons to gauge whether technical or non-technical word usage in such environments is judged favourably. In addition, further work should be done to test the hypothesis that information structure or rhetorical structure does impact on overall perception of a successful communication, such as a variation on the methods employed by Burstein et al. (2001).
One obvious improvement to this study would be to reduce the expense in producing the annotated data. Future work could also be done in automatically extracting features from non-transcribed data, such as timing information based on pause length and the turn length of each speaker.
Conclusions
In this research, we have built classifiers over transcribed doctor-patient consultations in an attempt to predict OSCE outcomes. We achieved encouraging results based on a range of lexical and discourse-oriented features.
In our first experiments, we combined the data from two discrete stations in an attempt to maximise training data, and achieved modest results. Subsequent analysis with the Mann-Whitney test indicated both that success in the APPROACH category correlates strongly with an overall successful OSCE, and that the data for the two stations is markedly different in nature. Based on this finding, we conduct tests over the data for the individual stations with noticeable improvements to the results.
The results of this exploratory study have been quite encouraging, given the novel domain and limited data. We have shown that a data mining approach to OSCE assessment is feasible, which we hope will open the way to increased interest in automated medical assessment based on linguistic analysis.
Table 2: Classification results for STD and BC

Table 3: Mann-Whitney z-score for BC and STD samples (OVERALL is the cumulative total of all 5 categories)

Category | OVERALL | APPROACH | HISTORY | INTERPRETATION | MANAGEMENT | COUNSELLING
z-score  | -3.29   | -3.13    | -2.43   | -2.31          | -2.31      | -1.57

Table 5: Results for separated BC and STD datasets (leave-one-out)
Birrell, Bob, Lesleyanne Hawthorne. 2004. Medicare Plus and overseas trained doctors. People and Place, 12(2):83-99.
ten Bosch, Louis, Nelleke Oostdijk, Lou Boves. 2005. On temporal aspects of turn taking in conversational dialogues. Speech Communication, 47(2005):80-86.
Burstein, Jill, Daniel Marcu, Slava Andreyev, Martin Chodorow. 2001. Towards Automatic Classification of Discourse Elements in Essays. ACL, 90-97.
Clark, Herbert H., Jean E. Fox Tree. 2002. Using uh and um in spontaneous speaking. Cognition, 84(2002):73-111.
Flynn, Joanna. 2006. Medical Release. Australian Medical Council, 17 (August 2006).
Grand'Maison, Paul, Joëlle Lescop, Paul Rainsberry, Carlos A. Brailovsky. 1992. Large-scale use of an objective, structured clinical examination for licensing family physicians. Canadian Medical Association, 146(10):1735-1740.
Gumperz, John. 1999. On Interactional Sociolinguistic Method. In Talk, Work and Institutional Order. Discourse in Medical, Mediation and Management Settings, S. Sarangi and C. Roberts (eds), 453-471.
Han, Gil-Soo, John S. Humphreys. 2006. Integration and retention of international medical graduates in rural communities. A typological analysis. The Australian Sociological Association, 42(2):189-207.
Itakura, Hiroko. 2000. Describing conversational dominance. Journal of Pragmatics, 33(2001):1859-1880.
Levow, Gina-Anne, Mari Broman Olsen. 1999. Modeling the language assessment process and result: Proposed architecture for an automatic oral proficiency assessment. Workshop On Computer Mediated Language Assessment And Evaluation In Natural Language Processing.
Li, Han Zao. 1999. Communication Information in Conversations: A Cross-cultural Comparison. International Journal of Intercultural Relations, 23(3):387-409.
McGrath, Barry. 2004. Overseas-trained doctors. Integration of overseas-trained doctors in the Australian medical workforce. The Medical Journal of Australia, 181(11/12):640-642.
Roberts, Celia, Val Wass, Roger Jones, Srikant Sarangi, Annie Gillett. 2003. A discourse analysis study of 'good' and 'poor' communication in an OSCE: a proposed new framework for teaching students. Medical Education, 50:192-201.
Sacks, Harvey, Emanuel A. Schegloff, Gail Jefferson. 1974. A Simplest Systematics for the Organization of Turn-Taking for Conversation. Language, 50(4):696-735.
Spike, Neil. 2006. International Medical Graduates: The Australian perspective. Acad Med, 81:842-846.
Vu, Nu Viet, Howard S. Barrows. 1994. Use of Standardized Patients in Clinical Assessments: Recent Developments and Measurement Findings. Educational Researcher, 23(3):23-30.
Wiberg, Eva. 2003. Interactional context in L2 dialogues. Journal of Pragmatics, 35(2003):389-407. |
958,094 | Demo of iMAG possibilities: MT--postediting, translation quality evaluation, parallel corpus production | An interactive Multilingual Access Gateway (iMAG) dedicated to a web site S (iMAG-S) is a good tool to make S accessible in many languages immediately and without editorial responsibility. Visitors of S as well as paid or unpaid post-editors and moderators contribute to the continuous and incremental improvement of the most important textual segments, and eventually of all. Pre-translations are produced by one or more free MT systems. Continuous use since 2008 on many web sites and for several access languages shows that a quality comparable to that of a first draft by junior professional translators is obtained in about 40% of the (human) time, sometimes less. There are two interesting side effects obtainable without any added cost: iMAGs can be used to produce high-quality parallel corpora and to set up a permanent task-based evaluation of multiple MT systems. We will demonstrate (1) the multilingual access to a web site, with online postediting of MT results "à la Google", (2) postediting in "advanced mode", using SECTra_w as a back-end, enabling online comparison of MT systems, (3) task-oriented built-in evaluation (postediting time), and (4) application to a large web site to get a trilingual parallel corpus where each segment has a reliability level and a quality score. KEYWORDS: Online post-editing, interactive multilingual access gateway, free MT evaluation TITLE AND ABSTRACT IN CHINESE iMAG功能展示 : 机器翻译 后 编辑, 翻 译质量 评 估,平行语 料生成 简述 一个iMAG (interactive Multilingual Access Gateway, 多语言交互式网关) 是很好的面向一个 网站的工具,它可以提供对该网站的多语言访问,并且无需任何编辑。通过iMAG访问该 网站的用户,可以作为有偿或无偿的后编辑人员或是管理者,来对该网站的文本段进行可 持续的、增量的改进。该网站的预翻译是由一个或多个免费的MT系统提供的。自从2008 年以来,通过iMAG对多个网站进行多语言的持续访问结果表明,对于相对翻译质量,首 轮由初级翻译者提供的翻译,使用iMAG只占纯人工翻译40%的时间,或更少。iMAG有两 个非常吸引人的方面并且无需额外成本:iMAG能用于产生高质量的平行语料,而且可以 通过多个MT系统对其进行长久性的评估。我们将要展示:(1) 多语言访问目标网站,并对 Google提供的预翻译进行在线后编辑,(2) 后编辑的高级模式,SECTra作为后台模块,可 实现MT系统的在线比较,(3) 面向任务的评估 (后编辑时间),和 (4) 应用到大型网站, 可获得三种语言的平行语料,每个文字段都拥有可靠性和质量的评分。 关键词:在线后编辑,多语言交互网关,免费MT评估 475 | [] | Demo of iMAG possibilities: MT--postediting, translation quality evaluation, parallel corpus production
December 2012
Lingxiao Wang (lingxiao.wang@imag.fr)
Ying Zhang (ying.zhang@imag.fr)
Christian Boitet (christian.boitet@imag.fr)
Valerie Bellynck (valerie.bellynck@imag.fr)
Demo of iMAG possibilities: MT--postediting, translation quality evaluation, parallel corpus production
Proceedings of COLING 2012: Demonstration Papers
COLING 2012: Demonstration Papers, Mumbai, December 2012
An interactive Multilingual Access Gateway (iMAG) dedicated to a web site S (iMAG-S) is a good tool to make S accessible in many languages immediately and without editorial responsibility. Visitors of S as well as paid or unpaid post-editors and moderators contribute to the continuous and incremental improvement of the most important textual segments, and eventually of all. Pre-translations are produced by one or more free MT systems. Continuous use since 2008 on many web sites and for several access languages shows that a quality comparable to that of a first draft by junior professional translators is obtained in about 40% of the (human) time, sometimes less. There are two interesting side effects obtainable without any added cost: iMAGs can be used to produce high-quality parallel corpora and to set up a permanent task-based evaluation of multiple MT systems. We will demonstrate (1) the multilingual access to a web site, with online postediting of MT results "à la Google", (2) postediting in "advanced mode", using SECTra_w as a back-end, enabling online comparison of MT systems, (3) task-oriented built-in evaluation (postediting time), and (4) application to a large web site to get a trilingual parallel corpus where each segment has a reliability level and a quality score. KEYWORDS: Online post-editing, interactive multilingual access gateway, free MT evaluation TITLE AND ABSTRACT IN CHINESE iMAG功能展示 : 机器翻译 后 编辑, 翻 译质量 评 估,平行语 料生成 简述 一个iMAG (interactive Multilingual Access Gateway, 多语言交互式网关) 是很好的面向一个 网站的工具,它可以提供对该网站的多语言访问,并且无需任何编辑。通过iMAG访问该 网站的用户,可以作为有偿或无偿的后编辑人员或是管理者,来对该网站的文本段进行可 持续的、增量的改进。该网站的预翻译是由一个或多个免费的MT系统提供的。自从2008 年以来,通过iMAG对多个网站进行多语言的持续访问结果表明,对于相对翻译质量,首 轮由初级翻译者提供的翻译,使用iMAG只占纯人工翻译40%的时间,或更少。iMAG有两 个非常吸引人的方面并且无需额外成本:iMAG能用于产生高质量的平行语料,而且可以 通过多个MT系统对其进行长久性的评估。我们将要展示:(1) 多语言访问目标网站,并对 Google提供的预翻译进行在线后编辑,(2) 后编辑的高级模式,SECTra作为后台模块,可 实现MT系统的在线比较,(3) 面向任务的评估 (后编辑时间),和 (4) 应用到大型网站, 可获得三种语言的平行语料,每个文字段都拥有可靠性和质量的评分。 关键词:在线后编辑,多语言交互网关,免费MT评估 475
Introduction
An iMAG is a website used as a gateway allowing a multilingual access to one (in general) or several elected websites. The name "iMAG" stands for interactive Multilingual Access Gateway.
At first sight, an iMAG is similar to existing well-known translation gateways such as Google Translate, Systran, Reverso, etc. The first essential difference is that an iMAG is only used for elected websites. This allows the iMAG to manage the multilingualization of certain websites better than existing translation gateways. With an iMAG, we can enhance the quality of translated pages, starting from the raw output of general-purpose and free MT servers, which is usually of low quality and often not understandable unless one already understands enough of the source language.
An iMAG is dedicated to an elected website, or rather to the elected sublanguage defined by one or more URLs and their textual content. It contains a translation memory (TM) dedicated to the elected sublanguage. Segments are pre-translated not by a unique MT system, but by a (selectable) set of MT systems. Systran and Google are mainly used now, but specialized systems developed from the post-edited part of the TM, and based on Moses, will also be used in the future.
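As a rough illustration (not code from the system itself; the function and back-end names below are ours), the pre-translation step could be sketched in Python as follows: each source segment is sent to several MT back-ends and the raw candidates are stored per segment in the TM, so that post-editors later start from the best available draft.

```python
# Minimal sketch of segment pre-translation into a translation memory (TM).
# The MT back-ends are plain callables here; in the real system they would be
# calls to external services such as Google Translate or Systran.

def pretranslate(segments, mt_systems, target_lang):
    """Return a TM mapping each source segment to its raw MT outputs."""
    tm = {}
    for seg in segments:
        # One candidate per MT system; post-editors later refine the best one.
        tm[seg] = {name: translate(seg, target_lang)
                   for name, translate in mt_systems.items()}
    return tm

# Hypothetical usage with dummy "MT systems":
mt_systems = {
    "google": lambda s, lang: f"[google-{lang}] {s}",
    "systran": lambda s, lang: f"[systran-{lang}] {s}",
}
tm = pretranslate(["Welcome to the LIG laboratory."], mt_systems, "zh")
print(tm)
```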
iMAG also contains a module SECTra (Système d'Exploitation de Corpus de Traductions sur le web), in English, "Contributive Operating System of Translation Corpora on the Web". SECTra is a Web-based system offering several services, such as supporting MT evaluation campaigns and online post-editing of MT results, to produce reference translations adapted to classical MT systems not built by machine learning from a parallel corpus.
2 Manipulation and pre-translation in the iMAG page
2.1 Access multilingual website by iMAG
Figure 1 shows the iMAG access interface to the LIG (Grenoble computer science laboratory) website. We choose the target language (Chinese) in the pull-down menu; the page is then displayed in Chinese. One or more free MT servers, in this case Google Translate and Systran, produce the initial translations.
Post-edition and evaluation in web page context
In iMAG, the user can also improve the translation results. As shown in Figure 2, when the user moves the mouse over a translation unit (for example, a word or a title), the system automatically pops up a small dialog box. This dialog box displays the source-language content in a blue font, and the user can post-edit and rate the translation results.
Figure 2. Optimize translation results in iMAG interface
If the user is anonymous or non-privileged, the edited translation and ratings are displayed only to him or her, and he or she cannot enter the advanced mode. If the user has privileges, the edited translation and ratings are stored in the system database and are also displayed to the public. If the database contains multiple edited translations, the system selects the translation with the highest score and the most recent timestamp. Users who have the appropriate permissions can switch to "Advanced mode" and arrive in SECTra; this is described in chapter 3.
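A minimal sketch of this selection logic is shown below (the field names are our own assumptions, not the system's actual schema): only stored, privileged post-editions compete, and ties on score are broken by recency.

```python
# Sketch of how the displayed translation could be chosen when several
# post-editions exist for one segment. Field names are illustrative only.

def select_translation(post_editions):
    """Pick the post-edition with the highest score, breaking ties by recency."""
    stored = [p for p in post_editions if p["privileged"]]  # anonymous edits stay local
    if not stored:
        return None
    return max(stored, key=lambda p: (p["score"], p["timestamp"]))

candidates = [
    {"text": "欢迎", "score": 18, "timestamp": 1350000000, "privileged": True},
    {"text": "欢迎光临", "score": 18, "timestamp": 1360000000, "privileged": True},
    {"text": "welcome!", "score": 20, "timestamp": 1340000000, "privileged": False},
]
print(select_translation(candidates)["text"])  # -> "欢迎光临"
```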
Visualization of the translation quality
In the translated page, users can quickly and clearly view the translation quality of web pages using the "reliability" mode. As shown in Figure 3, a coloured bracket encloses each translation unit. If the user post-edits in this page, his or her result is displayed directly on the page, and at the same time the bracket's colour changes according to the user's permissions. Green brackets indicate that the translation has been edited and saved by a privileged user. Orange means the translation has been edited and saved locally by an anonymous user (visible only to anonymous users). Red indicates the translation has never been edited. If the user clicks on the Original button, the left side of the browser displays the translation results and the right side displays the source-language page.
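The colour coding reduces to a simple mapping from segment status to bracket colour, as in the following sketch (the status values are illustrative, not the system's actual codes):

```python
# Sketch of the "reliability" colour coding described above.

def bracket_colour(status):
    if status == "edited_by_privileged_user":
        return "green"
    if status == "edited_locally_by_anonymous_user":
        return "orange"
    return "red"  # never edited

for s in ("edited_by_privileged_user", "edited_locally_by_anonymous_user", "untouched"):
    print(s, "->", bracket_colour(s))
```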
Interaction between iMAG and SECTra
Another possibility is to use the advanced mode (see the chapter 2.2), which consists in postediting a pseudo-document that is in fact a part of the translation memory.
Remark: content of chapter 2 will be on display in the video 1.
Post-edition of TMs in "Advanced Mode" (SECTra)
In order to obtain high-quality translation results, post-editing is a very important step, but also the most time-consuming one. In "Advanced Mode" (SECTra), we can quickly obtain high-quality translation results at minimal cost. Figure 4 shows the interface of the SECTra post-editing features.
Proposition of translation in SECTra
When creating an iMAG, the user can select different machine translation systems for his or her website; new machine translation systems can also be added later in SECTra. In the post-editing interface, SECTra allows us to operate both on machine translation results (such as Google Translate and Systran translations) and on the translation memory database.
• For machine translation: clear translation result, re-call the machine translation system, and use the translation result.
• For translation memory: delete translation memory, use translation memory
Comparison between current translation and MT/TM results
As shown in Figure 5, users can compare the distance between the current translation and the translation memory, or between the current translation and the machine translation output.
Figure 5: Comparison between current translation and MT/TM results
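The paper does not specify the exact distance measure; one plausible choice is a character-level edit distance, sketched below purely for illustration.

```python
# Character-level Levenshtein distance between two strings.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(edit_distance("post-edited translation", "machine translation"))
```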
SECTra can also provide a reference language, which helps users to better post-edit, as shown in Figure 6.
Post-edition and evaluation
Users can also vote on the number of stars and the rating value for these post-editions. The number of stars reflects the post-editor's command of the language pair (the source and target languages) for the current post-edition; the rating value is the satisfaction level for the current post-edition.
During post-edition, the system automatically records the time spent and the number of segments. As the first two authors are Chinese, they have experimented with French-Chinese (on a town web site) and with Chinese-English (on a Chinese web site dedicated to a famous Chinese NLP scientist). Here are the results.
Visualization of post-edition in iMAG Web pages
Post-edition results will be displayed directly on the iMAG Web page, and bracket's color will be changed based on user permissions (see the chapter 2.3).
Remark: content of chapter 3 will be on display in the video 2.
To obtain a high-quality parallel corpus
Filtering and selection of segments
On the SECTra platform, the user can export the corpus of the TM. At export time, we can filter segments by stars and scores. In Figure 7, for example, the source language is French and the target language is Chinese, and we can select a subset of the segments for export.
Production of parallel corpus and download
For the selected parallel corpus, the system generates two txt files, which users may download; the result is shown in Figure 9.
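A minimal sketch of this export step is shown below; the thresholds, record fields and file names are assumptions for illustration, not the system's actual interface.

```python
# Keep only segments whose post-editions meet the chosen star/score thresholds,
# then write two aligned text files (one per language).

def export_parallel_corpus(tm_records, min_stars, min_score,
                           src_path="corpus.fr.txt", tgt_path="corpus.zh.txt"):
    selected = [r for r in tm_records
                if r["stars"] >= min_stars and r["score"] >= min_score]
    with open(src_path, "w", encoding="utf-8") as src, \
         open(tgt_path, "w", encoding="utf-8") as tgt:
        for r in selected:
            src.write(r["source"] + "\n")
            tgt.write(r["target"] + "\n")
    return len(selected)

records = [{"source": "Bonjour", "target": "你好", "stars": 4, "score": 15}]
print(export_parallel_corpus(records, min_stars=3, min_score=10), "segments exported")
```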
Conclusion and perspectives
Continuous use since 2008 on many web sites and for several access languages shows that a quality comparable to that of a first draft by junior professional translators is obtained in about 40% of the (human) time, sometimes less.
In the near future, the system will integrate Moses-based systems, built from the post-edited part of the TM, to provide more accurate MT results.
Figure 1: Access website of Grenoble computer science laboratory by iMAG
Figure 3: iMAG page display in "reliability" and "original" modes
Figure 4: Advanced mode (SECTra screen)
Figure 6: Interface with reference language
Figure 7: Interface of export corpus
Figure 8: Corpus
Figure 9: Downloaded corpus files
Remark: content of chapter 4 will be on display in the video 3.
A Web-oriented System to Manage the Translation of an Online Encyclopedia Using Classical MT and Deconversion from UNL. Proc. CCC 2009.
(2010) The iMAG concept: multilingual access gateway to an elected Web site with incremental quality increase through collaborative post-edition of MT pretranslations.
(2008) An Online Collaborative System for Evaluating, Post-editing and Presenting MT Translation Corpora. Proc. LREC-08, Marrakech, 27-31/5/08, ELRA/ELDA, ed., 8 p.
Huynh C.-P., Blanchon H. & Nguyen H.-T. (2008) A Web-oriented System to Manage the Translation of an Online Encyclopedia Using Classical MT and Deconversion from UNL. Proc. CI-2008 (WS of ASWC-08), Bangkok, 9/12/08, ACL, ed., 8 p. |
17,310,394 | I Can Sense It: a comprehensive online system for WSD | We have developed an online interface for running all the current state-of-the-art algorithms for WSD. This is motivated by the fact that exhaustive comparison of a new Word Sense Disambiguation (WSD) algorithm with existing state-of-the-art algorithms is a tedious task. This impediment is due to one of the following reasons: (1) the source code of the earlier approach is not available and there is a considerable overhead in implementing it or (2) the source code/binary is available but there is some overhead in using it due to system requirements, portability issues, customization issues and software dependencies. A simple tool which has no overhead for the user and has minimal system requirements would greatly benefit the researchers. Our system currently supports 3 languages, viz., English, Hindi and Marathi, and requires only a web-browser to run. To demonstrate the usability of our system, we compare the performance of current state-of-the-art algorithms on 3 publicly available datasets. | [
12803768,
2679145,
11174540
] | I Can Sense It: a comprehensive online system for WSD
December 2012
Salil Joshi
Mitesh M. Khapra (mikhapra@in.ibm.com)
Pushpak Bhattacharyya
IBM Research India, Bangalore, India
IBM Research India, Bangalore, India
CSE Department, IIT Bombay, Mumbai, India
I Can Sense It: a comprehensive online system for WSD
Proceedings of COLING 2012: Demonstration Papers
COLING 2012: Demonstration Papers, Mumbai, December 2012
We have developed an online interface for running all the current state-of-the-art algorithms for WSD. This is motivated by the fact that exhaustive comparison of a new Word Sense Disambiguation (WSD) algorithm with existing state-of-the-art algorithms is a tedious task. This impediment is due to one of the following reasons: (1) the source code of the earlier approach is not available and there is a considerable overhead in implementing it or (2) the source code/binary is available but there is some overhead in using it due to system requirements, portability issues, customization issues and software dependencies. A simple tool which has no overhead for the user and has minimal system requirements would greatly benefit the researchers. Our system currently supports 3 languages, viz., English, Hindi and Marathi, and requires only a web-browser to run. To demonstrate the usability of our system, we compare the performance of current state-of-the-art algorithms on 3 publicly available datasets.
Introduction
Several WSD algorithms have been proposed in the past ranging from knowledge based to unsupervised to supervised methods. These algorithms have their own merits and demerits, and hence it is desirable to compare a new algorithm with all of these to put the results in the right perspective. Even when the implementations of these algorithms are publicly available, running them can be cumbersome as it involves the following tedious steps:
1. Resolving portability issues of operating systems (e.g., linux/windows)
2. Adhering to specific input/output formats
3. Installation issues involving software dependencies
4. Run-time issues pertaining to system requirements
The above process needs to be repeated for every algorithm that the user wants to compare her/his system with. Further, in many cases, there is no publicly available implementation of the algorithm, in which case the user has to bear the significant overhead of re-implementing these algorithms.
To circumvent the above problems and to ensure ease of use, we have developed an online system, which allows the user to run several state-of-the-art algorithms. There is no overhead for the user: all (s)he needs is a web browser and the input file, which may be sense tagged. Further, we also make provision for the developers of new algorithms to integrate their algorithm in our system. This can be done by implementing a java interface exposed by us and uploading the class file on our web-page.
Some of the important aspects of our system are as follows:
1. Collection of several approaches - Users can obtain results for state-of-the-art approaches like IMS (Zhong and Ng, 2010), PPR (Agirre et al., 2009), knowledge based approaches (Patwardhan et al., 2005) etc., for an easy comparison of all approaches on a single dataset.
2. Parallel execution of several algorithms - The user can choose to run multiple algorithms in parallel, over the same dataset. The associated overhead of scheduling jobs and managing system resources is handled by the server and the user is exempted of these hassles.
3. Minimum supervision - After submitting his/her request, the end user can continue with their work without having to constantly monitor the task. Our interface notifies the user when the results are available. Once the user is notified, (s)he needs to download a single zip file which contains all the output files.
4. User Friendly - Currently available systems are mostly without a Graphical User Interface (GUI). Our interface is visually aesthetic, can be viewed on different screen resolutions, and is very easy to use.
5. Easy interpretability of the output - Our interface provides an option of viewing the output files online where the disambiguated words are shown along with the gloss of the sense present in the wordnets. Most of the available tools only generate the results with the sense offsets, which are machine readable, but make the manual analysis difficult.
6. Evaluation using standard metrics - Our system evaluates all the algorithms selected by the user using standard evaluation metrics (precision, recall and F-score). This allows the user to easily compare the performance of the selected algorithms.
7. Unifies the input/output formats - Existing systems use non-standard input/output formats, which results in an additional burden of converting the dataset into the required formats. Our system supports different types of input file formats so that fewer conversions are required while providing the inputs. Further, the outputs are provided in a format compatible with the UKB format, which can be easily parsed.
8. Plug-and-play design -If an implementation of a new approach is provided, it can be easily plugged into the system. Apart from exposing his/her algorithm to the public, and thereby increasing its visibility/use, this also allows the user to outsource her/his own computational load.
In this system demonstration, we explain the system which powers our interface. The paper is organized as follows: We describe the existing, publicly available systems in section 2. In section 3 we provide technical details about our system. Section 4 summarizes the evaluation results on 3 standard datasets. Section 5 concludes the paper presenting the salient points of our system and some future enhancements for our system.
Related Work
There are a few algorithms for which the implementation is publicly available. These include UKB (Agirre et al., 2009), IMS (Zhong and Ng, 2010), SenseLearner (Mihalcea and Csomai, 2005) and SenseRelate (Patwardhan et al., 2005). However, most of these have one or more of the overheads listed above. For example, UKB is currently available only for linux platforms. Further, the user needs to download and install these systems separately and run them on her/his machine which increases the computational cost. SenseLearner has an online interface, but in contrast to our system, it provides only a single algorithm and does not enable the user to compare the performance of different algorithms.
Our system is a one-stop-shop for comparing several algorithms (including UKB, IMS and SenseRelate) with minimum computational and manual overhead for the user. We would like to mention that internally our system uses the implementations provided by UKB, IMS and SenseRelate and hence it would provide the same results as obtained by independently downloading and using these systems. Apart from UKB, IMS and SenseRelate, our system also provides an implementation for McCarthy's approach (Koeling and McCarthy, 2007) and IWSD (Khapra et al., 2010).
3 System Details
Figure 1 shows the main interface of our system. We first provide an overview of the system introducing the inputs which it expects, followed by explaining the online output viewer, which is an interesting feature of our system. We also provide details about the mechanism with which new algorithms can be easily added to the system. Kindly refer to the figure while reading this section.
Interface Design
To support various web browsers on different operating systems, we have designed the web interface using standard open technologies. The interface runs using PHP5 1 on the server side, and for the GUI, we have used a javascript framework viz., ExtJS v4.0 2 which provides a neat and aesthetic display to the user.
User input
In order to use the system interface, the user needs to provide the following inputs:
1. Language: The language of the corpus file(s) for which the WSD algorithm needs to run.
In the most basic form, the user can simply upload a plain text file containing one sentence per line. However, most algorithms perform better if the data is POS tagged. Our system does not perform POS tagging. Hence, we allow the user to upload POS tagged data in which each sentence is represented in the following format: word1_<pos1> word2_<pos2> ... wordn_<posn>, where <pos1>, <pos2>, etc., are the POS tags of the respective words, and can take one of 4 values, 1: Noun, 2: Verb, 3: Adverb, 4: Adjective (since currently available wordnets support only these POS tags). If the user has sense marked gold data and wants to evaluate the performance of different algorithms on this data, then he/she can submit the input in the following format: word1_<pos1><offset1> word2_<pos2><offset2> ... wordn_<posn><offsetn>, where <offset1>, <offset2>, etc., are the wordnet sense offsets of the respective words.
For processing these formats, our system requires a morphological analyzer or stemmer for the respective language. The gold data sense offsets will be compared against the outcome of the algorithm. Our algorithms use Princeton WordNet v2.1 for English. In addition to these simple formats, we also provide support for the following file format, which is compatible with UKB: word1#<roots1>#<pos1>#<index>#<offset1> word2#<roots2>#<pos2>#<index>#<offset2> ... wordn#<rootsn>#<posn>#<index>#<offsetn>, where <roots1>, <roots2>, etc., represent the morphological roots of the respective words. <index> represents the position of the word in the sentence and is stored as w1, w2 and so on. Please note that this format requires the input data to be at least POS tagged and optionally sense annotated. If the data is not sense annotated, the <offset> field is set to '1' for the words which are to be disambiguated and 0 otherwise. The output files generated by our system follow this format.
5. E-mail address: Depending on the size of the input and the number/type of algorithms chosen, the system will take some time to compute the results. For ease of use, an e-mail is sent to the user once the computation is done. This email specifies a link from where (s)he will be able to download all the results.
6. Input (Data): The user can either type the text to be disambiguated in the text box provided on the submission form, or (s)he can choose to upload a file containing the text. The uploaded file can be a zipped directory, in which case it will be extracted on the server side, and all the constituent files will be used as the input dataset for running the algorithm.
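To make the UKB-compatible format concrete, the following Python sketch (ours, not part of the system) parses one such sentence line into per-token records:

```python
# Reader for the word#root#pos#index#offset format described above.

def parse_ukb_line(line):
    tokens = []
    for field in line.split():
        word, root, pos, index, offset = field.split("#")
        tokens.append({"word": word, "root": root, "pos": pos,
                       "index": index, "offset": offset})
    return tokens

# Example sentence: offsets of '1' mark words to be disambiguated, '0' otherwise.
sentence = "boys#boy#1#w1#1 are#be#2#w2#0 playing#play#2#w3#1"
for tok in parse_ukb_line(sentence):
    print(tok)
```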
Online output viewer
Our system generates the output files in UKB format as stated earlier. This output can be easily parsed, however, it is not suitable for manual analysis. Our interface provides the users with a facility of viewing the output online, where the sense tags are accompanied with the sense gloss and examples as available in the wordnet. This enables the user to easily comprehend the output. Figure 2 shows a screen-shot of the online output viewer. The interface also provides an output pane, where the results of the job are summarized, and a link to the results is provided, so that the user can download them. The same link is also sent to the user to the specified e-mail address.
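The mapping from a sense offset to its gloss and examples can be illustrated with NLTK's WordNet interface (an assumption for the sketch; the system itself uses Princeton WordNet 2.1 and its own Hindi/Marathi wordnets, and the offset below is for WordNet 3.0):

```python
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def gloss_for(offset, pos="n"):
    synset = wn.synset_from_pos_and_offset(pos, offset)
    return synset.definition(), synset.examples()

definition, examples = gloss_for(2084071)  # WordNet 3.0 offset of dog.n.01
print(definition)
print(examples)
```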
Integration of new algorithms
As mentioned earlier, our system can integrate new algorithms on-the-fly. The developers who have a new algorithm, and wish to make it a part of our system, can submit their algorithms online, and such algorithms automatically get added to our system when uploaded. Currently, our system can not automatically handle the software dependencies exhibited by the new algorithms, if any. In such cases, the new algorithm will not be useful to end users. To prevent this, the developers of the new algorithm can contact us and get the dependencies fixed. Our system runs on a linux based server, and the dependencies must be in the form of publicly available software packages compatible with linux systems.
The interaction between our system and the new algorithm will be in form a shell script, the details of which are provided on our web interface 3 . This shell script can, in turn, call its own resources, binaries and other shell scripts to read the input text files provided in specific format, and produce the output text files in specific format. The detailed instructions for integration, along with the sample text files, can also be accessed from our web interface.
Empirical evaluation
To demonstrate the use of our system, we have evaluated the performance of all the algorithms on 3 standard datasets 4 . The results are summarized in Table 1.
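For reference, the standard precision/recall/F-score computation used to compare the algorithms can be sketched as follows (illustrative only; the system's own scorer is not shown in the paper):

```python
# Precision is computed over attempted words, recall over all gold words.

def prf(gold, predicted):
    correct = sum(1 for wid, off in predicted.items() if gold.get(wid) == off)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

gold = {"w1": 2084071, "w2": 1740005, "w3": 3219010}
predicted = {"w1": 2084071, "w3": 9999999}   # w2 not attempted
print(prf(gold, predicted))                  # (0.5, 0.333..., 0.4)
```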
Conclusion
In this paper, we have described a system which allows for easy comparison of several state-of-theart WSD systems. Since our system is an online system, it minimizes the overhead on the end user by eliminating the installation issues. Our system only depends on a morphological analyzer for the input data. Further, since all the computation takes place on our server, it drastically reduces the system requirements and computational efforts for the end user. The interface to the system is extremely user friendly, aesthetic and supports multiple input file formats. New algorithms can be integrated easily with our system with minimal additional efforts on part of the developer. The system also provides an online results viewer which is useful for manual analysis as it provides the sense gloss and examples for each disambiguated word.
In the future, we would like our system to support more and more languages.
3 http://www.cfilt.iitb.ac.in/wsd-demo 4 http://www.cfilt.iitb.ac.in/wsd/annotated_corpus
Figure 1: Screen-shot of the online interface showing the results of a job along with the links to view the output files in our online output viewer
As of now, we provide the user an option of selecting between English, Hindi and Marathi since we have the training datasets and morphological analyzers available for these languages.
2. Domain: Some of the state-of-the-art algorithms are domain specific, and in general, the WSD systems show better performance when the domain of the experiment is known in advance. Apart from an option of Tourism and Health domains for the languages mentioned above, we also support News domain for Hindi, and SemCor domain for English.
3. Algorithms to be run: The user can select one or more from the following:
• IWSD (Iterative WSD) - A supervised WSD algorithm by (Khapra et al., 2010)
• IMS (It Makes Sense) - An SVM based approach by (Zhong and Ng, 2010)
• PPR (Personalized Page Rank) - A knowledge based approach by (Agirre et al., 2009)
• McCarthy's approach - An unsupervised state-of-the-art algorithm by (Koeling and McCarthy, 2007)
• Knowledge based measures - SenseRelate (Patwardhan et al., 2005) supports several knowledge based measures for WSD. We support 3 measures, viz., Lesk, Lin and JCN out of these.
• RB (Random Baseline)
• WFS (Wordnet First Sense Baseline)
• MFS (Most Frequent Sense Baseline)
4. Input file format:
Figure 2: Screen-shot of the online interface showing the online output viewer
There are several knowledge based measures which SenseRelate supports. We show the results for a representative measure, viz., Lesk (KB-Lesk).

Table 1: Precision, Recall and F-scores for various algorithms supported by our system

Algorithm           | Tourism P% | Tourism R% | Tourism F% | Health P% | Health R% | Health F% | SemCor P% | SemCor R% | SemCor F%
IWSD                | 77.00 | 76.66 | 76.83 | 78.78 | 78.42 | 78.60 | 67.42 | 66.27 | 66.82
PPR                 | 53.10 | 53.10 | 53.10 | 51.10 | 51.10 | 51.10 | 50.42 | 50.42 | 50.42
IMS                 | 78.82 | 78.76 | 78.79 | 79.64 | 79.59 | 79.61 | 68.38 | 67.82 | 68.02
McCarthy's approach | 51.85 | 49.32 | 50.55 | N/A   | N/A   | N/A   | N/A   | N/A   | N/A
RB                  | 25.50 | 25.50 | 25.50 | 24.61 | 24.61 | 24.61 | 21.08 | 21.08 | 21.08
WFS                 | 62.15 | 62.15 | 62.15 | 64.67 | 64.67 | 64.67 | 63.49 | 63.49 | 63.49
MFS                 | 77.60 | 75.2  | 76.38 | 79.43 | 76.98 | 78.19 | 67.75 | 65.87 | 66.57
KB-Lesk             | 50.86 | 50.84 | 50.85 | 51.80 | 51.78 | 51.79 | 39.59 | 39.24 | 39.41
1 http://php.net/downloads.php 2 http://www.sencha.com/products/extjs/
Agirre, E., Lacalle, O. L. D., and Soroa, A. (2009). Knowledge-based wsd on specific domains: Performing better than generic supervised wsd. In Proceedings of IJCAI.
Khapra, M., Shah, S., Kedia, P., and Bhattacharyya, P. (2010). Domain-specific word sense disambiguation combining corpus based and wordnet based parameters. In Proc. of GWC, volume 10.
Koeling, R. and McCarthy, D. (2007). Sussx: Wsd using automatically acquired predominant senses. In Proceedings of ACL/SIGLEX SemEval, pages 314-317.
Mihalcea, R. and Csomai, A. (2005). Senselearner: Word sense disambiguation for all words in unrestricted text. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, pages 53-56. Association for Computational Linguistics.
Patwardhan, S., Banerjee, S., and Pedersen, T. (2005). Senserelate::Targetword: a generalized framework for word sense disambiguation. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, pages 73-76. Association for Computational Linguistics.
Zhong, Z. and Ng, H. (2010). It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 System Demonstrations, pages 78-83. Association for Computational Linguistics.
396,930 | Detecting Social Roles in Twitter | For social media analysts or social scientists interested in better understanding an audience or demographic cohort, being able to group social media content by demographic characteristics is a useful mechanism to organise data. Social roles are one particular demographic characteristic, which includes work, recreational, community and familial roles. In our work, we look at the task of detecting social roles from English Twitter profiles. We create a new annotated dataset for this task. The dataset includes approximately 1,000 Twitter profiles annotated with social roles. We also describe a machine learning approach for detecting social roles from Twitter profiles, which can act as a strong baseline for this dataset. Finally, we release a set of word clusters obtained in an unsupervised manner from Twitter profiles. These clusters may be useful for other natural language processing tasks in social media. | [
629094,
6210216,
11664683,
10986188,
14258704,
1916754,
7478738,
2986205
] | Detecting Social Roles in Twitter
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 1, 2016.
Sunghwan Mac Kim
Stephen Wan
Cécile Paris
Data61, CSIRO, Sydney, Australia
{mac.kim, stephen.wan, cecile.paris}@csiro.au
Detecting Social Roles in Twitter
Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media
The Fourth International Workshop on Natural Language Processing for Social Media, Austin, TX, Association for Computational Linguistics, November 1, 2016.
For social media analysts or social scientists interested in better understanding an audience or demographic cohort, being able to group social media content by demographic characteristics is a useful mechanism to organise data. Social roles are one particular demographic characteristic, which includes work, recreational, community and familial roles. In our work, we look at the task of detecting social roles from English Twitter profiles. We create a new annotated dataset for this task. The dataset includes approximately 1,000 Twitter profiles annotated with social roles. We also describe a machine learning approach for detecting social roles from Twitter profiles, which can act as a strong baseline for this dataset. Finally, we release a set of word clusters obtained in an unsupervised manner from Twitter profiles. These clusters may be useful for other natural language processing tasks in social media.
Introduction
Social media platforms such as Twitter have become an important communication medium in society. As such, social scientists and media analysts are increasingly turning to social media as a cheap and large-volume source of real-time data, supplementing "traditional" data sources such as interviews and questionnaires. For these fields, being able to examine demographic factors can be a key part of analyses. However, demographic characteristics are not always available on social media data. Consequently, there has been a growing body of work in- vestigating methods to estimate a variety of demographic characteristics from social media data, such as gender and age on Twitter and Facebook (Mislove et al., 2011;Sap et al., 2014) and YouTube (Filippova, 2012). In this work we focus on estimating social roles, an under-explored area.
In social psychology literature, Augoustinos et al. (2014) provide an overview of schemata for social roles, which includes achieved roles based on the choices of the individual (e.g., writer or artist) and ascribed roles based on the inherent traits of an individual (e.g., teenager or schoolchild). Social roles can represent a variety of categories including gender roles, family roles, occupations, and hobbyist roles. Beller et al. (2014) have explored a set of social roles (e.g., occupation-related and familyrelated social roles) extracted from the tweets. They used a pragmatic definition for social roles: namely, the word following the simple self-identification pattern "I am a/an ". In contrast, our manually annotated dataset covers a wide range of social roles without using this fixed pattern, since it is not necessarily mentioned before the social roles.
On Twitter, users often list their social roles in their profiles. Figure 1, for example, shows the Twitter profile of a well-known Australian chef, Manu Feildel (@manufeildel). His profile provides infor-mation about his social roles beyond simply listing occupations. We can see that he has both a profession, Chef, as well as a community role, Judge on My Kitchen Rules (MKR), which is an Australian cooking show.
The ability to break down social media insights based on social roles is potentially a powerful tool for social media analysts and social scientists alike. For social media analysts, it provides the opportunity to identify whether they reach their target audience and to understand how subsets of their target audience (segmented by social role) react to various issues. For example, a marketing analyst may want to know what online discussions are due to parents versus other social roles.
Our aim in this paper is to provide a rich collection of English Twitter profiles for the social role identification task. The dataset includes a approxmately 1,000 Twitter profiles, randomly selected, which we annotated with social roles. Additionally, we release unsupervised Twitter word clusters that will be useful for other natural language processing (NLP) tasks in social media. 1 Finally, we investigate social role tagging as a machine learning problem. A machine learning framework is described for detecting social roles in Twitter profiles.
Our contributions are threefold:
• We introduce a new annotated dataset for identifying social roles in Twitter. • We release a set of Twitter word clusters with respect to social roles. • We propose a machine learning model as a strong baseline for the task of identifying social roles from Twitter profiles.
Crowdsourcing Annotated Data
Twitter user profiles often list a range of interests that they associate with, and these can vary from occupations to hobbies (Beller et al., 2014;Sloan et al., 2015). The aim of our annotation task was to manually identify social role-related words in English Twitter profile descriptions. A social role is defined as a single word that could be extracted from the description. These can include terms such as engineer, Figure 2: The Crowdflower annotation interface.
mother, and fan. For instance, we obtain Musician and Youtuber as social roles from "Australian Musician and Youtuber who loves purple!". 2 To study social roles in Twitter profiles, we compiled a dataset of approximately 1,000 randomly selected English Twitter profiles which were annotated with social roles. These samples were drawn from a large number of Twitter profiles crawled by a social network-based method (Dennett et al., 2016). Such a dataset provides a useful collection of profiles for researchers to study social media and to build machine learning models.
Annotations were acquired using the crowdsourcing platform Crowdflower. 3 , which we now outline.
Crowdflower Annotation Guidelines
We asked Crowdflower annotators to identify social roles in the Twitter profiles presented to them, using the following definition: "Social roles are words or phrases that could be pulled out from the profile and inserted into the sentence I am a/an . . . ". Note that the profile does not necessarily need to contain the phrase "I am a/an" before the social role, as described in Section 1.
The annotation interface is presented in Figure 2. The annotator is asked to select spans of text. Once a span of text is selected, the interface copies this text into a temporary list of candidate roles. The annotator can confirm that the span of text should be kept as a role (by clicking the 'add' link which moves the text span to a second list representing the "final candidates"). It is also possible to remove a candidate role from the list of final candidates (by clicking 'remove'). Profiles were allowed to have more than one social role.
Annotators were asked to keep candidate roles as short as possible as in the following instruction: if the Twitter profile contains "Bieber fan", just mark the word "fan". 4 Finally, we instructed annotators to only mark roles that refer to the owner of the Twitter profile. For example, annotators were asked not to mark wife as a role in: I love my wife. Our Crowdflower task was configured to present five annotation jobs in one web page. After each set of five jobs, the annotator could proceed to the next page.
Crowdflower Parameters
To acquire annotations as quickly as possible, we used the highest speed setting in Crowdflower and did not place additional constraints on the annotator selection, such as language, quality and geographic region. The task took approximately 1 week. We offered 15 cents AUD per page. To control annotation quality, we utilised the Crowdflower facility to include test cases called test validators, using 50 test cases to evaluate the annotators. We required a minimum accuracy of 70% on test validators.
Summary of Annotation Process
At the completion of the annotation procedure, Crowdflower reported the following summary statistics that provide insights on the quality of the annotations. The majority of the judgements were sourced from annotators deemed to be trusted (i.e., reliable annotators) (4750/4936). Crowdflower reported an inter-annotator agreement of 91.59%. Table 1 presents some descriptive statistics for our annotated dataset. We observe that our Twitter profile dataset contains 488 unique roles.
In Table 2, we present the top 10 ranked social roles. As can be seen, our extracted social roles include terms such as student and fan, highlighting that social roles in Twitter profiles include a diverse range of personal attributes. In Table 3 one role. The remaining descriptions (21.1%) contain more than one social role.
Word Clusters
We can easily access a large-scale unlabelled dataset using the Twitter API, supplementing our dataset, to apply unsupervised machine learning methods to help in social role tagging. Previous work showed that word clusters derived from an unlabelled dataset can improve the performance of many NLP applications (Koo et al., 2008; Turian et al., 2010; Spitkovsky et al., 2011; Kong et al., 2014). This finding motivates us to use a similar approach to improve tagging performance for Twitter profiles. Two clustering techniques are employed to generate the cluster features: Brown clustering (Brown et al., 1992) and K-means clustering (MacQueen, 1967). The Brown clustering algorithm induces a hierarchy of words from an unannotated corpus, and it allows us to directly map words to clusters. Word embeddings induced from a neural network are often useful representations of the meaning of words, encoded as distributional vectors. Unlike Brown clustering, word embeddings do not have any form of clusters by default. K-means clustering is thus used on the resulting word vectors. Each word is mapped to the unique cluster ID to which it was assigned, and these cluster identifiers were used as features.
We used 6 million Twitter profiles that were automatically collected by crawling a social network starting from a seed set of Twitter accounts (Dennett et al., 2016) to derive the Brown clusters and word embeddings for this domain. For both methods, the text of each profile description was normalised to be in lowercase and tokenised using whitespace and punctuation as delimiters.
To obtain the Brown clusters, we use a publicly available toolkit, wcluster 5 to generate 1,000 clusters with the minimum occurrence of 40, yielding 47,167 word types. The clusters are hierarchically structured as a binary tree. Each word belongs to one cluster, and the path from the word to the root of the tree can be represented as a bit string. These can be truncated to refer to clusters higher up in the tree.
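Because each Brown cluster is a node in a binary tree, a word's bit string can be truncated to obtain coarser ancestor clusters. The sketch below (ours; the specific prefix lengths are an assumption, as the paper only says that eight lengths are used in Section 4) shows how such prefix features could be generated:

```python
# Turn a Brown-cluster bit string into prefix features of several lengths.

def brown_prefix_features(bitstring, lengths=(2, 4, 6, 8, 10, 12, 14, 16)):
    return {f"brown_prefix_{n}": bitstring[:n] for n in lengths if bitstring}

print(brown_prefix_features("01011010111110"))  # the "teacher" cluster from Table 4
```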
To obtain word embeddings, we used the skip-gram model as implemented in word2vec 6 , a neural network toolkit introduced by Mikolov et al. (2013), to generate 300-dimension word vectors based on a 10-word context window size. We then used K-means clustering on the resulting 47,167 word vectors (k=1,000). Each word was mapped to the unique cluster ID to which it was assigned. Tables 4 and 5 show some examples of Brown clusters and word2vec clusters respectively, for three social roles: writer, teacher and musician. We note that similar types of social roles are grouped into the same clusters in both methods. For instance, orchestrator and saxophonist are in the same cluster containing musician. Both clusterings are able to capture the similarities of abbreviations of importance to social roles, for example, tchr → teacher, nbct → National Board Certified Teachers, hpe → Health and Physical Education.
5 https://github.com/percyliang/brown-cluster
6 https://code.google.com/p/word2vec/
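The embedding-then-cluster step can be sketched as follows. The paper used the original word2vec toolkit; gensim and scikit-learn are substituted here purely for illustration, with the same skip-gram settings (300 dimensions, 10-word window) and, in the real setting, k=1,000 for K-means.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Toy corpus of tokenised profile descriptions (the real one has 6M profiles).
profiles = [["australian", "musician", "and", "youtuber"],
            ["school", "teacher", "and", "mother"]]

model = Word2Vec(sentences=profiles, vector_size=300, window=10, sg=1, min_count=1)
words = list(model.wv.index_to_key)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(model.wv[words])  # k=1,000 in the paper
word2cluster = dict(zip(words, kmeans.labels_))
print(word2cluster)
```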
Identifying Social Roles
Social Role Tagger
This section describes a tagger we developed for the task of identifying social roles given Twitter profiles.
Here, we treat social role tagging as a sequence labelling task. We use the MALLET toolkit (McCallum, 2002) implementation of Conditional Random Fields (CRFs) (Lafferty et al., 2001) to automatically identify social roles in Twitter profiles as our machine learning framework. More specifically, we employ a first-order linear chain CRF, in which the preceding word (and its features) is incorporated as context in the labelling task. In this task, each word is tagged with one of two labels: social roles are tagged with R (for "role"), whereas the other words are tagged by O (for "other"). The social role tagger uses two categories of features: (i) basic lexical features and (ii) word cluster features. The first category captures lexical cues that may be indicative of a social role. These features include morphological, syntactic, orthographic and regular expression-based features (McCallum and Li, 2003;Finkel et al., 2008). The second captures semantic similarities, as illustrated in Tables 4 and 5 (Section 3). To use Brown clusters in CRFs, we use eight bit string representations of different lengths to create features representing the ancestor clusters of the word. For word2vec clusters, the cluster identifiers are used as features in CRFs. If a word is not associated with any clustering, its corresponding cluster features are set to null in the feature vector for that word.
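The per-token feature extraction can be illustrated with the sketch below. The real system uses MALLET's CRF and a richer feature set; sklearn-crfsuite and the specific features shown are our own simplifications for illustration.

```python
# Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

def token_features(tokens, i, brown=None, w2v=None):
    w = tokens[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "suffix3": w[-3:],
        "prev_lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        # cluster features: a Brown bit-string prefix and a word2vec cluster id
        "brown_prefix_8": (brown or {}).get(w.lower(), "")[:8],
        "w2v_cluster": str((w2v or {}).get(w.lower(), "NONE")),
    }

sents = [["Australian", "Musician", "and", "Youtuber"]]
labels = [["O", "R", "O", "R"]]  # R = social role, O = other
X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```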
Evaluation
We evaluate our tagger on the annotated Twitter dataset using precision, recall and F1-score. We use 10-fold cross-validation and report macro-averages. Significance tests are performed using the Wilcoxon signed-rank test (Wilcoxon, 1945). We compare the CRF-based tagger against a keyword spotting (KWS) method. This baseline uses social roles labelled in the training data to provide keywords to spot for in the test profiles without considering local context. On average, over the 10-fold crossvalidation, 54% of the social roles in the test set are seen in the training set. This indicates that the KWS baseline has potential out-of-vocabulary (OOV) problems for unseen social roles. To reduce overfitting in the CRF, we employ a zero mean Gaussian prior regulariser with one standard deviation. To find the optimal feature weights, we use the limited-memory BFGS (L-BFGS) (Liu and Nocedal, 1989) algorithm, minimising the regularised negative log-likelihood. All CRFs are trained using 500 iterations of L-BFGS with the Gaussian prior variance of 1 and no frequency cutoff for features, inducing approximately 97,300 features. We follow standard approaches in using the forwardbackward algorithm for exact inference in CRFs. Table 6 shows the evaluation results of 10-fold cross-validation for the KWS method and the CRF tagger. With respect to the different feature sets, we find that the combination of the word cluster features obtained by the two methods outperform the basic features in terms of F1 (77.9 vs. 72.5 respectively), in general providing a statistically significant improvement of approximately 5% (p<0.01).
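The evaluation protocol itself (macro-averaging per-fold scores and testing the paired difference with the Wilcoxon signed-rank test) can be sketched as follows; the per-fold scores below are made up for illustration.

```python
import statistics
from scipy.stats import wilcoxon

basic_f1   = [71.8, 72.9, 73.1, 72.0, 72.6, 72.4, 71.9, 73.0, 72.7, 72.6]
cluster_f1 = [77.2, 78.4, 78.0, 77.5, 78.1, 77.8, 77.6, 78.3, 78.2, 77.9]

print("macro-averaged F1:", statistics.mean(basic_f1), statistics.mean(cluster_f1))
stat, p = wilcoxon(basic_f1, cluster_f1)  # paired signed-rank test over folds
print("Wilcoxon signed-rank p-value:", p)
```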
The improvement obtained with word cluster fea-tures lends support to the intuition that capturing similarity in vocabulary within the feature space helps with tagging accuracy. Word cluster models provide a means to compare words based on semantic similarity, helping with cases where lexical items in the test set are not found in the training set (e.g., linguist, evangelist, teamster). In addition, the cluster features allow CRFs to detect informal and abbreviated words as social roles. Our tagger identifies both teacher and tchr as social roles from the two sentences: "I am a school teacher" and "I am a school tchr". This is particularly useful in social media because of the language variation in vocabulary that is typically found.
In this experiment, we show that social role tagging is possible with a reasonable level of performance (F1 77.9), significantly outperforming the KWS baseline (F1 69.0). This result indicates the need for a method that captures the context surrounding word usage. This allows language patterns to be learned from data that disambiguate word sense and prevents spurious detection of social roles from the data. This is evidenced by the lower precision and F1-score for the KWS baseline, which over-generates candidates for social roles.
Conclusion and Future Work
In this work, we constructed a new manually annotated English Twitter profile dataset for social role identification task. In addition, we induced Twitter word clusters from a large unannotated corpus with respect to social roles. We make these resources publicly available in the hope that they will be useful in research on social media. Finally, we developed a social role tagger using CRFs, and this can serve as a strong baseline in this task. In future work, we will look into being able to identify multi-word social roles to obtain a finer-grained categorisation (e.g., "chemical engineer" vs. "software engineer").
Figure 1: An example of a Twitter profile.
Table 1: Descriptive statistics for the annotated data.
Table 2: Top 10 ranked social roles in Twitter profiles.
Social role | Frequency
student | 25
fan | 24
girl | 16
writer | 14
teacher | 13
geek | 12
author | 11
artist | 10
directioner | 9
designer | 8
Table 3: Frequencies of number of roles that are used to annotate one Twitter profile in our dataset.
Number of roles | Frequency (%)
0 | 552 (56.2)
1 | 213 (22.7)
2 | 101 (10.3)
3 | 45 (4.6)
4 | 31 (3.2)
5 | 23 (2.3)
6 | 8 (0.8)
7 | 2 (0.2)
8 | 6 (0.6)
9 | 2 (0.2)
Table 4: Examples of Brown clusters with respect to social roles: writer, teacher and musician.
Bit string | Words related to social role
010110111100 | writer, nwriter, scribbler, writter, glutton
01011010111110 | teacher, tutor, preacher, homeschooler, nbct, hod, dutchman, nqt, tchr
0101101111110 | musician, philologist, orchestrator, memoirist, dramatist, violist, crooner, flautist, filmaker, humourist, dramaturg, harpist, flutist, trumpeter, improvisor, trombonist, musicologist, organist, puppeteer, laureate, poetess, hypnotist, audiobook, comedienne, saxophonist, cellist, scriptwriter, narrator, muso, essayist, improviser, satirist, thespian, ghostwriter, arranger, humorist, violinist, magician, lyricist, playwright, pianist, screenwriter, novelist, performer, philosopher, composer, comedian, filmmaker, poet
Table 5: Examples of word2vec clusters with respect to social roles: writer, teacher and musician.
Cluster | Words related to social role
937 | writer, freelance, interviewer, documentarian, erstwhile, dramaturg, biographer, reviewer, bookseller, essayist, unpublished, critic, author, aspiring, filmmaker, dramatist, playwright, laureate, humorist, screenwriter, storyteller, ghostwriter, copywriter, scriptwriter, proofreader, copyeditor, poet, memoirist, satirist, podcaster, novelist, screenplay, poetess
642 | teacher, learner, superintendent, pyp, lifelong, flipped, preparatory, cue, yearbook, preschool, intermediate, nwp, school, primary, grades, prek, distinguished, prep, dojo, isd, hpe, ib, esl, substitute, librarian, nbct, efl, headteacher, mfl, hod, elem, principal, sped, graders, nqt, eal, tchr, secondary, tdsb, kindergarten, edd, instructional, elementary, keystone, grade, exemplary, classroom, pdhpe
384 | musician, songwriter, singer, troubadour, arranger, composer, drummer, session, orchestrator, saxophonist, keyboardist, percussionist, guitarist, soloist, instrumentalist, jingle, trombonist, vocal, backing, virtuoso, bassist, vocalist, pianist, frontman
Table 6: 10-fold cross-validation macro-average results on the annotated dataset. (Brown: Brown cluster features, W2V: Word2vec cluster features).
Our dataset and word clusters are publicly available at https://data.csiro.au.
This is a real example. 3 crowdflower.com
While this decision could give us a coarse-grain granularity of social roles, it was an application-specific requirement from a visualisation point of view to minimise roles.
Acknowledgments
The authors are grateful to anonymous SocialNLP@EMNLP reviewers. We would also like to thank David Milne, who ran the data collection and annotation procedure, and Bo Yan, who collected the additional data for the clustering methods.
|
21,703,463 | Evaluating Inflectional Complexity Crosslinguistically: a Processing Perspective | The paper provides a cognitively motivated method for evaluating the inflectional complexity of a language, based on a sample of "raw" inflected word forms processed and learned by a recurrent self-organising neural network with fixed parameter setting. Training items contain no information about either morphological content or structure. This makes the proposed method independent of both meta-linguistic issues (e.g. format and expressive power of descriptive rules, manual or automated segmentation of input forms, number of inflectional classes etc.) and language-specific typological aspects (e.g. word-based, stem-based or template-based morphology). Results are illustrated by contrasting Arabic, English, German, Greek, Italian and Spanish. | [] | Evaluating Inflectional Complexity Crosslinguistically: a Processing Perspective
Claudia Marzi claudia.marzi@ilc.cnr.it
Institute for Computational Linguistics-CNR
Marcello Ferro marcello.ferro@ilc.cnr.it
Institute for Computational Linguistics-CNR
Ouafae Nahli ouafae.nahli@ilc.cnr.it
Institute for Computational Linguistics-CNR
Patrizia Belik patrizia_belik@yahoo.com
Universitat Politècnica de Valencia
Stavros Bompolas stavros.bompolas@gmail.com
University of Patras
Vito Pirrelli vito.pirrelli@ilc.cnr.it
Institute for Computational Linguistics-CNR
Evaluating Inflectional Complexity Crosslinguistically: a Processing Perspective
paradigm-based morphology, inflectional complexity, prediction-based processing, recurrent self-organising networks
The paper provides a cognitively motivated method for evaluating the inflectional complexity of a language, based on a sample of "raw" inflected word forms processed and learned by a recurrent self-organising neural network with fixed parameter setting. Training items contain no information about either morphological content or structure. This makes the proposed method independent of both meta-linguistic issues (e.g. format and expressive power of descriptive rules, manual or automated segmentation of input forms, number of inflectional classes etc.) and language-specific typological aspects (e.g. word-based, stem-based or template-based morphology). Results are illustrated by contrasting Arabic, English, German, Greek, Italian and Spanish.
Introduction
There is little doubt that some languages are inflectionally more complex than others. Everybody would agree with the intuitive statement that the English conjugation system is simpler than the German system, and that the latter is, in turn, simpler than the verb system of Modern Standard Arabic. However, the naïve view is faced with two apparent paradoxes. When linguists try to pinpoint the source of this complexity, the task is far more elusive than expected, and goes well beyond a purely descriptive notion of diversity in the battery of realisational means (e.g. number of different affixes, number of cells in the corresponding paradigms, amount of stem allomorphy etc.) provided by each system. Besides, there seems to be a poor correlation between our intuitive notion of morphological complexity and actual evidence of the pace of acquisition of more or less complex inflectional systems in child language. In some cases, apparently simpler inflectional markers may take more time to be acquired than formally more complex and articulated ones. What looks like a prohibitively difficult learning task in the light of the complexity and uncertainty of the inference steps required for mastering it may turn out to be relatively unproblematic for human speakers. In the present paper we entertain a usage-oriented, cognitively motivated approach to issues of morphological complexity, based on a neurobiologically inspired model of word processing and learning, and explore its theoretical and computational implications.
Background
Assessing and understanding the comparative complexity of the inflectional system of a language relative to a functionally-equivalent system of another language remains an open question, which has animated much of the contemporary debate on the nature of word knowledge and its connection with issues of word usage (Ackerman and Malouf, 2013; Bane, 2008; Bearman et al., 2015; Juola, 1998; Moscoso del Prado Martín et al., 2004). In a crosslinguistic perspective, the way morphosyntactic features are contextually realised through processes of word inflection probably represents the widest dimension of grammatical crosslinguistic variation, somewhat belittling universal invariances along other dimensions (Evans and Levinson, 2009). Descriptive linguists have often approached the issue of comparative inflectional complexity by providing comprehensive catalogues of the morphological markers and patterns in a given language or languages (Bickel and Nichols, 2005; McWorther, 2001; Shosted, 2006). Accordingly, the complexity of an inflectional system is measured by simply enumerating the number of category values instantiated in the system (e.g. person, number or tense features) and the range of available markers for their realisation: the bigger the number, the more difficult the resulting system. The notion of Enumerative Complexity (or E-complexity) is however dubious (Ackerman and Malouf, 2013). Suppose we have two hypothetical inflectional systems, each with two categories only (say, singular and plural) and three different endings for each category: A, B, C for singular, and D, E and F for plural. In the first system, paradigms are found to present three possible pairs of endings only: <A, D>, <B, E>, <C, F> (corresponding to three different inflection classes). In the second system, any combination is attested. Clearly, the latter system would be more difficult to learn than the former, as it makes it harder to infer the plural form of a word from its singular form. Nonetheless, both systems present the same degree of E-complexity. Of late, less combinatorial approaches to morphological description have played down the role of E-complexity in inflection. These approaches, generally referred to as "paradigm-based", or "word-based", or "abstractive" grammatical frameworks, examine the systemic organisation of underlying patterns of surface variation, to conceive of an inflectional system as a network of implicative relations holding between fully-inflected forms (Blevins, 2003; Blevins, 2016; Burzio, 1998; Bybee, 1995; Bybee and McClelland, 2005; Matthews, 1991; Pirrelli and Battista, 2000). Implicative relations allow novel forms to be predicted and inferred on the basis of known forms, thereby making it easier for a human speaker to process, retain and access them. Not only do implicative relations shed light on the way children come to master the inflectional system of their mother tongue, but they also constrain systems of word shapes, providing a limit on the range of E-complexity that languages can afford. A number of information theoretic approaches have been proposed to model this view in terms of Kolmogorov complexity (Kolmogorov, 1965) and Shannon entropy (Shannon, 1948). The idea behind Kolmogorov complexity is to measure the complexity of a dataset of inflected forms as the length of the shortest grammar needed to describe it. This, however, leads to a definition of morphological complexity that is heavily dependent on the grammar formalism adopted (Bane, 2008; Walther and Sagot, 2011).
Ackerman and Malouf (2013) use Shannon's information entropy to quantify prediction of an inflected form as a paradigm-based change in the speaker's uncertainty. They conjecture that inflectional systems tend to minimise the average conditional entropy of predicting each form in a paradigm on the basis of any other form of the same paradigm (Low Conditional Entropy Conjecture or LCEC). This is measured by looking at the distribution of inflectional markers across inflection classes in the morphological system of a language. Although LCEC proves to be able to capture a substantial part of the inferential complexity within paradigms, it presupposes a segmentation of inflected forms into stems and affixes, while ignoring implicative relations holding between stem allomorphs. Use of principal parts can remedy this in a principled way (Finkel and Stump, 2007, among others). However, while entropy measures can provide extremely valuable insights into the organisation of static, synchronic paradigms, there are crucial complementary questions about how such patterns are processed and learned which remain unaddressed.
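To make the intuition behind conditional entropy concrete, the following sketch (not part of the original study; plain Python, with equiprobable toy inflection classes assumed) computes H(plural | singular) for the two hypothetical systems discussed above. Both have the same E-complexity, but only the second leaves the plural form unpredictable.

```python
from collections import Counter
from math import log2

# Toy inflection classes: each class pairs a singular ending with a plural ending.
# System 1 licenses only three pairings; system 2 licenses all nine.
system_1 = [("A", "D"), ("B", "E"), ("C", "F")]
system_2 = [(sg, pl) for sg in "ABC" for pl in "DEF"]

def conditional_entropy(pairs):
    """H(plural | singular), assuming the attested pairings are equiprobable."""
    joint = Counter(pairs)
    total = sum(joint.values())
    marginal_sg = Counter(sg for sg, _ in pairs)
    h = 0.0
    for (sg, pl), n in joint.items():
        p_joint = n / total              # p(sg, pl)
        p_cond = n / marginal_sg[sg]     # p(pl | sg)
        h -= p_joint * log2(p_cond)
    return h

print(conditional_entropy(system_1))  # 0.0 bits: the plural is fully predictable
print(conditional_entropy(system_2))  # ~1.58 bits: the plural is unpredictable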
In what follows, we will focus on these important issues from a neuro-computational perspective. In particular, we are interested in evaluating the net effect of the complexity of an inflectional system on the processing behaviour of a recurrent neural network, excluding the role of word token frequency effects on prediction-driven processing (Pickering and Garrod, 2013). To factor out frequency effects, we ran simulations on uniformly distributed inflectional data. Our work can hence be understood as a purely morphological evaluation of complexity, based on lexical rather than corpus data. Since uniform distributions increase the entropy of a system, our results define some sort of upper bounds for inflectional complexity: if all factors (including frequency) are taken into account, the effects we observe here will likely be more prone to potentially confounding factors. This is in the spirit of information-theoretic work on paradigm-based morphology, as well as 'discriminative learning' research in animal behaviour and language learning (Rescorla and Wagner, 1972; Ramscar and Yarlett, 2007; Ramscar and Dye, 2011), and justifies our choice of a specific type of recurrent neural network, namely a Temporal Self-Organising Map (Ferro et al., 2011; Marzi and Pirrelli, 2015; Pirrelli et al., 2015; Marzi et al., 2016), as a workbench for simulating paradigm-based effects. Ultimately, it is intended to bridge the gap between an algorithmic/mathematical understanding of processing-based morphological complexity (Balling and Baayen, 2008; Balling and Baayen, 2012), and the neurobiological (or implementational) level of Marr's hierarchy (Marr, 1982).
Method and data
According to Dressler and colleagues (Bittner et al., 2003), European languages can be arranged along an inflectional complexity continuum, ranging from a more inflecting-fusional type (left) to a more isolating type (right):
Lithuanian→Greek→Russian→Croatian→Italian→
Spanish→German→Dutch→French→English.
Somewhat paradoxically, developmental evidence provides an indication that inflectional contrasts in prototypically inflecting verb systems are acquired at an earlier stage than inflectional contrasts in more isolating verb systems. 1 Here, we would like to investigate the related question of how degrees of inflectional complexity/regularity affect word processing strategies. For this purpose, we analyse the performance of recurrent self-organising neural networks learning a few languages in the typological continuum above: namely, English, German, Greek, Italian and Spanish. To broaden our typological data, Standard Modern Arabic was added to the range of tested languages. For each language we sampled the 50 top-frequency verb paradigms found in a few reference resources: CELEX (Baayen et al., 1995) for German and English; the Paisà Corpus (Lyding et al., 2014) for Italian; the European Spanish Subcorpus of the Spanish Ten-Ten Corpus (www.sketchengine.co.uk); the SUBTLEX-GR corpus (Dimitropoulou et al., 2010) for Modern Greek; the Penn Arabic Treebank (Maamouri et al., 2004). To control paradigm implicative relations, we selected a comparable set of 15 paradigm cells (14 cells for Arabic). 2 The sample contains a shared set of 6 present and 6 past tense forms for English, German, Greek, Italian and Spanish. Infinitive, gerund/present participle and past participle forms were added for English, German, Italian and Spanish, whereas 3 singular forms of the simple future were included for Modern Greek. The Arabic set contains 7 imperfective and 7 perfective forms, including 1S, 2MS, 3MS, 3FS, 1P, 2MP, 3MP cells. Only inflected "raw" forms from the selected cells were included for training a recurrent neural network, with no additional morphological information. Each language-specific dataset is administered to a Temporal Self-Organising Map (hereafter TSOM, see section 3.2. for more details) for 100 epochs. In one epoch, all word forms are randomly input to the map five times, and each training session was repeated five times with results averaged over repetitions to control random variability. TSOM parameters are identically initialised across the 6 languages, with the only exception of available memory nodes (Table 1). 3
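A minimal sketch of this training regime is given below. The `som_factory` callable and its `present` method are placeholders for an actual TSOM implementation; the schedule itself (uniform token distribution, 100 epochs, five presentations per epoch, five independently initialised maps per language) follows the figures reported above.

```python
import random

def train_language(word_forms, som_factory, n_epochs=100, reps_per_epoch=5, n_maps=5):
    """Train several independently initialised maps on uniformly distributed forms."""
    trained_maps = []
    for _ in range(n_maps):
        som = som_factory()
        for _ in range(n_epochs):
            batch = word_forms * reps_per_epoch   # every form has the same token frequency
            random.shuffle(batch)
            for form in batch:
                som.present("#" + form + "$")     # symbols are administered one at a time
        trained_maps.append(som)
    return trained_maps
```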
The data
The selected paradigm cells for the target languages offer evidence of graded levels of morphological (ir-)regularities.
Greek, Italian, Spanish and German present highly inflecting conjugation systems, with extensive stem allomorphy, exhibiting varying degrees of (ir)regularity. Inflecting processes include prefixation, suffixation, vowel alternation, infixation and suppletion. Arabic stem formation is based on the interspersion of discontinuous consonantal roots and variable vowel patterns. English offers by far the simplest inflectional system, with extensive syncretism and a rather dichotomous subdivision of paradigms between regular and irregular ones. For all test languages except Modern Greek, word forms are orthographically transcribed, and administered to the network one symbol at a time as raw letter strings (starting with the start-of-word symbol '#' and ending with the end-of-word symbol '$'), with no information about their morphological structure. To account for the complex interaction between morphologically-conditioned and phonologically-conditioned stem allomorphy in Greek conjugation (Ralli, 2005; Ralli, 2006), Greek word forms are transcribed phonologically, and input one segment at a time. Once more, no information about morphological structure is input. To assess the network sensitivity to morphological structure and the processing behaviour of the map across morpheme boundaries (see section 4.), after training, word forms in all test languages were segmented morphologically according to a prefix-stem-suffix schema: e.g. Greek e-krin-a 'I judged', German ge-dach-t 'thought' (past participle), Arabic ya-ktub-u 'he writes'. Stem allomorphs within a single paradigm (whether morphologically/phonologically predictable or not) are segmented as whole units, with no explicit indication of either the root or the alternating pattern: e.g. Arabic katab-a 'he wrote' vs. ya-ktub-u 'he writes'. Only purely suffixal stem formation is segmented: e.g. Greek AGApi-s-A 'I loved', Italian perd-ut-o 'lost' (past participle).
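By way of illustration, the sketch below shows how a word form can be wrapped with the start-of-word and end-of-word symbols for training, and how a post-hoc prefix-stem-suffix segmentation can be represented for analysis. The `segment` helper and its arguments are hypothetical conveniences, not part of the original pipeline.

```python
def encode(word_form):
    """Wrap a word form with start/end markers and split it into input symbols."""
    return list("#" + word_form + "$")

def segment(word_form, prefix, stem):
    """Split a form into (prefix, stem, suffix), used only after training for analysis."""
    assert word_form.startswith(prefix) and word_form[len(prefix):].startswith(stem)
    suffix = word_form[len(prefix) + len(stem):]
    return prefix, stem, suffix

print(encode("ekrina"))                 # ['#', 'e', 'k', 'r', 'i', 'n', 'a', '$']
print(segment("ekrina", "e", "krin"))   # ('e', 'krin', 'a'), e.g. Greek e-krin-a
```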
Recurrent self-organising neural networks
TSOMs are recurrent self-organising networks consisting of two-dimensional grids of artificial memory/processing nodes that learn to dynamically memorise input strings as chains of maximally-responding processing nodes (Best Matching Units, or BMUs), whose level of sensitivity to input symbols in context is a continuous function of their distributional regularities in training (Ferro et al., 2011; Marzi and Pirrelli, 2015; Pirrelli et al., 2015; Marzi et al., 2016). In a TSOM, each processing node has two layers of synaptic connectivity: an input layer, connecting each node to the current input stimulus (i.e. orthographic or phonological symbols), and a (re-entrant) temporal layer, connecting each node to all other nodes. Every time a symbol is presented to the input layer, activation propagates to all map nodes through input and temporal connections, and the most highly activated node (BMU) is calculated (see Figure 1). Given the BMU at time t, the temporal layer encodes the expectation of the current BMU for the node to be activated at time t+1. The strength of the connection between consecutively activated BMUs is trained through the following principles of discriminative learning: given the input bigram ab, the connection strength between the BMU that gets most highly activated for a at time t and the BMU for b at time t+1 will (i) increase if a often precedes b in training (entrenchment), and (ii) decrease if b is often preceded by a symbol other than a (competition). The complex interaction between entrenchment and competition in a TSOM accounts for important dynamic effects of self-organisation of stored words (Marzi et al., 2014; Marzi et al., 2016). In particular, at a sublexical level, systematically recurrent patterns tend to recruit context-sensitive, specialised (and stronger) chains of BMUs. If the bigram 'ab' is repeatedly input to the TSOM, the map tends to develop a specialised BMU('b') for 'b' in 'ab' and a highly-weighted outward connection from BMU('a') to BMU('b'), reflecting a strong expectation of BMU('a') for a prospective BMU('b'). In detail, during training, weights on both connectivity layers are adjusted in an experience-dependent fashion: after an initial period of random variability, where nodes activate chaotically, a map gradually develops more and more specialised sequences of BMUs for word forms - or sub-lexical chains - that are functionally dependent on the frequency distribution and the amount of formal redundancy in the training data. On the one hand, specialised inter-node connectivity makes BMUs less confusable and more salient, as they receive stronger support through temporal connections than any other node. On the other hand, less specialised and more blended BMUs are densely and less strongly connected with many others, to meet the input of more words.
Figure 1: Functional architecture of a Temporal Self-Organising Map (TSOM). Each input word form is presented as a unique time series of symbols, which are administered one at a time.
When a TSOM is trained on highly redundant input data such as verb paradigms, specialisation and blending may interact. By inputting all verb forms with a uniform token distribution, we factor out the effect of frequency and focus our analysis on the effect of formal redundancy only. Thus, due to the prediction-driven bias of the temporal layer of re-entrant connections, strong expectations over upcoming input symbols account for successful serial word processing, with processing accuracy being a function of how confident the TSOM is about the position of the current symbol in the input string. These dynamics make it possible to test the behaviour of a TSOM on specific lexical tasks: word recall and serial word processing. For each time series of input symbols (i.e. each word form), the processing response of the map is represented by the synchronic activation pattern of all the BMUs that get most highly activated for that input sequence. Thus, the task of word recall tests how accurately a map can retrieve the input word from its synchronic activation pattern, namely how accurately the activation nodes of the map can encode information about the timing of the input symbols that make up the word. Accuracy in recall verifies that, for each input form, activation propagation (i.e. sequential activation) of nodes within each synchronic pattern correctly activates the BMUs associated with the symbols of each word. Scores are given in Table 2, showing very high accuracy and remarkable cross-linguistic similarity. Conversely, serial word processing can be monitored by evaluating the ability of a map to predict an incrementally presented input word. Procedurally, by presenting one symbol at a time on the input layer, a TSOM is prompted to complete the current input string by anticipating the upcoming BMU to be activated. Anticipation/prediction scores across input words are calculated by incrementally assigning each correctly anticipated symbol in the input form a 1-point score, i.e. the anticipation score of the preceding symbol incremented by 1. Otherwise, for unpredicted symbols the score is 0. The more input symbols are anticipated, the easier the prediction of that word. Results will be given and discussed in the ensuing section (4.).
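The anticipation score just described can be made explicit with the following sketch. Here `predict_next` stands in for the trained map's strongest expectation over the upcoming symbol (i.e. its most highly weighted outgoing temporal connection); the toy oracle at the bottom is only there to make the example runnable.

```python
def anticipation_scores(word, predict_next):
    """Score each symbol of `word` by how long the current run of correct predictions is.

    A correctly anticipated symbol receives the preceding symbol's score plus one;
    a mispredicted symbol scores 0.
    """
    symbols = list("#" + word + "$")
    scores = []
    prev = 0
    for i in range(1, len(symbols)):
        if predict_next(symbols[:i]) == symbols[i]:
            prev = prev + 1
        else:
            prev = 0
        scores.append(prev)
    return scores

# Toy oracle that only predicts 'a' after the start-of-word symbol.
demo = anticipation_scores("amare", lambda prefix: "a" if prefix == ["#"] else "?")
print(demo)  # [1, 0, 0, 0, 0, 0]
```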
Results and discussion
All results in this section are analysed with linear mixed-effects models (LMERs). For all models/languages, we treated TSOM instances, verb paradigms and word forms as random effects. In particular, we show how inflectional systems of different complexity (independent variables) affect TSOM processing, by focusing on symbol prediction rate as a dependent variable. Figure 2 plots, for each language, the rate of symbol prediction in serial word processing. It should be appreciated that Arabic, German, Italian and Spanish exhibit remarkably similar trends, with slopes that do not differ significantly (p-values >.05). Only Greek and English present significantly different slopes (p-values <.001), with Greek forms being the hardest to process (lower slope), and English forms the easiest ones (higher slope).
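For illustration, a roughly analogous model can be specified in Python with statsmodels; the original analyses were presumably run with a dedicated LMER package, and the column names and CSV export below are hypothetical. For brevity only the paradigm grouping is kept as a random intercept, whereas the models described above also treat TSOM instances and word forms as random effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format export of per-symbol prediction scores.
df = pd.read_csv("prediction_scores.csv")

model = smf.mixedlm(
    "predicted_symbols ~ language * dist_to_boundary",  # fixed effects
    data=df,
    groups=df["paradigm"],                               # random intercept per paradigm
)
result = model.fit()
print(result.summary())
```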
To evaluate the impact of formal transparency on processing, the effect of regularity is fitted in a second LMER model where languages are considered as random effects. Across our selected languages, verb forms in regular paradigms are systematically more predictable (p-value <.001) than forms in irregular ones, as shown by the marginal plot in Figure 3.
To investigate in more detail the impact of inflectional complexity on processing, we fitted an LMER of symbol prediction for each language, with classes of morphological regularity (regulars vs. irregulars) and morphological structure (stem vs. suffix) as fixed effects (Figure 4). The marginal plots in Figure 4 better show a clear serial processing effect of the distance of an input symbol to the stem-ending boundary, over and above the length of the input string. Unsurprisingly, Italian and Spanish show a very similar behaviour, with irregular forms exhibiting fusional effects that blur the boundary between stem and inflectional endings, and comparable (but not identical) numbers of stem allomorphs (Boyé and Cabredo Hofherr, 2006; Pirrelli, 2000). Remarkably, both German and Greek exhibit systematic (albeit not always predictable) processes of stem formation, followed by a fairly homogeneous pool of inflectional endings. As a result, in both languages, the base stem (or present stem) is often followed by a highly embedded and unpredictable sequence of symbols, which accounts for the negative slopes in the corresponding segments. In Arabic imperfective forms, prefixation is used to convey person features. This makes selection of inflectional endings fairly predictable, given the stem. Finally, in our pool of languages, English offers by far the simplest inflectional system, with extensive syncretism and a rather dichotomous subdivision of paradigms between regular and irregular ones. Slopes are also modulated by degrees of regularity/transparency of the stem. Discontinuous patterns of morphological structure are often found in irregular paradigms of concatenative languages (e.g. English drink/drunk, German finden/fanden), and are systematically attested in non-concatenative morphologies (e.g. Arabic kataba/yaktubu). It is well known in the literature on serial alignment that discontinuous patterns are more difficult to process and track down (Hahn and Bailey, 2005). In the present context, stem allomorphs are less predictable since their "uniqueness point", i.e. the point at which they can be distinguished from all neighbouring allomorphs, is normally delayed, slowing down processing (Balling and Baayen, 2012). Other things being equal, 4 the order of magnitude of this competing effect is a function of the number of stem allomorphs: the more there are, the more confusable the input stem is. Conversely, in regular paradigms the same stem shows up systematically in all cells. Hence the stem suffers from no intra-paradigmatic competition. These factors provide, on average, a net processing advantage for stems in regular paradigms, as confirmed by the significant difference in prediction rate between stems of regular vs. irregular paradigms in all languages (Figure 4). However, the clear advantage in stem processing is somewhat compensated by the difference in the prediction rate on suffixes.
Figure 4: For each language, marginal plots of interaction effects between morphological (ir-)regularity and distance to morpheme boundary, in LMER models fitting the number of symbols predicted by TSOMs for stem and suffix. Fixed effects: regularity (dashed lines) vs. irregularity (solid lines), distance to morpheme boundary, stem and suffix as separate patterns, suffix length. Random effects: TSOM instances, paradigms, word forms.
In German, Greek, Italian and Spanish, suffixes in irregulars are predicted significantly more easily than suffixes in regular forms, as shown by the steeper segments in the positive x range of Figure 4. Besides, for all languages, there is a deeper drop in prediction rate at the stem-suffix boundary (for x = 0 as the first symbol of the suffix) in regular forms. In fact, stem allomorphs typically select only a subset of paradigm cells. Hence they can be followed by fewer inflectional endings than regular stems are. This reduces processing uncertainty, by constraining the range of possible continuations at the stem-suffix boundary of irregularly inflected forms. As a result, irregulars tend to blur the TSOM sensitivity to the verb morphological structure, favouring a somewhat more holistic processing strategy. Results and statistical significance are confirmed when we consider a more fine-grained measure for inflectional complexity based on a gradient of morphological regularity, which takes into account the number of stem alternants of a given paradigm. 5 It represents a graded - and continuous - measure of paradigmatic (ir-)regularity that considers, for each inflected form, the number of stem-sharing forms (or stem family size), instead of a dichotomous and formal classification of paradigms (regulars vs. irregulars). Thus, given the number of inflected form-types for each paradigm, the average stem family size correlates better with the non-categorical idea of inflectional complexity.
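A minimal sketch of the stem family size measure is given below, assuming each inflected form has been paired with its (possibly alternating) stem; the Italian fragment is only illustrative.

```python
from collections import Counter

def stem_family_sizes(paradigm):
    """For each inflected form, count how many forms of its paradigm share its stem.

    `paradigm` is assumed to be a list of (form, stem) pairs. A fully regular
    paradigm yields the paradigm size for every form, whereas stem alternants in
    (partially) irregular paradigms yield smaller stem families.
    """
    stem_counts = Counter(stem for _, stem in paradigm)
    return {form: stem_counts[stem] for form, stem in paradigm}

# Hypothetical fragment of an Italian paradigm with one alternating stem.
aprire = [("apro", "apr"), ("apriva", "apr"), ("aprirono", "apr"), ("aperto", "apert")]
print(stem_family_sizes(aprire))
# {'apro': 3, 'apriva': 3, 'aprirono': 3, 'aperto': 1}
```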
Concluding remarks
Our evidence is in line with the Low Conditional Entropy Conjecture (Ackerman and Malouf, 2013). The processing cost of considerably different inflectional systems appears to oscillate within a fairly limited range of variation, whose upper bound and lower bound are marked, in our language sample, by Modern Greek and English respectively. All other conjugations present no statistically significant differences in the processing overhead they require, in spite of their typological diversity, which is nonetheless reflected by the different processing profiles exhibited by sublexical constituents in the different languages. From a functional perspective, this evidence can be interpreted as the result of a balancing act between two potentially competing communicative requirements: (i) a recognition-driven tendency for a maximally contrastive system; and (ii) a production-driven bias for a maximally generalisable inflection system, where, for each paradigm, all forms in the paradigm can possibly be deduced from any one of its forms. This interpretation is also compatible with another clear pattern shown by our data. In each of our sample languages, the difference between the processing cost of forms in irregular paradigms and the processing cost of forms in regular paradigms shows an interesting structure-sensitive profile. The higher processing cost of irregular stems is compensated by a lower cost in processing the inflectional endings selected by irregular stems. Once more, these structural effects tend to reduce processing costs at the level of the whole word, making the inflectional system as functional as possible from an information theoretic perspective. In recognising that scale effects play an important role in the processing behaviour of our model at the word level, and that constraints on word processing are likely to obtain universally, we also highlight the fundamental communicative role of words as optimal-sized units for describing general functional tendencies in language, and for studying language as a complex information system. Inflectional complexity is multifactorial and dynamic. Its variability can be observed and measured on many counts: number and types of stem allomorphs, number and types of inflectional affixes, transparency/compositionality effects, stem-stem predictability, stem-affix predictability, affix-affix predictability, intra-paradigmatic and inter-paradigmatic frequency distributions, etc. In this paper, we investigated inflectional complexity by controlling a number of interacting factors through language-specific training regimes, on which we ran a psycho-linguistically plausible computer model of inflection learning. In this way, we could understand more of factor interaction through a quantitative analysis of the way the performance of our system is affected across different training regimes. Methodologically, this allows for much more flexible and controlled test/analysis protocols than those commonly used with human subjects in experimental psycholinguistics. In addition, understanding more of the real cognitive hurdles a human learner has to face in the process of effectively acquiring an inflectional system of average complexity may also shed some light on optimal practices for language teaching.
Figure 2: Marginal plot of interaction effects between language and distance to morpheme boundary, in an LMER model fitting the number of symbols predicted by TSOMs. Fixed effects: languages, distance to morpheme boundary. Random effects: TSOM instances, paradigms, word forms.
Figure 3: Marginal plot of interaction effects between categorical (ir-)regularity and distance to morpheme boundary, in an LMER model fitting the number of symbols predicted by TSOMs. Fixed effects: irregulars (I) vs. regulars (R), distance to morpheme boundary. Random effects: languages, TSOM instances, paradigms, word forms.
For example, Noccetti (2003) reports that the transition from pre- to proto-morphology in Italian verb acquisition has an early onset at Brown's stage II, with mean length of utterance 2 (Brown, 1973), in contrast with the comparatively late emergence of the third-person singular marker -s in the acquisition of the English present tense.
The full set of data, for each language, is available at http://www.comphyslab.it/redirect/?id=lrec2018_data
For the sake of data comparability, the number of memory nodes for each language was decided empirically to control for cross-linguistic differences in cardinality and length of word types (see Table 1). For all trained languages, the percentage of used nodes among all available nodes ranges between 31% and 35%.
The effect is modulated by other factors we are not controlling here: i.e. the formal similarity between the input stem and its intra-paradigmatic competitors, the entropy of the paradigm, the lexical neighbourhood of the word form.
This graded notion takes into account exceptional alternating stems in otherwise regular paradigms (e.g. Italian aprire/aperto and Spanish abrir/abierto, "open" infinitive/"opened" past participle). At the same time, it captures the difference between partially irregular paradigms and radically idiosyncratic ones.
Ackerman, F. and Malouf, R. (2013). Morphological organization: The low conditional entropy conjecture. Language, 89(3):429-464.
Baayen, H. R., Piepenbrock, P., and Gulikers, L. (1995). The CELEX Lexical Database (CD-ROM). Linguistic Data Consortium, University of Pennsylvania, Philadelphia, PA.
Balling, L. W. and Baayen, R. H. (2008). Morphological effects in auditory word recognition: Evidence from Danish. Language and Cognitive Processes, 23(7-8):1159-1190.
Balling, L. W. and Baayen, R. H. (2012). Probability and surprisal in auditory comprehension of morphologically complex words. Cognition, 125(1):80-106.
Bane, M. (2008). Quantifying and measuring morphological complexity. In Proceedings of the 26th West Coast Conference on Formal Linguistics, pages 69-76.
Bearman, M., et al., editors (2015). Understanding and Measuring Morphological Complexity. Oxford University Press.
Bickel, B. and Nichols, J. (2005). Inflectional synthesis of the verb. In Martin Haspelmath, Matthew S. Dryer, David Gil, et al., editors, The World Atlas of Language Structures, pages 94-97. Oxford University Press.
Bittner, D., et al., editors (2003). Development of Verb Inflection in First Language Acquisition: a cross-linguistic perspective. Mouton de Gruyter, Berlin.
Blevins, J. P. (2003). Stems and paradigms. Language, 79(2):737-767.
Blevins, J. P. (2016). Word and Paradigm Morphology. Oxford University Press, Oxford.
Boyé, G. and Cabredo Hofherr, P. (2006). The structure of allomorphy in Spanish verb inflection. Cuadernos de Lingüística, 13:9-24.
Brown, R. (1973). A first language: the early stages. George Allen & Unwin.
Burzio, L. (1998). Multiple correspondence. Lingua, 104:79-109.
Bybee, J. and McClelland, J. L. (2005). Alternatives to the combinatorial paradigm of linguistic theory based on domain general principles of human cognition. The Linguistic Review, 22(2-4):381-410.
Bybee, J. (1995). Regular morphology and the lexicon. Language and Cognitive Processes, 10(5):425-455.
Dimitropoulou, M., Duñabeitia, J. A., Avilés, A., Corral, J., and Carreiras, M. (2010). Subtitle-based word frequencies as the best estimate of reading behavior: the case of Greek. Frontiers in Psychology, 1(218):1-12.
Evans, N. and Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, (32):429-492.
Ferro, M., Marzi, C., and Pirrelli, V. (2011). A self-organizing model of word storage and processing: implications for morphology learning. Lingue e Linguaggio, 10(2):209-226.
Finkel, R. and Stump, G. (2007). Principal parts and morphological typology. Morphology, 17:39-75.
Hahn, U. and Bailey, T. M. (2005). What makes words sound similar? Cognition, 97:227-267.
Juola, P. (1998). Measuring linguistic complexity: The morphological tier. Journal of Quantitative Linguistics, 3(5):206-213.
Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problems of Information Transmission, 1(1):1-7.
Lyding, V., Stemle, E., Borghetti, C., Brunello, M., Castagnoli, S., Dell'Orletta, F., Dittmann, H., Lenci, A., and Pirrelli, V. (2014). The Paisà corpus of Italian web texts. In Proceedings of the 9th Web as Corpus Workshop (WaC-9) @ EACL 2014, pages 36-43. Association for Computational Linguistics.
Maamouri, M., Bies, A., Buckwalter, T., and Mekki, W. (2004). The Penn Arabic Treebank: Building a large-scale annotated Arabic corpus. In NEMLAR Conference on Arabic Language Resources and Tools, volume 27, pages 466-467.
Marr, D. (1982). Vision. A Computational Investigation into the Human Representation and Processing of Visual Information. Freeman and Company.
Marzi, C. and Pirrelli, V. (2015). A neuro-computational approach to understanding the mental lexicon. Journal of Cognitive Science, 16(4):493-535.
Marzi, C., Ferro, M., and Pirrelli, V. (2014). Morphological structure through lexical parsability. Lingue e Linguaggio, 13(2):263-290.
Marzi, C., Ferro, M., Cardillo, F. A., and Pirrelli, V. (2016). Effects of frequency and regularity in an integrative model of word storage and processing. Italian Journal of Linguistics, 28(1):79-114.
Matthews, P. H. (1991). Morphology. Cambridge University Press, Cambridge.
McWorther, J. (2001). The world's simplest grammars are creole grammars. Linguistic Typology, (5):125-166.
Moscoso del Prado Martín, F., Kostić, A., and Baayen, R. H. (2004). Putting the bits together: An information theoretical perspective on morphological processing. Cognition, 94(1):1-18.
Noccetti, S. (2003). Acquisition of verb morphology in Italian: A case study. In Dagmar Bittner, et al., editors, Development of Verb Inflection in First Language Acquisition: a cross-linguistic perspective, pages 351-378. Mouton de Gruyter, Berlin.
Pickering, M. J. and Garrod, S. (2013). An integrated theory of language production and comprehension. Behavioral and Brain Sciences, 36:329-392.
Pirrelli, V. and Battista, M. (2000). The paradigmatic dimension of stem allomorphy in Italian verb inflection. Italian Journal of Linguistics, 12(2):307-380.
Pirrelli, V., Ferro, M., and Marzi, C. (2015). Computational complexity of abstractive morphology. In Matthew Baerman, et al., editors, Understanding and Measuring Morphological Complexity, pages 141-166. Oxford University Press, Oxford.
Pirrelli, V. (2000). Paradigmi in morfologia. Un approccio interdisciplinare alla flessione verbale dell'italiano. Istituti Editoriali e Poligrafici Internazionali, Pisa.
Ralli, A. (2005). Morfologia [Morphology]. Patakis, Athens.
Ralli, A. (2006). On the role of allomorphy in inflectional morphology: evidence from dialectal variation, pages 123-152. Polimetrica, Monza.
Ramscar, M. and Dye, M. (2011). Learning language from the input: Why innate constraints can't explain noun compounding. Cognitive Psychology, 62(1):1-40.
Ramscar, M. and Yarlett, D. (2007). Linguistic self-correction in the absence of feedback: A new approach to the logical problem of language acquisition. Cognitive Science, 31(6):927-960.
Rescorla, R. A. and Wagner, A. R. (1972). A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and non-reinforcement. In Abraham H. Black et al., editors, Classical Conditioning II: Current Research and Theory, pages 64-99. Appleton-Century-Crofts, New York.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27:379-423.
Shosted, R. (2006). Correlating complexity: a typological approach. Linguistic Typology, (10):1-40.
Walther, G. and Sagot, B. (2011). Modélisation et implémentation de phénomènes flexionnels non-canoniques. Traitement Automatique des Langues, 2(52):91-122. |
18,270,214 | This paper briefly sketches new work-in-progress (i) developing task-based scenarios where human-robot teams collaboratively explore real-world environments in which the robot is immersed but the humans are not, (ii) extracting and constructing "multi-modal interval corpora" from dialog, video, and LIDAR messages that were recorded in ROS bagfiles during task sessions, and (iii) testing automated methods to identify, track, and align co-referent content both within and across modalities in these interval corpora. The pre-pilot study and its corpora provide a unique, empirical starting point for our longer-term research objective: characterizing the balance of explicitly shared and tacitly assumed information exchanged during effective teamwork. | [
13427863,
9137775,
13091133,
7825911
] |
Clare R. Voss clare.r.voss.civ@mail.mil
Army Research Laboratory, Adelphi, MD 20783
Taylor Cassidy taylor.cassidy.ctr@mail.mil
Army Research Laboratory, Adelphi, MD 20783
IBM T. J. Watson Research Center, Hawthorne, NY 10532
Douglas Summers-Stay douglas.a.summers-stay.civ@mail.mil
Army Research Laboratory, Adelphi, MD 20783
Proceedings of the EACL 2014 Workshop on Dialogue in Motion (DM), Gothenburg, Sweden, April 26-30, 2014. Association for Computational Linguistics.
Collaborative Exploration in Human-Robot Teams: What's in Their Corpora of Dialog, Video, & LIDAR Messages?
This paper briefly sketches new work-in-progress (i) developing task-based scenarios where human-robot teams collaboratively explore real-world environments in which the robot is immersed but the humans are not, (ii) extracting and constructing "multi-modal interval corpora" from dialog, video, and LIDAR messages that were recorded in ROS bagfiles during task sessions, and (iii) testing automated methods to identify, track, and align co-referent content both within and across modalities in these interval corpora. The pre-pilot study and its corpora provide a unique, empirical starting point for our longer-term research objective: characterizing the balance of explicitly shared and tacitly assumed information exchanged during effective teamwork.
Overview
Robots that are able to move into areas where people cannot during emergencies, and to collaboratively explore these environments by teaming with humans, have tremendous potential to impact search and rescue operations. For human-robot teams to conduct such shared missions, humans need to trust that they will be kept apprised, at a minimum. To begin documenting the communication challenges humans face in taking a robot's perspective, we conducted a pre-pilot study 1 to record, identify and track the dialog, video, and LIDAR information that is explicitly shared by, or indirectly available to, members of human-robot teams when conducting collaborative tasks.
Approach
We enlisted colleagues to be the commander (C) or the human (R) controlling a mobile physical robot in such tasks. Neither could see the robot. Only R could "see for" the robot, via its onboard video camera and LIDAR. C and R communicated by text chat on their computers, as in this example:
R 41: I can see in the entrance.
C 42: Enter and scan the first room.
Utterances R 41 & C 42 occur when the robot is outdoors (Fig. 1) and R 44 & C 45 occur after it moves indoors (Fig. 2). Although our approach resembles a Wizard-of-Oz paradigm (Riek, 2012), with C as User and R as Wizard controlling the robot, there is no intent for R to deceive C.
1 Statisticians say pre-pilots are for "kicking the tires," early-stage tests of scenarios, equipment, and data collection.
In these dialog snippets, notice that the doors mentioned in R 44 are not visible in the image of that utterance's time interval and, even if they had been visible, their referents were context-dependent and ambiguous. How are the robot and human to refer to the same door? This challenge entails resolving several types of co-reference (linguistic: are they talking about the same door? visual: are they looking at the door? navigational: is one backing into a door no longer in view but previously stored in its map?). Successful communication on human-robot teams, where humans send messages to direct robot movements and receive robot-processed messages as the robot navigates, entails effective identification of named referents (such as doors), both within and across available modalities during exploratory tasks. The research question is, how might the identification and alignment of entities using combinations of (i) NLP on dialog, (ii) image processing on the video and LIDAR stream, with (iii) robot position, motion, and orientation coordinates, support more effective human-robot missions?
We conducted the pre-pilot study with ten trial sessions to collect multi-modal data from C-R and R-only scenarios (Table 1). Each session involved a single participant playing the role of R with control over the physical robot, or two participants, one person playing R and one playing C.
Table 1: Pre-pilot Scenarios.

Team    R's Task
R only  Rotate in place and describe surroundings.
R only  Move along road, describe surroundings.
C, R    Follow C's guidance in navigating building's perimeter, describe surroundings.
C, R    Follow C's guidance in searching buildings for specified objects.

Participants sat indoors and could not see the robot outside, roughly 30 meters away. In each session, R was instructed to act as though he or she were situated in the robot's position and to obey C. R was to consider the robot's actions as R's own, and to consider available video and LIDAR point cloud feeds as R's own perceptions.
Equipment
All participants worked from their own computers. Each was instructed, for a given scenario, to be either C or R and to communicate by text only.
On their screens they saw a dedicated dialog (chat) window in a Linux terminal. For sessions with both C and R, the same dialog content (the ongoing sequence of typed-in utterances) appeared in the dialog window on each of their screens. The physical robot ran under the Robot Operating System (ROS) (Quigley et al., 2009), equipped with a video camera, laser sensors, magnetometer, GPS unit, and rotary encoders. R could "see for the robot" via two ROS rviz windows with live feeds for video from the robot's camera and constructed 3D point cloud frames. 2 R had access to rotate and zoom functions to alter the screen display of the point cloud. C saw only a static bird's-eye-view map of the area. R remotely controlled the robot's four wheels and its motion over a network connection, using the left joystick of an X-Box controller.
Collection
During each session, all data from the robot's sensors and dialog window was recorded via the rosbag tool and stored in a single bagfile. 3 A bagfile contains typed messages. Each message contains a timestamp (specified at nanosecond granularity) and values for that message type's attributes. Messages of type geometry_msgs/PoseStamped, for example, contain a time stamp, a three-dimensional location vector and a four-dimensional orientation vector that indicates an estimate of the robot's location and the direction in which it is facing. The robot's rotary encoders generate these messages as the robot moves. The primary bagfile message types most relevant to our initial analyses 4 were:

1) instant_messenger/StringStamped, with speaker id and text utterances
2) sensor_msgs/PointCloud2, with LIDAR data
3) sensor_msgs/CompressedImage, with compressed, rectified video images
4) sensor_msgs/GPS, with robot coordinates

Message types are packaged and published at different rates: some are published automatically at regular intervals (e.g., image frames), while others depend on R, C, or robot activity (e.g., dialog utterances). The specific rate of publication for some message types can also be limited at times by network bandwidth constraints (e.g., LIDAR data). Summary statistics for our initial pre-pilot collection, consisting of ten task sessions conducted over two days that together spanned roughly five hours in real time, are presented in Table 2.
From Collection to Interval Corpora
After collecting millions of messages in the pre-pilot with content in different modalities, the immediate research challenge has been identifying the time interval that covers the messages directly related to the content in each utterance. We extracted each utterance message u and its corresponding time stamp t. For a given u, we extracted the five image, five point cloud, and five GPS messages immediately preceding and the five of each immediately following u, based on message time-stamps, for a total of thirty sensor messages per utterance. These message types were published independent of the robot's movement, approximately once per second. In the second phase, we assigned the earliest and latest time stamps from the first-phase messages to delimit an interval [t_s, t_e] and conducted another extraction round from the bagfile, this time pulling out all messages with time stamps in that interval as published by the rotary encoders, compass, and inertial measurement unit, only when the robot moved. The messages from both phases constitute a ten-second interval corpus for u.
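A rough sketch of this two-phase extraction, written against the rosbag Python API, is shown below. The topic names are placeholders of our own; they are not the topics actually used in the pre-pilot.

```python
# Sketch only: two-phase interval-corpus extraction around one utterance.
# Topic names below are hypothetical stand-ins for the pre-pilot's topics.
import rosbag
import rospy

SENSOR_TOPICS = ['/camera/image/compressed', '/lidar/points', '/gps/fix']  # assumed
MOTION_TOPICS = ['/odom', '/imu', '/compass']                              # assumed

def interval_corpus(bag, utt_time, window=5):
    """Phase 1: for each sensor topic, keep the `window` messages published
    just before and just after the utterance time stamp. Phase 2: pull every
    motion message whose time stamp falls inside the resulting [t_s, t_e]."""
    phase1, stamps = [], []
    for topic in SENSOR_TOPICS:
        msgs = [(t.to_sec(), m) for _, m, t in bag.read_messages(topics=[topic])]
        before = [x for x in msgs if x[0] <= utt_time][-window:]
        after = [x for x in msgs if x[0] > utt_time][:window]
        phase1.extend(before + after)
        stamps.extend(s for s, _ in before + after)
    t_s, t_e = min(stamps), max(stamps)
    phase2 = list(bag.read_messages(topics=MOTION_TOPICS,
                                    start_time=rospy.Time.from_sec(t_s),
                                    end_time=rospy.Time.from_sec(t_e)))
    return {'interval': (t_s, t_e), 'sensor': phase1, 'motion': phase2}

# One interval corpus per dialog utterance in a session's bagfile:
# bag = rosbag.Bag('session01.bag')
# corpora = [interval_corpus(bag, t.to_sec())
#            for _, _, t in bag.read_messages(topics=['/dialog'])]
```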
These interval corpora serve as a first approximation at segmenting the massive stream published at nanosecond-level into units pertaining to commander-robot dialog during the task at hand. With manual inspection, we found that many automatically-constructed intervals do track relevant changes in the robot's location. For example, the latest interval in a task's time sequence that was constructed with the robot being outside a building is distinct from the first interval that covers when the robot moves inside the building. 5
Corpora Language Processing
Each utterance collected from the sessions was tokenized, parsed, and semantically interpreted using SLURP (Brooks et al., 2012), a well-tested NLP front-end component of a human-robot system. 6 The progression in SLURP's analysis pipeline for utterance C 45 is shown in Figure 3.
SLURP extracts a parse tree (top-left), identifies a sub-tree that constitutes a verb-argument structure, and enumerates possibly matching sense-specific verb frames from VerbNet (Schuler, 2005) (bottom-left). VerbNet provides a syntactic to semantic role mapping for each frame (top-right). SLURP selects the best mapping and generates a compact semantic representation (bottom-right). 7 In this example, the correct sense of "scan" is selected (investigate-35.4) along with a frame that matches the syntactic parse. Overall, half the commands run through SLURP generated a semantic interpretation. Of the other half, roughly one quarter failed or had errors at parsing and the other quarter at the argument matching stage. Our next step is to augment SLURP's lexicon and retrain a parser for new vocabulary so that we can directly map semantic structures of the pre-pilot corpora into ResearchCyc 8, an extensive ontology, for cross-reference to other events and objects, already stored and possibly originated as visual input. Following McFate (2010), we will test the mapping of matched VerbNet frames to ResearchCyc's semantic predicates to assess its lexical coverage for our corpora.

6 https://github.com/PennNLP/SLURP
7 VerbNet associates each frame with a conjunction of boolean semantic predicates that specify how and when event participants interact, for an event variable (not shown).
8 ResearchCyc and CycL are trademarks of Cycorp, Inc.
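As a rough illustration of the frame-enumeration step described above, the snippet below lists the candidate VerbNet classes for a verb using NLTK's VerbNet reader; it approximates only this one stage and is not SLURP's actual code.

```python
# Enumerate candidate sense-specific VerbNet classes for a verb; the chosen
# class (e.g. investigate-35.4 for "scan" above) is the one whose frame best
# matches the syntactic parse. This uses NLTK's VerbNet reader, not SLURP.
from nltk.corpus import verbnet as vn

def candidate_classes(verb_lemma):
    """Return the VerbNet class ids that list the verb as a member."""
    return vn.classids(lemma=verb_lemma)

for cid in candidate_classes('scan'):
    print(cid)
    print(vn.pprint(vn.vnclass(cid)))  # frames with their syntax/semantics
```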
Image Processing
Interval corpus images were labelled by a neural network trained for visual scene classification (Munoz, 2013) of nine material classes: dirt, foliage, grass, road, sidewalk, sky, wall, wood, and ground cover (organic debris). Figures 4 and 5 show the images from Figures 1 and 2 with two additional versions: one with colored zones for system-recognized class boundaries and another with colored zones as transparent overlays on the original. The classes differentiate terrain types that work well with route-finding techniques that leverage them in selecting traversable paths. As the robot systems are enhanced with more sophisticated path planning software, that knowledge may be combined with recognized zones to send team members messages about navigation problems as the robot explores where they cannot go. Accuracy is limited at the single image level: the actual grass in Figure 4 is mostly misclassified as dirt (blue) along with some correctly identified grass (green), while the floor in Figure 5 is misclassified as road, although much of what shows through the window is correctly classified as foliage. We are experimenting with automatically assigning natural language (NL) labels to a range of objects and textures recognized in images from other larger datasets. We can retrieve labeled images stored in ResearchCyc via NL query converted into CycL, allowing a commander to, for example, ask questions about objects and regions using terms related to but not necessarily equal to the original recognition system-provided labels.
Related Work
We are aware of no other multi-modal corpora obtained from human-robot teams conducting exploratory missions with collected dialog, video and other sensor data. Corpora with a robot recording similar data modalities do exist (Green et al., 2006;Wienke et al., 2012;Maas et al., 2006) but for fundamentally different tasks. Tellex et al. (2011) and Matuszek et al. (2012) pair commands with formal plans without dialog and Zender et al. (2008) and Randelli et al. (2013) build multi-level maps but with a situated commander. Eberhard et al. (2010)'s CReST corpus contains a set-up similar to ours minus the robot; a human task-solver wears a forward-facing camera instead. The SCARE corpus (Stoia et al., 2008) records similar modalities but in a virtual environment, where C has full access to R's video feed. Other projects yielded corpora from virtual environments that include route descriptions without dialog (Marge and Rudnicky, 2011;MacMahon et al., 2006;Vogel and Jurafsky, 2010) or referring expressions without routes (Schütte et al., 2010;Fang et al., 2013), assuming pre-existing abstractions from sensor data.
Conclusion and Ongoing Work
We have presented our pre-pilot study with data collection and corpus construction phases. This work-in-progress requires further analysis. We are now processing dialog utterances for more systematic semantic interpretation using disambiguated VerbNet frames that map into ResearchCyc predicates. We will run object recognition software retrained on a broader range of objects so that it can be applied to images that will be labelled and stored in ResearchCyc micro-worlds for subsequent co-reference with terms in the dialog utterances. Ultimately we want to establish in real time links across parts of messages in different modalities that refer to the same abstract entities, so that humans and robots can share their separately-obtained knowledge about the entities and their spatial relations -whether seen, sensed, described, or inferred -when communicating on shared tasks in environments.
Figure 1: Outside View: Video Image & LIDAR.
R 44: I see a door to the right and a door to the left.
C 45: Scan next open room on left.
Figure 2: Inside View: Video Image & LIDAR. Brightness and contrast of video image increased for print publication.
Figure 3: Analyses of "Scan next open room on left".
Figure 4: Outside View: Image, Zones, Overlay.
Figure 5: Inside View: Image, Zones, Overlay. Brightness and contrast of video image and overlay increased for print publication.
Table 2: Collection Statistics (sn = session).

#bagfile msgs   15,131K     #dialog utts     434
  min per sn    140,848       min per sn      15
  max per sn     3,030K       max per sn     116
#tokens           3,750     #image msgs   10,650
  min per sn        200       min per sn     417
  max per sn        793       max per sn   1,894
#unique words       568     #LIDAR msgs    8,422
  min per sn         84       min per sn     215
  max per sn        176       max per sn   2,250
2 LIDAR measures distance from the robot by illuminating targets with robot lasers and generates point cloud messages.
3 http://wiki.ros.org/rosbag
4 We omit here details of ROS topics, transformation messages, and other sensor data collected in the pre-pilot.
5 This appears likely due to the paced descriptions in R's utterances. Another pre-pilot is needed to test this hypothesis.
Acknowledgments

Over a dozen engineers and researchers assisted us in many ways before, during, and after the pre-pilot, providing technical help with equipment and data collection, as well as participating in the pre-pilot. We cannot list everyone here, but special thanks to Stuart Young for providing clear guidance to everyone working with us.
Daniel J. Brooks, Constantine Lignos, Cameron Finucane, Mikhail S. Medvedev, Ian Perera, Vasumathi Raman, Hadas Kress-Gazit, Mitch Marcus, and Holly A. Yanco. 2012. Make it so: Continuous, flexible natural language interaction with an autonomous robot. In Proc. AAAI, pages 2-8.
Kathleen M. Eberhard, Hannele Nicholson, Sandra Kübler, Susan Gundersen, and Matthias Scheutz. 2010. The Indiana "Cooperative Remote Search Task" (CReST) corpus. In Proc. LREC.
Rui Fang, Changsong Liu, Lanbo She, and Joyce Y. Chai. 2013. Towards situated dialogue: Revisiting referring expression generation. In Proc. EMNLP, pages 392-402.
Anders Green, Helge Hüttenrauch, and Kerstin Severinson Eklundh. 2006. Developing a contextualized multimodal corpus for human-robot interaction. In Proc. LREC.
Jan F. Maas, Britta Wrede, and Gerhard Sagerer. 2006. Towards a multimodal topic tracking system for a mobile robot. In Proc. INTERSPEECH.
Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. In Proc. AAAI, pages 1475-1482.
Matthew Marge and Alexander I. Rudnicky. 2011. The TeamTalk corpus: Route instructions in open spaces. In Proc. RSS, Workshop on Grounding Human-Robot Dialog for Spatial Tasks.
Cynthia Matuszek, Evan Herbst, Luke S. Zettlemoyer, and Dieter Fox. 2012. Learning to parse natural language commands to a robot control system. In Proc. ISER, pages 403-415.
Clifton McFate. 2010. Expanding verb coverage in Cyc with VerbNet. In Proc. ACL, Student Research Workshop, pages 61-66.
Daniel Munoz. 2013. Inference Machines: Parsing Scenes via Iterated Predictions. Ph.D. thesis, Carnegie Mellon University.
Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully B. Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y. Ng. 2009. ROS: an open-source robot operating system. In Proc. ICRA, Workshop on Open Source Software.
Gabriele Randelli, Taigo Maria Bonanni, Luca Iocchi, and Daniele Nardi. 2013. Knowledge acquisition through human-robot multimodal interaction. Intelligent Service Robotics, 6(1):19-31.
Laurel D. Riek. 2012. Wizard of Oz studies in HRI: A systematic review and new reporting guidelines. Journal of Human-Robot Interaction, 1(1).
Karin Kipper Schuler. 2005. VerbNet: A Broad-coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania.
Niels Schütte, John D. Kelleher, and Brian Mac Namee. 2010. Visual salience and reference resolution in situated dialogues: A corpus-based evaluation. In Proc. AAAI, Fall Symposium: Dialog with Robots.
Laura Stoia, Darla Magdalena Shockley, Donna K. Byron, and Eric Fosler-Lussier. 2008. SCARE: a situated corpus with annotated referring expressions. In Proc. LREC.
Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth J. Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Proc. AAAI.
Adam Vogel and Daniel Jurafsky. 2010. Learning to follow navigational directions. In Proc. ACL, pages 806-814.
Johannes Wienke, David Klotz, and Sebastian Wrede. 2012. A framework for the acquisition of multimodal human-robot interaction data sets with a whole-system perspective. In Proc. LREC, Workshop on Multimodal Corpora for Machine Learning.
Hendrik Zender, O. Martínez Mozos, Patric Jensfelt, G.-J. M. Kruijff, and Wolfram Burgard. 2008. Conceptual spatial representations for indoor mobile robots. Robotics and Autonomous Systems, 56(6):493-502.
|
15,388,570 | A proposal to automatically build and maintain gazetteers for Named Entity Recognition by using Wikipedia | This paper describes a method to automatically create and maintain gazetteers for Named Entity Recognition (NER). This method extracts the necessary information from linguistic resources. Our approach is based on the analysis of an on-line encyclopedia entries by using a noun hierarchy and optionally a PoS tagger. An important motivation is to reach a high level of language independence. This restricts the techniques that can be used but makes the method useful for languages with few resources. The evaluation carried out proves that this approach can be successfully used to build NER gazetteers for location (F 78%) and person (F 68%) categories. | [
1671874,
23418116,
14098062,
8885713,
7701908
] | A proposal to automatically build and maintain gazetteers for Named Entity Recognition by using Wikipedia
Antonio Toral atoral@dlsi.ua.es
University of Alicante, Carretera San Vicente S/N
03690 Alicante, Spain
Rafael Muñoz rafael@dlsi.ua.es
University of Alicante
Carretera San Vicente S/N, 03690 Alicante, Spain
A proposal to automatically build and maintain gazetteers for Named Entity Recognition by using Wikipedia
This paper describes a method to automatically create and maintain gazetteers for Named Entity Recognition (NER). This method extracts the necessary information from linguistic resources. Our approach is based on the analysis of an on-line encyclopedia entries by using a noun hierarchy and optionally a PoS tagger. An important motivation is to reach a high level of language independence. This restricts the techniques that can be used but makes the method useful for languages with few resources. The evaluation carried out proves that this approach can be successfully used to build NER gazetteers for location (F 78%) and person (F 68%) categories.
Introduction
Named Entity Recognition (NER) was defined at the MUC conferences (Chinchor, 1998) as the task consisting of detecting and classifying strings of text which are considered to belong to different classes (e.g. person, location, organization, date, time). Named Entities are theoretically identified and classified by using evidence. Two kinds of evidence have been defined (McDonald, 1996). These are internal and external evidence. Internal evidence is the one provided from within the sequence of words that constitute the entity. In contrast, external evidence is the criteria that can be obtained by the context in which entities appear.
Since the time NER was introduced, mainly two approaches have been adopted to deal with this task. One is referred as knowledge-based and uses explicit resources like rules and gazetteers, which commonly are hand-crafted. The other follows the learning paradigm and usually uses as a resource a tagged corpus which is used to train a supervised learning algorithm.
In the knowledge-based approach two kind of gazetteers can be distinguished. On one hand there are trigger gazetteers, which contain key words that indicate the possible presence of an entity of a given type. These words usually are common nouns. E.g. ms. indicates that the entity after it is a person entity. On the other hand there are entity gazetteers which contain entities themselves, which usually are proper nouns. E.g. Portugal could be an instance in a location gazetteer.
Initially, and especially for the MUC conferences, most of the NER systems developed belonged to the knowledge-based approach. This approach proved able to obtain high scores. In fact, the highest score obtained by a knowledge-based system in MUC-7 reached F 93.39% (Mikheev et al., 1998). However, this approach has an important problem: gazetteers and rules are difficult and tedious to develop and to maintain. If the system is to be used for an open domain, linguistic experts are needed to build the rules, and besides, it takes too much time to tune these resources in order to obtain satisfactory results. Because of this, lately most of the research falls into the learning-based paradigm.
Regarding the creation and maintenance of gazetteers, several problems have been identified, these are mainly:
• Creation and maintenance effort
• Overlaps between gazetteers
The first problem identified assumes that the gazetteers are manually created and maintained. However, this is not always the case. Gazetteers could be automatically created and maintained by extracting the necessary information from available linguistic resources, which we think is a promising line of future research.
Several research works have been carried out in this direction. An example of this is a NER system which uses trigger gazetteers automatically extracted from WordNet (Magnini et al., 2002) by using wordnet predicates. The advantage in this case is that the resource used is multilingual and thus, porting it to another language is almost straightforward (Negri and Magnini, 2004).
There is also a work that deals with automatically building location gazetteers from internet texts by applying text mining procedures (Ourioupina, 2002), (Uryupina, 2003). However, this work uses linguistic patterns, and thus is language dependent. The author claims that the approach may successfully be used to create gazetteers for NER.
We agree with (Magnini et al., 2002) that in order to automatically create and maintain trigger gazetteers, using a hierarchy of common nouns is a good approach. Therefore, we want to focus on the automatically creation and maintenance of entity gazetteers. Another reason for this is that the class of common nouns (the ones being triggers) is much more stable than the class of proper names (the ones in entity gazetteers). Because of this, the maintenance of the latter is important as new entities to be taken into account appear. For example, if we refer to presidents, the trigger word used might be 'president' and it is uncommon that the trigger used to refer to them changes over time. On the other hand, the entities being presidents change as new presidents appear and current presidents will disappear.
Our aim is to find a method which allow us to automatically create and maintain entity gazetteers by extracting the necessary information from linguistic resources. An important restriction though, is that we want our method to be as independent of language as possible.
The rest of this paper is structured as follows.
In the next section we discuss about our proposal. Section three presents the results we have obtained and some comments about them. Finally, in section four we outline our conclusions and future work.
Approach
In this section we present our approach to automatically build and maintain dictionaries of proper nouns. In a nutshell, we analyse the entries of an encyclopedia with the aid of a noun hierarchy. Our motivation is that proper nouns that form entities can be obtained from the entries in an encyclopedia and that some features of their definitions in the encyclopedia can help to classify them into their correct entity category.
The encyclopedia used has been Wikipedia 1. According to the English version of Wikipedia 2, Wikipedia is a multi-lingual, web-based, free-content encyclopedia which is updated continuously in a collaborative way. The reasons why we have chosen this encyclopedia are the following:
• It is a big source of information. By December 2005, it has over 2,500,000 definitions. The English version alone has more than 850,000 entries.
• Its content has a free license, meaning that it will always be available for research without restrictions and without needing to acquire any license.
• It is a general knowledge resource. Thus, it can be used to extract information for open domain systems.
• Its data has some degree of formality and structure (e.g. categories) which helps to process it.
• It is a multilingual resource. Thus, if we are able to develop a language independent system, it can be used to create gazetteers for any language for which Wikipedia is available.
• It is continuously updated. This is a very important fact for the maintenance of the gazetteers.
The noun hierarchy used has been the noun hierarchy from WordNet (Miller, 1995). This is a widely used resource for NLP tasks. Although initially being a monolingual resource for the English language, a later project called EuroWordNet (Vossen, 1998), provided wordnet-like hierarchies for a set of languages of the European Union. Besides, EuroWordNet defines a language independent index called Inter-Lingual-Index (ILI) which allows to establish relations between words in wordnets of different languages. The ILI facilitates also the development of wordnets for other languages.
From this noun hierarchy we consider the nodes (called synsets in WordNet) which in our opinion represent more accurately the different kind of entities we are working with (location, organization and person). For example, we consider the synset 6026 as the corresponding to the entity class Person. This is the information contained in synset number 6026:
person, individual, someone, somebody, mortal, human, soul --(a human being; "there was too much for one person to do")
Given an entry from Wikipedia, a PoS-tagger (Carreras et al., 2004) is applied to the first sentence of its definition. As an example, the first sentence of the entry Portugal in the Simple English Wikipedia 3 is presented here:

Portugal is a country in the south-west of Europe.

For every noun in a definition we obtain the synset of WordNet that contains its first sense 4. We follow the hyperonymy branch of this synset until we arrive at a synset we have considered as belonging to an entity class, or we arrive at the root of the hierarchy. If we arrive at a considered synset, then we consider that noun as belonging to the entity class of the considered synset. The following example may clarify this explanation:

portugal --> LOCATION
country --> LOCATION
south-west --> NONE
europe --> LOCATION

3 http://simple.wikipedia.org/wiki/Portugal
4 We have also carried out experiments taking into account all the senses provided by WordNet. However, the performance obtained is not substantially better while the processing time increases notably.
As it has been said in the abstract, the application of a PoS tagger is optional. The algorithm will perform considerably faster with it as with the PoS data we only need to process the nouns. If a PoS tagger is not available for a language, the algorithm can still be applied. The only drawback is that it will perform slower as it needs to process all the words. However, through our experimentation we can conclude that the results do not significantly change.
Finally, we apply a weighting algorithm which takes into account the number of nouns in the definition identified as belonging to the different entity types considered, and decides to which entity type the entry belongs. This algorithm has a constant Kappa which allows us to increase or decrease the distance required between categories in order to assign an entry to a given class. The value of Kappa is the minimum difference in number of occurrences between the first and second most frequent categories in an entry in order to assign the entry to the first category. In our example, for any value of Kappa lower than 4, the algorithm would say that the entry Portugal belongs to the location entity type.
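A minimal sketch of the hypernym walk and the Kappa-based decision, using NLTK's WordNet interface, is given below. The synsets chosen here for location and organization are our own assumption; the paper only quotes the person synset (6026) explicitly.

```python
# Sketch of noun classification via the hypernym chain, plus the Kappa rule.
# The location/organization target synsets are assumptions on our part.
from collections import Counter
from nltk.corpus import wordnet as wn

TARGETS = {
    wn.synset('person.n.01'): 'PER',
    wn.synset('location.n.01'): 'LOC',        # assumed target synset
    wn.synset('organization.n.01'): 'ORG',    # assumed target synset
}

def classify_noun(noun):
    """Follow the hypernym branch of the noun's first sense until a target
    synset or the root of the hierarchy is reached."""
    senses = wn.synsets(noun, pos=wn.NOUN)
    if not senses:
        return 'NONE'
    synset = senses[0]                         # first sense only, as in the paper
    while synset not in TARGETS:
        parents = synset.hypernyms() or synset.instance_hypernyms()
        if not parents:
            return 'NONE'                      # reached the root
        synset = parents[0]                    # follow one branch for simplicity
    return TARGETS[synset]

def classify_entry(definition_nouns, kappa=0):
    """Assign the entry to its most frequent entity class, provided it beats
    the second most frequent class by at least `kappa` occurrences."""
    counts = Counter(classify_noun(n) for n in definition_nouns)
    counts.pop('NONE', None)
    if not counts:
        return 'NONE'
    ranked = counts.most_common()
    best, best_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    return best if best_count - runner_up >= kappa else 'NONE'

# With the paper's synset choices, the definition nouns of Portugal
# (['portugal', 'country', 'south-west', 'europe']) yield 'LOC'.
```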
Once we have this basic approach we apply different heuristics which we think may improve the results obtained and which effect will be analysed in the section about results.
The first heuristic, called is instance, tries to determine whether the entries from Wikipedia are instances (e.g. Portugal) or word classes (e.g. country). This is done because named entities only consider instances; therefore, we are not interested in word classes. We consider that an entry from Wikipedia is an instance when it has an associated entry in WordNet and that entry is an instance. The procedure to determine if an entry from WordNet is an instance or a word class is similar to the one used in (Magnini et al., 2002).
The second heuristic is called is in wordnet. It simply determines if the entries from Wikipedia have an associated entry in WordNet. If so, we may use the information from WordNet to determine its category.
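With a recent WordNet release, where instances carry explicit instance-hypernym links, the two heuristics can be approximated roughly as below; the original work used an older WordNet and a procedure following Magnini et al. (2002), so this is only an illustration.

```python
# Rough approximations of the is_in_wordnet and is_instance heuristics,
# relying on the explicit instance links of recent WordNet releases.
from nltk.corpus import wordnet as wn

def is_in_wordnet(entry):
    """True if the Wikipedia entry has an associated WordNet noun entry."""
    return bool(wn.synsets(entry.replace(' ', '_'), pos=wn.NOUN))

def is_instance(entry):
    """True if the entry is in WordNet and denotes an instance (e.g. Portugal)
    rather than a word class (e.g. country)."""
    return any(s.instance_hypernyms()
               for s in wn.synsets(entry.replace(' ', '_'), pos=wn.NOUN))
```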
Experiments and results
We have tested our approach by applying it to 3517 entries of the Simple English Wikipedia which were randomly selected. Thus, these entries have been manually tagged with the expected entity category 5 . The distribution by entity classes can be seen in table 1:
As can be seen in Table 1, the numbers of entities of the categories Person and Location are balanced, but this is not the case for the type Organization. There are very few instances of this type. This is understandable, as locations and people are defined in an encyclopedia, but this is not usually the case for organizations.
According to what was said in section 2, we considered the heuristics explained there by carrying out two experiments. In the first one we applied the is instance heuristic. The second experiment considers the two heuristics explained in section 2 (is instance and is in wordnet). We do not present results without the first heuristic as through our experimentation it proved to increase both recall and precision for every entity category.
For each experiment we considered two values of a constant Kappa which is used in our algorithm. The values are 0 and 2 as through experimentation we found these are the values which provide the highest recall and the highest precision, respectively. Results for the first experiment can be seen in table 2 and results for the second experiment in table 3.
As can be seen in these tables, the best recall for all classes is obtained in experiment 2 with Kappa 0 (Table 3), while the best precision is obtained in experiment 1 with Kappa 2 (Table 2).
The results for both the location and person categories are in our opinion good enough for the purpose of building and maintaining good quality gazetteers after a manual supervision step. However, the results obtained for the organization class are very low. This is mainly due to the high interaction between this category and location, combined with the practical absence of traditional entities of the organization type, such as companies. This interaction can be seen in the in-depth results presented next.
In order to clarify these results, we present more in-depth data in Tables 4 and 5. These tables present an error analysis, showing the false positives, false negatives, true positives and true negatives among all the categories for the configuration that provides the highest recall (experiment 2 with Kappa 0) and for the one that provides the highest precision (experiment 1 with Kappa 2).
In Tables 4 and 5 we can see that the interactions between classes (occurrences tagged as belonging to one class other than NONE but guessed as belonging to a different class other than NONE) are low. The only case in which they are significant is between location and organization. In Table 5 we can see that 12 entities tagged as organization are classified as LOC, while 20 tagged as organization are guessed with the correct type. Following from this, 5 entities tagged as location were classified as organization. This is due to the fact that countries and related entities such as "European Union" can be considered both as organizations or as locations, depending on their role in a text.
Conclusions
We have presented a method to automatically create and maintain entity gazetteers using as resources an encyclopedia, a noun hierarchy and, optionally, a PoS tagger. The method proves to be helpful for these tasks as it facilitates the creation and maintenance of this kind of resources.
In our opinion, the principal drawback of our system is that it has a low precision for the configuration for which it obtains an acceptable value of recall. Therefore, the automatically created gazetteers need to pass a step of manual supervision in order to have a good quality.
On the positive side, we can conclude that our method is helpful, as it takes less time to automatically create gazetteers with our method and then supervise them than to create such dictionaries from scratch. Moreover, updating the gazetteers is straightforward: just by executing the procedure again, the new entries in Wikipedia (the entries that did not exist the last time the procedure was performed) would be analysed, and from this set, the ones detected as entities would be added to the corresponding gazetteers.
Another important fact is that the method has a high degree of language independence; in order to apply this approach to a new language, we need a version of Wikipedia and WordNet for that language, but the algorithm and the process do not change. Therefore, we think that our method can be useful for the creation of gazetteers for languages in which NER gazetteers are not available but which have Wikipedia and WordNet resources.

During the development of this research, several possibilities for future work have appeared. Regarding the task we have developed, we plan to carry out new experiments incorporating features that Wikipedia provides, such as links between pairs of entries. Following on from this, we plan to test more complex weighting techniques for our algorithm.
Besides, we think that the resulting gazetteers for the configurations that provide high precision and low recall, although not appropriate for building gazetteers for NER systems, can be interesting for other tasks. As an example, we plan to use them to extract verb frequencies for the entity categories considered, which can later be used as features for a learning-based Named Entity Recogniser.
Table 2: Experiment 1. Results applying the is instance heuristic.

Table 3: Experiment 2. Results applying the is instance and is in wordnet heuristics.

     |         LOC          |         ORG          |         PER
  k  | prec   rec    F β=1  | prec   rec    F β=1  | prec   rec    F β=1
  0  | 62.88  96.03  76.00  | 16.17  20.00  17.88  | 43.19  84.74  57.22
  2  | 77.68  89.60  83.21  | 13.95  10.90  12.24  | 46.10  62.71  53.14
Table 4: Results fn-fp (results 1, k=2).

Tagged |          Guessed
       | NONE   LOC   ORG   PER
NONE   | 2777    33     1    11
LOC    |  175   229     0     0
ORG    |   52     1     2     0
PER    |  163     1     0    72
Table 5: Results fn-fp (results 2, k=0).

Tagged |          Guessed
       | NONE   LOC   ORG   PER
NONE   | 2220   196   163   243
LOC    |    8   387     5     4
ORG    |   20    12    20     3
PER    |   30     9     2   195
1 http://www.wikipedia.org
2 http://en.wikipedia.org/wiki/Main_Page
5 This data is available for research at http://www.dlsi.ua.es/~atoral/index.html#resources
Acknowledgements

This research has been partially funded by the Spanish Government under project CICyT number TIC2003-07158-C04-01 and by the Valencia Government under project number GV04B-268. We also would like to specially thank Borja Navarro for his valuable help on WordNet.
X. Carreras, I. Chao, L. Padró, and M. Padró. 2004. FreeLing: An Open-Source Suite of Language Analyzers. In Proceedings of the 4th LREC Conference.
N. Chinchor. 1998. Overview of MUC-7. In Proceedings of the Seventh Message Understanding Conference (MUC-7).
B. Magnini, M. Negri, R. Preete, and H. Tanev. 2002. A WordNet-based approach to named entities recognition. In Proceedings of SemaNet '02: Building and Using Semantic Networks, pages 38-44.
D. McDonald. 1996. Internal and external evidence in the identification and semantic categorization of proper names. In Corpus Processing for Lexical Acquisition, chapter 2, pages 21-39.
A. Mikheev, C. Grover, and M. Moens. 1998. Description of the LTG system used for MUC-7. In Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference held in Fairfax, Virginia, 29 April-1 May.
G. A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, (11):39-41.
M. Negri and B. Magnini. 2004. Using WordNet predicates for multilingual named entity recognition. In Proceedings of The Second Global WordNet Conference, pages 169-174.
O. Ourioupina. 2002. Extracting geographical knowledge from the internet. In Proceedings of the ICDM-AM International Workshop on Active Mining.
O. Uryupina. 2003. Semi-supervised learning of geographical gazetteers from the internet. In Proceedings of the HLT-NAACL 2003 Workshop on Analysis of Geographic References, pages 18-25.
P. Vossen. 1998. Introduction to EuroWordNet. Computers and the Humanities, 32:73-89.
1,221,886 | Open Text Semantic Parsing Using FrameNet and WordNet | This paper describes a rule-based semantic parser that relies on a frame dataset (FrameNet), and a semantic network (WordNet), to identify semantic relations between words in open text, as well as shallow semantic features associated with concepts in the text. Parsing semantic structures allows semantic units and constituents to be accessed and processed in a more meaningful way than syntactic parsing, moving the automation of understanding natural language text to a higher level.Here, the category (cat) is defined as adjective, the type is descriptive, degree is base form. We also record the attr feature, which is derived from the attribute relation in Word-Net, and links a descriptive adjective to the attribute (noun) it modifies, such as slow speed. | [
9491739,
62182406
] | Open Text Semantic Parsing Using FrameNet and WordNet
Lei Shi leishi@unt.edu
Department of Computer Science and Engineering
University of North Texas
Rada Mihalcea
Department of Computer Science and Engineering
University of North Texas
Open Text Semantic Parsing Using FrameNet and WordNet
This paper describes a rule-based semantic parser that relies on a frame dataset (FrameNet), and a semantic network (WordNet), to identify semantic relations between words in open text, as well as shallow semantic features associated with concepts in the text. Parsing semantic structures allows semantic units and constituents to be accessed and processed in a more meaningful way than syntactic parsing, moving the automation of understanding natural language text to a higher level.Here, the category (cat) is defined as adjective, the type is descriptive, degree is base form. We also record the attr feature, which is derived from the attribute relation in Word-Net, and links a descriptive adjective to the attribute (noun) it modifies, such as slow speed.
Introduction
The goal of the semantic parser is to analyze the semantic structure of a natural language sentence. Similar in spirit with the syntactic parser -whose goal is to parse a valid natural language sentence into a parse tree indicating how the sentence can be syntactically decomposed into smaller syntactic constituents -the purpose of the semantic parser is to analyze the structure of sentence meaning. Sentence meaning is composed by entities and interactions between entities, where entities are assigned semantic roles, and can be further modified by other modifiers. The meaning of a sentence is decomposed into smaller semantic units connected by various semantic relations by the principle of compositionality, and the parser represents the semantic structureincluding semantic units as well as semantic relations, connecting them into a formal format.
One major problem faced by many natural language understanding applications that rely on syntactic analysis of text, is the fact that similar syntactic patterns may introduce different semantic interpretations. Likewise, similar meanings can be syntactically realized in many different ways. The semantic parser attempts to solve this problem, and produces a syntax-independent representation of sentence meaning, so that semantic constituents can be accessed and processed in a more meaningful and flexible way, avoiding the sometimes rigid interpretations produced by a syntactic analyzer. For instance, the sentences I boil water and water boils contain a similar relation between water and boil, even though they have different syntactic structures.
In this paper, we describe the main components of the semantic parser, and illustrate the basic procedures involved in parsing semantically open text. Our semantic parser departs from current approaches in statistics-based annotations of semantic structures. Instead, we are using publicly available lexical resources (FrameNet and WordNet) as a starting point to derive rules for a rule-based semantic parser.
Semantic Structure
Semantics is the denotation of a string of symbols, either a sentence or a word. Similar to a syntactic parser, which shows how a larger string is formed by smaller strings from a formal point of view, the semantic parser shows how the denotation of a larger string -sentence, is formed by denotations of smaller strings -words. Syntactic relations can be described using a set of rules about how a sentence string is formally generated using word strings. Instead, semantic relations between semantic constituents depend on our understanding of the world, which is across languages and syntax.
We can model the sentence semantics as describing entities and interactions between entities. Entities can represent physical objects, as well as time, places, or ideas, and are usually formally realized as nouns or noun phrases. Interactions, usually realized as verbs, describe relationships or interactions between participating entities. Note that a participant can also be an interaction, which can be regarded as an entity nominalized from an interaction. We assign semantic roles to participants and their semantic relations are identified by the case frame introduced by their interaction. In a sentence, participants and interactions can be further modified by various modifiers, including descriptive modifiers that describe attributes such as drive slowly, restrictive modifiers that enforce a general denotation to become more specific such as musical instrument, referential modifiers that indicate particular instances such as the pizza I ordered. Other semantic relations can also be identified, such as coreference, complement, and others. Based on the principle of compositionality, the sentence semantic structure is recursive, similar to a tree.
Note that the semantic parser analyzes shallow-level semantics, which is derived directly from linguistic knowledge, such as rules about semantic role assignment, lexical semantic knowledge, and syntactic-semantic mappings, without taking into account any context or common sense knowledge. Hence, the parser can be used as an intermediate semantic processing level before higher levels of text understanding.
Knowledge Bases for Semantic Parsing
The parser relies on two main types of knowledge -about words, and about relations between words. The first type of knowledge is drawn from WordNet -a large lexical database with rich information about words and concepts. We refer to this as word-level knowledge. The latter is derived from FrameNet -a resource that contains information about different situations, called frames, in which semantic relations are syntactically realized in natural language sentences. We call this sentence-level knowledge. In addition to these two lexical knowledge bases, the parser also utilizes a set of manually defined rules, which encode mappings from syntactic structures to semantic relations, and which are used to handle those structures not explicitly addressed by FrameNet or WordNet. In this section, we describe the type of information extracted from these knowledge bases, and show how this information is encoded in a format accessible to the semantic parser.
Sentence Level Knowledge
FrameNet (Johnson et al., 2002) provides the knowledge needed to identify case frames and semantic roles. FrameNet is based on the theory of frame semantics, and defines a sentence level ontology. In frame semantics, a frame corresponds to an interaction and its participants, both of which denote a scenario, in which participants play some kind of roles. A frame has a name, and we use this name to identify the semantic relation that groups together the semantic roles. Nouns, verbs and adjectives can be used to identify frames.
Each annotated sentence in FrameNet exemplifies a possible syntactic realization for the semantic roles associated with a frame for a given target word. By extracting the syntactic features and corresponding semantic roles from all annotated sentences in the FrameNet corpus, we are able to automatically build a large set of rules that encode the possible syntactic realizations of semantic frames.
Rules Learned from FrameNet
FrameNet data "is meant to be lexicographically relevant, not statistically representative" (Johnson et al., 2002), and therefore we are using FrameNet as a starting point to derive rules for a rule-based semantic parser.
To build the rules, we are extracting several syntactic features. Some are explicitly encoded in FrameNet, such as the grammatical function (GF) and phrase type (PT) features. In addition, other syntactic features are extracted from the sentence context. One such feature is the relative position (RP) to the target word. Another feature is the voice of the sentence. If the phrase type is prepositional phrase (PP), we also record the actual preposition that precedes the phrase.
After we extract all these syntactic features, the semantic role is appended to the rule, which creates a mapping from syntactic features to semantic roles.
Feature sets are arranged in a list, the order of which is identical to that in the sentence. Altogether, the rule for a possible realization of a frame exemplified by a tagged sentence is an ordered sequence of syntactic features with their semantic roles. For example, the corresponding formalized rule for the sentence I had chased Selden over the moor is:

[active, [ext,np,before,theme], [obj,np,after,goal], [comp,pp,after,over,path]]

In FrameNet, there are multiple annotated sentences for each frame to demonstrate multiple possible syntactic realizations. All possible realizations of a frame are collected and stored in a list for that frame, which also includes the target word, its syntactic category, and the name of the frame. All the frames defined in FrameNet are transformed into this format, so that they can be easily handled by the rule-based semantic parser.
Word Level Knowledge
WordNet (Miller, 1995) is the resource used to identify shallow semantic features that can be attached to lexical units. For instance, attribute relations, adjective/adverb classifications, and others, are semantic features extracted from Word-Net and stored together with the words, so that they can be directly used in the parsing process.
All words are uniformly defined, regardless of their class. Features are assigned to each word, including syntactic and shallow semantic features, indicating the functions played by the word. Syntactic features are used by the featureaugmented syntactic analyzer to identify grammatical errors and produce syntactic information for semantic role assignment. Semantic features encode lexical semantic information extracted from WordNet that is used to determine semantic relations between words in various situations.
Features can be arbitrarily defined, as long as there are rules to handle them. The features we define encode information about the syntactic category of a word, number and countability for nouns, transitivity and form for verbs, type, degree, and attribute for adjectives and adverbs, and others.
For example, for the adjective slow, the entry in the lexicon is defined as:
The Semantic Parser
The parsing algorithm is implemented as a rule-based system. The general procedure of semantic parsing consists of three main steps: (1) syntactic parsing into an intermediate format, using a feature-augmented syntactic parser, and assignment of shallow semantic features; (2) semantic role assignment;
(3) application of default rules.
Feature Augmented Syntactic/Semantic Analyzer
The semantic parser is based on dependencies between words that are identified using a structure analyzer. The analyzer generates an intermediate format, where target words and syntactic arguments are explicitly identified, so that they can be matched against the rules derived from FrameNet. The intermediate format also encodes some shallow semantic features, including word level semantics (e.g. attribute, gender), and semantic relations that have direct syntactic correspondence (e.g. modifier types). The function of the sentence is also identified, as assertion, query, yn-query, command.
The analyzer is based on a feature augmented grammar, and has the capability of detecting if a sentence is grammatically correct (unlike statistical parsers, which attempt to parse any sentence, regardless of their well-formness). Constituents are assigned with features, and the grammar consists of a set of rules defining how constituents can connect to each other, based on the values of their features.
Since features can contain both syntactic and semantic information, the analyzer can reject some grammatically incorrect sentences such as: I have much apples, You has my car, or even some semantically incorrect sentences: The technology is very military 1 .
Semantic Role Assignment
In the process of semantic role assignment, we first start by identifying all possible frames, according to the target word. Next, a matching algorithm is used to find the most likely match among all rules derived for these frames, to identify the correct frame (if several are possible), and assign semantic roles.
In a sentence describing an interaction, we usually select the verb or predicative adjective as the target word, which triggers the sentence level frame. A noun can also play the role of target word, but only within the scope of the noun phrase it belongs to, and it can be used to assign semantic roles only to its modifiers.
The matching algorithm relies on a scoring function to evaluate the similarity between two sequences of syntactic features. The matching starts from left to right. Whenever an exact match is found, the score will be increased by 1. It should be noted that the search sequence is uni-directional which means that once you find a match, you can go ahead to check features to the right, but you cannot go back to check rules you have already checked. This guarantees that syntactic features are matched in the right order, and the order of sequence in the rule is maintained. Since the frame of a target word may have multiple possible syntactic realizations, which are exemplified by different sentences in the corpus, we try to match the syntactic features in the intermediate format with all the rules available for the target word, and compare their matching scores. The rule with the highest score is selected, and used for semantic role assignment. Through this scoring scheme, the matching algorithm tries to maximize the number of syntactic realizations for semantic roles defined in FrameNet rules.
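A simplified sketch of this left-to-right, unidirectional matching is shown below; it reflects our reading of the description rather than the authors' implementation, and the treatment of the voice feature is one plausible choice.

```python
# Sketch: score a FrameNet-derived rule against the ordered syntactic
# constituents of a sentence; +1 for every slot matched from left to right.
def match_score(rule, constituents, voice):
    rule_voice, slots = rule[0], rule[1:]
    if rule_voice != voice:              # one plausible treatment of voice
        return 0, []
    score, pos, assignments = 0, 0, []
    for slot in slots:
        feats, role = tuple(slot[:-1]), slot[-1]
        for j in range(pos, len(constituents)):
            if tuple(constituents[j]) == feats:   # exact feature match
                score += 1
                assignments.append((constituents[j], role))
                pos = j + 1              # unidirectional: never look back
                break
    return score, assignments

def assign_roles(rules, constituents, voice):
    """Pick the rule with the highest score and return its role assignments."""
    return max((match_score(r, constituents, voice) for r in rules),
               key=lambda scored: scored[0])[1]

# Using the rule shown earlier for "I had chased Selden over the moor":
rule = ['active', ['ext', 'np', 'before', 'theme'],
                  ['obj', 'np', 'after', 'goal'],
                  ['comp', 'pp', 'after', 'over', 'path']]
constituents = [('ext', 'np', 'before'), ('obj', 'np', 'after'),
                ('comp', 'pp', 'after', 'over')]
# assign_roles([rule], constituents, 'active') assigns theme, goal and path.
```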
Notice that the semantic role assignment is performed recursively, until all roles within frames triggered by all target words are assigned.
Walk-Through Example
Assume the following two rules, derived from FrameNet for the target word come: Using the matching/scoring algorithm, the score for matching A' to rule 1 is determined as 3, and to rule 2 as 2. Hence, the matching algorithm selects rule 1, and the semantic role for train is mode of transportation. Similarly, when we match B' to rule 1, we obtain a score of 2, and a larger score of 3 for matching with rule 2. Therefore, for the second case, the role assigned to home is source.
Applying Default Rules
In a sentence, semantic roles are played by the subject, objects, and the prepositional phrases attached to the interaction described by the sentence. However, FrameNet defines roles only for some of these elements, and therefore the meaning of some sentence constituents cannot be determined using the rules extracted from FrameNet. In order to handle these constituents, and allow for a complete semantic interpretation of the sentence, we have defined a set of default rules that are applied as a last step in the process of semantic parsing. For example, FrameNet defines a role for the prepositional phrase on him in "I depend on him", but it does not define a role for the phrase on the street in "I walk on the street". To handle the interpretation of this phrase, we apply the default rule that "on something" modifies the location attribute of an interaction.
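The fallback step can be pictured with a small sketch. The rule table below is illustrative: only the "on something" rule comes from the text above, the other entries are made-up placeholders, and the phrase representation is simplified to plain strings.

```python
# Hypothetical default-rule table: preposition -> attribute of the interaction it modifies.
# Only the "on" entry is taken from the example in the text; the others are placeholders.
DEFAULT_RULES = {
    "on": "location",
    "in": "location",
    "at": "time",
    "with": "instrument",
}

def apply_default_rules(unassigned_phrases):
    """Assign a default attribute to prepositional phrases that received no FrameNet role."""
    assignments = {}
    for phrase in unassigned_phrases:                 # e.g. "on the street"
        preposition = phrase.split()[0].lower()
        if preposition in DEFAULT_RULES:
            assignments[phrase] = DEFAULT_RULES[preposition]
    return assignments

print(apply_default_rules(["on the street"]))         # {'on the street': 'location'}
```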
We have defined about 100 such default rules, which are assigned in the last step of the semantic parsing process, if no other rule could be applied in previous steps. After this step, the semantic structure of the sentence is produced.
Parser Output and Evaluation
The semantic parser is demonstrated at this conference, which is perhaps the best evaluation we can offer. We illustrate here the output of the semantic parser on a natural language sentence, and show the corresponding semantic structure and tree. For example, for the sentence I like to eat Mexican food because it is spicy, the semantic parser produces the following encoding of sentence type, frames, semantic constituents and roles, and various attributes and modifiers:
T = assertion
P = [[experiencer, [[entity, [i] ...
The corresponding semantic tree is shown in Figure 1. We have conducted evaluations of the semantic role assignment algorithm on 350 sentences randomly selected from FrameNet. The test sentences were removed from the FrameNet corpus, and the rule-learning procedure described earlier in the paper was invoked on this reduced corpus. All test sentences were then semantically parsed, and full semantic annotations were produced for each sentence. Notice that the evaluation is conducted only for semantic role assignment, since this is the only information available in FrameNet. The other semantic annotations produced by the parser (e.g. attribute, gender, countability) are not evaluated at this point, since there are no hand-validated annotations of this kind available in current resources.
Both frames and frame elements are automatically identified by the parser. Of all the elements correctly identified, 74.5% were assigned the correct role (this is therefore the accuracy of role assignment), which compares favorably with previous results reported in the literature for this task. Notice also that since this is a rule-based approach, the parser does not need large amounts of annotated data, but works equally well for words for which only one or two sentences are annotated.
Related Work
All previous work in semantic parsing has exclusively focused on labeling semantic roles, rather than analyzing the full structure of sentence semantics, and is usually based on statistical models, e.g. (Gildea and Jurafsky, 2000), (Fleischman et al., 2003). To our knowledge, there has been no previous attempt at performing such semantic annotations using alternative rule-based algorithms. However, a rule-based approach is closer to the way humans interpret the semantic structure of a sentence. Moreover, as mentioned earlier, the FrameNet data is not meant to be "statistically representative", but rather illustrative of various language constructs, and therefore a rule-based approach is more suitable for this lexical resource.
Conclusions
We described a rule-based approach to open text semantic parsing. The semantic parser has the capability to analyze the semantic structure of a sentence, and show how the meaning of the entire sentence is composed of smaller semantic units, linked by various semantic relations. The parsing process relies on rules derived from a frame dataset (FrameNet) and a semantic network (WordNet). We believe that the semantic parser will prove useful for a range of language processing applications that require knowledge of text meaning, including word sense disambiguation, information extraction, question answering, machine translation, and others.
Figure 1: Semantic parse tree for I like to eat Mexican food, because it is spicy (am = attributive modifier, rm = referential modifier, sm = restrictive modifier).
1 Since military is not a descriptive adjective, it cannot be connected to the degree modifier very.
M. Fleischman, N. Kwon, and E. Hovy. 2003. Maximum entropy models for FrameNet classification. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-2003), Sapporo, Japan.
D. Gildea and D. Jurafsky. 2000. Automatic labeling of semantic roles. In Proceedings of the 38th Annual Conference of the Association for Computational Linguistics (ACL-00), pages 512-520, Hong Kong, October.
C. Johnson, C. Fillmore, M. Petruck, C. Baker, M. Ellsworth, J. Ruppenhofer, and E. Wood. 2002. FrameNet: Theory and Practice. http://www.icsi.berkeley.edu/framenet.
G. Miller. 1995. WordNet: A lexical database. Communications of the ACM, 38(11):39-41. |
259,376,848 | ABCD Team at SemEval-2023 Task 12: An Ensemble Transformer-based System for African Sentiment Analysis | This paper describes the system of the ABCD team for three main tasks in the SemEval-2023 Task 12: AfriSenti-SemEval for Low-resource African Languages using Twitter Dataset. We focus on exploring the performance of ensemble architectures based on the soft voting technique and different pre-trained transformer-based language models. The experimental results show that our system has achieved competitive performance in some Tracks in Task A: Monolingual Sentiment Analysis, where we rank the Top 3, Top 2, and Top 4 for the Hausa, Igbo and Moroccan languages. Besides, our model achieved competitive results and ranked 14th place in the Task B (multilingual) setting and 14th and 8th place in Track 17 and Track 18 of the Task C (zero-shot) setting. | [
13572193,
208117506,
220347683,
52967399,
227062345,
51890955,
240225648,
207880568,
52011482
] | ABCD Team at SemEval-2023 Task 12: An Ensemble Transformer-based System for African Sentiment Analysis
ABCD Team at SemEval-2023 Task 12: An Ensemble Transformer-based System for African Sentiment Analysis
Dang Van Thin, Dai Ba Nguyen, Duong Ngoc Hao, Ngan Luu-Thuy Nguyen (University of Information Technology, Vietnam National University, Ho Chi Minh City, Vietnam)
Dang Ba Qui (Nong Lam University, Ho Chi Minh City, Vietnam)
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), July 13-14, 2023
This paper describes the system of the ABCD team for three main tasks in the SemEval-2023 Task 12: AfriSenti-SemEval for Low-resource African Languages using Twitter Dataset. We focus on exploring the performance of ensemble architectures based on the soft voting technique and different pre-trained transformer-based language models. The experimental results show that our system has achieved competitive performance in some Tracks in Task A: Monolingual Sentiment Analysis, where we rank the Top 3, Top 2, and Top 4 for the Hausa, Igbo and Moroccan languages. Besides, our model achieved competitive results and ranked 14th place in the Task B (multilingual) setting and 14th and 8th place in Track 17 and Track 18 of the Task C (zero-shot) setting.
Introduction
The AfriSenti-SemEval Shared Task 12 (Muhammad et al., 2023b) aims at building Sentiment Analysis (SA) systems for 17 African languages, including Hausa, Yoruba, Igbo and Nigerian Pidgin from Nigeria, Amharic, Tigrinya and Oromo from Ethiopia, Swahili from Kenya and Tanzania, Algerian Arabic dialect from Algeria, Kinyarwanda from Rwanda, Twi from Ghana, Mozambican Portuguese from Mozambique and Moroccan Arabic/Darija from Morocco. The shared task has three main tasks: two zero-shot tracks, one multilingual track and 12 monolingual tracks. The zero-shot tracks require training two zero-shot models, where each model works for only one language and is trained on data from the 12 languages in the monolingual tracks. The multilingual track requires training multilingual SA models that are able to handle all languages. The twelve monolingual tracks require training individual monolingual models, where each model works for only one language. The dataset consists of tweets labelled with three sentiment classes (positive, negative, neutral) in 14 African languages. Each tweet is annotated by three annotators following the annotation guidelines in (Mohammad, 2016), and a form of majority vote is used to determine the sentiment of the tweet (Mohammad, 2022; Yimam et al., 2020). All the tweets in the dataset are code-mixed, which can influence the performance of the model.
In this paper, we propose an ensemble architecture for the AfriSenti-SemEval shared task. Our ensemble architecture is based on pre-trained transformer-based language models and the soft voting technique. In our case, AfroXLMR, AfriBERTa, and LaBSE are employed as the base classifiers because these models support the African languages in the shared task. The final prediction is obtained by combining the outputs of the base classifiers using the soft voting technique.
The rest of the paper is organized as follows. Section 2 provides the related work. The system description is presented in Section 3, followed by evaluation results in Section 5. The experimental setup and the conclusion are discussed in Section 4 and Section 6, respectively.
Related Work
Sentiment analysis has been studied in the NLP field for the past two decades. However, in the case of low-resourced languages such as African languages, the studies have yet to progress as far as those for high-resourced languages, due to reasons such as the unavailability of annotated corpora.
For the Arabic language, a range of machine learning and deep learning approaches has been applied to the sentiment analysis task (Abdulla et al., 2013; Duwairi et al., 2014; Heikal et al., 2018; Shoukry and Rafea, 2012). In (Abdulla et al., 2013), the authors constructed an Arabic sentiment analysis dataset extracted from Twitter and labelled with negative and positive sentiment. Their methods were partly lexicon-based: they constructed a set of words labelled with polarity indicators and, using these lexicons, implemented an algorithm that decides whether a whole tweet is positive or negative depending on the number of positive or negative lexicon entries it contains. They also used supervised machine learning models such as support vector machines (SVM) (Zhang, 2001) and K-nearest neighbours (KNN) (Bishop and Nasrabadi, 2006) trained on the constructed dataset, and achieved their best result with the SVM classifier.
In another Arabic sentiment analysis study, Heikal et al. (2018) used deep learning models to outperform the state-of-the-art result. Their approach was an ensemble of convolutional neural network (CNN) (Heaton, 2018) and long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) models, with which they achieved an F1-score of 64.46%, beating the previous state of the art by more than 10%. Arabic has also benefited from the recent breakthroughs in NLP related to transformer models and transfer learning techniques: a pretrained Arabic version of BERT has achieved state-of-the-art results in many Arabic downstream NLP tasks, including sentiment analysis, named entity recognition and question answering (Antoun et al., 2020).
Amram et al. (2018) studied the effect of representing the input as tokens or as morphemes. They concluded that for architectures such as the multi-layer perceptron (MLP) (Heaton, 2018) and the CNN, token-level granularity achieved higher accuracy, while for the LSTM architecture the morpheme level achieved better results. Furthermore, they achieved state-of-the-art results for the Hebrew sentiment analysis task using a CNN with token-level input, with an accuracy of more than 89%.
Amharic sentiment analysis has also benefited from the emergence of deep learning. In a study on Amharic sentiment classification, Yeshiwas Getachew and Abebe Alemu (Alemu, 2018) used a deep learning approach to model the sentiment analysis task for the language. They constructed a new dataset collected from the social media platform Facebook and labelled by Amharic linguistic experts. Using a fully connected neural network and tuning different parameters, including the number of neurons, they achieved an accuracy of 96.61% on the validation set.
System Description
Approach
The diagram in Figure 1 illustrates our ensemble approach for Task A. The framework consists of three main layers: a pre-processing layer, a layer of contextual language models, and a voting ensemble layer. First, the input text is subjected to several processing steps in the pre-processing layer. Next, we fine-tune different pre-trained contextual language models to obtain label probability outputs. Finally, the probability outputs from the individual models are combined using a soft voting technique to obtain the final prediction. The detailed structure of the framework is described below.
Pre-processing Layer: Pre-processing is one of the essential components in the classification system in the NLP field. However, the languages in the shared task are unfamiliar; therefore, we design a standard list of pre-processing steps for all languages, including:
• Word Normalization: We use regular expressions to normalize words or phrases that have the same meaning in a sentence. For example, we replace URLs with the word "website".
• Noise Removal: The dataset contains a lot of noise, such as punctuation and special characters. Since this noise is not necessary for sentence-level classification, we remove it from the samples.
• Emoji Replacement: Emoji are frequent in the dataset (up to 28.85% of samples, based on our statistics) and carry information that can increase the performance of the model. We replace each emoji with one of three labels: negative, neutral or positive. A sketch of these pre-processing steps is given after this list.
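The sketch below strings the three steps together. The URL pattern, the punctuation regex and the emoji-to-label mapping are illustrative assumptions; the system's actual normalization rules and emoji lexicon are not reproduced here.

```python
import re

# Illustrative emoji-to-sentiment mapping; the real mapping used by the system is much larger.
EMOJI_LABELS = {"😍": "positive", "😊": "positive", "😐": "neutral", "😡": "negative", "😢": "negative"}

def preprocess(tweet: str) -> str:
    # Word normalization: map URLs (and similar tokens) to a placeholder word.
    tweet = re.sub(r"https?://\S+", "website", tweet)
    # Emoji replacement: substitute each known emoji with a coarse sentiment label.
    for emo, label in EMOJI_LABELS.items():
        tweet = tweet.replace(emo, f" {label} ")
    # Noise removal: drop punctuation and special characters.
    tweet = re.sub(r"[^\w\s]", " ", tweet)
    # Collapse the whitespace introduced by the steps above.
    return re.sub(r"\s+", " ", tweet).strip()

print(preprocess("I love this 😍 https://t.co/xyz !!!"))   # -> "I love this positive website"
```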
Fine-tuning Language Model: As can be seen in Figure 1, we utilize the power of three pre-trained contextual language models, namely AfroXLMR (Alabi et al., 2022), AfriBERTa (Ogueji et al., 2021), and LaBSE (Feng et al., 2022), as the base classifiers. To fine-tune the language models, we followed the approach of (Devlin et al., 2019a), which is presented in detail below:
Given a pre-processed tweet with N words, X = {x_1, x_2, ..., x_N}, we first use the corresponding tokenizer to prepare the inputs. Then, we employ a pre-trained language model with L transformer layers to compute the contextualized representations H^L = {h^L_CLS, h^L_1, ..., h^L_N} ∈ R^(N × dim_h), where dim_h is the dimension of the representation vectors. Finally, we extract the contextualized representation h^L_CLS of the [CLS] token in the last transformer layer L as the feature of the input. This representation is fed to a linear layer with the softmax activation function to calculate the score vector ŷ for each class:

ŷ = softmax(W · h^L_CLS + b)    (1)

where W and b are the learnable parameters of the output layer. We use the categorical cross-entropy loss to optimize the model.
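A sketch of this classification step with the Hugging Face transformers library is shown below. It is an illustrative reconstruction of Equation (1) rather than the team's released code, and the checkpoint name is only one plausible choice for the AfroXLMR base model.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

checkpoint = "Davlan/afro-xlmr-base"        # assumed checkpoint; AfriBERTa or LaBSE work the same way
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
encoder = AutoModel.from_pretrained(checkpoint)
classifier = nn.Linear(encoder.config.hidden_size, 3)   # W and b for the 3 sentiment classes

def classify(tweets, labels=None):
    batch = tokenizer(tweets, padding=True, truncation=True, max_length=128, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # H^L, shape (batch, N, dim_h)
    h_cls = hidden[:, 0]                                  # h^L_CLS: representation of the [CLS] token
    logits = classifier(h_cls)                            # W · h^L_CLS + b
    probs = torch.softmax(logits, dim=-1)                 # Equation (1)
    loss = nn.functional.cross_entropy(logits, labels) if labels is not None else None
    return probs, loss
```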
Soft voting Scheme: Our motivation for applying an ensemble approach is to take advantage of the complementary performances of the various models. Given the predictions {ŷ_θ1, ŷ_θ2, ..., ŷ_θn} of the n base classifiers, we apply a simple soft voting technique to merge the predictions of the base models. In our case, the individual classifiers are treated equally: we sum the probability outputs of the n classifiers and choose the sentiment class with the highest probability as the final prediction.
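The vote itself reduces to summing the class-probability matrices and taking the arg max; a minimal NumPy sketch (with made-up probabilities for a single tweet) is given below.

```python
import numpy as np

LABELS = ["negative", "neutral", "positive"]

def soft_vote(prob_outputs):
    """prob_outputs: list of (n_samples, n_classes) arrays, one per base classifier."""
    summed = np.sum(prob_outputs, axis=0)            # every classifier gets equal weight
    return [LABELS[i] for i in np.argmax(summed, axis=1)]

# Example with the three base classifiers and one tweet (probabilities are made up):
p_afroxlmr  = np.array([[0.2, 0.3, 0.5]])
p_afriberta = np.array([[0.1, 0.5, 0.4]])
p_labse     = np.array([[0.2, 0.2, 0.6]])
print(soft_vote([p_afroxlmr, p_afriberta, p_labse]))  # ['positive']
```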
Pre-trained Contextual Language Models
We briefly explain the three pre-trained language models used in this paper.
• AfroXLMR: AfroXLMR is a large language model developed by (Alabi et al., 2022) and is released for the community of researchers in African languages. It is based on the XLM-RoBERTa architecture and applies the multilingual adaptive fine-tuning technique to 17 African languages and three high-resource languages (English, French, and Arabic) simultaneously.
• AfriBERTa: AfriBERTa is a transformer-based multilingual language model trained on 11 African languages, all of which are low-resource. It was developed by (Ogueji et al., 2021). The authors trained a transformer (Vaswani et al., 2017) with the standard masked language modelling objective of (Devlin et al., 2019b), without next sentence prediction. This is the same approach used in XLM-R (Conneau et al., 2020).
• LaBSE: LaBSE is a language-agnostic BERT sentence embedding model supporting up to 109 languages. This language model was developed by (Feng et al., 2022). To have the best method for learning multilingual sentence embeddings, they combined the best methods for learning monolingual and cross-lingual representations, including: masked language modelling (MLM), translation language modelling (TLM) (Lample and Conneau, 2019), dual encoder translation ranking (Guo et al., 2018), and additive margin softmax (Yang et al., 2019).
Experimental Setup
Data and Preprocessing: We utilized the official training set for training models. As the competition rules stipulated, no additional data was used during the training process (Muhammad et al., 2023a). The development set was used to optimize the hyper-parameters for each track and task.
Evaluation Metrics: The evaluation metric for three Tasks (A, B, C) is a Weighted F1-score between submission and test gold set.
Configuration Settings:
We implemented our models using the Trainer API from the Hugging Face library (Wolf et al., 2020). The maximum input length is set to 128 tokens, and the number of epochs is set to 10 with a batch size of 32 for all languages. We used the AdamW optimizer with a linear learning rate schedule with warmup.
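A sketch of the corresponding Trainer setup is shown below. The checkpoint name, output directory and the tiny stand-in dataset are placeholders, and the learning rate and warmup ratio are assumptions (they are not reported above); only the sequence length, epoch count, batch size, optimizer and schedule type follow the stated configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "sentence-transformers/LaBSE"      # assumed name for one of the three base models
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def encode(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128, padding="max_length")

# Tiny stand-in for the AfriSenti train/dev splits (texts plus 0/1/2 sentiment labels).
train_ds = Dataset.from_dict({"text": ["example tweet"], "label": [1]}).map(encode, batched=True)
dev_ds = Dataset.from_dict({"text": ["another tweet"], "label": [2]}).map(encode, batched=True)

args = TrainingArguments(
    output_dir="afrisenti-model",          # placeholder
    num_train_epochs=10,                   # as reported
    per_device_train_batch_size=32,        # as reported
    learning_rate=2e-5,                    # assumption: value not stated
    warmup_ratio=0.1,                      # AdamW (Trainer default) with linear warmup; ratio assumed
    lr_scheduler_type="linear",
)

Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=dev_ds).train()
```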
Submitted Systems: We submitted different models to the evaluation phase depending on the task. For Task A (monolingual SA), we submit the predictions of the ensemble soft voting model for all languages. For Task B, because of limited computational resources, we divide the training data into 5 folds and train a LaBSE model on each fold to predict the test set; we then combine the results of the 5 folds using the soft voting technique. For Task C, our strategy is to use Google Translate 1 to bridge the source and target languages: we translate the test sets of Track 17 and Track 18 into Hausa. We choose Hausa as the source language because of the number of samples in its training set and the distribution of its labels.
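The Task B fold strategy can be sketched as follows. `train_model` and `predict_proba` are hypothetical stand-ins for the fine-tuning and inference routines sketched earlier, and whether each LaBSE model sees one fold or the complementary four is not specified above; the sketch trains one model per fold slice.

```python
import numpy as np

def task_b_soft_vote(texts, labels, test_texts, n_folds=5, seed=42):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(texts))
    test_probs = []
    for fold in np.array_split(order, n_folds):                  # five disjoint slices of the training data
        fold_texts = [texts[i] for i in fold]
        fold_labels = [labels[i] for i in fold]
        model = train_model(fold_texts, fold_labels)             # hypothetical: fine-tune LaBSE on this slice
        test_probs.append(predict_proba(model, test_texts))      # hypothetical: (n_test, 3) probabilities
    summed = np.sum(test_probs, axis=0)                          # soft vote across the five models
    return np.argmax(summed, axis=1)                             # 0 = negative, 1 = neutral, 2 = positive
```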
Results and Discussion
In this section, we present the official results of our final submission model for three main Tasks in the AfriSenti-SemEval Shared Task competition. For task A, we only compare results with the results from the two top teams for each track. In comparison, we report the top 5 systems for task B and task C, respectively.
Task A: Monolingual Sentiment Classification. Table 1 presents the performance of our ensemble model compared with the two top teams for the 12 tracks. Our system gives competitive results on several tracks, such as Track 1 (Hausa), Track 2 (Yoruba), Track 3 (Igbo), Track 7 (Moroccan), and Track 8 (Swahili). Unfortunately, our submission system is not effective for the remaining tracks. To explore the reason, we report the results of the base models and the ensemble system on the development set. As seen in Table 2, the performance of the ensemble model decreases significantly on some tracks (e.g. Track 4, Track 11) due to the poor performance of the base models. Moreover, we noticed that our approach is effective for tracks with many samples and a balanced training set. In this study, our aim was to investigate the effectiveness of soft voting techniques based on different transformer-based models for African languages; therefore, we used the ensemble model as the final submission system instead of the best single model on the development set.
Task B: Multilingual Sentiment Classification. Table 3 presents the results of our submission for Task B. Officially, we achieved an F1-score of 69.22% on the test set (Top 14). As mentioned in Section 4, due to the limitation of computational resources, we split the training set into 5 folds and use the LaBSE model as the main classifier. We then use the soft voting technique to merge the predictions of the 5 folds on the test set.
Task C: Zero-Shot Sentiment Classification. Table 4 shows our submission results compared with the top 5 systems for Task C: zero-shot sentiment analysis. We ranked 14th and 8th among all participating systems for Track 17 and Track 18, respectively. One of the reasons for the poor performance of our submission system is the errors introduced by the translation process used to translate the source language to the target language.
Conclusion
In this paper, we presented a simple and efficient ensemble architecture for sentiment analysis tasks in the SemEval-2023 Task 12: AfriSenti-SemEval. Our system is based on fine-tuning the pre-trained transformer-based language model as the base classifiers and the soft voting technique to combine the prediction of different base classifiers. Our experiments demonstrated that it achieves competitive results on some languages in Task A: Monolingual Sentiment Analysis without relying on any additional resources. For future work, we plan to improve our system to handle the imbalanced problem in some languages. Besides, data augmentation is also a promising direction to enhance the overall system's performance.
Figure 1: The soft voting ensemble architecture based on the combination of fine-tuning different multilingual contextual language models.
Table 1: Results of our best system compared with the two top systems on the 12 tracks of Task A: Monolingual Sentiment Classification (F1-score).

Track 1: Hausa            | Top 1: 82.62 | Top 2: 82.04 | Ours (Top 3): 81.50
Track 2: Yoruba           | Top 1: 80.16 | Top 2: 80.08 | Ours (Top 6): 79.73
Track 3: Igbo             | Top 1: 82.96 | Top 3: 81.51 | Ours (Top 2): 82.28
Track 4: Nigerian Pidgin  | Top 1: 75.96 | Top 2: 75.75 | Ours (Top 18): 66.30
Track 5: Amharic          | Top 1: 78.42 | Top 2: 72.18 | Ours (Top 18): 58.05
Track 6: Algerian Arabic  | Top 1: 74.20 | Top 2: 73.00 | Ours (Top 21): 63.50
Track 7: Moroccan         | Top 1: 64.83 | Top 2: 63.54 | Ours (Top 4): 61.54
Track 8: Swahili          | Top 1: 65.68 | Top 2: 64.89 | Ours (Top 8): 63.10
Track 9: Kinyarwanda      | Top 1: 72.63 | Top 2: 72.50 | Ours (Top 15): 67.36
Track 10: Twi             | Top 1: 68.28 | Top 2: 67.58 | Ours (Top 11): 65.61
Track 11: Mozambican      | Top 1: 74.98 | Top 2: 73.83 | Ours (Top 19): 67.21
Track 12: Xitsonga        | Top 1: 60.67 | Top 2: 60.32 | Ours (Top 10): 53.92
Table 2: Results of the base models and the ensemble transformer-based architecture on the development set for Task A: Monolingual Sentiment Analysis (F1-score).

Track                     | AfroXLMR | AfriBERTa | LaBSE | Ensemble
Track 1: Hausa            | 78.91    | 80.28     | 80.61 | 81.30
Track 2: Yoruba           | 74.71    | 76.95     | 77.31 | 78.49
Track 3: Igbo             | 80.18    | 81.28     | 81.86 | 82.64
Track 4: Nigerian Pidgin  | 75.02    | 73.73     | 74.15 | 73.80
Track 5: Amharic          | 57.31    | 58.04     | 58.73 | 59.45
Track 6: Algerian Arabic  | 65.33    | 45.99     | 64.79 | 64.04
Track 7: Moroccan         | 74.28    | 58.61     | 74.49 | 73.89
Track 8: Swahili          | 60.94    | 59.88     | 57.22 | 61.84
Track 9: Kinyarwanda      | 64.86    | 61.92     | 65.81 | 64.95
Track 10: Twi             | 62.34    | 63.25     | 64.93 | 64.23
Track 11: Mozambican      | 64.13    | 58.67     | 66.11 | 62.59
Track 12: Xitsonga        | 51.93    | 55.83     | 53.78 | 58.01
Table 3: Results of our best system compared with the five top systems on Track 16 for Task B: Multilingual Sentiment Classification.

Rank          | Team            | F1-score
Top 1         | BCAI-AIR3       | 75.06
Top 2         | king001         | 74.96
Top 3         | DN              | 72.55
Top 4         | ymf924          | 72.34
Top 5         | mitchelldehaven | 72.33
Ours (Top 14) | ABCD Team       | 69.22
Table 4: Results of our best system compared with the five top systems on Track 17 and Track 18 for Task C: Zero-Shot Sentiment Classification.

Track 17: Zero-shot Tigrinya
Rank          | Team            | F1-score
Top 1         | BCAI-AIR3       | 70.86
Top 2         | king001         | 70.47
Top 3         | ymf924          | 70.39
Top 4         | uid             | 69.90
Top 5         | TBS             | 69.61
Ours (Top 14) | ABCD Team       | 60.53

Track 18: Zero-shot Oromo
Rank          | Team            | F1-score
Top 1         | mitchelldehaven | 46.23
Top 2         | UCAS            | 45.82
Top 3         | ymf924          | 45.34
Top 4         | UM6P            | 45.27
Top 5         | TBS             | 45.12
Ours (Top 8)  | ABCD Team       | 42.64
https://translate.google.com/
Acknowledgements
We would like to thank the anonymous reviewers for their valuable comments and the AfriSenti-SemEval organization for organizing the shared task competition. This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
Nawaf A. Abdulla, Nizar A. Ahmed, Mohammed A. Shehab, and Mahmoud Al-Ayyoub. 2013. Arabic sentiment analysis: Lexicon-based and corpus-based. In 2013 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), pages 1-6. IEEE.
Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, and Dietrich Klakow. 2022. Adapting pre-trained language models to African languages via multilingual adaptive fine-tuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4336-4349, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Y. Alemu. 2018. Deep learning approach for Amharic sentiment analysis. University of Gondar.
Adam Amram, Anat Ben David, and Reut Tsarfaty. 2018. Representations and architectures in neural sentiment analysis for morphologically rich languages: A case study from Modern Hebrew. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2242-2252.
Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. arXiv preprint arXiv:2003.00104.
Christopher M. Bishop and Nasser M. Nasrabadi. 2006. Pattern Recognition and Machine Learning, volume 4. Springer.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Rehab M. Duwairi, Raed Marji, Narmeen Sha'ban, and Sally Rushaidat. 2014. Sentiment analysis in Arabic tweets. In 2014 5th International Conference on Information and Communication Systems (ICICS), pages 1-6. IEEE.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878-891, Dublin, Ireland. Association for Computational Linguistics.
Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 165-176, Brussels, Belgium. Association for Computational Linguistics.
Jeff Heaton. 2018. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep Learning: The MIT Press, 2016, 800 pp, ISBN: 0262035618. Genetic Programming and Evolvable Machines, 19(1-2):305-307.
Maha Heikal, Marwan Torki, and Nagwa El-Makky. 2018. Sentiment analysis of Arabic tweets using deep learning. Procedia Computer Science, 142:114-122.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Saif Mohammad. 2016. A practical guide to sentiment annotation: Challenges and solutions. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 174-179.
Saif M. Mohammad. 2022. Ethics sheet for automatic emotion recognition and sentiment analysis. Computational Linguistics, 48(2):239-278.
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Abinew Ali Ayele, Nedjma Ousidhoum, David Ifeoluwa Adelani, Seid Muhie Yimam, Ibrahim Sa'id Ahmad, Meriem Beloucif, Saif M. Mohammad, Sebastian Ruder, Oumaima Hourrane, Pavel Brazdil, Felermino Dário Mário António Ali, Davis David, Salomey Osei, Bello Shehu Bello, Falalu Ibrahim, Tajuddeen Gwadabe, Samuel Rutunda, Tadesse Belay, Wendimu Baye Messelle, Hailu Beshada Balcha, Sisay Adugna Chala, Hagos Tesfahun Gebremichael, Bernard Opoku, and Steven Arthur. 2023a. AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages. DOI: 10.48550/arXiv.2302.08956.
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Seid Muhie Yimam, David Ifeoluwa Adelani, Ibrahim Sa'id Ahmad, Nedjma Ousidhoum, Abinew Ali Ayele, Saif M. Mohammad, Meriem Beloucif, and Sebastian Ruder. 2023b. SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval). In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023). Association for Computational Linguistics.
Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021. Small data? No problem! Exploring the viability of pretrained multilingual language models for low-resourced languages. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 116-126, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Amira Shoukry and Ahmed Rafea. 2012. Sentence-level Arabic sentiment analysis. In 2012 International Conference on Collaboration Technologies and Systems (CTS), pages 546-550. IEEE.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Yinfei Yang, Gustavo Hernandez Abrego, Steve Yuan, Mandy Guo, Qinlan Shen, Daniel Cer, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. arXiv preprint arXiv:1902.08564.
Seid Muhie Yimam, Hizkiel Mitiku Alemayehu, Abinew Ayele, and Chris Biemann. 2020. Exploring Amharic sentiment analysis from social media texts: Building annotation tools and classification models. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1048-1060.
Tong Zhang. 2001. An introduction to support vector machines and other kernel-based learning methods. AI Magazine, 22(2):103-103. |
218,974,150 | The CLARIN Knowledge Centre for Atypical Communication Expertise | This paper introduces a new CLARIN Knowledge Center which is the K-Centre for Atypical Communication Expertise (ACE for short) which has been established at the Centre for Language and Speech Technology (CLST) at Radboud University. Atypical communication is an umbrella term used here to denote language use by second language learners, people with language disorders or those suffering from language disabilities, but also more broadly by bilinguals and users of sign languages. It involves multiple modalities (text, speech, sign, gesture) and encompasses different developmental stages. ACE closely collaborates with The Language Archive (TLA) at the Max Planck Institute for Psycholinguistics in order to safeguard GDPR-compliant data storage and access. We explain the mission of ACE and show its potential on a number of showcases and a use case.ACE will offer the following services through its website: Information and guidelines about:consent (forms) hosting corpora and datasets containing atypical communication where to find corpora and datasets containing atypical communication Helpdesk/consultancy for questions on the above topics 5 https://www.ru.nl/cls/our-research/research-groups/firstlanguage-acquisition/ 6 https://www.ru.nl/cls/our-research/research-groups/languagespeech-learning-therapy/ 7 https://www.ru.nl/cls/our-research/research-groups/signlanguage-linguistics/ 8 https://tla.mpi.nl/resources/ 9 https://www.clarin.eu/content/component-metadata 10 http://delad.net/ 11 https://talkbank.org/ 12 https://sshopencloud.eu/ | [
28722153,
220445601,
21684862
] | The CLARIN Knowledge Centre for Atypical Communication Expertise
The CLARIN Knowledge Centre for Atypical Communication Expertise
Henk van den Heuvel (CLS/CLST, Radboud University, Nijmegen)
Nelleke Oostdijk, n.oostdijk@let.ru.nl (CLS/CLST, Radboud University, Nijmegen)
Caroline Rowland (The Language Archive, MPI for Psycholinguistics, Nijmegen; Donders Institute for Brain, Cognition & Behaviour, Nijmegen)
Paul Trilsbeek, paul.trilsbeek@mpi.nl (The Language Archive, MPI for Psycholinguistics, Nijmegen)
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), Marseille, May 2020
Keywords: infrastructure, atypical communication, language resources
This paper introduces a new CLARIN Knowledge Centre, the K-Centre for Atypical Communication Expertise (ACE for short), which has been established at the Centre for Language and Speech Technology (CLST) at Radboud University. Atypical communication is an umbrella term used here to denote language use by second language learners, people with language disorders or those suffering from language disabilities, but also more broadly by bilinguals and users of sign languages. It involves multiple modalities (text, speech, sign, gesture) and encompasses different developmental stages. ACE closely collaborates with The Language Archive (TLA) at the Max Planck Institute for Psycholinguistics in order to safeguard GDPR-compliant data storage and access. We explain the mission of ACE and show its potential on a number of showcases and a use case.
Background and Aims
Over the past years the European Research Infrastructure for Language Resources and Technology (CLARIN; clarin.eu) has taken shape (Hinrichs et al., 2014;De Jong et al., 2018). The infrastructure is directed towards researchers in the humanities and social sciences. It provides users access to distributed data and tools through a single sign-on online environment (De Jong, 2019). Apart from the technical infrastructure and accompanying protocols, CLARIN has been investing in what is referred to as the Knowledge Sharing Infrastructure (KSI) 1 . The KSI should ensure that knowledge and expertise as regards the technical infrastructure, the way it operates and how it can be used, is shared between all stakeholders, from resource and technology providers to end users. In the CLARIN networked organizational structure, the knowledge (K-)centres play a central role in the dissemination of (specialized) knowledge and expertise. K-centres can advise on issues pertaining to data collection and data management, can provide information as regards available resources and services, where to find and how to access them, and provide support for various methodologies and applications. Kcentres can also offer training courses in their respective fields of expertise. At present there are 20 certified K-centres 2 . One of the latest additions is the K-Centre for Atypical Communication Expertise (ACE for short) which has been established at the Centre for Language and Speech Technology (CLST) at Radboud University. The mission of ACE is to support researchers engaged in investigating what can be characterised as atypical communication. Atypical communication is an umbrella term used here to denote language use by second language learners, people with language disorders or those suffering from language disabilities, but also to languages that pose particularly difficult issues for analysis, such as sign languages and languages spoken in a multilingual context. It involves multiple modalities (text, speech, sign, gesture) and encompasses different developmental stages. The target audience for ACE includes linguists, psychologists, neuroscientists, computer scientists, speech and language therapists and education specialists. The website of ACE is: https://ace.ruhosting.nl/ Data originating in a context of atypical communication are particularly sensitive as regards privacy and ethical issues. While collecting, storing, processing and using such data, researchers are bound by strict rules and procedural requirements imposed by ethical committees and the GDPR (see e.g. van den Heuvel et al., 2020). At all stages appropriate measures must be in place so as to prevent unwanted disclosure. In some cases, this requires that the original data remain stored in a dark archive and cannot be copied or distributed in any form. ACE can advise resource owners and users on how they can preserve sensitive data in a safe manner, from the point where the raw data come into existence up to the moment where the data and information obtained from it are shared with others. Atypical communication data are also special when it comes to the methods and tools for processing and using the data. Often guidelines and tools that have been developed and are used for standard data cannot be used or require adaptations or special settings; in some other cases dedicated tools are available. 
ACE is well-positioned to inform researchers who want to work with language development data, data of adults and children with speech disorders, or users of sign language on the availability of such tools and guidelines. ACE can advise on what is feasible and how to go about it.
A Fruitful Partnership
Within Radboud University the Knowledge Centre has CLST 3 as its core, but it has close links to researchers and research groups within the Centre for Language Studies 4 with ample expertise in the fields of language acquisition 5, language learning and therapy 6, and sign language 7. Within CLARIN, CLST has the status of C Centre and Trust Centre and as such provides metadata to the infrastructure and enables access to tools and web applications through the Federated Identity services that CLARIN offers. For hosting data and corpora for atypical communication and making these accessible in a FAIR manner, CLST has established a close collaboration with The Language Archive (TLA). TLA is situated at the Max Planck Institute for Psycholinguistics (MPI) in Nijmegen. As a CLARIN B Centre 8, the goal of TLA is to provide a unique record of how people around the world use language in everyday life. They focus on collecting spoken and signed language materials in audio and video form along with transcriptions, analyses, annotations and other types of relevant material such as photos and accompanying notes. TLA offers storage of sensitive data (speech, audio and transcripts) and supports the CMDI 9 metadata framework. TLA also supports strong authentication procedures, layered access to data, and persistent identification. For corpora of speech from people with language disorders ACE works closely together with the DELAD initiative 10. For this type of resources in particular there is a close collaboration with CMU's Talkbank / Clinical banks 11. This collaboration allows data to be registered at Talkbank, which provides the metadata and landing page on the Talkbank website, whereas the storage of, and authentication of access to, the 'raw' data (typically audio and video) is handled at TLA (see also Section 4). For giving access to critical data ACE is also involved in the SSHOC project 12, in which Task 5.4 is devoted to making an inventory of systems and technologies suitable for conducting research on critical data; this is relevant for offering various ways of accessing critical data, stored either at central repositories where they can be downloaded or at shielded repositories where they can only be remotely accessed.
ACE will offer the following services through its website:
- Information and guidelines about consent (forms), about hosting corpora and datasets containing atypical communication, and about where to find corpora and datasets containing atypical communication
- Helpdesk/consultancy for questions on the above topics
- Technical assistance for designing, creating, annotating, formatting and metadating resources of atypical communication
- Outreach: presentations, workshop contributions, etc.
The items in the list above meet a number of demands for which researchers have expressed a need. Typically, assistance in designing and collecting corpora of atypical communication with consent forms that are GDPR-proof is considered of great value, as are references to available guidelines and tools for annotating such resources. How to make the resources accessible and share them with other researchers is another issue for which special expertise is requested. ACE is happy to advise on such issues and also to participate in projects where the acquisition and/or creation of such data collections is foreseen.
Show Cases
The website of ACE presents a number of show cases. We mention the rich corpora of speech from children and adults with language disorders collected in the VALID project (Klatter et al., 2014) and stored at TLA. Within VALID, four existing digital datasets were curated in order to make them available for scientific research in a CLARIN-compatible format. The datasets included are:
- SLI RU-Kentalis database, containing around 40 hours of audio and 150,000 transcribed words
- Bilingual deaf children RU-Kentalis database, containing around 9 hours of video and 19,500 transcribed words
- ADHD and SLI corpus UvA database, containing around 26 hours of video and 23,000 transcribed words
- Deaf adults RU database, containing results of a writing task in ScriptLog format
More information about these datasets can be found at https://validdata.org/clarin-project/datasets/. This page also contains a link to the persistent identifier of the curated datasets at TLA 13 . Another show case is the P-MoLL dataset 14 , which is accessible to all registered users of TLA. The project P-Moll (=Modalität von Lernervarietäten im Längsschnitt) was run at the Free University in Berlin by Prof. Norbert Dittmar from 1987 to 1992. It dealt with the study of the acquisition of modality in German as a second language by untutored adult immigrants with Polish or Italian as their native language. The longitudinal data collection covers about two and a half years of the learners' acquisition process. It contains their oral speech production from different elicitation tasks and free conversations with native speakers and consists of approximately 100 hours of audio, 16 hours of video and 520,000 transcribed words (Dittmar et al., 1990). 13 https://hdl.handle.net/1839/00-8C315BC1-AD5E-4348-9A79-A41FE3DE1150 14 https://hdl.handle.net/1839/00-0000-0000-0000-4EAB-A Another example of a well-documented dataset on second language learning is the LESLLA corpus. LESLLA stands for Literacy Education and Second Language Learning for Adults, see https://www.leslla.org/. The corpus contains speech of 15 low-educated learners of Dutch as a second language. All of them are women; 8 are Turkish, 7 Moroccan. (Turks and Moroccans are the two largest immigrant groups in the Netherlands.) At the time of the recordings, they were between 22 and 45 years old. Participants had to carry out five tasks which all involved spoken language but varied from strictly controlled to semi-spontaneous. In total, the corpus contains around 30 hours of audio and about 180,000 transcribed words. An extensive description of the curated corpus can be found in . This corpus is also accessible at TLA 15 . The LeaP (Learning Prosody in a Foreign Language) corpus 16 was collected with the goal of studying the acquisition of prosody by non-native speakers of German and English. The German and English parts of the corpus contain audio recordings of 62 and 50 different speakers respectively, with a wide variety of native languages. The more than 12 hours of audio recordings are transcribed and annotated by hand, resulting in approximately 72,000 transcribed and annotated words. Part-of-speech tagging 15 https://hdl.handle.net/1839/00-37EBCC6D-04A5-4598-88E2-E0F390D5FCE1 16 https://hdl.handle.net/1839/00-0000-0000-000A-3D5E-1 and lemmatization were carried out automatically. A detailed description of the corpus can be found in the manual that is included. The Dutch Bilingual Database 17 is another rather substantial collection of data fitting in the scope of ACE and hosted at TLA. It results from a number of projects and research programmes that were directed at investigating multilingualism and comprises data originating from Dutch, Sranan, Sarnami, Papiamentu, Arabic, Berber and Turkish speakers. In total, it contains over 500 hours of audio recordings, 10 hours of video recordings, and approximately 615,000 transcribed words. It is accessible to any academic user. Further, TLA also hosts a wealth of sign language corpora. Many of these are carefully annotated using the ELAN annotation software 18 . Figure 1 shows an example. The Corpus NGT (Nederlandse Gebarentaal / Dutch Sign Language) 1920 is a highly systematically collected dataset of 92 signers of Dutch Sign Language. 
It contains over 72 hours of dialogues recorded on video from different angles, using a variety of tasks and genres. A significant part of the recordings has been manually annotated using ELAN, with approximately 200,000 annotation tokens in the latest version. The largest part of the corpus is freely accessible.
Figure 1. The ELAN multimedia annotation tool that is produced at The Language Archive is widely used for the transcription, annotation and analysis of all sorts of language recordings, including sign languages.
One could debate to what extent sign language is a form of atypical communication. In our view, what makes the language atypical is that it results from a way of dealing with a (hearing) deficiency; the non-atypical part is that sign language as such is a mature version of language like any other. This makes it different from the true atypical variants.
Use Case
In Section 2 we mentioned our collaboration with CMU's Talkbank. As a use case for the curation of a dataset, registering it at the Talkbank and storing the primary data (only) at TLA, we processed the Polish Cued Speech Corpus of Hearing-Impaired Children. The corpus contains legacy data of 20 hearing impaired children aged between 8 and 12 years (11 girls and 9 boys), and was kindly provided by A. Trochimyuk-Lorenc and K. Klessa from the University of Warsaw (Institute of Applied Polish Studies). The corpus is described in Trochymiuk (2003, 2007). The curation of this dataset involved the creation of CMDI metadata records as well as the creation of a script for normalizing filenames and for converting the text files into CHAT format, including the required metadata headers that could partially be derived from the filenames. Once the CHAT transcripts have been added to the Talkbank database, the Handle persistent identifier to the collection containing the audio files in The Language Archive (https://hdl.handle.net/1839/77ea572d-f4c4-48d8-b67b-956f946b59c5) will be added to the landing page, such that users will be able to download them there. Since the structures and systems of the Talkbank and TLA repositories differ quite significantly, a script was created to extract specific file types from collections in the Fedora Commons repository system at TLA and to put those into a structure that can be easily ingested into the Talkbank repository. The script also transforms TLA's metadata into Talkbank metadata, which is relatively straightforward as both are based on the IMDI (https://tla.mpi.nl/imdi-metadata/) metadata schema.
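To make the kind of filename normalization and CHAT conversion described above concrete, the sketch below shows a minimal Python script of that general shape. The filename pattern, participant fields and output layout are illustrative assumptions, not the actual curation code used in the project.

```python
import re
from pathlib import Path

# Hypothetical filename convention, e.g. "CS_child03_girl_09.txt"
# (corpus, participant id, sex, age) -- an assumption for illustration only.
FILENAME_RE = re.compile(r"(?P<corpus>\w+)_(?P<child>\w+)_(?P<sex>girl|boy)_(?P<age>\d+)\.txt")

def normalize_name(path: Path) -> str:
    """Lower-case the stem and replace spaces so names are repository-safe."""
    return path.stem.lower().replace(" ", "_") + ".cha"

def to_chat(path: Path, out_dir: Path) -> None:
    """Wrap a plain-text transcript in minimal CHAT headers derived from the filename."""
    meta = FILENAME_RE.match(path.name)
    if meta is None:
        raise ValueError(f"unexpected filename: {path.name}")
    lines = path.read_text(encoding="utf-8").splitlines()
    header = [
        "@Begin",
        "@Languages:\tpol",
        f"@Participants:\tCHI {meta['child']} Target_Child",
        f"@ID:\tpol|{meta['corpus']}|CHI|{meta['age']};|{meta['sex']}|||Target_Child|||",
    ]
    body = [f"*CHI:\t{line}" for line in lines if line.strip()]
    out = out_dir / normalize_name(path)
    out.write_text("\n".join(header + body + ["@End"]) + "\n", encoding="utf-8")

if __name__ == "__main__":
    out_dir = Path("chat_out")
    out_dir.mkdir(exist_ok=True)
    for txt in Path("raw_transcripts").glob("*.txt"):
        to_chat(txt, out_dir)
```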
Reaching out
The ACE Centre's services will be publicised in a variety of ways. Its launch in December 2019 was announced via a press release published on both the Radboud University and Max Planck Institute websites. After launch, all information about the ACE was made available via its website: https://ace.ruhosting.nl/. Advice and personalised information will be provided via the helpdesk. Centre personnel will further disseminate information and advice via invited presentations and at workshops, as well as via webinars and screencasts published on the website. The DELAD network is preparing a workshop with CLARIN in June/July in which the opportunities that ACE offers for researchers studying language disorders will be a main theme.
https://www.clarin.eu/content/knowledge-sharing 2 https://www.clarin.eu/content/knowledge-centres
https://www.ru.nl/clst/ and https://www.ru.nl/cls/ourresearch/research-groups/language-speech-technology/ 4 https://www.ru.nl/cls/
Crasborn, O. & Zwitserlood, I. (2008). The Corpus NGT: an online corpus for professionals and laymen. In O. Crasborn, T. Hanke, E. Efthimiou, I. Zwitserlood & E. Thoutenhoofd (Eds.), Construction and Exploitation of Sign Language Corpora. 3rd Workshop on the Representation and Processing of Sign Languages. Paris: ELDA, pp. 44-49.
De Jong, F., Maegaard, B., De Smedt, K., Fišer, D. & Van Uytvanck, D. (2018). CLARIN: Towards FAIR and Responsible Data Science Using Language Resources. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), May 2018, pp. 3259-3264.
De Jong, F. (2019). CLARIN - Infrastructural support for impact through the study of language as social and cultural data. In B. Maegaard, R. Pozzo, A. Melloni & M. Woollard (Eds.), Stay Tuned to the Future. Impact of the research infrastructures for social sciences and humanities. Lessico Intellettuale Europeo, LIE-CXXVIII, pp. 121-129.
Dittmar, N., Reich, A., Skiba, R., Schumacher, M. & Terborg, H. (1990). Die Erlernung modaler Konzepte des Deutschen durch erwachsene polnische Migranten: Eine empirische Längsschnittstudie. Informationen Deutsch als Fremdsprache: Info DaF, 17(2), pp. 125-172.
Gut, U. (2012). The LeaP corpus. A multilingual corpus of spoken learner German and learner English. In T. Schmidt & K. Wörner (Eds.), Multilingual Corpora and Multilingual Corpus Analysis. Amsterdam: Benjamins, pp. 3-23.
Hinrichs, E. & Krauwer, S. (2014). The CLARIN Research Infrastructure: Resources and Tools for E-Humanities Scholars. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), May 2014, pp. 1525-1531.
Klatter, J., Van Hout, R., Van den Heuvel, H., Fikkert, P., Baker, A., De Jong, J., Wijnen, F., Sanders, E. & Trilsbeek, P. (2014). Vulnerability in Acquisition, Language Impairments in Dutch: Creating a VALID Data Archive. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), May 2014, pp. 1525-1531.
Trochymiuk, A. (2003). Voiced Realisations of Plosives in Word Initial Position by Hearing Impaired Children. Acoustic Phonetics Analysis. In K. Böttger, S. Dönninghaus & R. Marzari (Eds.), Die Welt der Slaven, Band 16, Beiträge der Europäischen Slavistischen Linguistik, Band 6. München, pp. 111-123.
Trochymiuk, A. (2005). Realization of the voiced-voiceless contrast by hearing impaired children. Studia Phonetica Posnaniensia, vol. 7, pp. 75-96.
Sanders, E., Van de Craats, I. & De Lint, V. (2014). The Dutch LESLLA Corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), May 2014, pp. 2715-2718.
Van den Heuvel, H., Kelli, A., Klessa, K. & Salaasti, S. (2020). Corpora of disordered speech in the light of the GDPR: Two use cases from the DELAD initiative. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020).
Language Resource References
Crasborn, O., Zwitserlood, I. & Ros, J. (2008). The Corpus NGT. A digital open access corpus of movies and annotations of Sign Language of the Netherlands. Centre for Language Studies, Radboud Universiteit Nijmegen. URL: http://hdl.handle.net/hdl:1839/00-0000-0000-0004-DF8E-6. ISLRN: 175-346-174-413-3.
Dittmar, N., Reich, A., Skiba, R., Schumacher, M. & Terborg, H. (2002). The P-MoLL corpus. Distributed by The Language Archive: https://hdl.handle.net/1839/00-0000-0000-0000-4EAB-A
Emmerik, J. van (2014). Deaf adults RU database. ISLRN: 944-022-313-325-3. https://hdl.handle.net/1839/00-97AF29EA-877D-422A-BAF7-25FA269351A6
Gut, U. (2009). LeaP corpus. Distributed by The Language Archive: https://hdl.handle.net/1839/00-0000-0000-000A-3D5E-1
Kolen, E. (2014). Bilingual deaf children RU-Kentalis database. ISLRN: 941-351-623-486-4.
Lorenc, A. (2019). Polish Cued Speech Corpus of Hearing-Impaired Children. Distributed by The Language Archive: https://hdl.handle.net/1839/77ea572d-f4c4-48d8-b67b-956f946b59c5
Made, A. van der (2014). SLI RU-Kentalis database. ISLRN: 541-534-411-504-6.
Muysken et al. (2008). Dutch Bilingual Database. Distributed by The Language Archive: https://hdl.handle.net/1839/00-0000-0000-0001-4AF0-7
Parigger, E. (2014). ADHD and SLI corpus UvA database. ISLRN: 456-360-189-350-0.
Sanders, E., Van de Craats, I. & De Lint, V. (2014). The curated Dutch LESLLA corpus. Distributed by CLARIN via The Language Archive: https://hdl.handle.net/1839/00-37EBCC6D-04A5-4598-88E2-E0F390D5FCE1 |
219,310,068 | [] | Adapting Self-training for Semantic Role Labeling
Adapting Self-training for Semantic Role Labeling
Rasoul Samad Zadeh Kaljahi
FCSIT, University of Malaya, 50406 Kuala Lumpur, Malaysia
Proceedings of the ACL 2010 Student Research Workshop, Uppsala, Sweden, 13 July 2010. © Association for Computational Linguistics
Supervised semantic role labeling (SRL) systems trained on hand-crafted annotated corpora have recently achieved state-of-the-art performance. However, creating such corpora is tedious and costly, and the resulting corpora are not sufficiently representative of the language. This paper describes part of ongoing work on applying bootstrapping methods to SRL to deal with this problem. Previous work shows that, due to the complexity of SRL, this task is not straightforward. One major difficulty is the propagation of classification noise into the successive iterations. We address this problem by employing balancing and preselection methods for self-training, as a bootstrapping algorithm. The proposed methods achieve an improvement over the baseline, which does not use these methods.
Introduction
Semantic role labeling has been an active research field of computational linguistics since its introduction by Gildea and Jurafsky (2002). It reveals the event structure encoded in the sentence, which is useful for other NLP tasks or applications such as information extraction, question answering, and machine translation (Surdeanu et al., 2003). Several CoNLL shared tasks (Carreras and Marquez, 2005;Surdeanu et al., 2008) dedicated to semantic role labeling affirm the increasing attention to this field.
One important supporting factor in the study of supervised statistical SRL has been the existence of hand-annotated semantic corpora for training SRL systems. FrameNet (Baker et al., 1998) was the first such resource, which made the emergence of this research field possible through the seminal work of Gildea and Jurafsky (2002). However, this corpus only exemplifies semantic role assignment by selecting some illustrative examples for annotation. This questions its suitability for statistical learning. Propbank was started by Kingsbury and Palmer (2002) with the aim of developing a more representative resource of English, appropriate for statistical SRL study.
Propbank has been used as the learning framework by the majority of SRL work and competitions like CoNLL shared tasks. However, it only covers the newswire text from a specific genre and also deals only with verb predicates.
All state-of-the-art SRL systems show a dramatic drop in performance when tested on a new text domain (Punyakanok et al., 2008). This evinces the infeasibility of building a comprehensive hand-crafted corpus of natural language useful for training a robust semantic role labeler.
A possible relief for this problem is the use of semi-supervised learning methods, along with the huge amount of natural language text available at a low cost. Semi-supervised methods compensate for the scarcity of labeled data by utilizing an additional and much larger amount of unlabeled data via a variety of algorithms.
Self-training (Yarowsky, 1995) is a semi-supervised algorithm which has been well studied in the NLP area and has gained promising results. It iteratively extends its training set by labeling the unlabeled data using a base classifier trained on the labeled data. Although the algorithm is theoretically straightforward, it involves a large number of parameters, highly influenced by the specifications of the underlying task. Thus, to achieve the best-performing parameter set, or even to investigate the usefulness of these algorithms for a learning task such as SRL, thorough experiments are required. This work investigates its application to the SRL problem.
Related Work
The algorithm proposed by Yarowsky (1995) for the problem of word sense disambiguation has been cited as the origin of self-training. In that work, he bootstrapped a ruleset from a small number of seed words extracted from an online dictionary using a corpus of unannotated English text, and gained accuracy comparable to fully supervised approaches.
Subsequently, several studies applied the algorithm to other domains of NLP. Reference resolution (Ng and Cardie, 2003), POS tagging (Clark et al., 2003), and parsing (McClosky et al., 2006) were shown to benefit from self-training. These studies show that the performance of self-training is tied to its several parameters and the specifications of the underlying task.
In the SRL field, He and Gildea (2006) used self-training to address the problem of unseen frames when using FrameNet as the underlying training corpus. They generalized FrameNet frame elements to 15 thematic roles to control the complexity of the process. The improvement gained by the progress of self-training was small and inconsistent. They reported that the NULL label (non-argument) had often dominated other labels in the examples added to the training set. Lee et al. (2007) attacked another SRL learning problem using self-training. Using Propbank instead of FrameNet, they aimed at increasing the performance of a supervised SRL system by exploiting a large amount of unlabeled data (about 7 times more than the labeled data). The algorithm variation was similar to that of He and Gildea (2006), but it only dealt with core arguments of the Propbank. They achieved a minor improvement too and credited it to the relatively poor performance of their base classifier and the insufficiency of the unlabeled data.
SRL System
To have enough control over the entire system, and thus a flexible experimental framework, we developed our own SRL system instead of using a third-party system. The system works with PropBank-style annotation and is described here.
Syntactic Formalism: A Penn Treebank constituent-based approach for SRL is taken. Syntactic parse trees are produced by the reranking parser of Charniak and Johnson (2005).
Architecture: A two-stage pipeline architecture is used, where in the first stage less-probable argument candidates (samples) in the parse tree are pruned, and in the next stage, final arguments are identified and assigned a semantic role. However, for unlabeled data, a preprocessing stage identifies the verb predicates based on the POS tag assigned by the parser. The joint argument identification and classification is chosen to decrease the complexity of self-training process.
Features: Features are listed in Table 1. We tried to avoid features like named entity tags, to depend less on extra annotation. Features marked with * are used in addition to common features in the literature, due to their impact on performance in the feature selection process.
Classifier: We chose a Maximum Entropy classifier for its efficient training time and also its built-in multi-classification capability. Moreover, the probability score that it assigns to labels is useful in the selection process in self-training. The Maxent Toolkit 1 was used for this purpose.
Self-training
While the general theme of the self-training algorithm is almost identical in different implementations, variations of it are developed based on the characteristics of the task at hand, mainly by customizing several involved parameters. Figure 1 shows the algorithm with highlighted parameters.
The size of seed labeled data set L and unlabeled data U, and their ratio are the fundamental parameters in any semi-supervised learning. The data used in this work is explained in section 5.1.
In addition to performance, efficiency of the classifier (C) is important for self-training, which is computationally expensive. Our classifier is a compromise between performance and efficiency. Table 2 shows its performance compared to the state-of-the-art (Punyakanok et al. 2008) when trained on the whole labeled training set.
Stop criterion (S) can be set to a predetermined number of iterations, finishing all of the unlabeled data, or convergence of the process in terms of improvement. We use the second option for all experiments here.
In each iteration, one can label the entire unlabeled data set or only a portion of it. In the latter case, a number of unlabeled examples (p) are selected and loaded into a pool (P). The selection can be based on a specific strategy, known as preselection (Abney, 2008), or simply done according to the original order of the unlabeled data. We investigate preselection in this work.
After labeling the p unlabeled examples, the training set is augmented by adding the newly labeled data. Two main parameters are involved in this step: selection of labeled examples to be added to the training set and addition of them to that set.
Selection is the crucial point of self-training, in which the propagation of labeling noise into upcoming iterations is the major concern. One can select all of the labeled examples, but usually only a number of them (n), known as the growth size, is selected based on a quality measure. This measure is often the confidence score assigned by the classifier. To prevent poor labelings from diminishing the quality of the training set, a threshold (t) is set on this confidence score. Selection is also influenced by other factors, one of which is the balance between selected labels, which is explored in this study and explained in detail in Section 4.3.
The selected labeled examples can be retained in the unlabeled set to be labeled again in subsequent iterations (delibility) or moved so that they are labeled only once (indelibility). We choose the second approach here.
Preselection
While using a pool can improve the efficiency of the self-training process, there can be two other motivations behind it, concerned with the performance of the process.
One idea is that when all the data is labeled, since the growth size is often much smaller than the labeled size, a uniform set of examples preferred by the classifier is chosen in each iteration. This leads to a biased classifier like the one discussed in the previous section. Limiting the labeling size to a pool and at the same time (pre)selecting divergent examples into it can remedy the problem.
The other motivation originates from the fact that the base classifier is relatively weak due to the small seed size, thus its predictions, as the measure of confidence in the selection process, may not be reliable. Preselecting a set of unlabeled examples that are more likely to be correctly labeled by the classifier in the initial steps seems to be a useful strategy against this fact.
We examine both ideas here, by a random preselection for the first case and a measure of simplicity for the second case. Random preselection is built into our system, since we use randomized training data. As the measure of simplicity, we propose the number of samples extracted from each sentence; that is, we sort unlabeled sentences in ascending order based on the number of samples and load the pool from the beginning.
Figure 1 (the self-training algorithm):
1. Add the seed example set L to the currently empty training set T.
2. Train the base classifier C with training set T.
3. Iterate the following steps until the stop criterion S is met:
   a. Select p examples from U into pool P.
   b. Label pool P with classifier C.
   c. Select n labeled examples with the highest confidence score whose score meets a certain threshold t and add them to training set T.
   d. Retrain the classifier C with the new training set.
Table 2: Performances of the current system (Cur) and the state-of-the-art (Punyakanok et al., 2008).
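As a rough illustration of the parameterized self-training loop in Figure 1 (pool size p, growth size n, threshold t), the following Python sketch shows one way the procedure could be organised. The classifier interface and the sentence attributes (.samples, .apply) are simplified assumptions for illustration, not the authors' actual implementation.

```python
def self_train(seed, unlabeled, clf, p=2000, n=350, t=0.7, max_iter=100):
    """Generic pool-based self-training loop (indelible selection).

    seed      : list of labeled sentences
    unlabeled : list of unlabeled sentences
    clf       : object with fit(sentences) and predict(sentence) -> (labeling, confidence)
    """
    train = list(seed)
    clf.fit(train)
    # Simplicity-based preselection: sentences with fewer argument candidates first
    # (assumes each sentence object exposes its candidate samples via .samples).
    pool_order = sorted(unlabeled, key=lambda s: len(s.samples))

    for _ in range(max_iter):
        if not pool_order:
            break
        pool, pool_order = pool_order[:p], pool_order[p:]   # load the pool
        labeled = []
        for sent in pool:
            labeling, conf = clf.predict(sent)              # label the pool
            if conf >= t:                                   # confidence threshold
                labeled.append((conf, sent, labeling))
        if not labeled:
            break                                           # stop: no usable data left
        labeled.sort(key=lambda x: x[0], reverse=True)
        for _, sent, labeling in labeled[:n]:               # growth size n
            sent.apply(labeling)
            train.append(sent)                              # indelibility: labeled only once
        clf.fit(train)                                      # retrain
    return clf
```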
Selection Balancing
Most previous self-training problems involve binary classification. Semantic role labeling is a multi-class classification problem with an unbalanced distribution of classes in a given text. For example, the frequency of A1, as the most frequent role in the CoNLL training set, is 84,917, while the frequency of 21 roles is less than 20. The situation becomes worse when the dominant label NULL (for non-arguments) is added for argument identification purposes in a joint architecture. This biases the classifiers towards the frequent classes, and the impact is magnified as self-training proceeds.
In previous work, although they used a reduced (yet not balanced) set of roles, He and Gildea (2006) and Lee et al. (2007) did not discriminate between roles when selecting high-confidence labeled samples. The former study reports that the majority of labels assigned to samples were NULL and argument labels appeared only in the last iterations.
To attack this problem, we propose a natural way of balancing, in which instead of labeling and selection based on argument samples, we perform a sentence-based selection and labeling. The idea is that argument roles are distributed over the sentences. As the measure for selecting a labeled sentence, the average of the probabilities assigned by the classifier to all argument samples extracted from the sentence is used.
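The sentence-based balanced selection just described can be pictured as scoring each candidate sentence by the mean classifier probability of its argument samples and keeping the highest-scoring sentences. The sketch below is an illustrative rendering of that idea under assumed data structures; it is not the paper's code.

```python
def sentence_score(sample_probs):
    """Average of the classifier probabilities assigned to all argument
    samples extracted from one sentence."""
    return sum(sample_probs) / len(sample_probs) if sample_probs else 0.0

def select_balanced(labeled_pool, growth_size, threshold):
    """labeled_pool: list of (sentence, [prob_of_assigned_label, ...]) pairs.
    Returns up to growth_size sentences whose average probability meets the threshold."""
    scored = [(sentence_score(probs), sent) for sent, probs in labeled_pool]
    scored = [(score, sent) for score, sent in scored if score >= threshold]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [sent for _, sent in scored[:growth_size]]
```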
Experiments and Results
In these experiments, we target two main problems addressed by semi-supervised methods: the performance of the algorithm in exploiting unlabeled data when labeled data is scarce, and the domain-generalizability of the algorithm when using out-of-domain unlabeled data.
We use the CoNLL 2005 shared task data and setting for testing and evaluation purposes. The evaluation metrics include precision, recall, and their harmonic mean, F1.
The Data
The labeled data are selected from the Propbank corpus prepared for the CoNLL 2005 shared task. Our learning curve experiments on varying sizes of labeled data show that the steepest increase in F1 is achieved by 1/10th of the CoNLL training data. Therefore, to train a base classifier that performs as well as possible, while simulating labeled data scarcity with a reasonably small amount of it, 4000 sentences are selected randomly from the total 39,832 training sentences as seed data (L). These sentences contain 71,400 argument samples covering 38 semantic roles out of the 52 roles present in the total training set.
We use one unlabeled training set (U) for in-domain and another for out-of-domain experiments. The former is the remaining portion of the CoNLL training data and contains 35,832 sentences (698,567 samples). The out-of-domain set was extracted from the Open American National Corpus 2 (OANC), a 14-million-word multi-genre corpus of American English. The whole corpus was preprocessed to prune some problematic sentences. We also excluded the biomed section due to its large size, to retain the domain balance of the data. Finally, 304,711 sentences with a length between 3 and 100 were parsed by the syntactic parser. Out of these, 35,832 sentences were randomly selected for the experiments reported here (832,795 samples).
Two points are worth noting about the results in advance. First, we do not exclude the argument roles not present in the seed data when evaluating the results. Second, we observed that our predicate-identification method is not reliable, since it is solely based on the POS tags assigned by the parser, which are error-prone. Experiments with gold predicates confirmed this conclusion.
The Effect of Balanced Selection
Figures 2 and 3 depict the results of using unbalanced and balanced selection with WSJ and OANC data respectively. To be comparable with previous work (He and Gildea, 2006), the growth size (n) for the unbalanced method is 7000 samples and for the balanced method is 350 sentences, since each sentence contains roughly 20 samples. A probability threshold (t) of 0.70 is used in both cases. The F1 of the base classifier, the best-performing classifier, and the final classifier are marked.
When trained on the WSJ unlabeled set, the balanced method outperforms the other on both the WSJ (68.53 vs. 67.96) and Brown test sets (59.62 vs. 58.95). A two-tailed t-test based on different random selections of training data confirms the statistical significance of this improvement at the p<=0.05 level. Also, the self-training trend is more promising with both test sets. When trained on OANC, the F1 degrades with both methods as self-training progresses. However, for both test sets, the best classifier is achieved by the balanced selection (68.26 vs. 68.15 and 59.41 vs. 58.68). Moreover, balanced selection shows more normal behavior, while the other degrades performance sharply in the last iterations (due to a swift drop in recall).
Consistent with previous work, with unbalanced selection, non-NULL-labeled unlabeled samples are selected only after the middle of the process. But, with the balanced method, selection is more evenly distributed over the roles.
A comparison between the results on the Brown test set with each of the unlabeled sets shows that in-domain data generalizes even better than out-of-domain data (59.62 vs. 59.41, and also note the trend). One apparent reason is that the classifier cannot accurately label the out-of-domain unlabeled data successively used for training. The lower quality of our out-of-domain data can be another reason for this behavior. Furthermore, the parser we used was trained on WSJ, so it negatively affected the OANC parses and consequently its SRL results.
The Effect of Preselection
Figures 4 and 5 show the results of using a pool with random and simplicity-based preselection with WSJ and OANC data respectively. The pool size (p) is 2000, and the growth size (n) is 1000 sentences. The probability threshold (t) used is 0.5.
Comparing these figures with the previous figures shows that preselection improves the self-training trend, so that more unlabeled data can still be useful. This observation was consistent across various random selections of training data.
Between the two strategies, the simplicity-based method outperforms the random method in both the self-training trend and the best classifier F1 (68.45 vs. 68.25 and 59.77 vs. 59.3 with WSJ, and 68.33 vs. 68 with OANC), though the t-test shows that the F1 difference is not significant at p<=0.05. This improvement does not apply to the case of using OANC data when tested with Brown data (59.27 vs. 59.38), where, however, the difference is not statistically significant. The same conclusion as in Section 5.2 can be made here.
Conclusion and Future Work
This work studies the application of self-training to learning semantic role labeling with the use of unlabeled data. We used a balancing method for selecting newly labeled examples for augmenting the training set in each iteration of the self-training process. The idea was to reduce the effect of the unbalanced distribution of semantic roles in the training data. We also used a pool and examined two preselection methods for loading unlabeled data into it. These methods showed improvement in both classifier performance and the self-training trend. However, using out-of-domain unlabeled data to increase the domain generalization ability of the system was not more useful than using in-domain data. Among the possible reasons are the low quality of the data used and the poor parses of the out-of-domain data.
Another major factor that may affect the self-training behavior here is the poor performance of the base classifier compared to the state-of-the-art (see Table 2), which exploits a more complicated SRL architecture. Due to the high computational cost of the self-training approach, bootstrapping experiments with such complex SRL approaches are difficult and time-consuming.
Moreover, the parameter tuning process shows that other parameters such as pool size, growth number and probability threshold are very effective. Therefore, more comprehensive parameter tuning experiments than those done here are required and may yield better results.
We are currently planning to port this setting to co-training, another bootstrapping algorithm. One direction for future work can be adapting the architecture of the SRL system to better match with the bootstrapping process. Another direction can be adapting bootstrapping parameters to fit the semantic role labeling complexity.
Figure 2: Balanced (B) and Unbalanced (U) Selection with WSJ Unlabeled Data (F1 against number of unlabeled sentences, for the WSJ and Brown test sets).
Figure 3: Balanced (B) and Unbalanced (U) Selection with OANC Unlabeled Data (F1 against number of unlabeled sentences, for the WSJ and Brown test sets).
Figure 4: Random (R) and Simplicity (S) Pre-selection with WSJ Unlabeled Data (F1 against number of unlabeled sentences, for the WSJ and Brown test sets).
Figure 5: Random (R) and Simplicity (S) Pre-selection with OANC Unlabeled Data (F1 against number of unlabeled sentences, for the WSJ and Brown test sets).
http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html
http://www.americannationalcorpus.org/OANC
Abney, S. 2008. Semisupervised Learning for Computational Linguistics. Chapman and Hall, London.
Baker, F., Fillmore, C. and Lowe, J. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL, pages 86-90.
Charniak, E. and Johnson, M. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the ACL, pages 173-180.
Carreras, X. and Marquez, L. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the 9th Conference on Natural Language Learning (CoNLL), pages 152-164.
Clark, S., Curran, J. R. and Osborne, M. 2003. Bootstrapping POS taggers using Unlabeled Data. In Proceedings of the 7th Conference on Natural Language Learning at HLT-NAACL 2003, pages 49-55.
Gildea, D. and Jurafsky, D. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
He, S. and Gildea, H. 2006. Self-training and Co-training for Semantic Role Labeling: Primary Report. TR 891, University of Colorado at Boulder.
Kingsbury, P. and Palmer, M. 2002. From Treebank to PropBank. In Proceedings of the 3rd International Conference on Language Resources and Evaluation (LREC-2002).
Lee, J., Song, Y. and Rim, H. 2007. Investigation of Weakly Supervised Learning for Semantic Role Labeling. In Proceedings of the Sixth International Conference on Advanced Language Processing and Web Information Technology (ALPIT 2007), pages 165-170.
McClosky, D., Charniak, E. and Johnson, M. 2006. Effective self-training for parsing. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the ACL, pages 152-159.
Ng, V. and Cardie, C. 2003. Weakly supervised natural language learning without redundant views. In Proceedings of the 2003 Conference of the North American Chapter of the ACL on Human Language Technology, pages 94-101.
Punyakanok, V., Roth, D. and Yi, W. 2008. The Importance of Syntactic Parsing and Inference in Semantic Role Labeling. Computational Linguistics, 34(2):257-287.
Surdeanu, M., Harabagiu, S., Williams, J. and Aarseth, P. 2003. Using predicate argument structures for information extraction. In Proceedings of the 41st Annual Meeting of the ACL, pages 8-15.
Surdeanu, M., Johansson, R., Meyers, A., Marquez, L. and Nivre, J. 2008. The CoNLL 2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the 12th Conference on Natural Language Learning (CoNLL), pages 159-177.
Yarowsky, E. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd Annual Meeting of ACL, pages 189-196. |
||
14,691,060 | Unsupervised Parse Selection for HPSG | Parser disambiguation with precision grammars generally takes place via statistical ranking of the parse yield of the grammar using a supervised parse selection model. In the standard process, the parse selection model is trained over a hand-disambiguated treebank, meaning that without a significant investment of effort to produce the treebank, parse selection is not possible. Furthermore, as treebanking is generally streamlined with parse selection models, creating the initial treebank without a model requires more resources than subsequent treebanks. In this work, we show that, by taking advantage of the constrained nature of these HPSG grammars, we can learn a discriminative parse selection model from raw text in a purely unsupervised fashion. This allows us to bootstrap the treebanking process and provide better parsers faster, and with less resources. | [
16332736,
10540932,
3143783,
11599080,
1331239,
6300554,
329483,
628455,
2547341,
885002,
1830575,
15984880,
2623723,
1105,
10271125,
3542138,
5060957,
2727455
] | Unsupervised Parse Selection for HPSG
Unsupervised Parse Selection for HPSG
Rebecca Dridan (rdridan@csse.unimelb.edu.au) and Timothy Baldwin
Dept. of Computer Science and Software Engineering, University of Melbourne, Australia
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, MIT, Massachusetts, USA, 9-11 October 2010. © Association for Computational Linguistics
Parser disambiguation with precision grammars generally takes place via statistical ranking of the parse yield of the grammar using a supervised parse selection model. In the standard process, the parse selection model is trained over a hand-disambiguated treebank, meaning that without a significant investment of effort to produce the treebank, parse selection is not possible. Furthermore, as treebanking is generally streamlined with parse selection models, creating the initial treebank without a model requires more resources than subsequent treebanks. In this work, we show that, by taking advantage of the constrained nature of these HPSG grammars, we can learn a discriminative parse selection model from raw text in a purely unsupervised fashion. This allows us to bootstrap the treebanking process and provide better parsers faster, and with fewer resources.
Introduction
Parsing with precision grammars is generally a two-stage process: (1) the full parse yield of the precision grammar is calculated for a given item, often in the form of a packed forest for efficiency (Oepen and Carroll, 2000; Zhang et al., 2007); and (2) the individual analyses in the parse forest are ranked using a statistical model ("parse selection"). In the domain of treebank parsing, the Charniak and Johnson (2005) reranking parser adopts an analogous strategy, except that ranking and pruning are incorporated into the first stage, and the second stage is based on only the top-ranked parses from the first stage. For both styles of parsing, however, parse selection is based on a statistical model learned from a pre-existing treebank associated with the grammar. Our interest in this paper is in completely removing this requirement of parse selection on explicitly treebanked data, ie the development of fully unsupervised parse selection models.
The particular style of precision grammar we experiment with in this paper is HPSG (Pollard and Sag, 1994), in the form of the DELPH-IN suite of grammars (http://www.delph-in.net/). One of the main focuses of the DELPH-IN collaboration effort is multilinguality. To this end, the Grammar Matrix project has been developed which, through a set of questionnaires, allows grammar engineers to quickly produce a core grammar for a language of their choice. Bender (2008) showed that by using and expanding on this core grammar, she was able to produce a broad-coverage precision grammar of Wambaya in a very short amount of time. However, the Grammar Matrix can only help with the first stage of parsing. The statistical model used in the second stage of parsing (ie parse selection) requires a treebank to learn the features, but as we explain in Section 2, the treebanks are created by parsing, preferably with a statistical model. In this work, we look at methods for bootstrapping the production of these statistical models without having an annotated treebank. Since many of the languages that people are building new grammars for are under-resourced, we can't depend on having any external information or NLP tools, and so the methods we examine are purely unsupervised, using nothing more than the grammars themselves and raw text. We find that, not only can we produce models that are suitable for kick-starting the treebanking process, but the accuracy of these models is comparable to parsers trained on gold standard data (Clark and Curran, 2007b; Miyao and Tsujii, 2008), which have been successfully used in applications.
The problem
The current method of training a parse selection model uses the [incr tsdb()] treebanking mechanism (Oepen, 2001) and works well for updating models for mature grammars, although even for these grammars, building a new model for a different domain requires a time-consuming initial treebanking effort. The treebanks used with DELPH-IN grammars are dynamic treebanks (Oepen et al., 2004) created by parsing text and having an annotator select the correct analysis (or discard all of them). The annotation process involves making binary decisions based on so-called parse discriminants (Carter, 1997). Whenever the grammar is changed, the treebank can be quickly updated by re-parsing and re-applying the old annotation decisions. This treebanking process not only produces gold standard trees, but also a set of non-gold trees which provides the negative training data necessary for a discriminative maximum entropy model.
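The discriminant-based annotation described above can be thought of as repeatedly filtering the parse forest with binary decisions. The following Python sketch illustrates that idea in schematic form; representing parses as sets of discriminants is a simplifying assumption, not the actual [incr tsdb()] implementation.

```python
def update_treebank(parses, decisions):
    """Re-apply recorded annotation decisions to a (re-parsed) forest.

    parses    : list of analyses, each represented here as a frozenset of discriminants
    decisions : dict mapping a discriminant to True (must hold) or False (must not hold)
    Returns the analyses consistent with every recorded decision; if exactly one
    survives, the old annotation fully disambiguates the new parse forest.
    """
    surviving = []
    for analysis in parses:
        ok = all(
            (disc in analysis) == keep            # present iff the annotator kept it
            for disc, keep in decisions.items()
        )
        if ok:
            surviving.append(analysis)
    return surviving

# Toy usage: two competing analyses distinguished by one discriminant.
forest = [frozenset({"subj-head", "pp-attach-verb"}),
          frozenset({"subj-head", "pp-attach-noun"})]
gold = update_treebank(forest, {"pp-attach-verb": True})
assert len(gold) == 1
```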
The standard process for creating a parse selection model is:
1. parse the training set, recording up to 500 highest-ranking parses for each sentence;
2. treebank the training set;
3. extract features from the gold and non-gold parses;
4. learn feature weights using the TADM toolkit 1 (Malouf, 2002).
The useful training data from this process is the parses from those sentences for which: more than one parse was found; and at least one parse has been annotated as correct. That is, there need to be both gold and non-gold trees for any sentence to be used in training the discriminative model. There are two issues with this process for new grammars. Firstly, treebanking takes many person-hours, and is hence both time-consuming and expensive. Complicating that is the second issue: N-best parsing requires a statistical model. While it is possible to parse exhaustively with no model, parsing is much slower, since the unpacking of results is time-consuming. Selective unpacking (Zhang et al., 2007) speeds this up a great deal, but requires a parse selection model. Treebanking is also much slower when the parser must be run exhaustively, since there are usually many more analyses to manually discard.
This work hopes to alleviate both problems. By producing a statistical model without requiring human treebanking, we can have a working and efficient parser with less human effort. Even if the top-1 parses this parser produces are not as accurate as those trained on gold standard data, this model can be used to produce the N -best analyses for the treebanker. Since our models are much better than random selection, we can afford to reduce N and still have a reasonable likelihood that the correct parse is in that top N , making the job of the treebanker much faster, and potentially leading to even better parse selection accuracy based on semi-supervised or fully-supervised parse selection.
Data and evaluation
Our ultimate goal is to use these methods for under-resourced languages but, since there are no pre-existing treebanks for these languages, we have no means to measure which method produces the best results. Hence, in this work, we experiment with languages and grammars where we have gold standard data, in order to be able to evaluate the quality of the parse selection models. Since we have gold-standard trained models to compare with, this enables us to fully explore how these unsupervised methods work, and show which methods are worth trying in the more time-consuming and resource-intensive future experiments on other languages. It is worth reinforcing that the gold-standard data is used for evaluation only, except in calculating the supervised parse selection accuracy as an upperbound.
The English Resource Grammar (ERG: Flickinger (2002)) is an HPSG-based grammar of English that has been under development for many person-years. In order to examine the cross-lingual applicability of our methods, we also use Jacy, an HPSG-based grammar of Japanese (Siegel and Bender, 2002). In both cases, we use grammar versions from the "Barcelona" release, from mid-2009.
Training Data
Both of our grammars come with statistical models, and the parsed data and gold standard annotations used to create these models are freely available. As we are trying to simulate a fully unsupervised setup, we didn't want any influence from these earlier models. Hence, in our experiments we used the parsed data from those sentences that received fewer than 500 parses and ignored any ranking, thus annulling the effects of the statistical model. This led to a reduced data set, both in the number of sentences, and in the fact that the more ambiguous sentences were discarded, but it allows clean comparison between different methods, without incorporating external information. The details of our training sets are shown in Table 1, indicating that the sentence lengths are relatively short, and hence the ambiguity (measured as average parses per sentence) is low for both our grammars. The ambiguity figures also suggest that the Japanese grammar is more constrained (less ambiguous) than the English grammar, since there are, on average, more parses per sentence for English, even with a lower average sentence length.
Test Data
The test data sets used throughout our experiments are described in Table 2. The tc-006 data set is from the same Tanaka Corpus (Tanaka, 2001) which was used for the Japanese training data. There is a wider variety of treebanked data available for the English grammar than for the Japanese. We use the jhpstg t data set, which consists of text from Norwegian tourism brochures, from the same LOGON corpus as the English training data (Oepen et al., 2004). In order to have some idea of domain effects, we also use the catb data set, the text of an essay on open-source development. We see here that the sentences are longer, particularly for the English data. Also, since we are not artificially limiting the parse ambiguity by ignoring those with 500 or more parses, the ambiguity is much higher. This ambiguity figure gives some indication of the difficulty of the parse selection task. Again we see that the English sentences are more ambiguous, much more in this case, making the parse selection task difficult. In fact, the English ambiguity figures are an under-estimate, since some of the longer sentences timed out before producing a parse count. This ambiguity can be a function of the sentence length or the language itself, but also of the grammar. A more detailed and informative grammar makes more distinctions, not all of which are relevant for every analysis.
Evaluation
The exact match metric is the most common accuracy metric used in work with the DELPH-IN tool set, and refers to the percentage of sentences for which the top parse matched the gold parse in every way. This is akin to the sentence accuracy that is occasionally reported in the parsing literature, except that it also includes fine-grained syntactico-semantic features that are not often present in other parsing frameworks. Exact match is a useful metric for parse selection evaluation, but it is very blunt-edged, and gives no way of evaluating how close the top parse was to the gold standard. Since these are very detailed analyses, it is possible to get one detail wrong and still have a useful analysis. Hence, in addition to exact match, we also use the EDMNA evaluation defined by Dridan (2009). This is a predicate-argument style evaluation, based on the semantic output of the parser (MRS: Minimal Recursion Semantics (Copestake et al., 2005)). This metric is broadly comparable to the predicate-argument dependencies of CCGBank (Hockenmaier and Steedman, 2007) or of the ENJU grammar (Miyao and Tsujii, 2008), and also somewhat similar to the grammatical relations (GR) of the Briscoe and Carroll (2006) version of DepBank. The EDMNA metric matches triples consisting of predicate names and the argument type that connects them.
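Since EDMNA scoring reduces to comparing sets of (predicate, argument-type, predicate) triples between the top parse and the gold analysis, it can be sketched as a simple set-overlap computation. The triple encoding below is an illustrative assumption; it is not the actual EDM implementation.

```python
def edm_scores(gold_triples, test_triples):
    """Precision, recall and F1 over predicate-argument triples,
    e.g. ("_bark_v", "ARG1", "_dog_n")."""
    gold, test = set(gold_triples), set(test_triples)
    correct = len(gold & test)
    precision = correct / len(test) if test else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example
gold = [("_bark_v", "ARG1", "_dog_n")]
test = [("_bark_v", "ARG1", "_dog_n"), ("_bark_v", "ARG1", "_loud_a")]
print(edm_scores(gold, test))  # (0.5, 1.0, 0.666...)
```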
Initial Experiments
All of our experiments are based on the same basic process: (1) for each sentence in the training data described in Section 3.1, label a subset of analyses as correct and the remainder as incorrect;
(2) train a model using the same features and learner as in the standard process of Section 2; (3) parse the test data using that model; and (4) evaluate the accuracy of the top analyses. The differences lie in how the 'correct' analyses are selected each time. Each of the following sections details a different method for nominating which of the (up to 500) analyses from the training data should be considered pseudo-gold for training the parse selection model.
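In other words, each experiment differs only in the pseudo-gold selection strategy that is plugged into an otherwise fixed train-and-evaluate pipeline. A hedged Python sketch of such a harness is shown below; the function names (train_maxent, parse, evaluate) stand in for the actual DELPH-IN tool chain and are assumptions for illustration.

```python
def run_experiment(training_forests, test_items, select_pseudo_gold,
                   train_maxent, parse, evaluate):
    """Shared pipeline: only `select_pseudo_gold` changes between experiments.

    training_forests  : list of parse forests (one per training sentence)
    select_pseudo_gold: function forest -> list of analyses treated as 'gold'
    """
    examples = []
    for forest in training_forests:
        gold = select_pseudo_gold(forest)
        non_gold = [a for a in forest if a not in gold]
        if gold and non_gold:        # need both classes for discriminative training
            examples.append((gold, non_gold))
    model = train_maxent(examples)   # stand-in for feature extraction + TADM-style training
    top_parses = [parse(item, model) for item in test_items]
    return evaluate(top_parses)      # e.g. exact match and EDM scores
```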
Upperbound and baseline models
As a first step we evaluated each data set using an upperbound and a baseline model. The upperbound model in this case is the model trained with gold standard annotations. These accuracy figures are slightly lower than others found in the literature for this data, since, to allow for comparison, we limited the training data to the sets described in Table 1. By throwing out those sentences with more than 500 parses, we exclude much of the data that is used in the standard model and so our exact match figures are slightly lower than might be expected. For the baseline model, we used random selection to select our gold analyses. For this experiment, we randomly assigned one parse from each sentence in the training data to be correct (and the remainder of analyses as incorrect), and then used that 'gold standard' to train the model. Results for the upperbound and baseline models are shown in Tables 3 and 4. As expected, the results for Japanese are much higher, since the lower ambiguity makes this an easier task. The catb test set results suffer, not only from being longer, more ambiguous sentences, but also because it is completely out of the domain of the training data.
The exact match results from the random baseline are approximately what one might expect, given the respective ambiguity levels in Table 2. The EDM figures are perhaps higher than might be expected given random selection from the entire parse forest. This results from using a precision grammar, with an inbuilt notion of grammaticality, hence constraining the parser to only produce somewhat reasonable parses, and creating a reasonably high baseline for our parse selection experiments.
We also tried a separate baseline, eliminating the parse selection model altogether, and using random selection directly to select the top analysis. The exact match and EDM precision results were slightly lower than using random selection to train a model, which may be due to the learner giving weight to features that are common across the training data, but the differences weren't significant. Recall was significantly lower when using random selection directly, due to the time outs caused by running without a model. For this reason, we use the random selection-based model results as our baseline for the other unsupervised parse selection models, noting that correctly identifying approximately three quarters of the dependencies in the jhpstg t set, and over 80% when using the Japanese grammar, is a fairly high baseline.
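For reference, the random-selection baseline model amounts to picking one analysis per training sentence as pseudo-gold and treating the rest as incorrect before training as usual. A minimal sketch, under the same assumed forest representation as above:

```python
import random

def random_pseudo_gold(forest, rng=random.Random(0)):
    """Baseline: one randomly chosen analysis is 'gold', the rest are non-gold."""
    if len(forest) < 2:
        return [], []   # need competing analyses for discriminative training
    gold = rng.choice(forest)
    return [gold], [a for a in forest if a is not gold]
```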
First attempts
As a first approach to unsupervised parse selection, we looked at two heuristics to designate some number of the analyses as 'gold' for training. Both of these heuristics looked independently at the parses of each sentence, rather than calculating any numbers across the whole training set.
The first method builds on the observation from the random selection-based model baseline experiment that just giving weight to common features could improve parser accuracy. In this case, we looked at the edges of the parsing chart. For each sentence, we counted the number of times an edge was present in an analysis, and used that number (normalised by the total number of times any edge was used) as the edge weight. We then calculated an analysis score by summing the edge weights of all the edges in that analysis, and dividing by the number of edges, to give an average edge weight for an analysis. All analyses that had the best analysis score for a sentence were designated 'gold'. Since it was possible for multiple analyses to have the same score, there could be multiple gold analyses for any one sentence. If all the analyses had the same score, this sentence could not be used as part of the training data. This method has the effect of selecting the parse(s) most like all the others, by some definitions the centroid of the parse forest. This has some relationship to the partial training method described by Clark and Curran (2006), where the most frequent dependencies were used to train a model for the C&C CCG parser. In that case, however, the dependencies were extracted only from analyses that matched the gold standard supertag sequence, rather than the whole parse forest.
The second heuristic we tried is one often used as a baseline method: degree of right (or left) branching. In this instance, we calculated the degree of branching as the number of right branches in a parse divided by the number of left branches (and vice versa for Japanese, a predominantly left-branching language). In the same way as above, we designated all parses with the best branching score as 'gold'. Again, this is not fully discriminatory, and it was common to get multiple gold trees for a given sentence.
Table 5 shows the results for these two methods. All the results show an improvement over the baseline, with all but the F-score for the Edges method on tc-006 being at a level of statistical significance. The only statistically significant difference between the Edges and Branching methods is over the jhpstg t data set. While the improvement over random is encouraging, the results were still uninspiring, and so we moved on to slightly more complex methods, described in the next section.
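Both heuristics amount to scoring every analysis in a forest and keeping the top-scoring one(s) as pseudo-gold. The sketch below shows the average-edge-weight (centroid-like) scorer and a branching-degree scorer under simplified assumptions about how analyses expose their edges and tree structure; it is illustrative rather than the authors' code.

```python
from collections import Counter

def edge_scores(forest):
    """forest: list of analyses, each a list of chart-edge identifiers.
    Score = average normalised frequency of the analysis's edges across the forest."""
    counts = Counter(edge for analysis in forest for edge in analysis)
    total = sum(counts.values())
    return [sum(counts[e] for e in analysis) / (total * len(analysis) or 1)
            for analysis in forest]

def branching_score(tree, right_branching=True):
    """tree: nested tuples, e.g. ("S", ("NP", ...), ("VP", ...)).
    Degree of right (or left) branching = right branches / left branches
    (one plausible reading of the heuristic)."""
    def count(node):
        if not isinstance(node, tuple) or len(node) < 3:
            return 0, 0   # leaf or unary node: no branching decision here
        left, right = node[1], node[2]
        l = int(isinstance(left, tuple) and len(left) >= 3)
        r = int(isinstance(right, tuple) and len(right) >= 3)
        cl, cr = count(left), count(right)
        return l + cl[0] + cr[0], r + cl[1] + cr[1]
    lefts, rights = count(tree)
    num, den = (rights, lefts) if right_branching else (lefts, rights)
    return num / den if den else float(num)

def pseudo_gold(forest, scores):
    """Mark every analysis sharing the best score as 'gold'; unusable if all tie."""
    best = max(scores)
    gold = [a for a, s in zip(forest, scores) if s == best]
    return gold if len(gold) < len(forest) else []
```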
Supertagging Experiments
The term supertags was first used by Bangalore and Joshi (1999) to describe fine-grained part of speech tags which include some structural or dependency information. In that original work, the supertags were LTAG (Schabes and Joshi, 1991) elementary trees, and they were used for the purpose of speeding up parsing by restricting the allowable leaf types. Subsequent work involving supertags has mostly focussed on this efficiency goal, but they can also be used to inform parse selection. Dalrymple (2006) and Blunsom (2007) both look at how discriminatory a tag sequence is in filtering a parse forest. This work has shown that tag sequences can be successfully used to restrict the set of parses produced, but generally are not discriminatory enough to distinguish a single best parse. Toutanova et al. (2002) present a similar exploration but also go on to include probabilities from an HMM model in the parse selection model as features. There has also been some work on using lexical probabilities for domain adaptation of a model (Hara et al., 2007; Rimell and Clark, 2008). In Dridan (2009), tag sequences from a supertagger are used together with other factors to re-rank the top 500 parses from the same parser and English grammar we use in this research, achieving some improvement in the ranking where tagger accuracy is sufficiently high. We use a similar method, one level removed, in that we use the tag sequences to select the 'gold' parse(s) that are then used to train a model, as in the previous sections.
Gold Supertags
In order to test the viability of this method, we first experimented using gold standard tags, extracted from the gold standard parses. Supertags come in many forms, depending on both the grammar formalism and the implementation. For this work, we use HPSG lexical types (lextypes), the native word classes in the grammars. These lextypes encode part of speech and subcategorisation information, as well as some more idiosyncratic features of words, such as restrictions on preposition forms, mass/count distinctions and comparative versus superlative forms of adjectives. As a few examples from the English grammar, v_np_le represents a basic transitive verb, while n_pp_c-of_le represents a count noun that optionally takes a prepositional phrase complement headed by of. The full definition of a lextype consists of a many-featured AVM (attribute value matrix), but the type names have been deliberately chosen to represent the main features of each type. In the Dridan (2009) work, parse ranking showed some improvement when morphological information was added to the tags. Hence, we also look at more fine-grained tags constructed by concatenating appropriate morphological rules onto the lextypes, as in v_np_le:past_verb_orule (i.e., a simple transitive verb with past tense).
We used these tags by extracting the tag sequence from the leaf types of all the parses in the forest, marking as 'gold' any parse that had the same sequence as the gold standard parse and then training the models as before. Table 6 shows the results from parsing with models based on both the basic lextype and the lextype with morphology. The results are promising. They still fall well below training purely on gold standard data (at least for the in-domain sets), since the tag sequences are not fully discriminatory and hence noise can creep in, but accuracy is significantly better than the heuristic methods tried earlier. This suggested that, at least with a reasonably accurate tagger, this was a viable strategy for training a model. With no significant difference between the basic and +morph versions of the tag set, we decided to use the basic lextypes as tags, since a smaller tag set should be easier to tag with. However, we first had to train a tagger, without using any gold standard data.
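A minimal sketch of the gold tag sequence selection just described is given below; the representation of leaves as (lextype, morphological rules) pairs and the tag format are our own illustrative assumptions.

    def leaf_tags(parse, use_morph=False):
        # parse: list of (lextype, morph_rules) pairs, one per leaf;
        # morph_rules is a possibly empty list of rule names.  With use_morph,
        # the rules are concatenated onto the lextype, e.g. 'v_np_le:past_verb_orule'.
        tags = []
        for lextype, morph_rules in parse:
            if use_morph and morph_rules:
                tags.append(lextype + ':' + ':'.join(morph_rules))
            else:
                tags.append(lextype)
        return tags

    def mark_gold_by_tag_sequence(forest, gold_parse, use_morph=False):
        # Mark as 'gold' every analysis whose leaf tag sequence equals that of the
        # gold standard parse; several analyses may share the same sequence.
        gold_tags = leaf_tags(gold_parse, use_morph)
        return [i for i, parse in enumerate(forest)
                if leaf_tags(parse, use_morph) == gold_tags]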
Unsupervised Supertagging
Research into unsupervised part-of-speech tagging with a tag dictionary (sometimes called weakly supervised POS tagging) has been going on for many years (cf. Merialdo (1994), Brill (1995)), but generally using a fairly small tag set. The only work we know of on unsupervised tagging for the more complex supertags is from Baldridge (2008) and, more recently, Ravi et al. (2010a). In this work, the constraining nature of the (CCG) grammar is used to mitigate the problem of having a much more ambiguous tag set. Our method has a similar underlying idea, but the implementation differs both in the way we extract the word-to-tag mappings, and also in how we extract and use the information from the grammar to initialise the tagger model.
We chose to use a simple first-order Hidden Markov Model (HMM) tagger, using the implementation of Dekang Lin, 6 which re-estimates probabilities, given an initial model, using the Baum-Welch variant of the Expectation-Maximisation (EM) algorithm. One possibility for an initial model was to extract the word-to-lextype mappings from the grammar lexicon as Baldridge does, and make all starting probabilities uniform. However, our lexicon maps between lextypes and lemmas, rather than inflected word forms, which is what we would be tagging. That is to say, from the lexicon we could learn that the lemma walk can be tagged as v_pp*_dir_le, but we could not directly extract the fact that therefore walked should also receive that tag. 7 For this reason, we decided it would be simplest to initialise our probability estimates using the output of the parser, feeding in only those tag sequences which are compatible with analyses in the parse forest for that item. This method takes advantage of the fact that, because the grammars are heavily constrained, the parse forest only contains viable tag sequences.
Since parsing without a model is slow, we restricted the training set to those sentences shorter than a specific word length (12 for English and 15 for Japanese, since that was the less ambiguous grammar and hence faster). Table 7 shows how much parsed data this gave us. From this parsed data we extracted tag-to-word and tag-to-tag frequency counts from all parses for all sentences, and used these frequencies to produce the emission and transition probabilities, respectively. The emission probabilities were taken directly from the normalised frequency counts, but for the transition probabilities we allow for all possible transitions, and add one to all counts before normalising. This model we call our initial counts model. The EM trained model is then produced by starting with this initial model and running the Baum-Welch algorithm using raw text sentences from the training corpus.
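The construction of the initial counts model can be sketched roughly as follows. This is our own simplification for illustration; the actual tagging is done with the HMM implementation referenced above, and only the estimation of the starting probabilities is shown.

    from collections import defaultdict

    def initial_counts_model(parse_forests, tagset):
        # parse_forests: one parse forest per training sentence; each parse is a
        # list of (word, lextype) pairs read off the leaves of one analysis.
        emit = defaultdict(lambda: defaultdict(int))
        trans = defaultdict(lambda: defaultdict(int))
        for forest in parse_forests:
            for parse in forest:
                for word, tag in parse:
                    emit[tag][word] += 1
                for (_, prev_tag), (_, tag) in zip(parse, parse[1:]):
                    trans[prev_tag][tag] += 1

        # Emission probabilities come directly from the normalised frequency counts.
        emission = {tag: {word: count / sum(words.values())
                          for word, count in words.items()}
                    for tag, words in emit.items()}

        # Transitions allow every tag pair: add one to all counts, then normalise.
        transition = {}
        for prev_tag in tagset:
            row = {tag: trans[prev_tag][tag] + 1 for tag in tagset}
            total = sum(row.values())
            transition[prev_tag] = {tag: count / total for tag, count in row.items()}
        return emission, transition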
Supertagging-based parse selection models
We used both the initial counts and EM trained models to tag the training data from Table 1 used in the gold tag experiment, and then compared this with the extracted tag sequences. Since we could no longer assume that our tag sequence would be present within the extracted tag sequences, we used the percentage of tokens from a parse whose lextype matched our tagged sequence as the parse score. Again, we marked as 'gold' any parse that had the best parse score for each sentence, and trained a new parse selection model. Table 8 shows the results of parsing with these models. The results are impressive, significantly higher than all our previous unsupervised methods.
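The scoring step can be sketched as follows, assuming the tagger output and the leaf lextypes of each analysis are available as plain sequences; the function names are illustrative.

    def tag_match_score(parse_lextypes, tagged_sequence):
        # Fraction of tokens whose leaf lextype agrees with the tagger output.
        matches = sum(1 for parse_tag, tagger_tag in zip(parse_lextypes, tagged_sequence)
                      if parse_tag == tagger_tag)
        return matches / len(tagged_sequence)

    def pseudo_gold_by_tagger(forest_lextypes, tagged_sequence):
        # forest_lextypes: one lextype sequence per analysis in the parse forest.
        # All analyses sharing the best score are designated 'gold' for training.
        scores = [tag_match_score(seq, tagged_sequence) for seq in forest_lextypes]
        best = max(scores)
        return [i for i, score in enumerate(scores) if score == best]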
Interestingly, we note that there is no significant difference between the initial count and EM trained models for the English data. To explore why this might be so, we looked at the tagger accuracy for both models over the respective training data sets, shown in Table 9. The results are not conclusive. For both languages, the EM trained model is less accurate, though not significantly so for Japanese. However, this insignificant tagger accuracy decrease for Japanese produced a significant increase in parser accuracy, while a more pronounced tagger accuracy decrease had no significant effect on parser accuracy in English. There is much potential for further work in this direction, experimenting with more training data or more estimation iterations, or even looking at different estimators as suggested in Johnson (2007) and Ravi et al. (2010b). There is also the issue of whether tag accuracy is the best measure for indicating potential parse accuracy. The Japanese parsing results are already equivalent to those achieved using gold standard tags. It is possible that parsing accuracy is reasonably insensitive to tagger accuracy, but it is also possible that there is a better metric to look at, such as tag accuracy over frequently confused tags.
Discussion
The results in Table 8 show that, using no human annotated data, we can get exact match results that are almost halfway between our random baseline and our gold-standard-trained upper bound. EDM F-scores of 90% and 83% over in-domain data compare well with dependency-based scores from other parsers, although a direct comparison is very difficult to do (Clark and Curran, 2007a). It still remains to be seen whether this level of accuracy is good enough to be useful. The main aim of this work is to bootstrap the treebanking process for new grammars, but to conclusively show the efficacy of our methods in that situation requires a long-term experiment that we are now starting, based on the results we have here. Another possible use for these methods was alluded to in Section 2: producing a new model for a new domain.
Results at every stage have been much worse for the catb data set, compared to the other English data set, jhpstg_t. While sentence length plays some part, the major reason for this discrepancy was the domain mismatch between the training and test data. One method that has been successfully used for domain adaptation in parsing is self-training (McClosky et al., 2006). In this process, data from the new domain is parsed with the parser trained on the old domain, and then the top analyses of the parsed new domain data are added to the training data, and the parser is re-trained. This is generally considered a semi-supervised method, since the original parser is trained on gold standard data. In our case, we wanted to test whether parsing data from the new domain using our unsupervised parse selection model was accurate enough to still get an improvement using self-training for domain adaptation. It is not immediately clear what one might consider to be the 'domain' of the catb test set, since domain is generally very vaguely defined. In this case, there was a limited amount of text available from other essays by the same author. 8 While the topics of these essays vary, they all relate to the social side of technical communities, and so we used this to represent in-domain data for the catb test set. It is, however, a fairly small amount of data for self-training, being only around 1000 sentences. We added the results of parsing this data to the training set we used to create the initial counts model and again retrained and parsed. Table 10 shows the results. Previous results for the catb data set are given for comparison.
The results show that the completely unsupervised parse selection method produces a top parse that is at least accurate enough to be used in selftraining, providing a cheap means of domain adaptation. In future work, we hope to explore this avenue of research further.
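Schematically, the self-training round described above amounts to the following loop; the function arguments are placeholders for the parsing and model-training steps, not actual API calls.

    def self_train_round(selection_model, in_domain_raw, base_training_data,
                         parse_top1, train_model):
        # parse_top1: parses raw sentences with the current parse selection model
        #             and returns the single best analysis per sentence.
        # train_model: trains a new parse selection model from (pseudo-)gold analyses.
        new_domain_gold = parse_top1(selection_model, in_domain_raw)
        augmented = list(base_training_data) + list(new_domain_gold)
        return train_model(augmented)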
Conclusions and Further Work
Comparing Tables 8 and 4, we can see that for both English and Japanese, we are able to achieve parse selection accuracy well above our baseline of a random selection-based model using only the information available in the grammar and raw text. This was in part because it is possible to extract a reasonable tagging model from uncorrected parse data, due to the constrained nature of these grammars. These models will hopefully allow grammar engineers to more easily build statistical models for new languages, using nothing more than their new grammar and raw text.
Since fully evaluating the potential for building models for new languages is a long-term ongoing experiment, we looked at a more short-term evaluation of our unsupervised parse selection methods: building models for new domains. A preliminary self-training experiment, using our initial counts tagger trained model as the starting point, showed promising results for domain adaptation.
There are plenty of directions for further work arising from these results. The issues surrounding what makes a good tagger for this purpose, and how we can best learn one without gold training data, would be one possibly fruitful avenue for further exploration. Another interesting slant would be to investigate domain effects of the tagger. Previous work has already found that training just a lexical model on a new domain can improve parsing results. Since the optimal tagger 'training' we saw here (for English) was merely to read off frequency counts from parsed data, it would be easy to retrain the tagger on different domains. Alternatively, it would be interesting to see how much difference it makes to train the tagger on one set of data, and use that to tag a model training set from a different domain. Other methods of incorporating the tagger output could also be investigated. Finally, a user study involving a grammar engineer working on a new language would be useful to validate the results we found here and confirm whether they are indeed helpful in bootstrapping a new grammar.
Table 2: Test data, showing the average word length per sentence, and also the ambiguity measured as the average number of parses found per sentence. Note that the ambiguity figures for the English test sets are under-estimates, since some of the longer sentences timed out before giving an analysis count.
Table 3: Accuracy of the gold standard-based parse selection model.

Table 4: Accuracy of the baseline model, trained on randomly selected pseudo-gold analyses.

Test Set    Exact Match    EDM Precision    EDM Recall    EDM F-score
tc-006      17.26          0.779            0.839         0.807
jhpstg_t    12.48          0.720            0.748         0.734
catb         8.30          0.646            0.698         0.671
Table 5: Accuracy for each test set, measured both as the percentage of sentences that exactly matched the gold standard, and F-score over elementary dependencies.
Table 6: Accuracy using gold tag sequence compatibility to select the 'gold' parse(s).
6 Available from http://webdocs.cs.ualberta.ca/~lindek/hmm.htm
7 Morphological processing occurs before lexicon lookup in the PET parser.
Table 7: Training data for the HMM tagger (both the parsed data from which the initial probabilities were derived, and the raw data which was used to estimate the EM trained models).

                    Japanese    English
Parsed Sentences    9760        3770
Average Length      9.63        6.36
Average Parses      80.77       96.29
Raw Sentences       13500       9410
Raw Total Words     146053      151906
Table 8: Accuracy using tag sequences from a HMM tagger to select the 'correct' parse(s). The initial counts model was based on using counts from a parse forest to approximate the emission and transition probabilities. The EM trained model used the Baum-Welch algorithm to estimate the probabilities, starting from the initial counts state.

            Exact Match                   F-score
Test Set    Initial counts  EM trained    Initial counts  EM trained
tc-006      32.85           40.38         0.888           0.898
jhpstg_t    26.29           24.04         0.831           0.827
catb        14.61           14.42         0.782           0.783
Table 9: Tagger accuracy over the training data, using both the initial counts and the EM trained models.
Table 10: Accuracy results over the out-of-domain catb data set, using the initial counts unsupervised model to produce in-domain training data in a self-training set-up. The previous results are shown for easy comparison.

Source of 'Gold' Data         Exact Match    F-score
Random Selection               8.30          0.671
Supertags (initial counts)    14.61          0.782
Gold Standard                 22.29          0.839
Self-training                 15.92          0.791
Any sentences that do not have both gold and non-gold analyses (i.e., had no correct parse, only one parse, or none) are not included in these figures.
The Cathedral and the Bazaar, by Eric Raymond. Available from: http://catb.org/esr/writings/cathedral-bazaar/
The full EDM metric also includes features such as tense and aspect, but this is less comparable to the other metrics mentioned.
All statistical significance tests in these experiments use the computationally-intensive randomisation test described in Yeh (2000), with p < 0.05.
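A paired approximate randomisation test of the kind referred to here can be sketched as follows, assuming per-item scores (e.g. per-sentence exact match) for the two systems being compared; Yeh (2000) describes the exact procedure, so this is only an illustration.

    import random

    def randomisation_test(scores_a, scores_b, trials=10000, seed=1):
        # Two-sided approximate randomisation test on the difference of means.
        rng = random.Random(seed)
        n = len(scores_a)
        observed = abs(sum(scores_a) - sum(scores_b)) / n
        at_least_as_extreme = 0
        for _ in range(trials):
            total_a = total_b = 0.0
            for a, b in zip(scores_a, scores_b):
                if rng.random() < 0.5:   # randomly swap the two systems' scores for this item
                    a, b = b, a
                total_a += a
                total_b += b
            if abs(total_a - total_b) / n >= observed:
                at_least_as_extreme += 1
        return (at_least_as_extreme + 1) / (trials + 1)   # smoothed p-value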
http://www.catb.org/esr/writings/homesteading/
Acknowledgements
This research was supported by Australian Research Council grant no. DP0988242 and Microsoft Research Asia.
References
Jason Baldridge. 2008. Weakly supervised supertagging with grammar-informed initialization. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 57-64, Manchester, UK.
Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237-265.
Emily M. Bender, Dan Flickinger, and Stephan Oepen. 2002. The grammar matrix: An open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. In Proceedings of the Workshop on Grammar Engineering and Evaluation at the 19th International Conference on Computational Linguistics, pages 8-14, Taipei, Taiwan.
Emily M. Bender. 2008. Evaluating a crosslinguistic grammar resource: A case study of Wambaya. In Proceedings of the 46th Annual Meeting of the ACL, pages 977-985, Columbus, USA.
Philip Blunsom. 2007. Structured Classification for Multilingual Natural Language Processing. Ph.D. thesis, Department of Computer Science and Software Engineering, the University of Melbourne.
Eric Brill. 1995. Unsupervised learning of disambiguation rules for part of speech tagging. In Proceedings of the Third Workshop on Very Large Corpora, pages 1-13, Cambridge, USA.
Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalised statistical parser on the PARC DepBank. In Proceedings of the 44th Annual Meeting of the ACL and the 21st International Conference on Computational Linguistics, pages 41-48, Sydney, Australia.
David Carter. 1997. The treebanker: a tool for supervised training of parsed corpora. In Proceedings of a Workshop on Computational Environments for Grammar Development and Linguistic Engineering, pages 9-15, Madrid, Spain.
Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the ACL, pages 173-180, Ann Arbor, USA.
Stephen Clark and James R. Curran. 2006. Partial training for a lexicalized-grammar parser. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL (NAACL), pages 144-151, New York City, USA.
Stephen Clark and James R. Curran. 2007a. Formalism-independent parser evaluation with CCG and DepBank. In Proceedings of the 45th Annual Meeting of the ACL, pages 248-255, Prague, Czech Republic.
Stephen Clark and James R. Curran. 2007b. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.
Ann Copestake, Dan Flickinger, Ivan A. Sag, and Carl Pollard. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3(4):281-332.
Mary Dalrymple. 2006. How much can part-of-speech tagging help parsing? Natural Language Engineering, 12(4):373-389.
Rebecca Dridan. 2009. Using lexical statistics to improve HPSG parsing. Ph.D. thesis, Saarland University.
Dan Flickinger. 2002. On building a more efficient grammar by exploiting types. In Stephan Oepen, Dan Flickinger, Jun'ichi Tsujii, and Hans Uszkoreit, editors, Collaborative Language Engineering, pages 1-17. CSLI Publications, Stanford.
Tadayoshi Hara, Yusuke Miyao, and Jun'ichi Tsujii. 2007. Evaluating impact of re-training a lexical disambiguation model on domain adaptation of an HPSG parser. In Proceedings of the 10th International Conference on Parsing Technology (IWPT 2007), pages 11-22, Prague, Czech Republic.
Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355-396.
Mark Johnson. 2007. Why doesn't EM find good HMM POS-taggers? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 296-305, Prague, Czech Republic.
Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the 6th Conference on Natural Language Learning, Taipei, Taiwan.
David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, pages 152-159, New York City, USA.
Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-171.
Yusuke Miyao and Jun'ichi Tsujii. 2008. Feature forest models for probabilistic HPSG parsing. Computational Linguistics, 34(1):35-80.
Yusuke Miyao, Kenji Sagae, and Jun'ichi Tsujii. 2007. Towards framework-independent evaluation of deep linguistic parsers. In Proceedings of the GEAF 2007 Workshop, Palo Alto, California.
Yusuke Miyao, Rune Saetre, Kenji Sagae, Takuya Matsuzaki, and Jun'ichi Tsujii. 2008. Task-oriented evaluation of syntactic parsers and their representations. In Proceedings of the 46th Annual Meeting of the ACL, pages 46-54, Columbus, USA.
Stephan Oepen and John Carroll. 2000. Ambiguity packing in constraint-based parsing - practical results. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics, pages 162-169, Seattle, USA.
Stephan Oepen, Dan Flickinger, Kristina Toutanova, and Christopher D. Manning. 2004. LinGO Redwoods: A rich and dynamic treebank for HPSG. Journal of Research in Language and Computation, 2(4):575-596.
Stephan Oepen. 2001. [incr tsdb()] - competence and performance laboratory. User manual, Computational Linguistics, Saarland University, Saarbrücken, Germany.
Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago, USA.
Sujith Ravi, Jason Baldridge, and Kevin Knight. 2010a. Minimized models and grammar-informed initialization for supertagging with highly ambiguous lexicons. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 495-503, Uppsala, Sweden.
Sujith Ravi, Ashish Vaswani, Kevin Knight, and David Chiang. 2010b. Fast, greedy model minimization for unsupervised tagging. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 940-948, Beijing, China.
Laura Rimell and Stephen Clark. 2008. Adapting a lexicalized-grammar parser to contrasting domains. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP 2008), pages 475-484, Honolulu, USA.
Yves Schabes and Aravind K. Joshi. 1991. Parsing with lexicalized tree adjoining grammar. In Masaru Tomita, editor, Current Issues in Parsing Technology, chapter 3, pages 25-48. Kluwer.
Melanie Siegel and Emily M. Bender. 2002. Efficient deep processing of Japanese. In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization, Coling 2002 Post-Conference Workshop, Taipei, Taiwan.
Yasuhito Tanaka. 2001. Compilation of a multilingual parallel corpus. In Proceedings of PACLING 2001, pages 265-268, Kitakyushu, Japan.
Kristina Toutanova, Christopher D. Manning, Stuart M. Shieber, Dan Flickinger, and Stephan Oepen. 2002. Parse disambiguation for a rich HPSG grammar. In First Workshop on Treebanks and Linguistic Theories (TLT2002), pages 253-263.
Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics (COLING 2000), pages 947-953, Saarbrücken, Germany.
Yi Zhang, Stephan Oepen, and John Carroll. 2007. Efficiency in unification-based n-best parsing. In Proceedings of the 10th International Conference on Parsing Technologies (IWPT 2007), pages 48-59, Prague, Czech Republic. |
14,327,001 | A Search Task Dataset for German Textual Entailment | We present the first freely available large German dataset for Textual Entailment (TE). Our dataset builds on posts from German online forums concerned with computer problems and models the task of identifying relevant posts for user queries (i.e., descriptions of their computer problems) through TE. We use a sequence of crowdsourcing tasks to create realistic problem descriptions through summarisation and paraphrasing of forum posts. The dataset is represented in RTE-5 Search task style and consists of 172 positive and over 2800 negative pairs. We analyse the properties of the created dataset and evaluate its difficulty by applying two TE algorithms and comparing the results with results on the English RTE-5 Search task. The results show that our dataset is roughly comparable to the RTE-5 data in terms of both difficulty and balancing of positive and negative entailment pairs. Our approach to create task-specific TE datasets can be transferred to other domains and languages. | [
1458690,
13472899,
7647654,
6233479,
1614922,
9714936,
12919101
] | A Search Task Dataset for German Textual Entailment
Britta D Zeller zeller@cl.uni-heidelberg.de
Department of Computational Linguistics
Heidelberg University
Germany
Sebastian Padó pado@cl.uni-heidelberg.de
Department of Computational Linguistics
Heidelberg University
Germany
A Search Task Dataset for German Textual Entailment
We present the first freely available large German dataset for Textual Entailment (TE). Our dataset builds on posts from German online forums concerned with computer problems and models the task of identifying relevant posts for user queries (i.e., descriptions of their computer problems) through TE. We use a sequence of crowdsourcing tasks to create realistic problem descriptions through summarisation and paraphrasing of forum posts. The dataset is represented in RTE-5 Search task style and consists of 172 positive and over 2800 negative pairs. We analyse the properties of the created dataset and evaluate its difficulty by applying two TE algorithms and comparing the results with results on the English RTE-5 Search task. The results show that our dataset is roughly comparable to the RTE-5 data in terms of both difficulty and balancing of positive and negative entailment pairs. Our approach to create task-specific TE datasets can be transferred to other domains and languages.
Introduction
Textual Entailment (TE) is a binary relation between two utterances, a Text T and a Hypothesis H, which holds if "a human reading T would infer that H is most likely true" (Dagan et al., 2005). Example 1 shows a positive entailment (T entails H 1 ) and a negative entailment (T does not entail H 2 ).
(1) T: Yoko Ono unveiled a bronze statue of her late husband, John Lennon, to complete the official renaming of England's Liverpool Airport as Liverpool John Lennon Airport.
H 1 : Yoko Ono is John Lennon's widow.
H 2 : John Lennon renamed Liverpool Airport.
The appeal of Textual Entailment is that it can arguably meet a substantial part of the semantic processing requirements of a range of language processing tasks such as Question Answering (Harabagiu and Hickl, 2006), Information Extraction (Romano et al., 2006), or Summarisation (Harabagiu et al., 2007). Consequently, there is now a research community that works on and improves Textual Entailment technology. In this spirit, the main TE forum, the yearly Recognising Textual Entailment (RTE) Challenge, has created a number of datasets that incorporate the properties of particular tasks, such as Semantic Search in RTE-5 or Novelty Detection in RTE-7 (Bentivogli et al., 2011). At the same time, work on RTE has focused almost exclusively on English. There is at most a handful of studies on Textual Entailment in other languages, notably German and Italian (Wang and Neumann, 2008; Negri et al., 2009; Bos et al., 2009), as well as a study on cross-lingual entailment (Mehdad et al., 2010). 1 Consequently, virtually no TE technology is available for non-English languages. What is more, it is not clear how well existing algorithms for English RTE carry over to other languages, which might show very different types of surface variation from English. The same limitation exists in terms of genre/register. Virtually all existing datasets have been created from "clean" corpora - that is, properly tokenised, grammatical text, notably Wikipedia. Again, the question arises how well TE algorithms would do on noisier genres like transcribed speech or user-generated content. Arguably, it would benefit the community to have a larger variety of datasets at hand for such investigations.
This paper reports our creation and analysis of a German dataset for TE that is derived from social media data, as produced every day on a large scale by non-professional web users. This type of data respects linguistic norms such as spelling and grammar less than traditional textual entailment datasets do (Agichtein et al., 2008), which presents challenges to semantic processing.
We concentrate on a search task on a computer user forum that deals with computer problems: given a problem statement formulated by a user, identify all relevant forum threads that describe this problem. We created queries for a sample of forum threads by crowdsourcing. We asked annotators to summarise the threads and to paraphrase the summaries to achieve high syntactic and lexical variability. The resulting summaries can be understood as queries (problem statements) corresponding to the original posts. The search for relevant posts given a query can be phrased as a TE problem as follows: queries are hypotheses that are entailed by forum posts (texts) T iff the forum post is relevant for the query (Peñas et al., 2008).
Plan of the paper. Section 2 defines the task in more detail and describes the rationale behind our definition of the crowdsourcing tasks. Section 3 provides a detailed analysis of the queries that were produced by crowdsourcing. Section 4 assesses the difficulty of the dataset by modelling it with the RTE system EDITS (Kouylekov and Negri, 2010). Finally we relate our study to prior work and sum up.
2 Creating a German Social Media TE Dataset with Crowdsourcing
Rationale
As mentioned above, the promise of TE lies in its ability to model NLP tasks. One of the best-established of these tasks is search, which has been a part of the RTE challenges since RTE-5 . In this setup, given a query statement and a set of documents, a document is relevant if it entails the query. That is, the documents serve as candidate texts T for a hypothesis H given by the query. We apply this setup to social media texts that discuss computer problems. Our use case is that a user has a particular problem with their machine and wants to retrieve the set of relevant documents from computer problem forums. In terms of the entailment-based search task, the Ts are given by a corpus of German computer forum threads. More specifically, we use the first post of each thread, since an analysis showed that the first post usually contains the problem description. What is missing, however, are plausible queries (i.e., Hs). We create these queries by asking laypersons to summarise posts through Amazon Mechanical Turk (AMT) (Snow et al., 2008). This involves three steps:
Summarisation. Given the first post of a forum thread (T), summarise the content in one sentence (H*).
Paraphrasing. Paraphrase H* into another sentence (H) by changing both syntax and lexical choice.
Validation. Given original post (T) and paraphrased summary (H), assess if H correctly summarises T.
Step 1 maps documents onto potential queries; these queries might however be still very close to the original verbalisation in the document. On the semantic level, we assume that summarisation can lose information, but not create new information; thus, summaries should be entailed by the original texts (Harabagiu et al., 2007).
Step 2 allows that there is an amount of syntactic and lexical variance between T and H that is realistic for a search task. On the semantic level, we assume that paraphrasing preserves information; that is, input and output of this step should generally exhibit a high degree of semantic equivalence. Finally, Step 3 allows us to detect and remove bad queries produced by unmotivated or sloppy turkers. Thus, queries validated by Step 3 will be entailed by the original documents.
Crowdsourcing Details
We sampled 25 first posts of threads from a corpus of German computer self-help forums as Ts, for each of which we generate several Hs. The posts were selected so that their length matches the distribution over lengths for all first posts in the corpus. All 25 posts have a length between 50 and 120 words.

Task 1: Summarisation. In the first step, we asked AMT workers to write a concise summary of a forum post, summarising the central points of the original text in a declarative sentence. We also provide an example text with a summary. Turkers could mark a text as unsummarisable, but had to indicate a reason. The task was conducted by five turkers for each forum post, leading to 25 * 5 = 125 potential summaries. Two posts were discarded as unsummarisable since they referred heavily to another forum post, which left us with 115 summaries. We paid 0.20 USD for each summary. (Total: 23 USD)
Task 2: Paraphrasing. In this task, workers had to reformulate the summaries produced in the first task. They were asked to replace words by appropriate synonyms and to change the sentence structure, while still maintaining the meaning of the original sentence. The workers of Task 2 were not shown the original forum posts, only the summaries. Again, there was the possibility to leave the text unparaphrased, indicating a reason. Each sentence was paraphrased by two turkers, resulting in 115 * 2 = 230 paraphrases.
We decided to discard four of the 230 paraphrases, including their two input sentences (summaries from Task 1). We found that these input sentences were too generic as summaries of their posts to be usable. For example, a post which dealt with various strategies to solve pop-up problems in Firefox was summarised as "Mein Rechner öffnet selbstständig Webseiten [...]." ("My computer opens web pages on its own [...]."). We paid 0.10 USD for each of the 230 paraphrases. (Total: 23 USD)
Task 3: Validation. This task asked workers to judge whether the paraphrased summaries resulting from Task 2 are correct summaries of the problem described in T. 2 Possible answers were (a) perfect summary ("ps"); (b) incomplete summary that is missing a central concept ("is"); (c) no summary ("ns"). We also asked turkers to verify that the paraphrased summaries were complete, grammatical, declarative sentences. Each T/H pair was assessed by 3 turkers who were paid 0.05 USD for each assessment. (Total: 35 USD) Surprisingly, the most frequently chosen category was not "is" (41% of all assessments), but "ps" (43%). About 16% of the paraphrased summaries were judged as "ns". To assess reliability, we computed a confusion matrix. In our three-annotation AMT setup, where annotators are not necessarily constant across sentences, we decided to count the three pairwise annotations (a1-a2, a2-a3, a1-a3) for each sentence. Since the order of the annotators is random, we normalised to the order "ps" < "is" < "ns". Table 1 shows the results. Satisfactorily, the diagonal, corresponding to matching judgements, shows the highest numbers. In total, 49% of the judgement pairs agree. The largest group of disagreements is "ps"/"is"; the number of "is"/"ns" cases is lower by a factor of two, and the number of "ns"/"ps" cases smaller by another factor of 2. We interpret these numbers as an indication that the annotation task is fairly difficult, but that there is in particular a large number of clearly correct cases. We build on this observation below.
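The pairwise agreement counts can be computed along the following lines; the label ordering used for normalisation is the one given in the text, and the data layout is an assumption for illustration.

    from collections import Counter
    from itertools import combinations

    LABEL_ORDER = {"ps": 0, "is": 1, "ns": 2}

    def pairwise_confusions(judgements):
        # judgements: one (label, label, label) triple per T/H pair, e.g. ("ps", "is", "ps").
        # Each item contributes its three annotator pairs, normalised to the order
        # ps < is < ns, since annotator identity varies across items.
        counts = Counter()
        for labels in judgements:
            for first, second in combinations(labels, 2):
                pair = tuple(sorted((first, second), key=LABEL_ORDER.get))
                counts[pair] += 1
        return counts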
Compilation of the Dataset
For each T/H pair, Task 3 provides us with three judgements on an ordinal scale with three categories: perfect summary ("ps"), incomplete summary ("is"), no summary ("ns"). The next question is how to select cases of true entailment and true non-entailment from this dataset.
Positive entailment pairs. As for entailment, we start by discarding all pairs that were tagged as "ns" by at least one rater. The situation is less clear for "is" cases: on one hand, hypotheses can drop information present in the text while preserving entailment; on the other hand, the absence of important information in the summary can indicate difficulties with the original text or the summary. Thus, to keep precision high, we decided to manually check all "is"/"ps" T/H pairs. The left-hand part of Table 2 shows that in fact, the ratio of proper entailments trails off from almost 100% for "ps-ps-ps" to about one third for "is-is-is". In total, we obtained 127 positive entailment pairs in this manner.

Table 2: Association between AMT assessments and final entailment relations

Assessments                     ps-ps-ps  ps-ps-is  ps-is-is  is-is-is  ns-ns-ns  ns-ns-is  ns-is-is
Entailment                      Y         Y         Y         Y         N         N         N
Occurrence                      38        45        50        20        7         11        21
Selected as (Non-)Entailment    37        41        42        7         7         2         1
During the extraction, we noted that one of the 23 forum posts did not yield reliable assessments for any of its generated hypotheses and discarded it.
Negative entailment pairs. Negative entailment pairs come from two sources. First, "ns" T/H pairs are cases where turkers missed the semantic core of the original text. These cases might be particularly informative non-entailment pairs because they are near the decision boundary. For example, one author asks whether a virus can settle down on the motherboard. The corresponding generated hypothesis turned the question into a fact, stating that "My BIOS has been infected by a virus.". Again, we checked all pairs with at least one "ns" judgement by hand. As the right-hand side of Table 2 shows, we find the same pattern as for positive pairs: perfect non-entailment for instances with perfect agreement on "ns", and lower non-entailment ratio for increasing "is" ratio. Rejected pairs are e.g. very generic and fuzzy summaries or refer only to a minor aspect of the problem described in the forum. Unfortunately, this strategy only results in 10 negative entailment T/H pairs. The second source of negative pairs are combinations of verified Hs with "other" Ts, that is, Ts from which they were not created. In fact, we can pair each of the 137 validated distinct Hs with all other Ts, resulting in 21 * 137 = 2877 additional non-entailment T/H pairs. However, since the domain of computer problems is relatively narrow, a few post topics are so close to each other that generated hypotheses are entailed by multiple texts. While this effect is usually ignored in machine learning (Bergsma et al., 2008), our goal is a clean dataset. Therefore, we manually checked all cross-pairs with similar topics (e.g. virus attacks) for positive entailment relations. Indeed, we found hypotheses which were general enough to match other texts. We removed 45 such pairs from the negative entailment pairs and added them to the set of positive pairs.
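The construction of the additional negative pairs can be sketched as follows; the override set stands in for the hand-checked cross-pairs that turned out to be entailed after all, and all names are illustrative.

    def cross_pair_negatives(hypotheses_by_text, entailed_overrides=frozenset()):
        # hypotheses_by_text: maps each text id to the validated hypotheses generated
        # from (and entailed by) that text.  Every hypothesis is paired with every
        # *other* text as a candidate negative pair; pairs found to be entailed during
        # the manual check are moved to the positive set instead.
        negatives, extra_positives = [], []
        for source_text, hypotheses in hypotheses_by_text.items():
            for hypothesis in hypotheses:
                for other_text in hypotheses_by_text:
                    if other_text == source_text:
                        continue
                    pair = (other_text, hypothesis)
                    if pair in entailed_overrides:
                        extra_positives.append(pair)
                    else:
                        negatives.append(pair)
        return negatives, extra_positives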
In total, we obtained 172 positive and 2842 negative entailment T/H pairs for 22 Ts and 137 distinct Hs. At a cost of 82 USD, this corresponds to an average of 50 cents for each explicitly generated positive pair, but just 3 cents for each T/H pair in the complete dataset. From the 226 AMT-generated pairs, we use 56% as positive pairs and 4% as negative pairs. We discard the remaining, inconsistently judged, 40%.
Discussion
The three tasks vary in their nature and difficulty. As mentioned above, we paid more for Task 1 than for Task 2, since it involved authoring a text. The amount of time needed for the tasks confirms this assumption: Task 1 took about 80 seconds per annotation, Task 2 only about 60 seconds. In terms of difficulty, Task 1 seems to be the easier one, though: We removed only a small number of post summaries from Task 1, but had to disregard a number of paraphrases from Task 2 (cf. Section 2.3). We believe that two factors contribute to this observation: (a), it is easier to summarise a complete text than to paraphrase a sentence out of context; (b), we deliberately asked workers in Task 2 to introduce as much variance as possible, which can lead to somewhat unnatural statements. Finally, the assessment Task 3 is the fastest one, requiring only about 30 seconds per annotation. Our results show that both with regard to positive and negative entailment, three consistent judgements are sufficient for an almost perfect guarantee of the respective relation (cf. Table 2), but only a comparatively small sample of our data fall into these categories (around 15% for positive and 3% for negative entailment, respectively). Creators of a dataset therefore have the option of either making up for this loss by starting with more initial data, which leads to a higher overall cost, or performing a subsequent expert-driven manual pass over the inconsistent candidates, as we did.
Analysis of Created Data
This Section illustrates the properties and problems of each step.
Task 1: Summarisation
Linguistic properties. Table 3 illustrates some of the phenomena appearing in the summarisation task, which seem to be largely specific to the particular genre (web forum texts) that we consider, while appearing less frequently in standard training data like newswire. Example 1/1 shows a typical "telegram style" summary which omits determiners and copula; Example 25/1 shows that not all summaries are even grammatical (underlined word). A comparison of examples 1/2 and 1/3 shows that the summaries either retain the personal point of view typically used by the original posts (using first-person personal or possessive pronouns) or employ generic, impersonal formulations such as (pseudo-)passives. In one example, the AMT worker even cited the author of the original post using the introduction "Micha fragt, ob [. . . ]" ("Micha asks whether [. . . ]"). Similarly, 12 summaries use interrogative form (Example 25/3) like the original posts, even though we explicitly asked the turkers to generate declarative sentences. Finally, example 20/2 illustrates typical writing errors, including the omission of punctuation and the defiance of German capitalisation rules. It is notable that turkers used this style, which is typically used for writing forum posts, even in the rather more formal AMT task environment. It occurs more frequently for original posts with the same style. Arguably, the turkers perceived this as the "correct" manner to summarise such posts, as our guidelines did not address this question.

Content properties. Most summaries reproduced the original content correctly. The turkers apparently concentrated more on the content, i.e. writing a good summary, than on formal task details, resulting, e.g., in interrogative formulations. This is not untypical for crowdsourcing tasks (Chen and Dolan, 2010). Nonetheless, reproducing the content correctly was not trivial: some forum posts are rambling or vague and difficult to summarise. Summaries of such posts often either (a) do not cover the whole content or (b) are incorrect. Cases (a) lead to assessments of medium reliability in Task 3 ("H is an incomplete, but valid summary of T"). Cases (b) lead to negative entailment cases.
As intended, the results of Task 1 are significantly shorter than the original texts, with an average length of 11 words (min 3, max 39 words). Often, they use more general wording, e.g. "Der Prozessor läuft schnell heiß." ("The processor runs hot quickly") for a description containing a concrete temperature.
Task 2: Paraphrasing
Linguistic properties. In the paraphrasing task, workers were asked to change both syntax and word choice whenever possible. Although texts can contain many content words that are hard to paraphrase (e.g. basic level terms such as table), the problem is alleviated in the software domain where abbreviations and English loanwords that can be substituted easily are frequent (examples 2/1/1, 2/1/2, 10/2/1 in Table 4). The most frequent change was the replacement of verbs by synonyms and nouns by synonyms or hypernyms, as in examples 3/3/1 and 9/5/2. Some turkers modified both syntax and lexemes to vary support verb constructions (5/4/2).
While these phenomena are all "generic" paraphrasing devices that have been observed in previous studies on English and newswire text (Lin and Pantel, 2002;Bannard and Callison-Burch, 2005), we find two more classes of paraphrasing patterns that are specific to German and the social media domain, respectively. Prominent among German-specific changes are the large number of nominalisations (8/3/2) as well as active/passive switches (13/5/2). Next to the regular passive construction with the auxiliary werden, we often see "pseudo-passives" which use lassen combined with the reflexivised verb (4/4/2).
As for domain-specific patterns, we frequently observe the alternation of interrogative and declarative sentences (17/3/2) noted before which is caused by the tendency of the original posts to formulate problems as questions. Again, personalised and generic expressions alternate (4/4/2), which typically involves rephrasing first-person statements as third-person or impersonal ones -often though (pseudo-)passives.
The quality is generally higher in Task 2 than it is in Task 1. Although we asked the turkers to generate paraphrases by changing both syntax and lexis, they frequently modified just the syntax. However, this is not critical, since the summaries already exhibit varied word choice, so that there is enough variance between T and the corresponding true entailment Hs to avoid oversimplifying the TE task.
Content properties. Recall that no context was given in the paraphrasing task to avoid influencing the turkers with regard to vocabulary and syntax. In most cases, context was also not necessary. However, this also meant that some semantic errors occurred as a result of ambiguous formulations in summaries that were propagated into the paraphrase. For example, the author of one forum post explains that a BIOS update has failed and that he is afraid of restarting the computer. The corresponding summary "Fehlermeldung nach Bios-Update, Rechner trotzdem neustarten?" ("Error message after Bios update, restart computer anyway?") is paraphrased with "Ich erhalte nach dem Update meines BIOS eine Fehlermeldung, soll ich den PC neu starten?" ("I get an error message after the BIOS update, should I restart the PC?"), which has rather the meaning of restarting the PC in order to overcome the problem. Consequently, the assessment in Task 3 was controversial (ps-is-ns, see Section 2.3) and lead to a rejection of the T/H pair. In the best case, such errors can also lead to clear rejections (ns-ns-ns).
A specific problem that we observed was the lack of domain knowledge by turkers. For example, the summary "Anschluss von einem zusätzlichem SATA-Gerät . . . " ("Connection of an additional SATA device . . . ") becomes "ich möchte Hardware von SATA . . . anschließen" ("I want to connect hardware (made) by SATA . . . "). This is an incorrect paraphrase: SATA is not a hardware manufacturer, but a type of interface. This problem extended to Task 3, where assessments were controversial (ps-is-ns).
Finally, some turkers, contrary to instructions, produced summaries of the summaries. These texts became very short and were often marked as "is" (valid but incomplete) in Task 3. We observed that it was mostly turkers who already participated in Task 1 who acted in this manner. We feel that there is a tension regarding re-employing workers who participated in previous tasks: quality may profit from their previous training, but suffer from their bias to approach the second task with the same mindset as the first one.
Task 3: Validation
The output of the validation task allows us to correlate the quality ratings of T/H pairs with their linguistic properties. We observe a broad overlap between assessments of the type "is" and hypotheses which are very short or whose content is very general, e.g. due to the usage of hypernyms. Accordingly, T/H pairs which are marked consistently as "ps" concern either hypotheses which are relatively comprehensive, or texts which describe rather simple situations. At the opposite end of the scale, T/H pairs with three "ns" assessments arise from propagated errors. T/H pairs marked with all three categories, ps-is-ns, make up only about 3%. These cases frequently refer to posts with complex queries, such as users describing a sequence of problems. Such posts are hard to summarise and to evaluate, but are also unlikely search queries. The average length of the Hs selected through Task 3 is 11.4 words (min 5, max 22).
In sum, we interpret the three-stage crowdsourcing task as a success: The first two tasks generate a broad variation of potentially true T/H pairs, while the third task enables a filtering of dubious pairs. Although the linguistic quality of the obtained hypotheses shows clear imperfections, the quality of the original texts is equally low: the resulting T/H pairs reflect particularities of the social media domain. Example 2 shows (part of) a T/H pair; note the ungrammaticality in both T and H.
(2) T: [...] Ich habe heute alles zusammengebaut, aber aheb folgende probleme... 1.Der PC brauch ca 5-10min zum booten. 2.Nach dem Starten hängt der pc sich ständig auf. [...] 4.beim booten wird "Pri Master Hard Disk : S.M.A.R.T. Status BAD, Backup and Replace Press F1 to Resume." wenn ich denn F1 drücke fährt der pc weiter hoch. MFG
([...] I have assembled everything today, but haev the following problems: 1.The PC take ca 5-10min to boot. 2.After starting the pc locks up constantly. [...] 4. while booting is "Pri Master Hard Disk : S.M.A.R.T. Status BAD, Backup and Replace Press F1 to Resume." than when I press F1 the pc continues booting. RSVP)
H: Meinen Computer benötig für das Hochfahren sehr lange und zeigt mir dann eine Meldung für einen Fehler an.
(Mine computer need a long time for booting and then shows me a message for an error.)
Modelling the Dataset with Textual Entailment Systems
In order to evaluate the difficulty of the dataset that we have created, we performed experiments with two different TE engines. We split our dataset into a development and a test set. Both sets are identical in terms of size (1507 T/H pairs) and amount of positive and negative pairs (86 and 1421 pairs, respectively).
The first system is EDITS (Negri et al., 2009), version 3.0. 3 EDITS uses string edit distance as a proxy of semantic similarity between T and H and classifies pairs as entailing if their normalised edit distance is below a threshold θ which can be optimised on a development set. While additional entailment knowledge can be included, no such knowledge is currently available for German and we use the default weights. The second system is a simple word overlap strategy which approximates semantic similarity through the fraction of H words that also occur in T (Monz and de Rijke, 2001). Again, pairs are classified as entailing if this fraction is larger than a threshold θ.
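For concreteness, a minimal sketch of the two baselines is given below. This is not the EDITS implementation itself; the function names and the purely token-level treatment are our own. Both systems map a lemmatised, stop-word-filtered (T, H) token pair to a similarity score and decide entailment by comparing it to a threshold θ.

def word_overlap(t_tokens, h_tokens):
    """Fraction of H tokens that also occur in T (word overlap baseline)."""
    if not h_tokens:
        return 0.0
    t_set = set(t_tokens)
    return sum(1 for w in h_tokens if w in t_set) / len(h_tokens)

def edit_distance(t_tokens, h_tokens):
    """Token-level Levenshtein distance between T and H."""
    m, n = len(t_tokens), len(h_tokens)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if t_tokens[i - 1] == h_tokens[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n]

def edit_similarity(t_tokens, h_tokens):
    """Normalised edit distance turned into a similarity in [0, 1]."""
    denom = max(len(t_tokens), len(h_tokens)) or 1
    return 1.0 - edit_distance(t_tokens, h_tokens) / denom

def entails(t_tokens, h_tokens, scorer, theta):
    """Label a pair as positive entailment if its score reaches the threshold."""
    return scorer(t_tokens, h_tokens) >= theta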
We preprocessed the data by lemmatising it with TreeTagger (Schmid, 1994) and removing stop words, employing a German stop word list which includes keywords from the social media domain. 4 The thresholds θ for both systems were set by optimising the F1 score for positive entailment on the train set.
Table 5 shows the results for the word overlap model and EDITS. The very high accuracy values merely reflect the predominance of the negative entailment class; we therefore concentrate on the F-score statistics for positive entailment. We find that edit distance outperforms word overlap, with F1 scores of .44 and .38, respectively. Since the main difference between the two approaches is that edit distance is sensitive to word order, order information appears to be indeed informative: reorderings between T and H do not incur costs in the word overlap model, but they do in the edit distance model. Example 3 shows a T/H pair with high word overlap, but negative entailment. It is correctly classified by EDITS, but misclassified by the word overlap model. Example 4 shows the opposite case, namely a positive T/H entailment pair that hardly shares any vocabulary since many T details are omitted in H. Both systems are unable to correctly label this instance.
The most direct point of comparison for our dataset is the RTE-5 search pilot (Bentivogli et al., 2009). The two main differences are language (English vs. German) and genre (newswire vs. social media). We found our dataset to be slightly easier to model. Part of the reason is the somewhat more balanced positive/negative distribution in our dataset: a random baseline achieves an F-score of 8.4% on RTE-5 and 10.4% on our data. However, the improvement of informed models is also somewhat higher: EDITS without additional knowledge resources achieves 32.6% F-score on RTE-5 (+24% over the baseline) (Bentivogli et al., 2009) and 44% F-score on our dataset (+34% over the baseline). We believe that this is due to the greater coherence of our dataset: it deals with just one topic, while the RTE-5 dataset covers ten topics. We also observe that the Hs in RTE-5 are shorter than ours (avg. length 8.75 words vs. 11.4), which presumably leads to worse sparsity problems. Nevertheless, the results on the two datasets for baselines and simple methods are still remarkably similar.
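The threshold optimisation mentioned above can be sketched as a simple grid search over candidate values of θ; this is only a sketch, and scorer stands for one of the scoring functions sketched earlier.

def f1_positive(gold, pred):
    """F1 for the positive entailment class, given boolean gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def tune_threshold(dev_pairs, gold, scorer, steps=100):
    """Pick the theta that maximises F1 for positive entailment on the development pairs."""
    scores = [scorer(t, h) for t, h in dev_pairs]
    best_theta, best_f1 = 0.0, -1.0
    for i in range(steps + 1):
        theta = i / steps
        pred = [s >= theta for s in scores]
        f1 = f1_positive(gold, pred)
        if f1 > best_f1:
            best_theta, best_f1 = theta, f1
    return best_theta, best_f1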
Related work
In the Textual Entailment community, particularly in studies that create datasets and resources, there is a strong focus on the English language (Androutsopoulos and Malakasiotis, 2010). All RTE datasets, the most widely used experimental materials, are in English. A few datasets have been created for other languages. To our knowledge, only an Italian one (Bos et al., 2009) and a Spanish one (Peñas et al., 2006) are freely available. Datasets for other languages have been created in the context of the CLEF QA Answer Validation and Machine Reading tasks, but do not appear to be available to the general community. We have employed crowdsourcing, a technique whose practice has expanded greatly over the last years (Snow et al., 2008). It has rarely been used for Textual Entailment, though, since high-quality crowdsourcing relies on the ability to formulate the task in layman's terms, which is challenging for entailment. We avoided this problem by asking turkers to provide summaries and paraphrases in two separate steps. Wang and Callison-Burch (2010) also use crowdsourcing to collect hypotheses for TE. In contrast to us, they do not ask turkers for full summaries and paraphrases, but have them extract facts from texts and create counter-facts from facts by inserting negations, using antonyms, or changing adverbs.
Finally, Bernhard and Gurevych (2008) present a study on data that is similar to ours. Their goal is the automatic collection of paraphrases for English questions on social Q&A sites. Employing similar methods to us (e.g., word overlap and edit distance), they achieve very good results. Their task is simpler in that it concentrates on paraphrase relations among statements rather than summarisation relations between texts and statements.
This paper makes two contributions. The first one is a freely available dataset 5 for Textual Entailment tasks which covers (a) a new language, namely German; and (b) a new genre, namely web forum text. The dataset models a search task on web forums, with short queries as hypotheses and forum posts as text candidates. Being constructed from real social media data, our data is noisier than existing RTE datasets and shows novel linguistic paraphrasing phenomena such as switches between interrogative and declarative sentences. We consider our dataset to be a test bed for TE algorithms that have to deal with spontaneous and sloppy language, e.g. for other social media areas or on transcribed spoken language.
Our second contribution is a crowdsourcing-based procedure to create the dataset which can be applied to other languages and data sources in order to create comparable datasets quickly and at modest expense. The three-step setup that we introduce consists of a summarisation step, a paraphrasing step, and a validation step. This setup guarantees syntactic and lexical variation and makes it possible to detect and remove the sizable portion of the data that consists of queries that are either invalid or hard to judge. The number of summaries and paraphrases can be chosen according to the requirements of the dataset; as for validation, we found that three judgments were sufficient for a final categorisation. An alternative to our rather artificial way to collect data is presented in (Baldwin et al., 2010), employing web forum structure.
We have presented an experiment with two basic TE algorithms which establishes that the difficulty of the dataset is roughly comparable with the RTE-5 Search task testset. However, both algorithms were essentially knowledge-free, and we will conduct experiments with more informed algorithms. We expect the inclusion of lexical entailment knowledge (such as hyponymy relations) to provide a clear benefit. However, the top systems on the RTE-5 Search task, where the best result was 46% F-score (+13% F-score over edit distance), crucially employed lexico-syntactic paraphrase knowledge à la DIRT (Lin and Pantel, 2002). It remains to be seen how such syntax-based TE algorithms do on our dataset, where we expect parsing results to be substantially noisier than for traditional RTE datasets.
Table 1: Confusion matrix for pairs of AMT validation annotations
Table 3: Linguistic phenomena in summarisation task (Post/Summary ID; example, German/English; phenomenon):
1/1: Rechner mit Virus infiziert. - Computer infected with virus. (Incomplete sentence)
1/2: Mein Rechner ist von einem Virus befallen. - My computer is infected by a virus. (Personal point of view, short summary)
1/3: Der Virtumonde-Virus lässt sich nicht entfernen. - The Virtumonde virus cannot be removed. (Pseudo-passive)
25/1: Ich möchte, dass mein Board dauerhaft auf GB LAN schalten. - I want that my board permanently to switch to GB LAN. (Ungrammatical sentence)
25/3: Wie lässt sich bei einer GB-Netzwerkkarte ein Fallback auf 100mbit verhindern? - How can a fallback to 100mbit in a GB network adapter be prevented? (Question)
20/2: Heute ist schon 4 mal beim aufrufen des p5q deluxe-sammelthreads mein trendmicro virenscanner angeschlagen er meldet den Virus: TSPY ONLINEG.FXG was kann ich dagegen machen? - Today while calling the p5q deluxe collective threads my trendmicro virus scanner has given mouth already 4 times it reports the virus: TSPY ONLINEG.FXG what can i do against this? (Long summary, writing errors)
Table 4: Linguistic phenomena in paraphrasing task
Table 5: Test set results on social media dataset for two simple Textual Entailment algorithms (P, R, F1 for positive entailment):
                        Accuracy    P     R     F1
Word overlap              .93      .38   .38   .38
EDITS (edit distance)     .95      .63   .34   .44
(3) T: Hallo PC-Freunde, ich habe letzte Woche XP neu installiert. Heute ist mir aufgefallen das die CPU-Auslastung immer zwischen 60% und 80% liegt obwohl im Taskmanager der Lerlaufprozess mit 90-99% angezeigt wird. Kann es vieleicht sein das im Taskmanager nicht alle Programme erfasst werden(währe mir neu) oder könnte vieleicht ein Virus, Trojaner sein der diese ununterbrochen hohe Auslastung bewirkt? Vobei mein Antivirusprogramm (Awast) keinen Virus oder ähnliches erkennt. [...]
([...] Today I realised that the CPU load is always between 60% and 80% although the idle task is always displayed with 90-99% in the task manager. Is it mabe possible thet not all programs are captured in the task manager(whould be new to me) or could mabe be a virus, trojan horse which causes this steadily high load? Hovever my anti virus program (Awast) does not recognise a virus or the like. [...])
H: Die Prozessorauslastung ist bei 100% und Antivirenprogramme funktionieren nicht.
(The processor load is at 100% and anti virus programs do not work.)
(4) T: Es gibt bei mir zwei Probleme bei der Ausführung des Tools unter Vista. 1) Vista blockiert die Ausführung mit dem Kommentar "...Sie verfügen eventuell nicht über ausreichende Berechtigungen..." und 2) F-Secure gibt eine Malware-Warnung aus "W32/Suspicious U.gen" Virus. Ist die Viruswarnung nur ein Fehlalarm?
(I have two problems with the execution of the tool under Vista. 1) Vista blocks the execution with the comment "...You might not have sufficient authorisation..." and 2) F-Secure gives a malware warning "W32/Suspicious U.gen" Virus. Is the virus warning just a false alarm?)
H: Wegen fehlenden Systemrechten des Anwenders in Windows kann die Datei nicht gestartet werden.
(The file cannot be started due to missing system rights by the user in Windows.)
There is also a translation of the RTE-3 dataset into German, but it is so far unpublished, although available from http://www.dfki.de/~neumann/resources.html
We used the term "summary" to describe the concept to our lay taggers, who are unfamiliar with the term "entailment".
Downloadable from http://sourceforge.net/projects/edits/files/
http://solariz.de/649/deutsche-stopwords.htm
Can be downloaded from http://www.excitement-project.eu/.
Acknowledgments. This work was supported by the EC project EXCITEMENT (FP7 ICT-287923).
Agichtein, E., C. Castillo, D. Donato, A. Gionis, and G. Mishne (2008). Finding high-quality content in social media. In Proceedings of WSDM, Stanford, CA, pp. 183-194.
Androutsopoulos, I. and P. Malakasiotis (2010). A survey of paraphrasing and textual entailment methods. Journal of Artificial Intelligence Research 38, 135-187.
Baldwin, T., D. Martinez, R. B. Penman, S. N. Kim, M. Lui, L. Wang, and A. MacKinlay (2010). Intelligent linux information access by data mining: the ILIAD project. In Proceedings of the NAACL Workshop on Computational Linguistics in a World of Social Media, Los Angeles, CA, pp. 15-16.
Bannard, C. and C. Callison-Burch (2005). Paraphrasing with bilingual parallel corpora. In Proceedings of ACL, Ann Arbor, MI, pp. 597-604.
Bentivogli, L., P. Clark, I. Dagan, H. Trang Dang, and D. Giampiccolo (2011). The seventh PASCAL recognising textual entailment challenge. In Proceedings of TAC, Gaithersburg, MD.
Bentivogli, L., I. Dagan, H. T. Dang, D. Giampiccolo, M. L. Leggio, and B. Magnini (2009). Considering discourse references in textual entailment annotation. In Proceedings of the 5th International Conference on Generative Approaches to the Lexicon, Pisa, Italy.
Bentivogli, L., B. Magnini, I. Dagan, H. Trang Dang, and D. Giampiccolo (2009). The fifth PASCAL recognising textual entailment challenge. In Proceedings of TAC, Gaithersburg, MD.
Bergsma, S., D. Lin, and R. Goebel (2008). Discriminative learning of selectional preference from unlabeled text. In Proceedings of EMNLP, Honolulu, Hawaii, pp. 59-68.
Bernhard, D. and I. Gurevych (2008). Answering learners' questions by retrieving question paraphrases from social Q&A sites. In Proceedings of the ACL Workshop on Innovative Use of NLP for Building Educational Applications, Columbus, Ohio, pp. 44-52.
Bos, J., M. Pennacchiotti, and F. M. Zanzotto (2009). Textual entailment at EVALITA 2009. In Proceedings of the 11th Conference of the Italian Association for Artificial Intelligence, Reggio Emilia.
Chen, D. L. and W. B. Dolan (2010). Building a persistent workforce on Mechanical Turk for multilingual data collection. In Proceedings of the AAAI Human Computation Workshop, San Francisco, CA.
Dagan, I., O. Glickman, and B. Magnini (2005). The PASCAL Recognising Textual Entailment Challenge. In Proceedings of the First PASCAL Challenges Workshop on Recognising Textual Entailment, Southampton, UK.
Harabagiu, S. and A. Hickl (2006). Methods for using textual entailment in open-domain question answering. In Proceedings of COLING/ACL, Sydney, Australia, pp. 905-912.
Harabagiu, S., A. Hickl, and F. Lacatusu (2007). Satisfying information needs with multi-document summaries. Information Processing and Management 43(6), 1619-1642.
Kouylekov, M. and M. Negri (2010). An open-source package for recognizing textual entailment. In Proceedings of the ACL 2010 System Demonstrations, Uppsala, Sweden, pp. 42-47.
Lin, D. and P. Pantel (2002). Discovery of inference rules for question answering. Journal of Natural Language Engineering 7(4), 343-360.
Mehdad, Y., M. Negri, and M. Federico (2010). Towards cross-lingual textual entailment. In Proceedings of HLT/NAACL, Los Angeles, CA, pp. 321-324.
Monz, C. and M. de Rijke (2001). Light-weight entailment checking for computational semantics. In Proceedings of ICoS, Siena, Italy, pp. 59-72.
Negri, M., M. Kouylekov, B. Magnini, Y. Mehdad, and E. Cabrio (2009). Towards extensible textual entailment engines: the EDITS package. In Proceedings of IAAI, Reggio Emilia, Italy.
Peñas, A., Á. Rodrigo, V. Sama, and F. Verdejo (2008). Testing the reasoning for question answering validation. Journal of Logic and Computation 18, 459-474.
Peñas, A., Á. Rodrigo, and F. Verdejo (2006). SPARTE: a test suite for recognising textual entailment in Spanish. In A. Gelbukh (Ed.), Proceedings of CICLing, Lecture Notes in Computer Science.
Romano, L., M. Kouylekov, I. Szpektor, I. Dagan, and A. Lavelli (2006). Investigating a generic paraphrase-based approach for relation extraction. In Proceedings of EACL, Trento, Italy, pp. 401-408.
Schmid, H. (1994). Probabilistic part-of-speech tagging using decision trees. In Proceedings of ICNLP, Manchester, UK.
Snow, R., B. O'Connor, D. Jurafsky, and A. Ng (2008). Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP, Honolulu, HI, pp. 254-263.
Wang, R. and C. Callison-Burch (2010). Cheap facts and counter-facts. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pp. 163-167.
Wang, R. and G. Neumann (2008). Information synthesis for answer validation. In Proceedings of CLEF 2008, Aarhus, Denmark. |
7,684,835 | Annotating Semantic Relations Combining Facts and Opinions | As part of the STATEMENT MAP project, we are constructing a Japanese corpus annotated with the semantic relations bridging facts and opinions that are necessary for online information credibility evaluation. In this paper, we identify the semantic relations essential to this task and discuss how to efficiently collect valid examples from Web documents by splitting complex sentences into fundamental units of meaning called "statements" and annotating relations at the statement level. We present a statement annotation scheme and examine its reliability by annotating around 1,500 pairs of statements. We are preparing the corpus for release this winter. | [
10103200,
15072379,
16639476
] | Annotating Semantic Relations Combining Facts and Opinions
August 2009
Koji Murakami kmurakami@is.naist.jp
Shouko Masuda shouko@is.naist.jp
Suguru Matsuyoshi matuyosi@is.naist.jp
Eric Nichols eric-n@is.naist.jp
Kentaro Inui inui@is.naist.jp
Yuji Matsumoto
†Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara 630-0192, JAPAN
‡Osaka Prefecture University, 1-1 Gakuen, Naka-ku, Sakai, Osaka 599-8531, JAPAN
Annotating Semantic Relations Combining Facts and Opinions
ACL and AFNLP
The Third Linguistic Annotation Workshop, Suntec, Singapore, August 2009
As part of the STATEMENT MAP project, we are constructing a Japanese corpus annotated with the semantic relations bridging facts and opinions that are necessary for online information credibility evaluation. In this paper, we identify the semantic relations essential to this task and discuss how to efficiently collect valid examples from Web documents by splitting complex sentences into fundamental units of meaning called "statements" and annotating relations at the statement level. We present a statement annotation scheme and examine its reliability by annotating around 1,500 pairs of statements. We are preparing the corpus for release this winter.
Introduction
The goal of the STATEMENT MAP project is to assist internet users with evaluating the credibility of online information by presenting them with a comprehensive survey of opinions on a topic and showing how they relate to each other. However, because real text on the Web is often complex in nature, we target a simpler and more fundamental unit of meaning which we call the "statement." To summarize opinions for the statement map users, we first convert all sentences into statements and then organize them into groups of agreeing and conflicting opinions that show the logical support for each group.
For example, a user who is concerned about potential connections between vaccines and autism would be presented with a visualization of the opinions for and against such a connection together with the evidence supporting each view as shown in Figure 1.
When the concerned user in our example looks at this STATEMENT MAP, he or she will see that some opinions support the query "Do vaccines cause autism?" while other opinions do not, but it will also show what support there is for each of these viewpoints. So, STATEMENT MAP can help user come to an informed conclusion.
Semantic Relations between Statements
Recognizing Semantic Relations
To generate STATEMENT MAPs, we need to analyze a lot of online information retrieved on a given topic, and STATEMENT MAP shows users a summary with three major semantic relations: AGREEMENT to group similar opinions, CONFLICT to capture differences of opinions, and EVIDENCE to show support for opinions. Identifying logical relations between texts is the focus of Recognizing Textual Entailment (RTE). A major task of the RTE Challenge (Dagan et al., 2005) is the identification of [ENTAILMENT] or [CONTRADICTION] between Text (T) and Hypothesis (H). For this task, several corpora have been constructed over the past few years, annotated with thousands of (T,H) pairs. While our research objective is to recognize semantic relations as well, our target domain is text from Web documents. The definition of contradiction in RTE is that T contradicts H if it is very unlikely that both T and H can be true at the same time. However, in real documents on the Web, there are many examples which are partially contradictory, or where one statement restricts the applicability of another, like in the example below.
(1) a. Mercury-based vaccines actually cause autism in children.
b. Vaccines can trigger autism in a vulnerable subset of children.
Figure 1: An example STATEMENT MAP for the query "Do vaccines cause autism?". The map groups statements such as VACCINES CAUSE AUTISM (e.g. "Mercury-based vaccine preservatives actually have caused autism in children.", "It's biologically plausible that the MMR vaccine causes autism."), VACCINES DON'T CAUSE AUTISM (e.g. "There is no valid scientific evidence that vaccines cause autism.", "The weight of the evidence indicates that vaccines are not associated with autism."), MY CHILD WAS DIAGNOSED WITH AUTISM RIGHT AFTER THE VACCINE, and ANECDOTES ARE NOT EVIDENCE (e.g. "Vaccinations are given around the same time children can be first diagnosed.", "The plural of anecdote is not data."), linked by [CONFLICT], [FOCUS], and [EVIDENCE] relations.
Query : Do vaccines cause autism? Query : Do vaccines cause autism? nes While it is difficult to assign any relation to this pair in an RTE framework, in order to construct statement maps we need to recognize a contradiction between (1a) and (1b).
There is another task of recognizing relations between sentences, CST (Cross-Document Structure Theory), which was developed by Radev (2000). CST is an expanded rhetorical structure analysis based on RST (Mann and Thompson, 1988), and attempts to describe relations between two or more sentences from both single and multiple document sets. The CSTBank corpus (Radev et al., 2003) was constructed to annotate cross-document relations. CSTBank is divided into clusters in which topically-related articles are gathered. There are 18 kinds of relations in this corpus, including [EQUIVALENCE], [ELABORATION], and [REFINEMENT].
Facts and Opinions
RTE is used to recognize logical and factual relations between sentences in a pair, and CST is used for objective expressions because newspaper articles related to the same topic are used as data. However, the task specifications of both RTE and CST do not cover semantic relations between opinions and facts as illustrated in the following example.
(2) a. There must not be a connection between vaccines and autism. b. I do believe that there is a link between vaccinations and autism.
Subjective statements, such as opinions, are recently the focus of many NLP research topics, such as review analysis, opinion extraction, opinion QA, or sentiment analysis. In the corpus constructed by the MPQA Project (Multi-Perspective Question Answering) (Wiebe et al., 2005), individual expressions are marked that correspond to explicit mentions of private states, speech events, and expressive subjective elements.
Our goal is to annotate instances of the three major relation classes: [AGREEMENT], [CONFLICT] and [EVIDENCE], between pairs of statements in example texts. However, each relation has a wide range, and it is very difficult to define a comprehensive annotation scheme. For example, different kinds of information can act as clues to recognize the [AGREEMENT] relations. So, we have prepared a wide spectrum of semantic relations depending on different types of information regarded as clues to identify a relation class, such as [AGREEMENT] or [CONFLICT]. Table 1 shows the semantic relations needed for carrying out the annotation. Although detecting [EVIDENCE] relations is also essential to the STATEMENT MAP project, we do not include them in our current corpus construction.
Constructing a Japanese Corpus
Targeting Semantic Relations Between Statements
Real data on the Web generally has complex sentence structures. That makes it difficult to recognize semantic relations between full sentences, but it is possible to annotate semantic relations between parts extracted from each sentence in many cases. For example, the two sentences A and B in Figure 2 cannot be annotated with any of the semantic relations in Table 1, because each sentence includes different types of information. However, if two parts extracted from these sentences, C and D, are compared, the parts can be identified as [EQUIVALENCE] because they are semantically close and each extracted part does not contain a different type of information. So, we attempt to break sentences from the Web down into reasonable text segments, which we call "statements." When a real sentence includes several pieces of semantic segments, more than one statement can be extracted. So, a statement can reflect the writer's affirmation in the original sentence. If the extracted statements lack semantic information, such as pronouns or other arguments, human annotators manually add the missing information. Finally we label pairs of statements with either one of the semantic relations from Table 1 or with "NO RELATION," which means that two sentences (1) are not semantically related, or (2) have a relation other than the relations defined in Table 1.
Figure 2: Extracting statements from sentences and annotating a semantic relation between them. (A) Real sentence (1): "According to Department of Medicine, there is no link between the MMR vaccine and autism." (B) Real sentence (2): "The weight of the evidence indicates that vaccines are not associated with autism." (C) Statement (1): "There is no link between the MMR vaccine and autism." (D) Statement (2): "Vaccines are not associated with autism." (E) [EQUIVALENCE].
Corpus Construction Procedure
We automatically gather sentences on related topics by following the procedure below:
1. Retrieve documents related to a set number of topics using a search engine
2. Extract real sentences that include major subtopic words, which are detected based on TF or DF in the document set (a small sketch is given below)
3. Reduce noise in the data by using heuristics to eliminate advertisements and comment spam
4. Reduce the search space for identifying sentence pairs and prepare pairs which look feasible to annotate
Dolan and Brockett (2005) proposed a method to narrow the range of sentence pair candidates and collect candidates of sentence-level paraphrases, which correspond to [EQUIVALENCE] in the [AGREEMENT] class in our task. It worked well for collecting valid sentence pairs from a large cluster constituted by topic-related sentences. The method also seems to work well for [CONFLICT] relations, because lexical similarity based on bag-of-words (BOW) can narrow the range of candidates with this relation as well.
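A minimal sketch of the subtopic word detection in step 2 might look as follows; the cut-off value and the top-k limit are hypothetical.

from collections import Counter

def subtopic_words(docs_tokens, min_df=5, top_k=50):
    """Pick candidate subtopic words by document frequency (DF), ranked by term frequency (TF)."""
    tf = Counter()
    df = Counter()
    for tokens in docs_tokens:
        tf.update(tokens)
        df.update(set(tokens))
    candidates = [w for w, c in df.items() if c >= min_df]
    return sorted(candidates, key=lambda w: tf[w], reverse=True)[:top_k]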
We calculate the lexical similarity between the two sentences based on BOW. We also used hyponym and synonym dictionaries (Sumida et al., 2008) and a database of relations between predicate argument structures (Matsuyoshi et al., 2008) as resources. According to our preliminary experiments, unigrams of KANJI and KATAKANA expressions, single and compound nouns, verbs and adjectives worked well as features, and we calculate the similarity using cosine distance. We did not use HIRAGANA expressions because they are also used in function words. Next, to evaluate inter-annotator agreement, 207 randomly selected statement pairs were annotated by two human annotators. The annotators agreed in their judgment for 81.6% of the examples, which corresponds to a kappa level of 0.49. The annotation results are evaluated by calculating recall and precision in which one annotation result is treated as a gold standard and the other as the output of the system, as shown in Table 2.
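A minimal sketch of this similarity computation is given below; the script test and the part-of-speech filter are simplified stand-ins for the segmentation, tagging and dictionary resources actually used.

import math
import re
from collections import Counter

# KANJI (CJK ideographs) or KATAKANA only; HIRAGANA-only tokens are excluded.
KANJI_OR_KATAKANA = re.compile(r'^[\u4e00-\u9fff\u30a0-\u30ff]+$')
CONTENT_POS = {'noun', 'compound_noun', 'verb', 'adjective'}

def features(tokens_with_pos):
    """Bag-of-words vector over content-word tokens written in KANJI or KATAKANA."""
    return Counter(tok for tok, pos in tokens_with_pos
                   if pos in CONTENT_POS and KANJI_OR_KATAKANA.match(tok))

def cosine(v1, v2):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

s1 = features([('ウイルス', 'noun'), ('感染', 'verb'), ('する', 'verb')])
s2 = features([('ウイルス', 'noun'), ('駆除', 'noun'), ('できる', 'verb')])
print(cosine(s1, s2))  # 0.5: the shared ウイルス feature gives a non-zero similarity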
Analyzing the Corpus
Discussion
The number of sentence pairs that annotators identified as invalid examples shows that around 60% of all pairs were invalid, showing that there is still room to improve our method of collecting sentence pairs for the annotators. Developing more effective methods of eliminating sentence pairs that are unlikely to contain statements with plausible relations is important to improve annotator efficiency. We reviewed 50 such invalid sentence pairs, and the results indicate two major considerations: (1) negation and antonyms have not been regarded as key information, and (2) verbs in KANJI have to be handled more carefully. The polarities of sentences in all pairs were the same although there are sentences which can be paired up with opposite polarities. So, we will consider the polarity of words and sentences as well as similarity when considering candidate sentence pairs. In Japanese, the words which consist of KATAKANA expressions are generally nouns, but those which contain KANJI can be nouns, verbs, or adjectives. Sharing KATAKANA words was the most common way of increasing the similarity between sentences. We need to assign a higher weight to verbs and adjectives that contain KANJI, to more accurately calculate the similarity between sentences.
Another approach to reducing the search space for statement pairs is taken by Nichols et al. (2009), who use category tags and in-article hyperlinks to organize scientific blog posts into discussions on the same topic, making it easier to identify relevant statements. We are investigating the applicability of these methods to the construction of our Japanese corpus but suffer from the lack of a richly-interlinked data source comparable to English scientific blogs.
Conclusion
In this paper, we described the ongoing construction of a Japanese corpus consisting of statement pairs annotated with semantic relations for handling web arguments. We designed an annotation scheme complete with the necessary semantic relations to support the development of statement maps that show [AGREEMENT], [CONFLICT], and [EVIDENCE] between statements for assisting users in analyzing the credibility of information on the Web. We discussed the revelations made from annotating our corpus, and discussed future directions for refining our specifications of the corpus. We are planning to annotate relations for more than 6,000 sentence pairs this summer, and the finished corpus will consist of around 10,000 sentence pairs. The first release of our annotation specifications and the corpus will be made available on the Web 1 this winter.
Five annotators annotated semantic relations according to our specifications in 22 document sets as targets. We have annotated target statement pairs with either [AGREEMENT], [CONFLICT] or [NO RELATION]. We provided 2,303 real sentence pairs to human annotators, and they identified 1,375 pairs as being invalid and 928 pairs as being valid. The number of annotated statement pairs is 1,505 ([AGREEMENT]: 862, [CONFLICT]: 126, [NO RELATION]: 517).
Table 1: Definition of semantic relations and examples in the corpus.
AGREEMENT / Equivalence: A: The overwhelming evidence is that vaccines are unrelated to autism. B: There is no link between the MMR vaccine and autism.
AGREEMENT / Equivalent Opinion: A: We think vaccines cause autism. B: I am the mother of a 6 year old that regressed into autism because of his 18 month vaccinations.
AGREEMENT / Specific: A: Mercury-based vaccine preservatives actually have caused autism in children. B: Vaccines cause autism.
CONFLICT / Contradiction: A: Mercury-based vaccine preservatives actually have caused autism in children. B: Vaccines don't cause autism.
CONFLICT / Confinement: A: Vaccines can trigger autism in a vulnerable subset of children. B: Mercury-based vaccine actually have caused autism in children.
CONFLICT / Conflicting Opinion: A: I don't think vaccines cause autism. B: I believe vaccines are the cause of my son's autism.
Table 2: Inter-annotator agreement for 2 annotators (rows: Annotator B; columns: Annotator A).
                    AGR.   CON.   NONE   TOTAL
Annotator B  AGR.    146      7      9     162
             CON.      0     13      1      14
             NONE     17      4     10      31
             TOTAL   163     24     20     207
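As a cross-check, the agreement figures reported above can be recomputed from the counts in Table 2; this is a small sketch, and the helper function is ours.

def cohen_kappa(matrix):
    """Cohen's kappa for a square confusion matrix (rows: annotator B, columns: annotator A)."""
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(row[j] for row in matrix) for j in range(len(matrix))]
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - expected) / (1 - expected)

# AGR./CON./NONE counts from Table 2.
table2 = [[146, 7, 9], [0, 13, 1], [17, 4, 10]]
print(round(cohen_kappa(table2), 2))  # 0.49; raw agreement is (146 + 13 + 10) / 207 = 81.6%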
http://cl.naist.jp/stmap/corpus/ja
Acknowledgments. This work is supported by the National Institute of Information and Communications Technology Japan.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Proc. of the PASCAL Challenges Workshop on RTE.
Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proc. of the IWP 2005, pages 9-16.
William Mann and Sandra Thompson. 1988. Rhetorical structure theory: towards a functional theory of text organization. Text, 8(3):243-281.
Suguru Matsuyoshi, Koji Murakami, Yuji Matsumoto, and Kentaro Inui. 2008. A database of relations between predicate argument structures for recognizing textual entailment and contradiction. In Proc. of the ISUC 2008.
Koji Murakami, Eric Nichols, Suguru Matsuyoshi, Asuka Sumida, Shouko Masuda, Kentaro Inui, and Yuji Matsumoto. 2009. Statement map: Assisting information credibility analysis by visualizing arguments. In Proc. of the WICOW 2009, pages 43-50.
Eric Nichols, Koji Murakami, Kentaro Inui, and Yuji Matsumoto. 2009. Constructing a scientific blog corpus for information credibility analysis. In Proc. of the Annual Meeting of ANLP.
Dragomir Radev, Jahna Otterbacher, and Zhu Zhang. 2003. CSTBank: Cross-document Structure Theory Bank. http://tangra.si.umich.edu/clair/CSTBank.
Dragomir R. Radev. 2000. Common theory of information fusion from multiple text sources step one: Cross-document structure. In Proc. of the 1st SIGdial Workshop on Discourse and Dialogue, pages 74-83.
Asuka Sumida, Naoki Yoshinaga, and Kentaro Torisawa. 2008. Boosting precision and recall of hyponymy relation acquisition from hierarchical layouts in Wikipedia. In Proc. of the LREC 2008.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165-210.
210,722,256 | [] | Questions as a Pre-event, Pivot Event and Post-event of Emotions
Helena Yan Ping Lau (helena.lau@connect.polyu.hk), Sophia Yat Mei Lee (ym.lee@polyu.edu.hk), Zhongqing Wang (wangzq@suda.edu.cn)
Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong; Natural Language Processing Lab, Soochow University, China
Questions as a Pre-event, Pivot Event and Post-event of Emotions
This paper examines the use of information-seeking questions and rhetorical questions in terms of event structures of emotion. An emotion is treated as a pivot event that links the emotion-inducing event (i.e. preevent) and the event induced by emotion (i.e. postevent). We investigate the role information-seeking questions and rhetorical questions play in the three subevents. Results show that the overall distributions of the two types of questions used to mark the three sub-events are rather similar. This indicates that both types of questions play an equally important role in emotion expressions. It is found that more than a half (55.6%) of emotion-related questions are used to express emotions, approximately one-third of the questions (36.3%) are used to describe pre-events and the remaining 8.1% are the post-events of emotions. Various linguistic features of pre-events, pivot events and post-events of different emotions are proposed for emotion identification. We believe that this linguistic account of questions in emotion expressions will provide a clearer picture of the nature of emotion, and add rich dimensions to emotion analysis.
Introduction
Emotion, be it positive or negative, is a way that individuals interact with others. Emotion can be communicated by means of words, facial expressions, or bodily reactions etc. In this study, we study emotions in text, with the major focus being placed on a particular clause type, the question. Questions can mainly be classified into two types, namely information-seeking questions (IQs) and rhetorical questions (RQs). Information-seeking questions, as suggested by their name, generally aim to make a request for information or for an answer, while rhetorical questions, expecting no answer, aim to achieve a pragmatic goal, such as to emphasize, to persuade, to show emotions etc. (Frank 1990; Roberts and Kreuz 1994). Previous research on the interactions between questions and emotions has extensively focused on rhetorical questions as they usually convey a more complicated meaning that goes beyond the literal. Little work has been done on the correlation between information-seeking questions and emotions. In an attempt to explore whether information-seeking questions really play no part in emotion expressions, this paper offers a linguistic account of the use of information-seeking questions and rhetorical questions in expressing emotions using an event-based approach. We take the position that emotion is a pivot event that interacts with its associated events, namely pre-events (i.e. emotion causes) and post-events (i.e. reactions to emotion) (Lee et al. 2012). This paper aims to address the following questions:
1) How frequently are information-seeking questions and rhetorical questions used in the three sub-events of emotion?
2) In which position do the sub-events introduced by questions typically occur in an emotion expression?
3) Do emotions have different preferences for a particular question type over the other?
4) How are the three sub-events of different emotions represented by questions?
2 Related Work
Emotion and the Associated Events
Emotion is a cognitive state that induces bodily reactions to the perceived external stimuli (Cannon 1927). As such, emotion is a pivot event that interacts with its associated events, namely pre-events (i.e. emotion causes) and post-events (i.e. reactions to emotion). Most emotion theories regard the recognition of emotion cause as an integral part of emotion elicitation (James 1884; Plutchik 1980; Wierzbicka 1999). Lee et al. (2010) constructed a Chinese emotion-cause annotated corpus for the purpose of extracting emotion causes. They identified seven groups of linguistic cues and two sets of linguistic rules that can be used for emotion cause detection. Based on the linguistic rules proposed, Lee et al. (2012) developed a rule-based system for the detection of emotion cause. Drawing from the insight of Lee et al. (2010, 2012), a couple of studies (Gui et al. 2014; Li and Xu 2014; Gao et al. 2015) extended the rule-based method for the detection in informal text. Lee et al. (2013, 2014) constructed another Chinese event-based emotion corpus with both pre-events and post-events annotated. They suggested that there are significant interactions between emotions and pre-events as well as between emotions and post-events.
The Interaction between Questions and Emotions
Information-seeking questions are typically used to elicit an answer, while rhetorical questions are used to make a statement without expecting a direct answer. With regard to the relation between questions and emotions, previous research has extensively focused on rhetorical questions and only a few studies have been done on investigating information-seeking questions. Quan et al. (2010) analyzed emotion expressions in Chinese at sentence level. They suggested that sentences without the presence of negation marker, conjunction or question mark do not convey any emotions if they do not contain any emotional words, while sentences with the presence of the three items do express emotions even if they do not contain any emotional words. They indicated that questions (including both information-seeking questions and rhetorical questions) can be used to express any emotions, in particular the anxiety emotion. Lau and Lee (2018) explored the interaction between emotions and both types of questions in social media. They illustrated that approximately 23% of information-seeking questions are associated with emotions, whereas 94% of rhetorical questions are used to express emotions. It reflects the important role rhetorical questions play in emotion expressions in social media.
With regard to rhetorical questions, they are considered an effective persuasive device (Petty, 1981; Frank, 1990). As a form of figurative language, rhetorical questions are often studied in a more general way. Previous studies indicated that figurative language is commonly used to express emotions (Kövecses, 1990, 2003; Lakoff and Johnson, 1980; Fussell and Moss, 1998; Gibbs et al., 2002), especially the intense ones (Fainsilber and Ortony, 1987; Fussell, 1992). The frequent use of figurative language for emotion expressions can partly be due to "the subjective nature of emotional experiences appears to lend itself to figurative expression" (Fussell and Moss, 1998: 113). Roberts and Kreuz (1994) examined the discourse goals of eight types of figurative devices, namely hyperbole, idiom, indirect request, irony, understatement, metaphor, rhetorical question, and simile. They suggested that rhetorical questions are used to express both positive and negative emotions, with the latter being more frequent. Leggitt and Gibbs (2000) investigated people's emotion reactions to different figurative devices. They showed that rhetorical questions are used to alert or challenge the addressee's problem or behavior. Therefore, rhetorical questions are prone to evoke negative emotions, such as anger, disgust, and contempt. In addition, speakers of rhetorical questions appear to feel more negative emotions than those of other figurative devices. Rhetorical questions are also perceived as having very negative intent. Lee (2017) suggested that there is a close interaction between figurative language and emotion. She found that about one-third of the social media posts contain figurative devices, among which rhetorical questions are the most frequently used one (37%). She also illustrated that rhetorical questions are particularly productive in evoking negative emotions, i.e. sadness and anger. Drawing from the insight of Lee (2017), Lau and Lee (2018) further explored the use of rhetorical questions in emotion expressions. Various linguistic cues and syntactic structures are proposed for the identification of five different emotions, namely happiness, sadness, anger, fear, and surprise.
Corpus Data
Dataset
In this study, an existing event-based emotion corpus (Lee et al. 2014) was utilized. The dataset of the event-based emotion corpus was retrieved from the Sinica Corpus, a tagged balanced corpus of Mandarin Chinese containing ten million words. Each instance consists of 3 parts, namely the "<FocusSentence>" (i.e. the sentence that contains an emotion keyword), the "<PrefixSentence>" (i.e. the sentence before the focus sentence) and the "<SuffixSentence>" (i.e. the sentence after the focus sentence). The emotion keyword(s) of each instance is indicated as <emo id=0> X </emo>, with its pre-event and post-event being manually annotated as well.
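Assuming the instance markup described above, the three sentences and the emotion keyword(s) can be extracted with a few regular expressions; this is only a sketch, with tag spellings following the description in the text.

import re

EMO = re.compile(r'<emo id=\d+>\s*(.*?)\s*</emo>')

def parse_instance(instance_text):
    """Pull out the prefix/focus/suffix sentences and the emotion keyword(s) of one instance."""
    def field(tag):
        m = re.search(r'<{0}>(.*?)</{0}>'.format(tag), instance_text, re.S)
        return m.group(1).strip() if m else ''
    focus = field('FocusSentence')
    return {
        'prefix': field('PrefixSentence'),
        'focus': focus,
        'suffix': field('SuffixSentence'),
        'emotions': EMO.findall(focus),
    }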
As we observed that some of the questions were not concerned with the identified emotion, we randomly selected 300 instances that contain at least one emotion-related question in one of the three sentences (i.e. prefix, focus, or suffix sentence) for the question and event annotation.
Question Annotation
Drawing from the insight of Lee (2017), Lau and Lee (2018) categorized questions into 14 subtypes, including A-not-A, echo, particle, wh-question and so on. Following Lau and Lee (2018), we also classify questions into those 14 proposed subtypes, of which 8 are closed questions, 5 are open questions, and 1 is "series of questions".
Aiming to elicit an open-ended answer, open class questions refer to questions with wh-words, such as how, what, why etc. As for closed class questions, they refer to questions represented in the form of A-not-A, alternative, echo, particle, or other question words that require a pre-determined answer. According to Li and Thompson (1981), A-not-A questions are formed with an affirmative and its negative counterpart juxtaposed, and either the affirmative or its negative counterpart can be chosen as the answer, as in (1). (1) 你想不想去日本? (Do you want to go to Japan?)
Alternative questions directly provide two or more possible options for respondent(s), and the options are mostly connected by 還是 (or), as in (2).
(2) 你想去日本還是韓國? (Do you want to go to Japan or Korea?)
Echo questions have the form of a declarative sentence but end with a question mark in the written form. Particle questions refer to questions that end with a sentence-final particle, such as 嗎, 呢 and 吧. As for the others category, we grouped the question words that are used to pose questions such as 難道, 何必, 豈 etc.
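A rough heuristic for assigning these subtypes to a Chinese question string might look as follows; this is only a sketch, the keyword lists are illustrative, and real A-not-A detection would operate on segmented words rather than single characters.

import re

PARTICLES = ('嗎', '呢', '吧')
OTHER_MARKERS = ('難道', '何必', '豈')
WH_WORDS = ('為什麼', '怎麼', '什麼', '甚麼', '哪', '誰', '幾', '多少')

def question_subtype(q):
    """Very rough subtype guess for a Chinese question string."""
    body = q.rstrip('?？')
    if any(m in body for m in OTHER_MARKERS):
        return 'others'
    if '還是' in body:
        return 'alternative'
    if re.search(r'(.)不\1', body):           # e.g. 想不想, 是不是
        return 'A-not-A'
    if body.endswith(PARTICLES):
        return 'particle'
    if any(w in body for w in WH_WORDS):
        return 'wh-question'
    return 'echo'                              # declarative form ending in a question mark

print(question_subtype('你想不想去日本?'))      # A-not-A
print(question_subtype('你想去日本還是韓國?'))  # alternative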
Event Annotation
In this study, emotion is regarded as a pivot event linking the events inducing emotion (i.e. pre-events) and the events induced by emotion (i.e. post-events).
Each identified question was manually classified into one of the three events. Since the 300 instances extracted from the event-based emotion corpus were annotated with pre-events and post-events, we annotate the event types of the questions based on the original tags. As only the immediate pre-events and post-events that are closest to the emotion keywords were annotated in the corpus, we may expand the event boundary. Consider (3)-(5).
(3) 徐姐對選舉十分反感,{{說:做這些花式子幹嘛?}} (Xu Jie is totally disgusted with the election, and she {{said, "What's the point of using these ploys?"}}) In (3) and (4), the pre-event and post-event originally tagged in the event-based emotion corpus are marked with "[[…]]" and "{{…}}", respectively. In (3), the question occurs within the event boundary, and is therefore annotated as the post-event of the emotion 生氣 'angry'. As for (4), although the question is outside the event boundary, the question is tagged as the pre-event in this study, as the original annotated pre-event 自己的想法 'one's own thoughts' can be traced back to the previous sentence 你為什麼不好心一點,快快死去呢 'why don't you do me a kindness and die quickly'. (3) and (4) show a clear causal relation between the pre-event and emotion, as well as between the emotion and post-event. The question in (5) is obviously used to express the sadness emotion. Instead of using a declarative sentence to convey sadness, the writer in (5) expresses the emotion by means of a rhetorical question. Thus, the question is annotated neither as the pre-event nor the post-event, but as the pivot event.
Corpus Analysis
Of the 300 extracted instances, the number of emotions identified is 306. This indicates that a couple of instances contain more than one emotion. Among the five emotions, surprise has the highest frequency (37.9%), followed by anger (19.3%), sadness (19.0%), fear (14.4%), and happiness (9.5%).
As for the number of questions, 347 questions were identified, among which 171 (49.3%) are information-seeking questions and 176 (50.7%) are rhetorical questions. Figure 1 shows the distribution of question type per emotion, calculated relative to the total number of posts of a given emotion type. It shows that different emotions may have different preferences for a particular question type over the other. While information-seeking questions are more frequently found in the events expressing happiness, fear and surprise, rhetorical questions are more closely associated with events expressing sadness and anger.
In order to explore the frequency of emotion-related questions functioning as an expression of a pre-event, pivot event and post-event, as well as their position in an emotion instance, Table 1 shows the distribution of sub-events in terms of the position. Emotion-related questions, whatever the event type, appear most frequently in the focus sentence of the instance, and least frequently in the prefix sentence. We also notice that pre-events are more likely to occur in prefix than suffix sentences, whereas post-events tend to occur in suffix than prefix sentences. This is in line with the assumption that there is a sequential ordering among the sub-events of emotions.
Although the general distributions of both types of questions are similar, rhetorical questions and information-seeking questions do play different roles in the event structures of different emotions. For instance, people tend to use rhetorical questions to introduce the emotion causes that trigger sadness and anger, while information-seeking questions are used to introduce the pre-events of happiness, fear and surprise. As for post-events, information-seeking questions are generally used to mark the post-events of different emotions, except for sadness. It is also observed that the post-events of anger are more frequently marked by questions, as compared to the other four emotions.
The Use of Questions in Emotion Expressions
In this section, we will explore how emotion-related questions are used in the expressions of different emotions in terms of event structure. Table 3 demonstrates the occurrence of different question types used in the three sub-events of emotion. We will discuss the role questions play in the descriptions of the pre-event, emotion state and post-event in the following sub-sections.

Questions as a Pre-event

Table 3 shows that pre-events are most often reported by means of why questions, a series of questions, and echo questions. As mentioned in the previous section, 36.3% of emotion-related questions are used to describe the pre-events of an emotion, with pre-events of surprise (51.5%) being the most frequent, followed by fear (36.5%), sadness (26.1%), anger (25.8%) and happiness (16.7%). It is observed that the pre-events of surprise often occur in the focus or suffix sentence, while the pre-events of other emotions mostly occur in the prefix or focus sentence. Among all the 14 question types, why questions are mostly found in the pre-events of surprise, and they are often represented in the pattern of "奇怪…怎麼/為什麼 + pre-event", as in (6).
(6) 我也覺得奇怪,為什麼這些天老想吵架? (I also wonder why I always want to quarrel these days.)
Example (6) shows the pattern that is very commonly used to introduce the cause event of surprise. After the emotion keyword 奇怪 'surprise' is expressed, the pre-event usually appears right after the question word 怎麼/為什麼 'why'. In addition to why questions, echo questions are occasionally used to form the cause events of surprise. Different from the echo questions found in the pre-events of other emotions, echo questions associated with surprise are relatively short. They serve as a good way to introduce the pre-events and to express the surprise emotion implicitly yet expressively. Moreover, the experiencers usually repeat the question of what the others have just said, as in (7).
(7) 夢女再歎一口氣,在我心靈內道:「我要 回家。」我愕然叫了出來:「回家?」 (The girl in the dream sighs again, and says to me, "I want to go home." I scream out, "Go home?")
As for the pre-events of fear, they are mostly formed with how questions, why questions and A-not-A questions. While how and why questions can also be found to describe the causes of other emotions, A-not-A questions are unique to the description of the pre-events of fear. For example, (8) illustrates that A-not-A is used when one is experiencing the fear emotion. The pre-events of fear following the A-not-A form are negative events, such as 我功課不好 'I'm not doing well in school' in (8).
(8) 他會害怕說,是不是我功課不好? (He will be worried, "Is it that I'm not doing well in school?") Similar to surprise, the pre-events of anger are occasionally expressed by why questions. While the why questions of surprise mostly introduce the emotion cause in a direct way, the why questions of anger are usually preceded by another clause which has a transitional relation with the following question. This is exemplified in (9).
(9) 「我們也知道公司組織要改才能生存,但 是為什麼不問我們的意見?」邱垂境生氣 地表示。 ("We also understand that the organization of the company needs to be changed in order to survive, but why don't you ask for our opinion?" Qiu chuijing said angrily.)
Pre-events of sadness are expressed by several question types with similar numbers of occurrences, including a series of questions, others, and why questions. Pre-events of happiness are not often expressed by questions, and rarely do they appear in the suffix sentence.
Questions as a Pivot Event
According to Table 3, pivot events (i.e. the emotion state) are often expressed by means of a series of questions, particle questions, and what questions. As suggested in Table 1, more than half of the questions of the other four emotions are used to mark the pivot event, except for the case of surprise. The proportion of questions of each emotion type serving as a pivot event, in descending order, is happiness (76.7%), sadness (69.6%), anger (60.6%), fear (59.6%), and surprise (39.2%).
For the expression of happiness, it is observed that most of the questions may not be regarded as an expression of happiness if the contextual information is not taken into account. Consider (10).
(10) 「戴老師,我是你的忠實讀者,很喜歡你的書…」這小姐很興奮地對我說:「您可不可以幫我簽個名?」 ("Mr. Dai, I'm an avid fan of your books, I really like your books…" This lady said to me very excitedly, "Could you sign an autograph for me?")
In (10), the A-not-A question is in fact an expression of happiness. However, if the emotion keyword 興奮 'excited' were not present, the question might not be comprehended as a happiness expression, or at least not as a happiness expression of such strong intensity. Thus, it would be of value to implicit emotion identification if one could collect this kind of expression and study how people use them to convey happiness implicitly, or what kinds of events usually trigger happiness. Particle questions are quite commonly used in the expression of happiness. We found that the clause 真的嗎 'really?' is quite often used in the expressions of happiness. The clause is also found in the expressions of surprise with a slightly different structure. For the expression of surprise, the clause sometimes co-occurs with a referential phrase, such as 這 'this' or 如此 'that', as in "真的如此嗎?" 'Is that really the case?' Such an expression is rarely, if ever, found in the case of happiness. The use of these referential phrases can also be found in the expressions of surprise formed with other question types. This may be due to the assumption that the experiencers of surprise are more aware of the triggering events.
We notice that the adverb 就 tends to appear more in the questions expressing sadness than in those expressing the other emotions. The use of 就 implies that the situation will turn out to be negative for the speaker, but he/she is not capable of preventing it from happening. Thus, it is oftentimes used to express sadness. Consider (11).
(11) 哥哥就要這樣被吃掉嗎? (Is my brother going to be eaten?)
Besides, questions expressing sadness may be formed with rhetorical interrogations, such as 何必, 豈, 難道 etc. Although the connotations of some of these adverbs typically indicate surprise, our findings reveal that they may also be used in expressing sadness.
As for the anger emotion, it is commonly expressed by why questions and a series of questions. The frequent use of a series of questions is in line with Lau and Lee (2018), who suggested that the purpose is to vent one's anger at someone who evokes the emotion. This claim can be further supported by the frequent use of 你 'you' in the questions expressing anger. As anger is typically elicited by unwanted or harmful circumstances which may motivate aggressive behaviors, we observed that the addressee of the question, 你 'you', is more explicitly presented in the questions of anger than in those of the other four emotions. This may be ascribed to the nature of anger: when one is consumed with anger, he may vent his anger directly at the one who induces the emotion.
Fear is typically triggered when a person thinks that something bad is going to happen. Therefore, a number of questions are formed to seek help. For example, some may be represented with 怎麼辦 'what to do', and some may be in the structure of "要 (…) 怎麼/如何". Other questions may describe the bad things that the speaker thinks might happen even though he does not want them to happen. These questions may appear in the patterns of "可不/豈不/不就…", as in (12).
(12) 我那堆心愛的寶貝不就被燒成烤雞了嗎? (Wouldn't the bunch of my beloved babies be burnt into roasted chickens?)
Questions as a Post-event
From Table 1, we can see that post-events are less likely to be introduced by questions as compared to the other sub-events. If the questions do describe the post-events, particle questions and why questions would be the speakers' preferred options as shown in Table 3. Of the questions introducing post-events, questions of anger (13.6%) have the highest frequency, followed by surprise (9.2%), happiness (6.7%), sadness (4.3%) and fear (3.8%).
With the limited number of instances containing the post-events of emotions, we realize that the use of words describing the action of posing a question may serve as an emotion indicator. For instance, if the word 問 'ask' is used, it implies that the following question is an information-seeking question and the speaker is literally seeking information. Therefore, it indicates that the following question is raised out of curiosity and is therefore a post-event of surprise. The indicator found for fear is 叫道 'shout', and for anger 泣訴 'accuse while weeping'. Another commonly used indicator is the verb 說 'say', which likely signals an anger emotion. Consider (13).
(13) 她火冒三丈的說:「哪裡沒有?」 (She said with anger, "Haven't you?")
The expressed emotion is hinted at by the use of 說 'say'. As what was 'said' is in the form of a question, if the question functioned as an information-seeking question, the verb 問 'ask' would be used instead. In other words, the verb 說 'say' reveals that the following question is a rhetorical question which is not used to seek information. As discussed, rhetorical questions have a close relation with negative emotions, in particular sadness and anger. The question is therefore likely the post-event of anger or sadness. Yet, post-events of sadness are found to be the least likely to be expressed by questions. It can therefore be inferred that the question following 說 'say' is the post-event of anger, even without the presence of the emotion keyword or the adverb denoting the emotion, such as 火冒三丈 'angry' in (13).
Conclusion
This paper examines the use of information-seeking questions and rhetorical questions in terms of event structures of emotion. We investigate whether the two types of questions can be used to introduce the pre-event (i.e. evoking event), pivot event (i.e. caused emotion), and post-event (i.e. event induced by the emotion). Results show that the overall distributions of information-seeking questions and rhetorical questions used to mark the three sub-events are rather similar. This indicates that both types of questions play an equally important role in emotion expressions. We find that more than half (55.6%) of emotion-related questions are used to express emotions, approximately one-third of the questions (36.3%) are used to describe pre-events, and the remaining 8.1% are the post-events of emotions. Various features of pre-events, pivot events and post-events of different emotions are proposed for emotion identification. We believe that this linguistic account of questions in emotion expressions will provide a clearer picture of the nature of emotion, and add rich dimensions to emotion analysis.
extracted 8,973 instances of sentences from the Sinica Corpus by keyword matching based on the list of 91 Chinese primary emotion keywords identified in Chen et al. (2009).
Figure 1 - Distribution of Question Type per Emotion
Table 2 - Distribution of Questions in terms of Emotion Type

             Rhetorical Question                     Information-seeking Question
             Pre-event  Emotion  Post-event          Pre-event  Emotion  Post-event          Total
Happiness    6.7%       40.0%    0.0%                10.0%      36.7%    6.7%                100%
Sadness      20.3%      47.8%    2.9%                5.8%       21.7%    1.4%                100%
Anger        19.7%      47.0%    6.1%                6.1%       13.6%    7.6%                100%
Fear         13.5%      17.3%    0.0%                23.1%      42.3%    3.8%                100%
Surprise     21.5%      15.4%    1.5%                30.0%      23.8%    7.7%                100%
Total        18.4%      30.3%    2.3%                17.9%      25.4%    5.8%                100%

Table 3 - Distribution of Question Types in terms of Event Types (question types grouped under Close Class Question and Open Class Question)

Pre-event:  Series of Q 21%, A-not-A 3%, Alternative 1%, Echo 13%, Particle 10%, Others 6%, How 7%, How many/much 0%, What 5%, Which 0%, Who 2%, Why 32%, Where 1%, When 0%, Total 100%
Emotion:    Series of Q 20%, A-not-A 9%, Alternative 1%, Echo 4%, Particle 18%, Others 6%, How 10%, How many/much 0%, What 17%, Which 1%, Who 3%, Why 10%, Where 2%, When 1%, Total 100%
Post-event: Series of Q 11%, A-not-A 4%, Alternative 0%, Echo 7%, Particle 21%, Others 4%, How 7%, How many/much 7%, What 14%, Which 0%, Who 4%, Why 18%, Where 4%, When 0%, Total 100%
32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong, 1-3 December 2018. Copyright 2018 by the authors.
Acknowledgments
Cannon, W. B. 1927. The James-Lange theory of emotions: A critical examination and an alternative theory. The American Journal of Psychology, 39(1/4), 106-124.
Fainsilber, Lynn and Ortony, Andrew. 1987. Metaphorical uses of language in the expression of emotions. Metaphor and Symbol, 2(4), 239-250.
Frank, Jane. 1990. You call that a rhetorical question?: Forms and functions of rhetorical questions in conversation. Journal of Pragmatics, 14(5), 723-738.
Fussell, S. R. 1992. The use of metaphor in written descriptions of emotional states. Unpublished manuscript, Carnegie Mellon University.
Fussell, Susan R. and Moss, Mallie M. 1998. Figurative language in emotional communication. Social and Cognitive Approaches to Interpersonal Communication, 113-141.
Gibbs, Raymond W., Leggitt, John S., and Turner, Elizabeth A. 2002. What's special about figurative language in emotional communication. The Verbal Communication of Emotions: Interdisciplinary Perspectives, edited by Fussell, S. R., 125-149.
Gao, Kai, Hua Xu, and Jiushuo Wang. 2015. A rule-based approach to emotion cause detection for Chinese micro-blogs. Expert Systems with Applications, 42(9):4517-4528.
Gui, Lin, Li Yuan, Ruifeng Xu, Bin Liu, Qin Lu, and Yu Zhou. 2014. Emotion cause detection with linguistic construction in Chinese weibo text. In Natural Language Processing and Chinese Computing, pages 457-464. Springer.
James, W. 1884. What is an Emotion? Mind, 9(34):188-205.
Kövecses, Zoltan. 1990. Emotion concepts. New York: Springer-Verlag.
Kövecses, Zoltan. 2003. Metaphor and emotion: language, culture, and body in human feeling. Cambridge: Cambridge University Press.
Lakoff, G. and Johnson, M. 1980. Metaphors We Live by. Chicago: University of Chicago Press.
Lau, H. Y. P. and Lee, S. Y. M. 2018. Information-seeking questions and rhetorical questions in emotion expressions. Chinese Lexical Semantics: 19th Workshop, CLSW 2018, Chiayi County, Taiwan, May 26-28, 2018, edited by Jia-Fei Hong, Qi Su, Jiun-Shiung Wu, 433-442. Heidelberg: Springer.
Lee, S. Y. M. 2017. Figurative language in emotion expressions. Chinese Lexical Semantics: 18th Workshop, CLSW 2017, Leshan, China, May 18-20, 2017 (Vol. 10709), edited by Yunfang Wu, Jia-Fei Hong and Qi Su, 408-419. Heidelberg: Springer.
Lee, S. Y. M., Chen, Y., Huang, C. R., and Li, S. 2012. Detecting emotion causes with a linguistic rule-based approach. Computational Intelligence, Special Issues on Computational Approaches to Analysis of Emotion in Text. Wiley-Blackwell.
Lee, S. Y. M., Chen, Y., and Huang, C. R. 2010. A text-driven rule-based system for emotion cause detection. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 45-53. Association for Computational Linguistics.
Lee, S. Y. M., Li, S., and Huang, C. R. 2014. Annotating Events in an Emotion Corpus. In LREC 2014, pp. 3511-3516.
Lee, S. Y. M., Zhang, H., and Huang, C. R. 2013. An event-based emotion corpus. In Workshop on Chinese Lexical Semantics, pp. 635-644. Springer, Berlin, Heidelberg.
Leggitt, John S. and Gibbs, Raymond W. 2000. Emotional reactions to verbal irony. Discourse Processes, 29(1), 1-24.
Li, Weiyuan and Hua Xu. 2014. Text-based emotion classification using emotion cause extraction. Expert Systems with Applications, 41(4):1742-1749.
Li, Charles N. and Thompson, Sandra A. 1981. Mandarin Chinese: A functional reference grammar. Berkeley: University of California Press.
Petty, Richard E., Cacioppo, John T., and Heesacker, Martin. 1981. Effects of rhetorical questions on persuasion: A cognitive response analysis. Journal of Personality and Social Psychology, 40(3), 432.
Plutchik, R. 1980. Emotions: A Psychoevolutionary Synthesis. New York: Harper & Row.
Roberts, Richard M. and Kreuz, Roger J. 1994. Why do people use figurative language? Psychological Science, 5(3), 159-163.
Quan, Changqin, He, Tingting, and Ren, Fuji. 2010. Emotion analysis in blogs at sentence level using a Chinese emotion corpus. Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering (NLPKE-2010), Beijing, 2010, 1-8.
Wierzbicka, A. 1999. Emotions across Languages and Cultures: Diversity and Universals. Cambridge: Cambridge University Press. |
||
17,522,253 | The Importance of Prosodic Factors in Phoneme Modeling with Applications to Speech Recognition | This paper tests speech recognition using prosody dependent allophone models. The log likelihoods of various prosodically labeled phonemes are calculated using Baum-Welch re-estimation. These log likelihoods are then compared to log likelihoods of non-prosodically labeled phonemes. Based on the comparison of these log likelihoods, it can be concluded that modeling all prosodic information directly in the vowel model leads to improvement in the model. Consonants, on the other hand, split naturally into three categories: strengthened, lengthened and neutral. | [] | The Importance of Prosodic Factors in Phoneme Modeling with Applications to Speech Recognition
Sarah Borys
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
Urbana, IL 61901
The Importance of Prosodic Factors in Phoneme Modeling with Applications to Speech Recognition
This paper tests speech recognition using prosody dependent allophone models. The log likelihoods of various prosodically labeled phonemes are calculated using Baum-Welch re-estimation. These log likelihoods are then compared to log likelihoods of non-prosodically labeled phonemes. Based on the comparison of these log likelihoods, it can be concluded that modeling all prosodic information directly in the vowel model leads to improvement in the model. Consonants, on the other hand, split naturally into three categories: strengthened, lengthened and neutral.
Introduction
Prosody is an important factor in how humans interpret speech. The same word string can have different meanings depending on the way it is said. Many linguists have performed extensive studies of prosody and of the effects of prosodic factors on spoken language.
In his dissertation, Cho (2001) investigates how phonetic features are conditioned by prosodic factors by examining pre-boundary, post-boundary, and accented syllables. Cho reports that boundary induced articulatory strengthening occurs in phrase final vowel positions and phrase initial consonant positions. Phrase initial vowels are also more susceptible to coarticulation than phrase final vowels. Cho also hypothesizes that accented syllables are characterized primarily by sonority expansion. An accented vowel is usually not affected by coarticulation with a neighboring vowel. Strengthening effects caused by boundaries and accents cannot be considered the same and Cho discusses several differences between boundary and accent strengthening effects.
In a study performed by Edwards et al (1991), the effect of final lengthening at prosodic boundaries was examined by studying articulator movement patterns. It was found that decreasing intragestural stiffness slows down the syllable, affecting the tempo of the spoken word, causing the syllable to be lengthened. The changing of intergestural phrasing also affects the syllable duration by decreasing the overlap of a vowel gesture with a consonant gesture. This increases the duration of accented syllables comparatively to unaccented syllables and causes the accented syllable to be strengthened.
De Jong (1994) investigated the supraglottal correlates of linguistic prominence in English. De Jong suggests that stress involves a localized shift toward hyperarticulated speech. An increase in the duration in the closure and in the aspiration of initial voiceless stops was observed along with an increase in duration of prevoicing in initial voiced stops in stressed syllables. Fougeron and Keating (1997) report that on the edges of prosodic phrase boundaries, final vowels and initial consonants have less reduced lingual articulation. The differences in articulation were manifested in the linguopalatal contact of boundary consonants and vowels. The linguopalatal contact of both consonants and vowels relates directly to the type and size of phrase boundary. Boundary type and size also appear to effect the acoustic duration of post-boundary consonants. Wightman et al (1992) report that there is segmental lengthening in the rhyme of a syllable that directly precedes a phrase boundary.
Wightman examines the effect of duration and pause on boundary words and shows that speaking rate affects the distribution of phoneme duration. The lengthening effects of pre-boundary syllables can be used to distinguish several different types of phrase boundaries.
These results show that prosody can cause variations not just in pitch, but also in the articulation of phonetic contrasts in different phonemes.
These variations can be modeled as a part of the phoneme definition in an automatic speech recognition (ASR) system. However, the question is whether or not modeling prosodic factors with phonemes would lead to improvements in the quality of the phoneme model and thus lead to improvements in both the correctness and accuracy in an ASR system.
Most modern speech recognizers function by breaking words up into mathematical features. The recognizer then determines the most likely occurring set of phonemes by comparing these extracted features with its own phoneme models. Phonemes are usually modeled using hidden Markov Models (HMMs). Once the recognizer has identified a set of the most likely occurring phonemes, it then uses a dictionary to match a word or group of words to that set.
Prosody can be incorporated into the phoneme model by allowing two different HMMs to represent a single phoneme. One HMM would need to represent the prosody independent version of the phoneme while the other would represent the phoneme in some prosodic context. This could allow the recognizer to do things such as distinguish between accented and unaccented phonemes or distinguish between boundary and nonboundary phonemes. Allowing the recognizer to make such a distinction may reduce the confusability of certain phoneme groups, which in turn could allow for increased recognition rates.
The goal of this research is to not only determine if the inclusion of prosody in the phoneme model causes improvement in the model, but also to determine which prosodic factors to model and the best way to model them. This will be accomplished by first splitting phonemes into different prosodically varying groups and then by comparing the log probability of the occurrence of each phoneme in those different groups. Because prosody causes noticeable variations in speech, a phoneme model that includes prosodic factors should differ from models of the same phoneme that do not. This difference will prove to be significant enough to show that prosodic factors should be taken into account for a more accurate phoneme model.
The Database
Boston University's Radio News Corpus (1995) was used for all experiments. The speakers from this corpus that were analyzed were F1A, F2B, and M2B. The usable data from these three speakers consisted of 259 wav files containing 18270 words. All the wav files that were used were accompanied by two types of prosodic transcription files, .brk and .ton files.

Figure 2. The different prosodic labels. "Phn" represents some generic phoneme.
phn : phrase medial
phn! : phrase medial, accented
phnB4 : phrase final, unaccented
phnB4! : phrase final, accented
B4phn : phrase initial, unaccented
B4phn! : phrase initial, accented
The corpus was labeled according to the ToBI standard. Silverman et al (1992) explain the labeling system in detail. It will not be described in this paper.
The .brk files specify a ToBI break index (0-4) for every spoken word in the associated wav file. For the experiments, the only boundary distinguished was the intonational phrase boundary (ToBI index 4). All other boundary types (indices 0-3) were grouped together.
There were 3855 intonational phrase boundaries in the data set.
The .ton files label the times in which an accented vowel occurs. The most abundant accent label was H* which occurs in a ratio of about 10 H* for every single L*. Other accent types do occur, but most include H* in bitonal accent.
Prosodic Annotation
The set of 38 different phonemes, shown in Figure 1, was used in the experiments.
Allophone Modeling
Recognition experiments were performed for four different allophone sets:

• Tied
• Accent
• Boundary
• Untied
The Tied set contained no prosodically labeled data.
The Accent set contained monophones that were split into two groups, accented and unaccented. Phonemes were not distinguished on the basis of phrase position. The Boundary set modeled monophones as phrase initial, phrase medial, or phrase final.
Accented phonemes were not distinguished from unaccented phonemes.
The Untied set distinguishes phonemes by both phrasal position and accentuation. A monophone in this group could be labeled as phrase medial, phrase medial accented, phrase initial, phrase initial accented, phrase final or phrase final accented. Figure 2 contains the six different labels used to represent the allophones of a single imaginary phoneme "phn."
Allophone Definitions
A phrase final phoneme was considered to be any phoneme that occurred in the nucleus or coda of the final syllable of a word directly preceding an intonational phrase boundary. Phrase initial phonemes, on the other hand, were considered to be any phoneme in the onset or nucleus of the initial syllable of a word that followed an intonational phrase boundary. Phrase medial phonemes were considered to be any other phoneme.
An accented vowel was the lexically stressed vowel in a word containing a transcribed pitch accent. Because accented consonants are not clearly defined, three different labeled sets of accented consonants were developed:
• All Consonants
• After Vowel
• Before Vowel

All Consonants considered every consonant in a syllable with an accented vowel to also be accented. After Vowel considered as accented only the coda consonants. Before Vowel recognized only the onset consonants of the accented syllable as being accented. Accents were considered to be limited to a single syllable.
Because there were three different groups of accented consonants and because there is only one way a vowel can be labeled as accented, vowels were separated into a fourth group of their own, entitled Vowels. The four groups along with the four different allophone models lead to the sixteen experimental conditions illustrated in figure 3.

Figure 4. An example of each of the six word types defined with Untied allophones for the After Vowel experimental condition. Boundary allophones could only be used to define three distinct word types, Accent only two, and Tied only one.
beyond      b iy y aa n d
beyond!     b iy y aa! n! d!
beyondB4    b iy y aaB4 nB4 dB4
beyondB4!   b iy y aaB4! nB4! dB4!
B4beyond    B4b B4iy y aa n d
B4beyond!   B4b B4iy y aa! n! d!
a.
0 370000 B4in
370000 760000 nineteen!
760000 1150000 seventy
1150000 1680000 sixB4
1680000 2310000 B4democratic!
2310000 2680000 governor

b.
600000 1600000 w
1600000 2400000 aa!
2400000 2900000 n!
2900000 3800000 t
3800000 4900000 axB4
4900000 5300000 dB4
Dictionaries and Transcription Types
Each experimental condition required its own dictionary and transcription. Just as each phoneme had six distinct allophones, each word had six distinct types. A word could be phrase initial, medial or final and accented or unaccented. Each word type had its own definition. An example dictionary is shown in figure 4.
Every experimental condition had both a word level transcription and a phone level transcription. Figure 5 shows an example of the two different levels of transcription files. Experiments were performed using the Hidden Markov Toolkit (HTK), which is distributed by the University of Cambridge (2002). Phonemes were modeled using a three-state HMM with non-emitting start and end states. Each emitting state consisted of a three-mixture Gaussian, and no state skipping was allowed.
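The topology described here can be illustrated with a small sketch. The transition matrix below encodes a left-to-right HMM with three emitting states, non-emitting entry and exit states, and no state skipping; the probability values are placeholders for illustration only, not the trained parameters, and the three-mixture Gaussian output distributions are omitted.

```python
import numpy as np

# 5 states in the HTK convention: state 0 (entry) and state 4 (exit) are
# non-emitting; states 1-3 are the emitting states.
n_states = 5
trans = np.zeros((n_states, n_states))

trans[0, 1] = 1.0          # entry state always moves to the first emitting state
for s in (1, 2, 3):
    trans[s, s] = 0.6      # self-loop (placeholder probability)
    trans[s, s + 1] = 0.4  # advance to the next state; no skips allowed

# Rows of emitting states must sum to 1; the exit state has no outgoing arcs.
assert np.allclose(trans[1:4].sum(axis=1), 1.0)
print(trans)
```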
Experiments
Experimental Procedure
The Radio News Corpus data was divided into 2 sets: a training set and a test set.
The test set was approximately 10% of the size of the training set. The experimental procedure was completed for sixteen experimental conditions.
The experimental procedure can be divided into two steps. In step one, the training data was used to re-estimate the HMM definitions for each phoneme. Re-estimation was performed with the HTK tool HRest, which uses Baum-Welch re-estimation, described in detail in the HTK book available from Cambridge University (2002). HMM parameters were re-estimated until either the log likelihood converged or HRest had performed 100 iterations of the re-estimation algorithm.
In the second step of the experiments, HRest was used to perform a single iteration of the re-estimation algorithm on the test data using the HMM definitions that were updated from the re-estimation of the training set. During re-estimation, the log likelihoods of each phoneme were output and saved for later comparisons.
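The control flow of this two-step procedure can be summarised as in the sketch below. It only mirrors the convergence check, the 100-iteration cap, and the single re-estimation pass over the test data; `reestimate` and `log_likelihood` are stand-ins supplied by the caller, not real HTK calls.

```python
MAX_ITER = 100
TOL = 1e-4  # assumed convergence threshold, for illustration only

def train_phoneme_hmm(hmm, train_data, reestimate, log_likelihood):
    """Baum-Welch style loop: stop when the log likelihood converges
    or after 100 iterations, mirroring the HRest setup described above."""
    prev_ll = float("-inf")
    for _ in range(MAX_ITER):
        hmm = reestimate(hmm, train_data)
        ll = log_likelihood(hmm, train_data)
        if abs(ll - prev_ll) < TOL:
            break
        prev_ll = ll
    return hmm

def test_log_likelihood(hmm, test_data, reestimate, log_likelihood):
    """Single re-estimation pass on the test data; the resulting log
    likelihood is what gets saved for the later WA comparisons."""
    hmm = reestimate(hmm, test_data)
    return log_likelihood(hmm, test_data)
```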
Post Processing
Once all the log likelihoods had been recorded, the Untied allophone sets were used as a basis to determine if the considered monophones were better modeled as prosody independent or prosody dependent.
To determine the best modeling strategy for a particular monophone, six different weighted averages (WA's) were calculated from the Untied log likelihoods and compared to the computed log likelihoods of the Boundary, Accent and Tied models. The three formulas used in calculating the WA's for comparison with the Boundary allophone set are as follows:

WA_PM = L_phn W_phn + L_phn! W_phn!
WA_PI = L_B4phn W_B4phn + L_B4phn! W_B4phn!
WA_PF = L_phnB4 W_phnB4 + L_phnB4! W_phnB4!

where PM, PI, and PF stand for phrase medial, initial and final, respectively. L_x represents the computed log likelihood of the allophone label x in the Untied allophone set, and W_x represents the frequency of that x. W_x, where x is representative of any of the six types of prosodically labeled monophones, is computed by the following formula:
W_x = num_x / TOTAL

where num_x represents the number of examples of the token x, and TOTAL is the sum of all the different phoneme tokens being taken into account for the computation of the WA of some set of phonemes.
The two formulas used in calculating the WA's for comparison with the Accent allophone set are as follows:
WA_U = L_phn W_phn + L_B4phn W_B4phn + L_phnB4 W_phnB4
WA_A = L_phn! W_phn! + L_B4phn! W_B4phn! + L_phnB4! W_phnB4!

where WA_U and WA_A are the weighted averages of log likelihoods for the unaccented and accented tokens, respectively.
The WA compared to the Tied set was computed as follows:

WA_T = L_phn! W_phn! + L_B4phn! W_B4phn! + L_phnB4! W_phnB4! + L_phn W_phn + L_B4phn W_B4phn + L_phnB4 W_phnB4
where WA_T is the weighted average of all of the phoneme's tokens in the Untied model. The weighted averages were then compared to the log likelihoods using the following algorithm:
if (WA < LL), then split using prosodic labels
if (WA ≥ LL), then do not split using prosodic labels

where LL is the log likelihood computed using HRest.
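A small sketch of this post-processing arithmetic is given below, under the assumption that the per-allophone log likelihoods and token counts have already been collected; the numbers and the dictionary layout are illustrative, not the paper's data.

```python
def weighted_average(loglik, count, labels):
    """WA over a subset of the six Untied allophone labels,
    with W_x = num_x / TOTAL computed over that subset."""
    total = sum(count[x] for x in labels)
    return sum(loglik[x] * count[x] / total for x in labels)

def decide(wa, ll):
    """If WA < LL, keep the prosodic split; otherwise tie (merge) the labels."""
    return "split" if wa < ll else "merge"

# Hypothetical values for one phoneme "phn" (illustration only).
loglik = {"phn": -52.1, "phn!": -49.8, "B4phn": -55.0,
          "B4phn!": -50.2, "phnB4": -53.7, "phnB4!": -51.0}
count = {"phn": 300, "phn!": 120, "B4phn": 40,
         "B4phn!": 25, "phnB4": 60, "phnB4!": 30}

# Accent comparison: unaccented (WA_U) vs. accented (WA_A) weighted averages,
# each compared against the log likelihood of the corresponding Accent model.
wa_u = weighted_average(loglik, count, ["phn", "B4phn", "phnB4"])
wa_a = weighted_average(loglik, count, ["phn!", "B4phn!", "phnB4!"])
print(decide(wa_u, ll=-51.5), decide(wa_a, ll=-50.0))
```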
Results
For each prosodic variable (phrasal position or accent), tables were constructed listing the preferred tying of phonemes based on the log likelihood results. Table 1, for example, lists all phonemes that should be tied on the basis of accent and those that should not. Similar tables exist for phrasal position and for the combination of both accent and phrasal position. Examples of certain phonemes are not present due to the relatively small size of the data set.
Experimental results varied greatly between consonants and vowels. For consonants, there appeared to be an improvement in the model when phonemes are distinguished by phrasal position.
Separation of accented and unaccented phrase initial consonants yielded no improvement to the model for most consonants. This implies that phrase initial accented and phrase initial unaccented phonemes should be merged into a single token. Accented consonants are also not benefited by positional information. Results indicate that phrase initial, medial and final accented phonemes can be merged together. Figure 6a illustrates a proposed model for the prosodic labeling of consonants based on these results.
For vowels, a model showed improvement when the phoneme was separated into phrase initial, medial and final tokens. Vowel phoneme models also showed improvement when separated by accent. The accent on a vowel appears to be important regardless of phrasal position. These results suggest a six-way distinction should be used when modeling vowels and the proposed model is illustrated in figure 6b.
Conclusion
While the data used for these experiments was sparse for certain phonemes, many of the phoneme models tested showed improvement when prosody was incorporated directly into the HMM definition. Analysis of experimental results led to two different proposals for the modeling of consonants and vowels. Verifying that the proposed models are indeed an improvement over standard phoneme modeling will be a goal of future work.
Figure 1. This figure contains a chart of the 38 different non-prosodically distinguished phonemes used for experimentation.
Figure 3. The sixteen experimental conditions.
Figure 5. a. An example Untied word level transcription. b. An example Untied phone level transcription for the After Vowel accent condition. The transcribed word is "wanted."
Table 1. The results of experiments for the Accented allophone sets. The "Merge" column lists phonemes with WA ≥ LL. The "Separate" column indicates phonemes where WA < LL. Due to the relatively small size of the data set, several phonemes are missing from the table.
Acknowledgements
This work could not have been completed without the help and guidance of Professor Mark Hasegawa-Johnson and Professor Jennifer Cole.
References
Cho, T. 2001. Effects of Prosody on Articulation in English. Ph.D. dissertation, UCLA.
De Jong, Kenneth. 1995. "The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation," JASA, vol. 97(1), pp. 491-504.
Edwards, Jan, Beckman, Mary, & Fletcher, Janet. 1991. "The articulatory kinematics of final lengthening," JASA, 89(1), pp. 369-382.
Fougeron, P. & Keating, P. 1997. "Articulatory strengthening at edges of prosodic domains," JASA, 101(6), pp. 3728-3740.
Ostendorf, M., Price, P. J., and Shattuck-Hufnagel, S. 1995. "The Boston University Radio News Corpus," Boston University Technical Report No. ECS-95-001, <http://ssli.ee.Washington.edu/papers/radionews-tech.ps>.
Silverman, K., Beckman, M., Pitrelli, J., Ostendorf, M., Wightman, C., Price, P., Pierrehumbert, J., and Hirschberg, J. 1992. "ToBI, a standard for labeling English," ICSLP, vol. 2, pp. 867-870.
Wightman, C. W., Shattuck-Hufnagel, S., Ostendorf, M., & Price, P. J. 1992. "Segmental durations in the vicinity of prosodic phrase boundaries," J. Acoust. Soc. Am., vol. 91, no. 3, pp. 1707-17. |
424,313 | Supervised Classification for Extracting Biomedical Events | We introduce a supervised approach for extracting bio-molecular events by using linguistic features that represent the contexts of the candidate event triggers and participants. We use Support Vector Machines as our learning algorithm and train separate models for event types that are described with a single theme participant, multiple theme participants, or a theme and a cause participant. We perform experiments with linear kernel and edit-distance based kernel and report our results on the BioNLP'09 Shared Task test data set. | [
645004
] | Supervised Classification for Extracting Biomedical Events
Association for Computational Linguistics, June 2009.
Arzucan Özgür
Department of EECS
Department of EECS and School of Information
University of Michigan Ann Arbor
Ann Arbor, MI 48109, USA
Dragomir R Radev radev@umich.edu
University of Michigan
48109Ann ArborMIUSA
Supervised Classification for Extracting Biomedical Events
Proceedings of the Workshop on BioNLP: Shared Task, Boulder, Colorado, Association for Computational Linguistics, June 2009.
We introduce a supervised approach for extracting bio-molecular events by using linguistic features that represent the contexts of the candidate event triggers and participants. We use Support Vector Machines as our learning algorithm and train separate models for event types that are described with a single theme participant, multiple theme participants, or a theme and a cause participant. We perform experiments with linear kernel and edit-distance based kernel and report our results on the BioNLP'09 Shared Task test data set.
Introduction
Most previous work on biomedical information extraction focuses on identifying relationships among biomedical entities (e.g. protein-protein interactions). Unlike relationships, which are in general characterized with a pair of entities, events can be characterized with event types and multiple entities in varying roles. The BioNLP'09 Shared Task addresses the extraction of bio-molecular events from the biomedical literature (Kim et al., 2009). We participated in the "Event Detection and Characterization" task (Task 1). The goal was to recognize the events concerning the given proteins by detecting the event triggers, determining the event types, and identifying the event participants.
In this study, we approach the problem as a supervised classification task. We group the event types into three general classes based on the number and types of participants that they involve. The first class includes the event types that are described with a single theme participant. The second class includes the event types that are described with one or more theme participants. The third class includes the events that are described with a theme and/or a cause participant. We learn support vector machine (SVM) models for each class of events to classify each candidate event trigger/participant pair as a real trigger/participant pair or not. We use various types of linguistic features such as lexical, positional, and dependency relation features that represent the contexts of the candidate trigger/participant pairs. The results that we submitted to the shared task were based on using a linear kernel function. In this paper, we also report our results based on using an edit-distance based kernel defined on the shortest dependency relation type paths between a candidate trigger/participant pair.
System Description
Event Type Classes
We grouped the nine event types targeted at the BioNLP'09 Shared Task into three general event classes based on the number and types of participants that they involve. Since the event types in each class are similar to each other based on the number and roles of participants that they involve and different from the event types in the other classes, we learned separate classification models for each class. We formulated the classification task as the classification of trigger/participant pairs. We extracted positive and negative training instances (trigger/participant pairs) from the training data for each class of events. We considered only the pairs that appear in the same sentence. We used the tokenized and sentence split abstracts provided by the shared task organizers¹.

Consider the sentence "The phosphorylation of TRAF2 inhibits binding to the CD40 cytoplasmic domain". This sentence describes three events: Event1, a phosphorylation of TRAF2 triggered by "phosphorylation"; Event2, a binding of TRAF2 and CD40 triggered by "binding"; and Event3, a regulation-type event triggered by "inhibits", whose cause is Event1 and whose theme is Event2. Event1 belongs to Class 1. The trigger/participant pair (phosphorylation, TRAF2) is a positive instance for Class 1. Event2 belongs to Class 2. It has two theme participants. The instances for Class 2 events are created by decomposing the events into trigger/theme pairs. The two positive instances extracted from the decomposition of Event2 are (binding, TRAF2) and (binding, CD40). Event3 belongs to Class 3. It consists of two semantically different participants, namely a theme and a cause. We trained two separate models for Class 3 events, i.e., one model to classify the themes and another model to classify the causes. Another distinguishing characteristic of Class 3 events is that a participant of an event can be a protein or an event. We represent the participants that are events with their corresponding event triggers. We decompose Event3 into its theme and cause and represent its cause Event1 with its trigger word "phosphorylation" and its theme Event2 with its trigger word "binding". As a result, (inhibits, binding) and (inhibits, phosphorylation) are included as positive instances in the Class 3 theme and Class 3 cause training sets, respectively.

Negative instances for Class 1 and Class 2 are created by including all the trigger/protein pairs which are not among the positive instances of that class. Negative instances for Class 3 theme and Class 3 cause are created by including all the trigger/protein and trigger1/trigger2 pairs which are not among the positive instances of that class. For example, (phosphorylation, CD40) is a negative instance for Class 1 and (inhibits, TRAF2) is a negative instance for Class 3 theme and Class 3 cause.
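As an illustration of this instance-creation step, the sketch below decomposes the example events of the sentence above into positive trigger/participant pairs and derives negatives as the remaining candidate pairs; the event encoding is a simplified assumption made only for this example, not the shared task file format.

```python
from itertools import product

# Simplified encoding of the three events in the example sentence.
# Class 3 participants that are themselves events are represented by
# their trigger words, as described above.
events = [
    {"cls": 1, "trigger": "phosphorylation", "themes": ["TRAF2"]},
    {"cls": 2, "trigger": "binding", "themes": ["TRAF2", "CD40"]},
    {"cls": 3, "trigger": "inhibits", "themes": ["binding"], "causes": ["phosphorylation"]},
]
proteins = ["TRAF2", "CD40"]
triggers = [e["trigger"] for e in events]

positives = {1: set(), 2: set(), "3-theme": set(), "3-cause": set()}
for e in events:
    if e["cls"] in (1, 2):
        positives[e["cls"]].update((e["trigger"], t) for t in e["themes"])
    else:
        positives["3-theme"].update((e["trigger"], t) for t in e["themes"])
        positives["3-cause"].update((e["trigger"], c) for c in e["causes"])

# Negatives: same-sentence candidate pairs that are not positive instances.
class1_triggers = [e["trigger"] for e in events if e["cls"] == 1]
neg_class1 = set(product(class1_triggers, proteins)) - positives[1]

class3_triggers = [e["trigger"] for e in events if e["cls"] == 3]
candidates_3 = set(product(class3_triggers, proteins + triggers)) - {(t, t) for t in triggers}
neg_3_theme = candidates_3 - positives["3-theme"]

print(("phosphorylation", "CD40") in neg_class1)  # True
print(("inhibits", "TRAF2") in neg_3_theme)       # True
```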
Feature Extraction
Lexical and Part-of-Speech Features
We used the candidate trigger and its part-of-speech, which was obtained by using the Stanford Parser, as features, based on our observation that different candidate triggers might have different likelihoods of being a real trigger for a certain event. For example, "transcription" is a trigger for the Transcription event 277 times in the training set and has not been used as a trigger for other types of events. On the other hand, "concentration" is used only once as a trigger for a Transcription event and three times as a trigger for Regulation events.
Positional Features
We used two features to represent the relative position of the participant with regard to the trigger in the sentence. The first feature has two values, namely "before" (the participant appears before the trigger) or "after" (the participant appears after the trigger). The second feature encodes the distance between the trigger and the participant. Distance is measured as the number of tokens between the trigger and the participant. Our intuition is that, if a candidate trigger and participant are far away from each other, it is less likely that they characterize an event.
Dependency Relation Features
A dependency parse tree captures the semantic predicate-argument dependencies among the words of a sentence. Dependency tree paths between protein pairs have successfully been used to identify protein interactions (Bunescu and Mooney, 2007;Erkan et al., 2007). In this paper, we use the dependency paths to extract events. For a given trigger/participant pair, we extract the shortest path from the trigger to the participant, from the dependency parse of the sentence. We use the McClosky-Charniak parses which are converted to the Stanford Typed Dependencies format and provided to the participants by the shared task organizers. Previous approaches use both the words and the dependency relation types to represent the paths (Bunescu and Mooney, 2007;Erkan et al., 2007). Consider the dependency tree in Figure 1. The path from "phosphorylation" to "CD40" is "nsubj inhibits acomp binding prep to domain num". Due to the large number of possible words, using the words on the paths might lead to data sparsity problems and to poor generalization. Suppose we have a sentence with similar semantics, where the synonym word "prevents" is used instead of "inhibits". If we use the words on the path to represent the path feature, we end up with two different paths for the two sentences that have similar semantics. Therefore, in this study we use only the dependency relation types among the words to represent the paths. For example, the path feature extracted for the (phosphorylation, CD40) negative trigger/participant pair is "nsubj acomp prep to num" and the path feature extracted for the (phosphorylation, TRAF2) positive trigger/participant pair is "prep of".
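A sketch of how the relation-type path can be read off a typed dependency graph is given below; the dependency edges are a hand-constructed approximation of the parse in Figure 1 (an assumption for illustration), and networkx is used only for the shortest-path search.

```python
import networkx as nx

# Hand-built, approximate typed dependencies for the example sentence;
# each edge carries its (collapsed) Stanford dependency relation label.
deps = [
    ("inhibits", "phosphorylation", "nsubj"),
    ("phosphorylation", "TRAF2", "prep_of"),
    ("inhibits", "binding", "acomp"),
    ("binding", "domain", "prep_to"),
    ("domain", "CD40", "num"),
]
g = nx.Graph()
for head, dep, rel in deps:
    g.add_edge(head, dep, rel=rel)

def relation_path(graph, source, target):
    """Return only the dependency relation types along the shortest path,
    i.e. the word-independent path representation used as a feature."""
    nodes = nx.shortest_path(graph, source, target)
    return [graph[u][v]["rel"] for u, v in zip(nodes, nodes[1:])]

print(relation_path(g, "phosphorylation", "TRAF2"))  # ['prep_of']
print(relation_path(g, "phosphorylation", "CD40"))   # ['nsubj', 'acomp', 'prep_to', 'num']
```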
Classification
We used the SVMlight library (Joachims, 1999) with two different kernel functions and feature sets for learning the classification models. Our first approach is based on using a linear SVM with the features described in Section 2.2. In this approach the path feature is used as a nominal feature. Our second approach is based on integrating into SVM a kernel function based on the word-based edit distance between the dependency relation paths, where each dependency relation type on the path is treated as a word. For example, the word-based edit distance between the paths "prep of" and "prep of prep with" is 1, since 1 insertion operation (i.e., inserting "prep with" into the first path) is sufficient to transform the first path into the second one. The edit-distance based similarity between two paths p_i and p_j and the corresponding kernel function are defined as follows (Erkan et al., 2007).
edit_sim(p_i, p_j) = exp(-γ · edit_distance(p_i, p_j))    (1)
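The sketch below spells out the word-based edit distance over relation-type paths and the similarity in Equation (1); it is an independent re-implementation for illustration, not the actual SVMlight integration used in the experiments.

```python
import math

def edit_distance(a, b):
    """Word-based Levenshtein distance: each dependency relation type
    on a path is treated as a single symbol."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1,                            # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))   # substitution
    return d[len(a)][len(b)]

def edit_sim(p_i, p_j, gamma=1.0):
    """edit_sim(p_i, p_j) = exp(-gamma * edit_distance(p_i, p_j)), Eq. (1)."""
    return math.exp(-gamma * edit_distance(p_i, p_j))

p1 = ["prep_of"]
p2 = ["prep_of", "prep_with"]
print(edit_distance(p1, p2))        # 1, as in the example above
print(edit_sim(p1, p2, gamma=0.5))
```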
Experimental Results
The data provided for the shared task is prepared from the GENIA corpus (Kim et al., 2008). We used the training and the development sets for training. The candidate triggers are detected by using a dictionary based approach, where the dictionary is extracted from the training set. We filtered out the noisy trigger candidates such as "with", "+", ":", and "-", which are rarely used as real triggers and commonly used in other contexts. The candidate trigger/participant pairs are classified by using the classifiers learned for Class 1, Class 2, and/or Class 3 depending on whether the candidate trigger matched one of the triggers in these classes. The SVM score is used to disambiguate the event types, if a candidate trigger matches a trigger in more than one of the event classes. A trigger which is ambiguous among the event types in the same class is assigned to the event type for which it is most frequently used as a trigger.
The results that we submitted to the shared task were obtained by using the linear SVM approach with the set of features described in Section 2.2. After submitting the results, we noticed that we made an error in pre-processing the data set. While aligning the provided dependency parses with the sentences, we incorrectly assumed that all the sentences had dependency parses and ended up using the wrong dependency parses for most of the sentences. The overall performance scores for our official submission are 30.42% recall, 14.11% precision, and 19.28% F-measure. The results obtained after correcting the error are reported in Table 1. Correcting the error significantly improved the performance of the system. Table 2 shows the results obtained by using SVM with the dependency path edit kernel. The two SVM models achieve similar performances. The performance for the regulation events is considerably lower, since errors in identifying the events that serve as participants of a regulation event carry over to the regulation event itself. The performances for the events which have multiple participants, i.e., binding and regulation events, are lower compared to the events with a single participant. The performance is higher when computed by decomposing the events (49.00 and 31.82 F-measure for binding and regulation events, respectively). This suggests that even when participants of events are identified correctly, there is a significant amount of error in composing the events.
Conclusion
We described a supervised approach to extract biomolecular events. We grouped the event types into three general classes based on the number and types of participants that they can involve and learned separate SVM models for each class. We used various types of linguistic features that represent the context of the candidate event trigger/participant pairs. We achieved an F-measure of 39.83% on the shared task test data. Error analysis suggests that improving the approach of event composition for types of events with multiple participants and improving the strategy for detecting and disambiguating triggers can enhance the performance of the system.
Figure 1: The dependency tree of the sentence "The phosphorylation of TRAF2 inhibits binding to the CD40 cytoplasmic domain."
Class 1 Events: Events that involve a single theme participant (Gene expression, Transcription, Protein catabolism, Localization, and Phosphorylation event types).
Class 2 Events: Events that can involve one or more theme participants (Binding event type).
Class 3 Events: Events that can be described with a theme and/or a cause participant (Regulation, Positive regulation, and Negative regulation event types). Unlike Class 1 and Class 2 events, where the participants are proteins, the participants of Class 3 events can be proteins or events.
Table 1: Approximate span & recursive matching results using linear SVM with the set of features described in Section 2.2 (after correcting the error in pre-processing the data set).

Event Type            Recall   Precision   F-measure
Localization          41.95    60.83       49.66
Binding               31.41    34.94       33.08
Gene expression       61.36    69.00       64.96
Transcription         37.23    30.72       33.66
Protein catabolism    64.29    64.29       64.29
Phosphorylation       68.15    80.70       73.90
Event Total           50.82    56.80       53.64
Regulation            15.12    19.82       17.15
Positive regulation   24.21    33.33       28.05
Negative regulation   21.64    32.93       26.11
Regulation Total      22.02    30.72       25.65
All Total             35.86    44.69       39.79
Table 2: Approximate span & recursive matching results using SVM with the dependency relation path edit kernel.
http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/SharedTask/tools.html
Acknowledgments
This work was supported in part by the NIH Grant U54 DA021519.
Bunescu, R. C. and Mooney, R. J. 2007. Text Mining and Natural Language Processing, Chapter: Extracting Relations from Text: From Word Sequences to Dependency Paths, pages 29-44. Springer.
Erkan, Güneş, Özgür, Arzucan, and Radev, Dragomir R. 2007. Semi-supervised classification for extracting protein interaction sentences using dependency parsing. In Proceedings of EMNLP, pages 228-237.
Joachims, T. 1999. Advances in Kernel Methods - Support Vector Learning, Chapter: Making Large-Scale SVM Learning Practical. MIT Press.
Kim, Jin-Dong, Ohta, Tomoko, and Tsujii, Jun'ichi. 2008. Corpus annotation for mining biomedical events from literature. BMC Bioinformatics, 9(1).
Kim, Jin-Dong, Ohta, Tomoko, Pyysalo, Sampo, Kano, Yoshinobu, and Tsujii, Jun'ichi. 2009. Overview of BioNLP'09 Shared Task on Event Extraction. In Proceedings of the Natural Language Processing in Biomedicine (BioNLP) NAACL 2009 Workshop. To appear. |
220,046,446 | Multimodal Quality Estimation for Machine Translation | We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE. We compare various multimodality integration and fusion strategies. For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality. | [
773282,
52967399,
67856167,
53244910,
52011315,
15132118,
53249630
] | Multimodal Quality Estimation for Machine Translation
Association for Computational Linguistics, July 5-10, 2020
Shu Okabe shurokabe@gmail.com
Department of Computing
Imperial College London
UK
Frédéric Blain f.blain@sheffield.ac.uk
Department of Computer Science
University of Sheffield
UK
Lucia Specia l.specia@imperial.ac.uk
Department of Computing
Imperial College London
UK
Department of Computer Science
University of Sheffield
UK
Multimodal Quality Estimation for Machine Translation
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Association for Computational Linguistics, July 5-10, 2020, page 1233
We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE. We compare various multimodality integration and fusion strategies. For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality.
Introduction
Quality Estimation (QE) for Machine Translation (MT) (Blatz et al., 2004;Specia et al., 2009) aims to predict the quality of a machine-translated text without using reference translations. It estimates a label (a category, such as 'good' or 'bad', or a numerical score) for a translation, given text in a source language and its machine translation in a target language (Specia et al., 2018b). QE can operate at different linguistic levels, including sentence and document levels. Sentence-level QE estimates the translation quality of a whole sentence, while document-level QE predicts the translation quality of an entire document, even though in practice in literature the documents have been limited to a small set of 3-5 sentences (Specia et al., 2018b).
Existing work has only explored textual context. We posit that to judge (or estimate) the quality of a translated text, additional context is paramount. Sentences or short documents taken out of context may lack information on the correct translation of certain (esp. ambiguous) constructions. Inspired by recent work on multimodal machine learning (Baltrusaitis et al., 2019;Barrault et al., 2018), we propose to explore the visual modality in addition to the text modality for this task.
* Two authors contributed equally.

Table 1: Example of incorrectly machine-translated text: the word shorts is used to indicate short trousers, but gets translated in French as court, the adjective short. Here multimodality could help to detect the error (extracted from the Amazon Reviews Dataset of McAuley et al., 2015).
Source (EN): Danskin Women's Bermuda Shorts
MT (FR): Bermuda Danskin féminines court

Multimodality through vision offers interesting opportunities for real-life data since texts are increasingly accompanied with visual elements such as images or videos, especially in social media but also in domains such as e-commerce. Multimodality has not yet been applied to QE. Table 1 shows an example from our e-commerce dataset in which multimodality could help to improve QE. Here, the English noun shorts is translated by the adjective court (for the adjective short) in French, which is a possible translation out of context. However, as the corresponding product image shows, this product is an item of clothing, and thus the machine translation is incorrect. External information can hence help identify mismatches between translations which are difficult to find within the text. Progress in QE is mostly benchmarked as part of the Conference on Machine Translation (WMT) Shared Task on QE. This paper is based on data from the WMT'18 edition's Task 4 - document-level QE. This Task 4 aims to predict a translation quality score for short documents based on the number and the severity of translation errors at the word level (Specia et al., 2018a). This data was chosen as it is the only one for which meta information (images in this case) is available. We extend this dataset by computing scores for each sentence for a sentence-level prediction task. We consider both feature-based and neural state-of-the-art models for QE. Having these as our starting points, we propose different ways to integrate the visual modality.
The main contributions of this paper are as follows: (i) we introduce the task of Multimodal QE (MQE) for MT as an attempt to improve QE by using external sources of information, namely images; (ii) we propose several ways of incorporating visual information in neural-based and featurebased QE architectures; and (iii) we achieve the state-of-the-art performance for such architectures in document and sentence-level QE.
Experimental Settings
QE Frameworks and Models
We explore feature-based and neural-based models from two open-source frameworks:

QuEst++: QuEst++ (Specia et al., 2015) is a feature-based QE framework composed of two modules: a feature extractor module, to extract the relevant QE features from both the source sentences and their translations, and a machine learning module. We only use this framework for our experiments on document-level QE, since it does not perform well enough for sentence-level prediction. We use the same model (Support Vector Regression), hyperparameters and feature settings as the baseline model for the document-level QE task at WMT'18.
deepQuest: deepQuest (Ive et al., 2018) is a neural-based framework that provides state-of-theart models for multi-level QE. We use the BiRNN model, a light-weight architecture which can be trained at either sentence or document level.
The BiRNN model uses an encoder-decoder architecture: it takes on its input both the source sentence and its translation which are encoded separately by two independent bi-directional Recurrent Neural Networks (RNNs). The two resulting sentence representations are then concatenated as a weighted sum of their word vectors, generated by an attention mechanism. For sentence-level predictions, the weighted representation of the two input sentences is passed through a dense layer with sigmoid activation to generate the quality estimates. For document-level predictions, the final representation of a document is generated by a second attention mechanism, as the weighted sum of the weighted sentence-level representations of all the sentences within the document. The resulting document-level representation is then passed through a dense layer with sigmoid activation to generate the quality estimates.
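To make the architecture concrete, below is a minimal PyTorch sketch of a BiRNN-style sentence-level QE model in the spirit of the description above. The layer sizes, the use of GRUs, the truncation-based alignment of the two sequences, and all module names are illustrative assumptions, not deepQuest's actual implementation.

```python
import torch
import torch.nn as nn

class BiRNNQE(nn.Module):
    """Sketch of a BiRNN sentence-level QE model (illustrative only)."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hid_dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Two independent bi-directional RNN encoders for source and MT output.
        self.src_rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.tgt_rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(4 * hid_dim, 1)  # attention over concatenated word states
        self.out = nn.Linear(4 * hid_dim, 1)   # sigmoid output for the quality score

    def forward(self, src_ids, tgt_ids):
        hs, _ = self.src_rnn(self.src_emb(src_ids))  # (B, Ls, 2*hid)
        ht, _ = self.tgt_rnn(self.tgt_emb(tgt_ids))  # (B, Lt, 2*hid)
        # Crude alignment by truncation, then concatenation of word states.
        L = min(hs.size(1), ht.size(1))
        h = torch.cat([hs[:, :L], ht[:, :L]], dim=-1)  # (B, L, 4*hid)
        # Attention weights over positions; weighted sum gives the sentence vector.
        a = torch.softmax(self.attn(h), dim=1)
        sent = (a * h).sum(dim=1)
        return torch.sigmoid(self.out(sent)).squeeze(-1)

model = BiRNNQE(src_vocab=1000, tgt_vocab=1000)
score = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 9)))
print(score.shape)  # torch.Size([2])
```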
Additionally, we propose and experiment with BERT-BiRNN, a variant of the BiRNN model. Rather than training the token embeddings with the task at hand, we use large-scale pre-trained token-level representations from the multilingual cased base BERT model (Devlin et al., 2019). During training, the BERT model is fine-tuned by unfreezing the weights of the last four hidden layers along with the token embedding layer. This performs comparably to the state-of-the-art predictorestimator neural model in Kepler et al. (2019).
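One possible way to reproduce the partial fine-tuning described above uses the HuggingFace transformers library; only the idea of unfreezing the last four hidden layers and the token embedding layer comes from the text, while the model name string and the layer indexing below are assumptions.

```python
import torch
from transformers import BertModel

# Multilingual cased base BERT, as in the description above.
bert = BertModel.from_pretrained("bert-base-multilingual-cased")

# Freeze everything first, then unfreeze the token embeddings
# and the last four encoder layers so they are fine-tuned with the QE task.
for p in bert.parameters():
    p.requires_grad = False
for p in bert.embeddings.word_embeddings.parameters():
    p.requires_grad = True
for layer in bert.encoder.layer[-4:]:
    for p in layer.parameters():
        p.requires_grad = True

trainable = sum(p.numel() for p in bert.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```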
Data
WMT'18 QE Task 4 data: This dataset was created for the document-level track. It contains a sample of products from the Amazon Reviews Dataset (McAuley et al., 2015) taken from the Sports & Outdoors category. 'Documents' consist of the English product title and its description, its French machine translation and a numerical score to predict, namely the MQM score (Multidimensional Quality Metrics) (Lommel et al., 2014). This score is computed by annotating and weighting each word-level translation error according to its severity (minor, major and critical):
$$\text{MQM Score} = 1 - \frac{n_{min} + 5\,n_{maj} + 10\,n_{cri}}{n}$$
where n is the total number of words, and n i is the number of errors annotated with the corresponding error severity. Additionally, the dataset provides one picture per product, as well as pre-extracted visual features, as we discuss below. For the sentence-level QE task, each document of the dataset was split into sentences (lines), where every sentence has its corresponding MQM score computed in the same way as for the document. We note that this variant is different from the official sentence-level track at WMT since for that task visual information is not available.
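As a small worked example, the MQM score above can be computed from the error counts as follows; the counts used here are made up purely for illustration.

```python
def mqm_score(n_words, n_minor, n_major, n_critical):
    """MQM score = 1 - (n_min + 5*n_maj + 10*n_cri) / n, as defined above."""
    return 1.0 - (n_minor + 5 * n_major + 10 * n_critical) / n_words

# A document of 50 words with 2 minor errors, 1 major error and 0 critical errors:
print(mqm_score(50, 2, 1, 0))  # 0.86
```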
Text features: For the feature-based approach, we extract the same 15 features as those for the baseline of WMT'18 at document level. For the neural-based approaches, text features are either the learned word embeddings (BiRNN) or pre-trained word embeddings (BERT-BiRNN).
Visual features: The visual features are pre-extracted vectors with 4,096 dimensions, also provided in the Amazon Reviews Dataset (McAuley et al., 2015). The method to obtain the features uses a deep convolutional neural network which has been pre-trained on the ImageNet dataset for image classification (Deng et al., 2009). The extracted visual features represent a vectorial summary of the image taken from the last pooled layer of the network. He and McAuley (2016) have shown that this representation contains useful visual features for a number of tasks.
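The experiments below use the pre-extracted vectors shipped with the dataset, but a comparable 4,096-dimensional representation could be obtained roughly as follows with torchvision. The choice of VGG-16 and of its first fully connected layer is an assumption made for illustration, not the network actually used by He and McAuley (2016).

```python
import torch
from torchvision import models, transforms
from PIL import Image

# VGG-16 pre-trained on ImageNet; its first fully connected layer outputs 4096 dims.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
feature_head = torch.nn.Sequential(*list(vgg.classifier.children())[:2])  # fc6 + ReLU

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_features(path: str) -> torch.Tensor:
    """Return a 4096-d feature vector for one product image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pooled = vgg.avgpool(vgg.features(x)).flatten(1)
        return feature_head(pooled).squeeze(0)  # shape: (4096,)
```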
Multimodal QE
We propose different ways to integrate visual features in our two monomodal QE approaches (Sections 3.1 and 3.2). We compare each proposed model with its monomodal QE counterpart as baseline, both using the same hyperparameters.
Multimodal feature-based QE
The feature-based textual features contain 15 numerical scores, while the visual feature vector contains 4,096 dimensions. To avoid over-weighting the visual features, we reduce their dimensionality using Principal Component Analysis (PCA). We consider up to 15 principal components in order to keep a balance between the visual features and the 15 text features from QuEst++. We choose the final number of principal components to keep according to the explained variance with the PCA, so this number is treated as a hyperparameter. After analysing the explained variance for up to 15 kept principal components (see Figure 4 in Appendix), we selected six numbers of principal components to train QE models with (1, 2, 3, 5, 10, and 15). As fusion strategy, we concatenate the two feature vectors.
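A minimal sketch of this fusion with scikit-learn follows; only the idea (reduce the 4,096-dimensional image vector with PCA and concatenate it with the 15 QuEst++ features) comes from the text above, while the toy data and the choice of two components are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
text_feats = rng.normal(size=(500, 15))      # 15 QuEst++ baseline features per document
visual_feats = rng.normal(size=(500, 4096))  # pre-extracted image features

# Reduce the visual vector to a handful of principal components
# (here 2, one of the settings explored), then concatenate with the text features.
pca = PCA(n_components=2).fit(visual_feats)
fused = np.hstack([text_feats, pca.transform(visual_feats)])
print(fused.shape)  # (500, 17)
```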
Multimodal neural-based QE
Multimodality is achieved with two changes in our monomodal models: multimodality integration (where to integrate the visual features in the architecture), and fusion strategy (how to fuse the visual and textual features). We propose the following places to integrate the visual feature vector into the BiRNN architecture:
• embed -the visual feature vector is used after the word embedding layer;
• annot - the visual feature vector is used after the encoding of the two input sentences by the two bi-directional RNNs;
• last - the visual feature vector is used just before the last layer.
To fuse the visual and text features, we reduce the size of the visual features using a dense layer with a ReLu activation and reshape it to match the shape of the text-feature vector. As fusion strategies between visual and textual feature vectors, we propose the following:
• conc -concatenation with both source and target word representations for the 'embed' strategy; concatenation with the text features for the 'last' strategy;
• mult -element-wise multiplication for the target word representations and concatenation for the source word representations for the 'embed' strategy; element-wise multiplication with the text features for the 'annot' and 'last' strategies;
• mult2 - element-wise multiplication for both source and target word representations (exclusive to the 'embed' model).

Figure 1 presents the high-level architecture of the document-level BiRNN model, with the various multimodality integration and fusion approaches.
For example, in the 'embed' setting, the visual features are fused with each word representation from the embedding layers. Since this strategy modifies the embedding for each word, it can be expected to have a bigger impact on the result.
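The fusion step itself can be sketched as follows; the tensor shapes and the projection size are assumptions chosen only to make the 'conc' and 'mult' variants concrete.

```python
import torch
import torch.nn as nn

emb_dim, batch, seq_len = 64, 2, 10
reduce_vis = nn.Sequential(nn.Linear(4096, emb_dim), nn.ReLU())

word_states = torch.randn(batch, seq_len, emb_dim)  # target word representations
visual = torch.randn(batch, 4096)                   # one image vector per segment

v = reduce_vis(visual).unsqueeze(1)                 # (batch, 1, emb_dim)

# 'conc': concatenate the (broadcast) visual vector with every word state.
fused_conc = torch.cat([word_states, v.expand(-1, seq_len, -1)], dim=-1)
# 'mult': element-wise multiplication with every word state.
fused_mult = word_states * v

print(fused_conc.shape, fused_mult.shape)  # (2, 10, 128) (2, 10, 64)
```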
Results
We use the standard training, development and test datasets from the WMT'18 Task 4 track. For feature-based systems, we follow the built-in cross-validation in QuEst++, and train a single model with the hyperparameters found by cross-validation. For neural-based models, we use early stopping with a patience of 10 to avoid over-fitting, and all reported figures are averaged over 5 runs corresponding to different seeds.
We follow the evaluation method of the WMT QE tasks: Pearson's r correlation as the main metric (Graham, 2015), Mean-Absolute Error (MAE) and Root-Mean-Squared Error (RMSE) as secondary metrics. For statistical significance on Pearson's r, we compute Williams test (Williams, 1959) as suggested by Graham and Baldwin (2014).
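For reference, the three evaluation metrics can be computed as below (Williams' test itself is omitted here); the toy arrays are placeholders, not real system outputs.

```python
import numpy as np
from scipy.stats import pearsonr

gold = np.array([0.90, 0.45, -0.10, 0.70, 0.25])
pred = np.array([0.80, 0.50, 0.05, 0.60, 0.20])

pearson = pearsonr(gold, pred)[0]
mae = np.mean(np.abs(gold - pred))
rmse = np.sqrt(np.mean((gold - pred) ** 2))
print(f"Pearson r = {pearson:.3f}, MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```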
For all neural-based models, we experiment with all three integration strategies ('embed', 'annot' and 'last') and all three fusion strategies ('conc', 'mult' and 'mult2') presented in Section 3.2. This leads to 6 multimodal models for each of BiRNN and BERT-BiRNN. In Tables 2 and 4, as well as in Figures 2 and 3, we report the top three performing models. We refer the reader to the Appendix for the full set of results.
Sentence-level MQE
The first part of Table 2 presents the results for sentence-level multimodal QE with BiRNN. The best model is BiRNN+Vis-embed-mult2, achieving a Pearson's r of 0.535, significantly outperforming the baseline (p-value<0.01). Visual features can, therefore, help to improve the performance of sentence-level neural-based QE systems significantly. Figure 2 presents the result of Williams significance test for BiRNN model variants. It is a correlation matrix that can be read as follows: the value in cell (i, j) is the p-value of Williams test for the change in performance of the model at row i compared to the model at column j (Graham, 2015).
Table 2: Pearson correlation at sentence-level on the WMT'18 dataset. We report the monomodal models (BiRNN, BERT-BiRNN) and their respective top-3 best performing multimodal variants (+Vis). We refer the reader to the Appendix for the full set of results.

With the pre-trained token-level representations from BERT (second half of Table 2), the best model is BERT-BiRNN+Vis-annot-mult, achieving a Pearson's r of 0.602. This shows that even when using better word representations, the visual features help to get further (albeit modest) improvements. Table 3 shows an example of predicted scores at the sentence level for the baseline model (BiRNN) and for the best multimodal BiRNN model (BiRNN+Vis-embed-mult2). The multimodal model has predicted a score (-0.002) closer to the gold MQM score (0.167) than the baseline model (-0.248). The French translation is poor (cumulative-split is, for instance, not translated), as the low gold MQM score shows. However, the (main) word stopwatch is correctly translated as chronomètre in French. Since the associated picture indeed represents a stopwatch, one explanation for this improvement could be that the multimodal model may have rewarded this correct and important part of the translation.
Table 3: Example of performance of sentence-level multimodal QE (extracted from the Amazon Reviews Dataset of McAuley et al., 2015). Compared to the baseline prediction (BiRNN), the prediction from the best multimodal model (BiRNN+Vis-embed-mult2) is closer to the gold MQM score. This could be because the word stopwatch is correctly translated as chronomètre in French, and the additional visual feature confirms it. This could lead to an increase in the predicted score to reward the correct part, despite the poor translation.
Source (EN): The A601X stopwatch features cumulative-split timing.
MT (FR): Le chronomètre A601X dispose calendrier cumulative-split.
gold MQM score: 0.167 | BiRNN: -0.248 | BiRNN+Vis-embed-mult2: -0.002

Document-level MQE

Table 4 presents the results for the document-level feature-based and BiRNN neural QE models.1 The first section shows the official models from the WMT'18 QE Task 4 report (Specia et al., 2018a). The neural-based approach SHEF-PT is the winning submission, outperforming another neural-based approach (SHEF-mtl-bRNN). For our BiRNN models (second section), BiRNN+Vis-embed-conc performs only slightly better than the monomodal baseline. For the feature-based models (third section), on the other hand, the baseline monomodal QuEst++ is outperformed by various multimodal variants by a large margin, with the one with two principal components (QuEst+Vis-2) performing the best. The more PCA components kept, the worse the results (see Appendix for the full set of results).

Table 4: Pearson correlation at document-level on the WMT'18 dataset: state-of-the-art models as reported by task organisers, our BiRNN model and its multimodal versions, and feature-based QuEst++ and its multimodal versions.

Figure 3 shows the Williams significance test for document-level QuEst++ on the WMT'18 dataset. As we can see, the QuEst+Vis-2 model outperforms the baseline with p-value = 0.002. Thus, visual features significantly improve the performance of feature-based QE systems compared to their monomodal QE counterparts.

1 The BERT-BiRNN models performed very poorly at this level and more research on why is left for future work.
Conclusions
We introduced Multimodal Quality Estimation for Machine Translation, where an external modality -visual information -is incorporated to featurebased and neural-based QE approaches, on sentence and document levels. The use of visual features extracted from images has led to significant improvements in the results of state-of-the-art QE approaches, especially at sentence level.
The version of deepQuest for multimodal QE and scripts to convert document into sentence-level data are available on https://github.com/sheffieldnlp/deepQuest.

A Appendix

PCA analysis: Figure 4 shows an almost linear relationship between the number of principal components and the explained variance of the PCA (see Section 3.1), i.e. the higher the number of principal components, the larger the explained variance. Therefore, we experimented with various numbers of components up to 15 (1, 2, 3, 5, 10, and 15) on the development set to find the best settings for quality prediction.

Complete results: Tables 5 and 6 present the full set of results of our experiments on document and sentence-level multimodal QE on our main test set, the WMT'18 test set. These are a super-set of the results presented in the main paper but include all combinations of multimodality integration and fusion strategies for sentence-level prediction, as well as different numbers of principal components kept for document-level QuEst prediction models.

Additional test set: Tables 7 and 8 present the full set of results of our experiments on the WMT'19 Task 2 test set on document and sentence-level multimodal QE, respectively. This was the follow-up edition of the WMT'18 Task 4, where the same training set is used, but a new test set is released.
For document-level, we observe nuanced results with more modest benefits in using visual features, regardless of the integration method or fusion strategy.
For sentence-level, we observe on the one hand quite significant improvements, with a gain of almost 8 points in Pearson's r over BiRNN, our monomodal baseline without pre-trained word embeddings. It is interesting to note that almost all multimodal variants achieve better performance compared to the monomodal BiRNN baseline, with a peak when the visual features are fused with the word embedding representations by element-wise multiplication. On the other hand, we do not observe any gain in using visual features on the WMT'19 test set compared to our monomodal baseline with pre-trained word embeddings (BERT-BiRNN). Note that the BERT-BiRNN baseline model already performs very well. According to the task organisers, the mean MQM value on the WMT'19 test set is higher than on the WMT'18 test set, but actually closer to the training data (Fonseca et al., 2019). We therefore hypothesise here that the highly dimensional and contextualised word-level representations from BERT are already enough and do not benefit from the extra information provided by the visual features.
Figure 1: High-level representation of the document-level BiRNN architecture which illustrates how the visual features are integrated into the model. The three different strategies are 'embed', 'annot' and 'last'.

Figure 2: Williams significance test of top models for sentence-level BiRNN on the WMT'18 dataset. Here, BERT, ann-mul and emb-mul2 correspond to the BERT-BiRNN, the BERT-BiRNN+Vis-annot-mult and the BiRNN+Vis-embed-mult2 models of Table 2.

Figure 3: Williams significance test of top models for document-level QuEst++ on the WMT'18 dataset.

Figure 4: Explained variance of 15 components (cumulative sum) for the training set of the WMT'18 Task data at document level.
Table 5: Document-level results for BiRNN and QuEst++ on the WMT'18 dataset, with and without visual features.

Table 6: Sentence-level results for BiRNN and BERT-BiRNN on the WMT'18 Task 4 dataset, with and without visual features.
Model | Pearson | MAE | RMSE
BiRNN | 0.504 | 0.539 | 0.754
+Vis-last-conc | 0.483 | 0.531 | 0.746
+Vis-last-mult | 0.462 | 0.511 | 0.733
+Vis-annot-mult | 0.460 | 0.521 | 0.741
+Vis-embed-conc | 0.467 | 0.541 | 0.765
+Vis-embed-mult | 0.473 | 0.534 | 0.753
+Vis-embed-mult2 | 0.535 | 0.569 | 0.792
BERT-BiRNN | 0.590 | 0.455 | 0.659
+Vis-last-conc | 0.360 | 0.993 | 1.252
+Vis-last-mult | 0.529 | 0.520 | 0.744
+Vis-annot-mult | 0.602 | 0.454 | 0.654
+Vis-embed-conc | 0.576 | 0.474 | 0.694
+Vis-embed-mult | 0.598 | 0.486 | 0.686
+Vis-embed-mult2 | 0.570 | 0.573 | 0.770
Table 7: Document-level results for BiRNN on the WMT'19 Task 2 test set, with and without visual features.

Table 8: Sentence-level results for BiRNN and BERT-BiRNN on the WMT'19 Task 2 test dataset, with and without visual features.
Model | Pearson | MAE | RMSE
BiRNN | 0.485 | 0.616 | 0.922
+Vis-last-conc | 0.492 | 0.602 | 0.908
+Vis-last-mult | 0.520 | 0.584 | 0.895
+Vis-annot-mult | 0.508 | 0.591 | 0.901
+Vis-embed-conc | 0.470 | 0.614 | 0.927
+Vis-embed-mult | 0.474 | 0.613 | 0.927
+Vis-embed-mult2 | 0.563 | 0.609 | 0.944
BERT-BiRNN | 0.652 | 0.556 | 0.842
+Vis-last-mult | 0.605 | 0.568 | 0.854
+Vis-annot-mult | 0.596 | 0.565 | 0.845
+Vis-embed-conc | 0.594 | 0.571 | 0.853
+Vis-embed-mult | 0.596 | 0.560 | 0.827
+Vis-embed-mult2 | 0.590 | 0.581 | 0.853
Acknowledgments
This work was supported by funding from both the Bergamot project (EU H2020 Grant No. 825303) and the MultiMT project (EU H2020 ERC Starting Grant No. 678017).
Tadas Baltrusaitis, Chaitanya Ahuja, and Louis-Philippe Morency. 2019. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):423-443.
Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the third shared task on multimodal machine translation. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 308-327, Belgium, Brussels. Association for Computational Linguistics.
John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva, Switzerland.
J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation, pages 230-239, Florence, Italy. Association for Computational Linguistics.
Yvette Graham. 2015. Improving evaluation of machine translation quality estimation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1804-1813, Beijing, China. Association for Computational Linguistics.
Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172-176, Doha, Qatar. Association for Computational Linguistics.
Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pages 507-517, Geneva, Switzerland. International World Wide Web Conferences Steering Committee.
Julia Ive, Frédéric Blain, and Lucia Specia. 2018. deepQuest: A framework for neural-based quality estimation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3146-3157.
Fábio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, and André F. T. Martins. 2019. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117-122, Florence, Italy. Association for Computational Linguistics.
Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. Multidimensional Quality Metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica: tecnologies de la traducció, 0(12):455-463.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 43-52, New York, NY, USA. ACM.
Lucia Specia, Frédéric Blain, Varvara Logacheva, Ramón Astudillo, and André F. T. Martins. 2018a. Findings of the WMT 2018 shared task on quality estimation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 689-709, Belgium, Brussels. Association for Computational Linguistics.
Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In Proceedings of the 13th Annual Conference of the European Association for Machine Translation, pages 28-35, Barcelona, Spain.
Lucia Specia, Gustavo Paetzold, and Carolina Scarton. 2015. Multi-level translation quality prediction with QuEst++. In Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 115-120, Beijing, China. Association for Computational Linguistics and The Asian Federation of Natural Language Processing.
Lucia Specia, Carolina Scarton, and Gustavo Henrique Paetzold. 2018b. Quality Estimation for Machine Translation. Synthesis Lectures on Human Language Technologies. Morgan & Claypool.
Evan J. Williams. 1959. Regression Analysis, volume 14. Wiley, New York, USA. |
220,048,080 | ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding | The goal of Knowledge graph embedding (KGE) is to learn how to represent the lowdimensional vectors for entities and relations based on the observed triples. The conventional shallow models are limited to their expressiveness. ConvE (Dettmers et al., 2018) takes advantage of CNN and improves the expressive power with parameter efficient operators by increasing the interactions between head and relation embeddings. However, there is no structural information in the embedding space of ConvE, and the performance is still limited by the number of interactions. The recent KBGAT (Nathani et al., 2019) provides another way to learn embeddings by adaptively utilizing structural information. In this paper, we take the benefits of ConvE and KBGAT together and propose a Relation-aware Inception network with joint local-global structural information for knowledge graph Embedding (ReInceptionE). Specifically, we first explore the Inception network to learn query embedding, which aims to further increase the interactions between head and relation embeddings. Then, we propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information. Experimental results on both WN18RR and FB15k-237 datasets demonstrate that ReIncep-tionE achieves competitive performance compared with state-of-the-art methods. | [
174800352,
67855617,
11202498,
207852450,
3292002,
173990725,
196206886,
2768038,
174797737,
6628106,
3896491,
51984717,
59316623,
5068596,
2127100,
3051772,
102352412
] | ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding
Association for Computational Linguistics, July 5-10, 2020
Zhiwen Xie xiezhiwen@whu.edu.cn
School of Computer Science
Wuhan University
Guangyou Zhou gyzhou@mail.ccnu.edu.cn
School of Computer Science
Central China Normal University
Jin Liu jinliu@whu.edu.cn
School of Computer Science
Wuhan University
Jimmy Xiangji Huang jhuang@yorku.ca
School of Information Technology
York University
ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Association for Computational Linguistics, July 5-10, 2020, page 5929
The goal of Knowledge graph embedding (KGE) is to learn how to represent the lowdimensional vectors for entities and relations based on the observed triples. The conventional shallow models are limited to their expressiveness. ConvE (Dettmers et al., 2018) takes advantage of CNN and improves the expressive power with parameter efficient operators by increasing the interactions between head and relation embeddings. However, there is no structural information in the embedding space of ConvE, and the performance is still limited by the number of interactions. The recent KBGAT (Nathani et al., 2019) provides another way to learn embeddings by adaptively utilizing structural information. In this paper, we take the benefits of ConvE and KBGAT together and propose a Relation-aware Inception network with joint local-global structural information for knowledge graph Embedding (ReInceptionE). Specifically, we first explore the Inception network to learn query embedding, which aims to further increase the interactions between head and relation embeddings. Then, we propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information. Experimental results on both WN18RR and FB15k-237 datasets demonstrate that ReIncep-tionE achieves competitive performance compared with state-of-the-art methods.
Introduction
* Corresponding author.
Knowledge graphs (KGs) are at the core of most state-of-the-art natural language processing solutions and have been spotlighted in many real-world applications, including question answering (Hao et al., 2017), dialogue generation (Madotto et al., 2018) and machine reading comprehension (Yang and Mitchell, 2017). Typically, KGs are directed graphs whose nodes denote the entities and edges represent the different relations between entities. The structured knowledge in KGs is organized in the form of triples (h, r, t), where h and t stand for the head and tail entities respectively, and r represents the relation from h to t. Although large-scale KGs (e.g., Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015)) already contain millions or even billions of triples, they are still far from complete as new knowledge keeps emerging. Knowledge graph embedding (KGE) is an effective solution to this incompleteness problem.
KGE aims to learn the low-dimensional vectors (embeddings) for entities and relations based on the observed triples in KGs. Conventional models including TransE (Bordes et al., 2013) and its numerous extensions (e.g., TransD (Ji et al., 2015), TransR (Lin et al., 2015), DistMult, ComplEx (Trouillon et al., 2016), etc.) have been proposed. These shallow models are limited in their expressiveness (Dettmers et al., 2018). Recently, CNN-based methods have been proposed to capture expressive features with parameter-efficient operators. ConvE (Dettmers et al., 2018) takes advantage of CNNs and uses convolution filters on 2D reshapings of the head entity and relation embeddings. Through this, ConvE can increase the interactions between head and relation embeddings. Empirical results have shown that increasing the number of interactions is beneficial to the KGE task, but ConvE is still limited by the number of interactions (Vashishth et al., 2020).
Furthermore, ConvE does not consider the structural information. In contrast, graph-based methods are effective at aggregating neighborhood information to enrich the entity/relation representation (Schlichtkrull et al., 2018; Bansal et al., 2019; Nathani et al., 2019). Among them, KBGAT (Nathani et al., 2019) achieves state-of-the-art performance on various benchmark datasets by using graph attention networks (GAT) (Velickovic et al., 2018). KBGAT learns embeddings for every entity by taking all possible relations into account, which requires multiple hops of reasoning. In contrast, it can be beneficial to learn embeddings from a query-relevant subgraph of the local neighborhood and global entities. As an example shown in Figure 1, given a query (Jack London, nationality, ?) for Jack London, we can gather the relation-aware local neighbor (place lived, Oakland). The local neighbor allows us to project Jack London into the Oakland region of the embedding space, which can lead to a high score for predicting the target America, as Oakland and America are close in embedding space. Besides, we also note that a specific relation can act as the "bridge" to link the related entities. Considering the relation nationality, the related head entities { Kaneto Shiozawa, Shammi Kapoor, Will Smith, · · · } and tail entities { America, Canada, Japan, · · · } tend to be a set of person names and countries. These related entities act as a strong signal to judge whether a triple is valid or not.
In summary, we make the following three contributions: (1) It is the first to explore Inception network to learn query embedding which aims to further increase the interactions between head and relation embeddings; (2) We propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information; (3) We conduct a series of experiments to evaluate the performance of the proposed method. Experimental results demonstrate that our method obtains competitive performance in comparison to these state-of-the-art models on both WN18RR and FB15k-237.
The rest of this paper is structured as follows.
Section 2 describes our proposed method for KGE. In Section 3, the experimental results are presented. We make a conclusion in Section 4.
Our Approach
In this section, we first describe the background and definition in Subsection 2.1, and Inception-based query encoder in Subsection 2.2. Then, we introduce the relation-aware local attention and global attention in Subsection 2.3 and 2.4, respectively. Finally, we describe the joint using of them in Subsection 2.5.
Background and Definition
Definition 3.1 Knowledge Graph G: A knowledge graph G = {(h, r, t)|(h, r, t) ∈ E ×R×E} denotes a collection of triples, where E and R indicate entities and relations, respectively, h, t ∈ E represent the head entity and tail entity, and r ∈ R denotes the specific relation linking from the head entity h to tail entity t.
Definition 3.2 Knowledge Graph Embedding: Knowledge graph embedding aims to learn embeddings of entities and relations with the valid triples in G, and then predict the missing head entity h given query (?, r, t) or tail entity t given query (h, r, ?) with the learned entity and relation embeddings.
The framework of the proposed ReInceptionE is shown in Figure 1 (right). ReInceptionE consists of four modules: (1) Inception-based query encoder (InceptionE), which is used to transform the input query q = (h, r, ?) into a k-dimensional vector v q ; (2) relation-aware local attention and (3) relation-aware global attention, which are used to capture the local neighborhood information and the global entity information; and (4) joint relation-aware attention, which is used to aggregate the different structural information using a fully connected layer. Finally, we compute the score for the given triple (h, r, t) based on the query embedding and the tail entity embedding.
Inception-Based Query Encoder
ConvE (Dettmers et al., 2018) is the first model to apply CNNs for KGE; it uses a 2D convolution operation to model the head and relation in a query. However, ConvE is limited by the number of interactions between the head and relation embeddings (Vashishth et al., 2020). In this paper, we propose to employ the Inception network (Szegedy et al., 2015, 2016), a high performing convolutional neural network with carefully designed filters, to increase the interactions by taking the head and relation as two channels of the input. Figure 2 shows the differences between InceptionE (right) and ConvE (left). Obviously, ConvE cannot capture full interactions between the head and relation embeddings since the convolution operations in ConvE only slide on the entity or relation 2D matrices independently. On the contrary, InceptionE can increase the interactions between the head and relation embeddings using multiple convolution filters with different scales, while at the same time keeping the model parameter efficient.
As shown in Figure 2, given a query q = (h, r, ?), we first reshape the head and relation embeddings as 2D matrices denoted as v h and v r . Then, the 2D embeddings are viewed as two channels of the input for the Inception network. Thus, the entries at the same dimension of v h and v r are aligned over the channel dimension, which enables the convolution operations to increase the interactions between the head and relation embeddings. Specifically, we first use 1 × 1 convolutions to capture the direct interactions at the same dimension, which can be formulated as:
$$v_{1\times1} = \mathrm{Relu}([v_h \,\|\, v_r] * \omega_{1\times1}) \qquad (1)$$
where Relu (Glorot et al., 2011) is a non-linear activation function, || denotes the concatenation operation, * denotes the convolutional operation and ω 1×1 is the parameter of convolution filters with 1 × 1 size, v 1×1 denotes the interaction features of the first 1 × 1 convolutional layer. Then, filters with different sizes, such as 2 × 2 and 3 × 3, are applied to capture high-level interaction features in various scales. Thus, we can get interaction features of the 2 × 2 and 3 × 3 convolutional layers, denoted by v 2×2 and v 3×3 , respectively.
As suggested in (Szegedy et al., 2016), we use two 3 × 3 convolutions instead of a 5 × 5 convolution to capture interaction features in larger spatial filters, which is able to reduce the number of parameters. The two 3 × 3 convolutions are denoted as:
$$v_{2(3\times3)} = \mathrm{Relu}\big(\mathrm{Relu}(v^{2(3\times3)}_{1\times1} * \omega^{1}_{3\times3}) * \omega^{2}_{3\times3}\big) \qquad (2)$$
where $v^{2(3\times3)}_{1\times1}$ is the input interaction features, and $\omega^{1}_{3\times3}$ and $\omega^{2}_{3\times3}$ are the parameters of the two 3 × 3 convolution layers.
Finally, the output interaction features with different scales and levels are concatenated and a fully connected layer is applied to obtain the embedding of the given query. Formally, we define the Inception-based query encoder model as:
$$v_q = \mathrm{Inception}(v_h, v_r) = \mathrm{Relu}\big(\mathrm{vec}([v_{1\times1} \,\|\, v_{2\times2} \,\|\, v_{3\times3} \,\|\, v_{2(3\times3)}])\,\mathbf{W}\big) \qquad (3)$$
where $\mathbf{W}$ is the parameter of the fully connected layer.
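A condensed PyTorch sketch of the Inception-based query encoder in Equations (1)-(3) is given below. The 10 × 10 reshaping of the embeddings, the padding choices and the branch layout are illustrative assumptions; only the overall scheme (two input channels, 1 × 1, 2 × 2, 3 × 3 and two stacked 3 × 3 convolutions, concatenation, fully connected layer) follows the text, with 32 filters and 100-dimensional outputs as reported in the experimental setup.

```python
import torch
import torch.nn as nn

class InceptionQueryEncoder(nn.Module):
    """Sketch of Eqs. (1)-(3): head and relation 2D embeddings as two input channels."""
    def __init__(self, emb_shape=(10, 10), n_filters=32, out_dim=100):
        super().__init__()
        H, W = emb_shape
        self.conv1x1 = nn.Conv2d(2, n_filters, kernel_size=1)
        self.conv2x2 = nn.Conv2d(n_filters, n_filters, kernel_size=2, padding=1)
        self.conv3x3 = nn.Conv2d(n_filters, n_filters, kernel_size=3, padding=1)
        # Two stacked 3x3 convolutions replace a single 5x5 convolution (Eq. 2).
        self.conv3x3_a = nn.Conv2d(n_filters, n_filters, kernel_size=3, padding=1)
        self.conv3x3_b = nn.Conv2d(n_filters, n_filters, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        # Output sizes: the 2x2 conv with padding=1 yields (H+1, W+1); the others keep (H, W).
        flat = n_filters * (H * W * 3 + (H + 1) * (W + 1))
        self.fc = nn.Linear(flat, out_dim)

    def forward(self, v_h, v_r):
        x = torch.stack([v_h, v_r], dim=1)            # (B, 2, H, W): two channels
        v1 = self.relu(self.conv1x1(x))               # Eq. (1)
        v2 = self.relu(self.conv2x2(v1))
        v3 = self.relu(self.conv3x3(v1))
        v4 = self.relu(self.conv3x3_b(self.relu(self.conv3x3_a(v1))))  # Eq. (2)
        feats = [t.flatten(1) for t in (v1, v2, v3, v4)]
        return self.relu(self.fc(torch.cat(feats, dim=1)))             # Eq. (3)

enc = InceptionQueryEncoder()
q = enc(torch.randn(4, 10, 10), torch.randn(4, 10, 10))
print(q.shape)  # torch.Size([4, 100])
```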
Relation-Aware Local Attention
KBGAT learns embedding for every entity by taking all possible relations into account, and the embedding learning is impaired by the irrelevant neighbors. In contrast, it can be beneficial to learn embedding from a query-relevant neighborhood graph. In this subsection, we first construct a relation-aware neighborhood graph and then apply an attention mechanism to aggregate local graph structure information.
For the query q = (h, r, ?), we denote its neighbors as $N_q = \{n_i = (e_i, r_i) \mid (e_i, r_i, h) \in G\}$.
Note that, for each triple (h, r, t), we create an inverse triple (t, r −1 , h), which has also been used in (Lacroix et al., 2018;Dettmers et al., 2018). Thus, query (?, r, t) can be converted to (t, r −1 , ?). And the neighbors {(r j , e j )|(h, r j , e j ) ∈ G} for head entity h can be converted to a format of {(e j , r −1 j )|(h, r j , e j ) ∈ G}. Thus, N q contains both the outgoing and incoming neighbors for a query q = (h, r, ?). Each neighbor n i = (e i , r i ) ∈ N q is also a query with a head entity e i and a relation r i . Thus, each entity and relation in neighbor n i = (e i , r i ) can be encoded using the Inception-based query encoder:
$$v_{n_i} = \mathrm{Inception}(v_{e_i}, v_{r_i}) \qquad (4)$$
where v e i and v r i are the 2D embedding vectors of entity e i and relation r i . In practice, different neighbors may have different impacts for a given query. It is useful to determine the importance of each neighbor for a specific query. As an example in Figure 1, for the query (Jack London, nationality, ?), it is reasonable to focus on the the neighbors related to the relation nationality, such as (Jack London, place lived, Oakland). To this end, we use relation-aware attention mechanism to assign different importance for each neighbor and compute the relevant score for each neighbor using a non-linear activation layer:
$$s_i = \mathrm{LeakyRelu}\big(\mathbf{W}_1 [\mathbf{W}_2 v_q \,\|\, \mathbf{W}_3 v_{n_i}]\big) \qquad (5)$$
where W 1 , W 2 and W 3 are parameters to be trained and LeakyRelu (Maas et al., 2013) is the activation function. We then normalize the relevant scores for different neighbors using a softmax function to make it comparable across the neighbors, which is denoted as:
$$\alpha_i = \frac{\exp(s_i)}{\sum_{n_j \in N_q} \exp(s_j)} \qquad (6)$$
Finally, we aggregate the neighborhood information according to their attention scores and apply a non-linear function to obtain the neighborhood vector. To keep more information of the original query embedding, we also apply a residual operation:
$$v_n = \mathrm{Relu}\Big(\sum_{n_i \in N_q} \alpha_i \mathbf{W}_3 v_{n_i} + \mathbf{W}_2 v_q\Big) \qquad (7)$$
For simplification, we denote the above relation-aware attention operations as:
$$v_n = \mathrm{ReAtt}(V_n, v_q) \qquad (8)$$
where $V_n = \{v_{n_i} \mid n_i \in N_q\}$ is the set of local neighborhood vectors.
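The relation-aware attention of Equations (5)-(8) can be sketched as follows; the dimensions, the LeakyReLU slope and the parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ReAtt(nn.Module):
    """Sketch of Eqs. (5)-(7): attend over a set of vectors conditioned on the query."""
    def __init__(self, dim=100):
        super().__init__()
        self.w1 = nn.Linear(2 * dim, 1)   # scoring layer (Eq. 5)
        self.w2 = nn.Linear(dim, dim)     # projects the query embedding
        self.w3 = nn.Linear(dim, dim)     # projects each neighbor/entity vector
        self.leaky = nn.LeakyReLU(0.2)

    def forward(self, V, v_q):
        # V: (B, N, dim) set of neighbor vectors, v_q: (B, dim) query embedding.
        q = self.w2(v_q).unsqueeze(1)                                     # (B, 1, dim)
        n = self.w3(V)                                                    # (B, N, dim)
        s = self.leaky(self.w1(torch.cat([q.expand_as(n), n], dim=-1)))  # Eq. (5)
        a = torch.softmax(s, dim=1)                                       # Eq. (6)
        return torch.relu((a * n).sum(dim=1) + q.squeeze(1))              # Eq. (7), residual

att = ReAtt()
out = att(torch.randn(4, 6, 100), torch.randn(4, 100))
print(out.shape)  # torch.Size([4, 100])
```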
Relation-Aware Global Attention
The number of relation-aware local neighbors for each node (entity) varies from one to another, making the neighbor graph very sparse. The sparse nature would affect the accuracy of the embedding. In fact, a specific relation can be acted as the "bridge" to link the related entities. In this subsection, we construct a relation-aware head graph and tail graph by gathering all entities for relation r in the given query q = (h, r, ?). Intuitively, all head entities for relation r share some common type information. And the tail entities for relation r contain some implicit information about the type of the target entity t. For example in Figure 1, given the relation nationality, all heads { Kaneto Shiozawa, Shammi Kapoor, Will Smith, · · · , } and tails { America, Canada, Japan, · · · , } are the names of a person and a country, sharing the similar entity types. These relation-aware global heads and tails can provide some useful information for the KGE task. Thus, we construct relation-aware global head and tail graphs according to the head and tail entities of the relation. Let H r = {e i |(e i , r, e j ) ∈ G} and T r = {e j |(e i , r, e j ) ∈ G} denote a set of head and tail entities for relation r, respectively. For each head entity h ri ∈ H r , we first represent it as an embedding vector v h ri . Then, we use relation-aware attention mechanism to capture the relevant information from all the relation-aware head entities, which is denoted as:
$$v_{rh} = \mathrm{ReAtt}(V_{rh}, v_q) \qquad (9)$$
where $V_{rh} = \{v_{h_{ri}} \mid h_{ri} \in H_r\}$ is the set of entity vectors for the relation-aware global head entities. Similarly, we use the relation-aware attention mechanism to capture the global tail information, which is computed as:
$$v_{rt} = \mathrm{ReAtt}(V_{rt}, v_q) \qquad (10)$$
where V rt = {v t ri |t ri ∈ T r } is a set of entity embeddings for relation-aware global tails.
Joint Relation-Aware Attention
Once we have obtained the relation-aware local neighborhood information $v_n$ and the global head and tail vectors $v_{rh}$ and $v_{rt}$, we concatenate these vectors and merge them using a linear feed-forward layer:
$$v_q = \mathbf{W}_4 [v_n \,\|\, v_{rh} \,\|\, v_{rt}] + b \qquad (11)$$
where $\mathbf{W}_4$ and $b$ are the parameters of the feed-forward layer. Finally, we compute the score for each triple (h, r, t) by applying a dot product of the query embedding $v_q$ and the tail embedding $v_t$:
$$f(h, r, t) = v_q^{\top} v_t \qquad (12)$$
To optimize the parameters in our model, we compute the probability of the tail t using a softmax function:
$$p(t \mid h, r) = \frac{\exp(\lambda f(h, r, t))}{\sum_{(h, r, t') \in G' \cup \{(h, r, t)\}} \exp(\lambda f(h, r, t'))} \qquad (13)$$
where λ is a smoothing parameter, and G' is a set of invalid triples created by randomly replacing the tail t with an invalid entity t'.
$$L = -\frac{1}{|E|} \sum_{i=0}^{|E|} \log p(t_i \mid h_i, r_i) \qquad (14)$$
where (h i , r i , t i ) ∈ G is a valid triple, and |E| is the number of valid triples in G.
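Putting Equations (11)-(14) together, training reduces to a softmax cross-entropy over candidate tails. The sketch below assumes pre-computed query and entity embeddings and a toy vocabulary; scoring all entities at once stands in for the valid triple plus its randomly sampled negatives.

```python
import torch
import torch.nn.functional as F

lam = 5.0                                  # smoothing parameter lambda
n_entities, dim, batch = 50, 100, 4

entity_emb = torch.randn(n_entities, dim, requires_grad=True)
v_q = torch.randn(batch, dim)              # joint query embeddings from Eq. (11)
gold_tails = torch.randint(0, n_entities, (batch,))

# Eq. (12) for every candidate tail, scaled by lambda as in Eq. (13).
scores = lam * (v_q @ entity_emb.t())      # (batch, n_entities)
loss = F.cross_entropy(scores, gold_tails) # softmax + negative log-likelihood, Eqs. (13)-(14)
loss.backward()
print(loss.item())
```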
Experiments
Experimental Setup
Datasets: We conduct experiments for KGE on two widely used public benchmark datasets: WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova et al., 2015). WN18RR is a subset of WN18 (Bordes et al., 2013) while FB15k-237 is a subset of FB15k (Bordes et al., 2013). WN18 and FB15k contain a large number of inverse relations, so the triples in the test set can be obtained simply by inverting triples in the training set. To address this problem, both WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova et al., 2015) were generated by removing the inverse relations from WN18 and FB15k. In recent years, WN18RR and FB15k-237 have become the most popular datasets for the KGE task. Table 1 shows the summary statistics of the datasets.

Table 2 (caption): the superscript a represents the results reported in the original papers while b represents results taken from (Sun et al., 2020); other results are directly taken from the corresponding papers. Both MRR and Hits@1 have a strong correlation, thus we do not report the results of Hits@1 since it does not give any new insight (Nguyen et al., 2019). The best results are in bold and the second best results are in underline.
Implementations: For a test triple (h, r, t), the purpose of KGE task is to predict missing links, e.g. predict tail entity t given head entity h and relation r or predict head entity h given tail entity t and relation r. To evaluate our method, three metrics are used, including Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hit@10 (e.g. the accuracy in top 10 predictions). Please note that lower MR, higher MRR and Hits@10 indicate better performance. We follow the "Filtered" setting protocol (Bordes et al., 2013) to evaluate our model, i.e., ranking all the entities excluding the set of other true entities that appeared in training, validation and test sets. We initialize the embedding of entity and relation in our ReInceptionE model using the pre-trained embeddings with 100-dimension used in (Nguyen et al., 2019). We use Adam (Kingma and Ba, 2015) to optimize the model. The parameters of our model are selected via grid search according to the MRR on the validation set. We select the dropout rate from {0.1, 0.2, 0.4, 0.5}, the learning rate from {0.001, 0.0005, 0.0002, 0.0001} , the L 2 norm of parameters from {1e −3 , 1e −5 , 1e −8 }, the batch size from {32, 64, 128, 256, 512} and the smoothing parameter λ in Equation 13 from {1, 5, 10}. Finally, the learning rate is set to 0.0002 for WN18RR and 0.0001 for FB15k-237. The L 2 norm of parameters is set to 1e −5 . The batch size is set to 256. The dropout rate is set to 0.4 for WN18RR and 0.2 for FB15k-237. The smoothing parameter in Equation 13 is set to λ = 5. The number of filters for each convolution operation in the Inception module is set to 32. We
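The "Filtered" evaluation protocol described above amounts to masking all other known true tails before ranking. A simplified sketch follows; the random scores and the tiny graph are placeholders used only to illustrate how MR, MRR and Hits@10 are derived from the filtered ranks.

```python
import numpy as np

n_entities = 10
known_tails = {("h1", "r1"): {2, 5}}          # all true tails seen in train/valid/test
test_triple = ("h1", "r1", 5)

scores = np.random.rand(n_entities)           # model scores for every candidate tail
h, r, t = test_triple
filt = scores.copy()
for other in known_tails[(h, r)] - {t}:       # "Filtered": exclude other true entities
    filt[other] = -np.inf

rank = 1 + np.sum(filt > filt[t])             # rank of the gold tail
mrr, hits10 = 1.0 / rank, float(rank <= 10)
print(rank, mrr, hits10)
```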
Main Results
We compare our results with various state-of-theart methods. Experimental results are summarized in Table 2. For all KGE models, a key step is to create the invalid triples to construct the negative samples. Most recently, Sun et al. (2020) investigated the inappropriate evaluation problem happened in ConvKB (Nguyen et al., 2018), CapsE (Nguyen et al., 2019) and KBGAT (Nathani et al., 2019). In fact, this issue comes from the unusual score distribution, e.g., the score function for some invalid triples gets the same values as the valid triples. Sun et al. (2020) also found that KBGAT removed the invalid triples when they appeared in the test set during negative sampling, suffering from the leakage of test triples. Therefore, we take the results (marked with the superscript b) from (Sun et al., 2020) for ConvKB, CapsE and KBGAT. Besides, we also list the results reported in the original papers (marked with the superscript a).
From Table 2, we can see that our proposed ReInceptionE obtains competitive results compared with the state-of-the-art methods. On the WN18RR dataset, ReInceptionE achieves the best results in terms of Hits@10 and MRR, and the second-best results in terms of MR. On the FB15k-237 dataset, ReInceptionE obtains the second-best results in terms of MR, and comparable results in terms of MRR and Hits@10.
Our proposed ReInceptionE is closely related to ConvE (Dettmers et al., 2018) and KBGAT (Nathani et al., 2019). Compared with ConvE, ReInceptionE achieves large performance gains on both datasets. The reason is that instead of simply concatenating the head and relation embeddings, ReInceptionE takes head and relation as two channels of the input and applies the Inception network to capture their rich interactions, which allows it to learn expressive features using filters with various scales. Unlike KBGAT, ReInceptionE takes the (entity, relation) pair as a query and utilizes the relation-aware attention mechanism to gather the most relevant local neighbors and global entity information for the given query. The results again verify the effectiveness of relation-aware local and global information for KGE. Some other methods have been proposed to address the KGE task, such as pLogicNet (Ou and Tang, 2019), RPJE (Niu et al., 2020), CoKE (Wang et al., 2019), TuckER (Balazevic et al., 2019a), D4-Gumbel (Xu and Li, 2019) and HAKE (Zhang et al., 2020). pLogicNet (Ou and Tang, 2019) and RPJE (Niu et al., 2020) leverage logic rules to improve performance. CoKE uses the Transformer (Vaswani et al., 2017) to encode contextualized representations. HAKE embeds entities in a polar coordinate system to learn semantic hierarchies. D4-Gumbel (Xu and Li, 2019) uses the dihedral group to model relation composition. TuckER (Balazevic et al., 2019a) uses Tucker decomposition to learn a tensor factorization for KGE. These methods take a range of different approaches to the KGE task. For example, logic rules play an important role in determining whether a triple is valid or not; we suspect that the performance of our proposed ReInceptionE can be further improved by taking logic rules into account. We leave this comparison and a deeper analysis to future work.
Impact of Different Modules
We describe the experimental results in Table 3 to investigate the impact of different modules in ReInceptionE. In Table 3, "InceptionE" is the baseline model without relation-aware local neighbors and global entities. "ReInception w/o N" is the model without relation-aware local neighbor information, while "ReInception w/o E" is the model without relation-aware global entity information. Besides, we also include two closely related models, ConvE and KBGAT, for fair comparison.

Table 4: Link prediction results for each relation category (predicting head and predicting tail for 1-1, 1-N, N-1 and N-N relations) on the WN18RR and FB15k-237 test sets using Hits@10. Following (Bordes et al., 2013), we classify relations into four groups: one-to-one (1-1), one-to-many (1-N), many-to-one (N-1) and many-to-many (N-N).
From Table 3, we can see that our baseline InceptionE outperforms the closely related CNN-based model ConvE. Compared with ConvE, InceptionE is more powerful because it can capture rich interaction features by using filters with various scales. Moreover, ReInceptionE, which incorporates relation-aware local neighborhood and global entity information, outperforms the related graph-based model KBGAT. Table 3 also shows that ReInceptionE outperforms InceptionE, "ReInception w/o N" and "ReInception w/o E" by a large margin on both datasets, which reconfirms our observation that relation-aware local neighbors and global entities make different contributions to KGE.
Evaluation on Different Relation Types
In this subsection, we present the experimental results for different relation types on WN18RR and FB15k-237 using Hits@10. We choose the closely related model ConvE, as well as InceptionE, as baselines. Following (Bordes et al., 2013), we classify the relations into four groups: one-to-one (1-1), one-to-many (1-N), many-to-one (N-1) and many-to-many (N-N), based on the average number of tails per head and the average number of heads per tail. Table 4 shows the link prediction results for each relation category. From Table 4, we find that InceptionE achieves better performance than ConvE for all relation types, indicating that increasing the number of interactions between head and relation embeddings is indeed beneficial to the KGE task. Furthermore, our proposed ReInceptionE significantly outperforms ConvE and InceptionE for all relation types. In particular, ReInceptionE obtains larger improvements for complex relations, such as one-to-many, many-to-one and many-to-many. This again verifies our observation that increasing the interactions and incorporating local-global structural information allows the model to capture more complex relations.
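For reference, the four relation categories can be derived from the training triples as sketched below. The 1.5 threshold on the average number of tails per head (and heads per tail) is the value commonly attributed to Bordes et al. (2013) and is an assumption here, not a detail stated in this paper.

```python
from collections import defaultdict

def relation_categories(train_triples, threshold=1.5):
    """Classify each relation as 1-1, 1-N, N-1 or N-N from the training triples."""
    tails_per_head = defaultdict(lambda: defaultdict(set))
    heads_per_tail = defaultdict(lambda: defaultdict(set))
    for h, r, t in train_triples:
        tails_per_head[r][h].add(t)
        heads_per_tail[r][t].add(h)
    categories = {}
    for r in tails_per_head:
        avg_tails = sum(len(v) for v in tails_per_head[r].values()) / len(tails_per_head[r])
        avg_heads = sum(len(v) for v in heads_per_tail[r].values()) / len(heads_per_tail[r])
        if avg_tails < threshold and avg_heads < threshold:
            categories[r] = "1-1"
        elif avg_tails >= threshold and avg_heads < threshold:
            categories[r] = "1-N"
        elif avg_tails < threshold and avg_heads >= threshold:
            categories[r] = "N-1"
        else:
            categories[r] = "N-N"
    return categories
```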
Case Study
In order to further analyze how relation-aware neighbors contribute to the KGE task, we give two examples in Table 5. For the query (Jack London, nationality, ?), ReInceptionE assigns the highest attention score to the neighbor (place lived, Oakland), since Oakland and America are close to each other in the embedding space because of other relations between them. The top predictions for the query are a set of entities of the type country. For the second example, (Jerry Lewis, languages, ?), ReInceptionE assigns a very high score to the neighbor (place of birth, Newark). This allows us to project (place of birth, Newark) into the Jerry Lewis region of the embedding space, which leads to a high score for predicting the target English Language. These examples give clear evidence of how our proposed ReInceptionE benefits the KGE task.
Conclusions
In this paper, we propose a novel relation-aware Inception network for knowledge graph embedding, called ReInceptionE. ReInceptionE combines the benefits of ConvE and KBGAT. The proposed method first employs an Inception network to learn the query embedding, with the aim of increasing the interaction between head and relation embeddings while remaining parameter-efficient. Then, we gather relation-aware local neighborhood and global entity information with an attention mechanism and enrich the query embedding with the joint local-global structural information. Empirical studies demonstrate that our proposed method obtains competitive performance compared with the state of the art on two widely used benchmark datasets, WN18RR and FB15k-237.
Figure 1: An example of relation-aware local and global information (left) and the general framework of our proposed ReInceptionE (right).

Figure 2: The structures of ConvE (left) and the proposed Inception-based query encoder (right). The red squares denote the sliding windows of convolution filters.
Table 2: Link prediction results on WN18RR and FB15k-237 test sets. The superscript a denotes results reported in the original papers, while b denotes results taken from (Sun et al., 2020); other results are taken directly from the corresponding papers. Since MRR and Hits@1 are strongly correlated, we do not report Hits@1, as it gives no additional insight (Nguyen et al., 2019). The best results are in bold and the second-best results are underlined.
Table 3: Impact of different modules on the KGE task.
Table 5: Two examples of top 5 attention neighbors and predictions for the given queries.

Query: (Jack London, nationality, ?)  Target: America
Top Neighbors: (place lived, Oakland) Prob: 0.415; (place of birth, San Francisco) Prob: 0.353; (Berkeley, student) Prob: 0.083; (influence by, Friedrich Nietzsche) Prob: 0.042; (influence by, Charles Darwin) Prob: 0.031
Top Predictions: America, United Kingdom, Canada, Australia, Germany

Query: (Jerry Lewis, languages, ?)  Target: English Language
Top Neighbors: (place of birth, Newark) Prob: 0.197; (place lived, Newark) Prob: 0.173; (Nutty Professor II, story by) Prob: 0.105; (award nomination, Razzie Award for Worst Actor) Prob: 0.089; (nominated for, Law & Order: Special Victims Unit) Prob: 0.082
Top Predictions: English Language, Spanish Language, French Language, Italian Language, Japanese Language
Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61972290 and 61972173, and also supported by the National Key R&D Program of China under Grant 2018YFC1604000. This research was also supported in part by a research grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the York Research Chairs (YRC) program. We thank the anonymous reviewers for their thorough review comments on this paper.
References

Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019a. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of EMNLP-IJCNLP.
Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019b. Multi-relational Poincaré graph embeddings. In Proceedings of NeurIPS.
Trapit Bansal, Da-Cheng Juan, Sujith Ravi, and Andrew McCallum. 2019. A2N: Attending to neighbors for knowledge graph inference. In Proceedings of ACL.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proceedings of NIPS.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of AAAI.
Takuma Ebisu and Ryutaro Ichise. 2019. Graph pattern entity ranking model for knowledge graph completion. In Proceedings of NAACL.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of AISTATS.
Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An end-to-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of ACL.
He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of ACL.
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of ACL.
Xiaotian Jiang, Quan Wang, and Bin Wang. 2019. Adaptive convolution for multi-relational learning. In Proceedings of NAACL-HLT.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.
Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. In Proceedings of ICML.
Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. DBpedia - a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing.
Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of ACL.
Christian Meilicke, Melisachew Wudage Chekol, Daniel Ruffinelli, and Heiner Stuckenschmidt. 2019. Anytime bottom-up rule learning for knowledge graph completion. In Proceedings of IJCAI.
Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of ACL.
Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of NAACL.
Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019. A capsule network-based embedding model for knowledge graph completion and search personalization. In Proceedings of NAACL.
Guanglin Niu, Yongfei Zhang, Bo Li, Peng Cui, Si Liu, Jingyang Li, and Xiaowei Zhang. 2020. Rule-guided compositional representation learning on knowledge graphs. In Proceedings of AAAI.
Byungkook Oh, Seungmin Seo, and Kyong-Ho Lee. 2018. Knowledge graph completion by context-aware convolutional learning with multi-hop neighborhoods. In Proceedings of CIKM.
Meng Ou and Jian Tang. 2019. Probabilistic logic neural networks for reasoning. In Proceedings of NeurIPS.
Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In Proceedings of ESWC.
Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In Proceedings of AAAI.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. In Proceedings of ICLR.
Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha P. Talukdar, and Yiming Yang. 2020. A re-evaluation of knowledge graph completion methods. In Proceedings of ACL.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of CVPR.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the Inception architecture for computer vision. In Proceedings of CVPR.
Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of EMNLP.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of ICML.
Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, Nilesh Agrawal, and Partha Talukdar. 2020. InteractE: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In Proceedings of AAAI.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of ICLR (Poster).
Kai Wang, Yu Liu, Xiujuan Xu, and Dan Lin. 2018. Knowledge graph embedding with entity neighbors and deep memory network. CoRR, abs/1808.03752.
Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, and Hua Wu. 2019. CoKE: Contextualized knowledge graph embedding. CoRR, abs/1911.02168.
Canran Xu and Ruijiang Li. 2019. Relation embedding with dihedral group in knowledge graph. In Proceedings of ACL, pages 263-272.
Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in LSTMs for improving machine reading. In Proceedings of ACL.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR.
Shihui Yang, Jidong Tian, Honglun Zhang, Junchi Yan, Hao He, and Yaohui Jin. 2019. TransMS: Knowledge graph embedding for complex relations by multidirectional semantics. In Proceedings of IJCAI.
Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embedding. In Proceedings of NeurIPS.
Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. 2020. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of AAAI. |
220,048,457 | Programming in Natural Language with fu SE : Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding | The key to effortless end-user programming is natural language. We examine how to teach intelligent systems new functions, expressed in natural language. As a first step, we collected 3168 samples of teaching efforts in plain English. Then we built fu SE , a novel system that translates English function descriptions into code. Our approach is three-tiered and each task is evaluated separately. We first classify whether an intent to teach new functionality is present in the utterance (accuracy: 97.7% using BERT). Then we analyze the linguistic structure and construct a semantic model (accuracy: 97.6% using a BiLSTM). Finally, we synthesize the signature of the method, map the intermediate steps (instructions in the method body) to API calls and inject control structures (F 1 : 67.0% with information retrieval and knowledge-based methods). In an end-to-end evaluation on an unseen dataset fu SE synthesized 84.6% of the method signatures and 79.2% of the API calls correctly. | [
207556454,
52967399,
16509032,
51873800,
44167998
] | Programming in Natural Language with fu SE : Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding
Association for Computational Linguistics. July 5-10, 2020.
Sebastian Weigelt weigelt@kit.edu
Institute for Program Structures and Data Organization
Karlsruhe Institute of Technology
Karlsruhe, Germany
Vanessa Steurer vanessa.steurer@web.de
Institute for Program Structures and Data Organization
Karlsruhe Institute of Technology
Karlsruhe, Germany
Tobias Hey hey@kit.edu
Institute for Program Structures and Data Organization
Karlsruhe Institute of Technology
Karlsruhe, Germany
Walter F Tichy tichy@kit.edu
Institute for Program Structures and Data Organization
Karlsruhe Institute of Technology
Karlsruhe, Germany
Programming in Natural Language with fu SE : Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, July 5-10, 2020.
The key to effortless end-user programming is natural language. We examine how to teach intelligent systems new functions, expressed in natural language. As a first step, we collected 3168 samples of teaching efforts in plain English. Then we built fu SE , a novel system that translates English function descriptions into code. Our approach is three-tiered and each task is evaluated separately. We first classify whether an intent to teach new functionality is present in the utterance (accuracy: 97.7% using BERT). Then we analyze the linguistic structure and construct a semantic model (accuracy: 97.6% using a BiLSTM). Finally, we synthesize the signature of the method, map the intermediate steps (instructions in the method body) to API calls and inject control structures (F 1 : 67.0% with information retrieval and knowledge-based methods). In an end-to-end evaluation on an unseen dataset fu SE synthesized 84.6% of the method signatures and 79.2% of the API calls correctly.
Introduction
Intelligent systems became rather smart lately. One easily arranges appointments by talking to a virtual assistant or controls a smart home through a conversational interface. Instructing a humanoid robot in this way no longer seems to be futuristic. For the time being, users can only access built-in functionality. However, they will soon expect to add new functionality themselves. For humans, the most natural way to communicate is by natural language. Thus, future intelligent systems must be programmable in everyday language.
Today's systems that claim to offer programming in natural language enable laypersons to issue single commands or construct short scripts (e.g. Mihalcea et al. (2006); Rabinovich et al. (2017)); usually no new functionality is learned. Only a few addressed learning new functionality from natural language instructions (e.g. Le et al. (2013); Markievicz et al. (2017)). However, even recent approaches still either restrict the language or are (over-)fitted to a certain domain or application.
We propose to apply deep natural language understanding to the task of synthesizing methods from spoken utterances. Our approach combines modern machine learning techniques with information retrieval and knowledge-based methods to grasp the user's intent. As a first step, we have performed a user study to investigate how laypersons teach new functionality with nothing but natural language. In a second step, we develop fu SE (Function Synthesis Executor). fu SE translates teaching efforts into code. On the basis of the gathered data we constructed a three-tiered approach. We first determine whether an utterance comprises an explicitly stated intent to teach a new skill. Then, we decompose these teaching efforts into distinct semantic parts. We synthesize methods by transferring these semantic parts into a model that represents the structure of method definitions. Finally, we construct signatures, map instructions of the body to API calls, and inject control structures.
Related Work
The objective of programming in natural language was approached from different perspectives over the years. Quite a few approaches are natural language interfaces to code editors (Price et al., 2000;Begel, 2004;Begel and Graham, 2005;Désilets et al., 2006). However, they assume that users literally dictate source code. Thus, these approaches are intended for developers rather than laypersons. Other approaches such as Voxelurn by Wang et al. (2017) aim to naturalize programming languages to lower the hurdle for programming novices.
Figure 1: Schematic overview of fu SE 's three-tiered approach.

Approaches for end-user programming in natural language take up the challenge of bridging the semantic gap between informal spoken or written descriptions in everyday language and formal programming languages. Early systems were syntax-based (Winograd, 1972; Ballard and Biermann, 1979; Biermann and Ballard, 1980; Biermann et al., 1983; Liu and Lieberman, 2005). Some were already capable of synthesizing short scripts including control structures and comments, e.g. NLP for NLP by Mihalcea et al. (2006). Others take the user in the loop and create scripts with a dialog-driven approach (Le et al., 2013). In further developments, intelligent assistants offer their services to assist with programming (Azaria et al., 2016). Often these assistants support multi-modal input, e.g. voice and gestures (Campagna et al., 2017, 2019). Others combine programming in natural language with other forms of end-user programming, such as programming by example (Manshadi et al., 2013) or programming by demonstration (Li et al., 2018). Some authors, such as Landhäußer et al. (2017) and Atzeni and Atzori (2018a,b), take a knowledge-based approach by integrating domain and environmental information in the form of ontologies.
Suhr and Artzi (2018) employ a neural network to learn a situational context model that integrates the system environment and the human-system interaction, i.e. the dialog. Many recent approaches integrate semantic parsing in the transformation process (Guu et al., 2017; Rabinovich et al., 2017; Chen et al., 2018; Dong and Lapata, 2018). Even though the natural language understanding capabilities are often impressive, the synthesized scripts are still (semantically) erroneous in most cases. Additionally, learning of new functionality is not covered by approaches of that category so far.
Programming in natural language is of particular interest in the domain of humanoid robotics (Lauria et al., 2001, 2002; She et al., 2014; Mei et al., 2016). People expect to teach them as they teach human co-workers. Therefore, some authors, e.g. Markievicz et al. (2017), use task descriptions that were intended to instruct humans to benchmark their approach. However, often the assumed vocabulary is rather technical (Lincoln and Veres, 2012). Thus, the usability for laypersons is limited.
Approach
The goal of our work is to provide a system for programming in (spoken) natural language. Laypersons shall be enabled to create new functionality in terms of method definitions by using natural language only. We offer a general approach, i.e. we do not restrict the natural language regarding wording and length. Since spontaneous language often comprises grammatical flaws, disfluencies, and alike, our work must be resilient to these issues.
We decompose the task into three consecutive steps. The rationale behind this decision is as follows. On the one hand, we can implement more focused (and precise) approaches for each task, e.g. using machine learning for one and information retrieval for another. On the other hand, we are able to evaluate and optimize each approach individually. The stages of our three-tiered approach are the following (see Figure 1 for an example):
1. Classification of teaching efforts: Determine whether an utterance comprises an explicitly stated teaching intent or not.
2. Classification of the semantic structure: Analyze (and label) the semantic parts of a teaching sequence. Teaching sequences are composed of a declarative and a specifying part as well as superfluous information.
3. Method synthesis: Build a model that represents the structure of methods from syntactic information and classification results. Then, map the actions of the specifying part to API calls and inject control structures to form the body; synthesize the method signature.
The first two stages are classification problems. Thus, we apply various machine learning techniques. The first stage is a sequence-to-single-label task, while the second is a typical sequence-to-sequence task. For the first we compare classical machine learning techniques, such as logistic regression and support vector machines, with neural network approaches including the pre-trained language model BERT (Devlin et al., 2019). For the second task we narrow down to neural networks and BERT. A more detailed description of the first two stages may be found in (Weigelt et al., 2020). The implementation of the third stage is a combination of syntactic analysis, knowledge-based techniques and information retrieval. We use semantic role labeling, coreference analysis, and a context model (Weigelt et al., 2017) to infer the semantic model. Afterwards, we synthesize method signatures heuristically and map instructions of the body to API calls using ontology search methods and datatype analysis. Additionally, we inject control structures, which we infer from keywords and syntactic structures. To cope with spontaneous (spoken) language, our approach relies on shallow NLP techniques only.
Dataset
We carried out a study to examine how laypersons teach new functionality to intelligent systems. The study consists of four scenarios in which a humanoid robot should be taught a new skill: greeting someone, preparing coffee, serving drinks, and setting a table for two. All scenarios take place in a kitchen setting but involve different objects and actions. Subjects were supposed to teach the robot using nothing but natural language descriptions. We told the subjects that a description ideally comprises a declaration of intent to teach a new skill, a name for the skill, and an explanation of intermediate steps. However, we did not force the subjects into predefined wording or sentence structure. Instead, we encouraged them to vary the wording and to 'speak' freely. We also instructed them to imagine that they were standing next to the robot. After the short introduction, we successively presented the scenarios to the subjects. Finally, we requested some personal information in a short questionnaire. We used the online micro-tasking platform Prolific. In less than three days, 870 participants completed the study. The share of male and female participants is almost equal (50.5% vs. 49.5%); more than 60% are native English speakers. Most of them (70%) had no programming experience at all. An analysis of the dataset revealed that there is barely any difference in the language used by subjects who are inexperienced in programming compared to more experienced subjects (except for a few subjects that used rather technical language). The age of the participants ranges from 18 to 76, with more than half being 30 or younger. The collected data comprises 3,168 descriptions with more than 109,000 words altogether (1,469 unique words); the dataset statistics are depicted in Table 1. We provide a set of six descriptions from the dataset in Table 13 (Appendix A). A thorough analysis of the dataset revealed that a notable share (37%) lacks an explicitly stated intent to teach a skill, albeit we even consider phrases such as "to prepare lunch" as teaching intent. Regarding the semantic structure, we observed that the distinct parts can be clearly separated in almost all cases. However, the respective parts occurred in varying order and are frequently non-continuous.
The data was jointly labeled by two of the authors. We first attached the binary labels teaching and non-teaching. These labels correspond to the classification task from the first stage. Then we add ternary labels (declaration, specification, and miscellaneous) to all words in descriptions that were classified as teaching effort in the first step. This label set is used for the second stage. The distribution of the labels is depicted in Table 2.
Both label sets are unequally distributed, which may cause the machine learning models to overfit in favor of the dominating label; this mainly affects the ternary classification task. Note that speech recordings would have been more natural than written descriptions. However, from previous studies we learned that subjects more willingly write texts than speak. Besides, the audio quality of recordings is often poor when subjects use ordinary microphones.
First Stage: Teaching Intents
The first step of fu SE is discovering teaching intents in utterances. An utterance can either be an effort to teach new functionality or merely a description of a sequence of actions. This problem is a typical sequence-to-single-label task, where the words of the utterance are the sequential input and the output is either teaching or non-teaching.
To train, validate, and test classifiers we split up the dataset in two ways. The first is the common approach of randomly splitting the set in an 80-to-20 ratio, where 80% of the data is used for training and 20% for testing. As usual, we again split the training portion 80-20 into training and validation sets. However, we felt that this approach does not reflect realistic set-ups, where a model is learned from historical data and then applied to new, unseen data that is semantically related but (potentially) different. Therefore, we introduced an additional, so-called scenario-based split in which we separate the data according to the scenarios. We use three of the four scenarios for training and the remaining one for testing. Note that we again use an 80-20 split to divide training and validation sets.
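The two splitting strategies can be sketched as follows. This is a hypothetical reconstruction, not the authors' code; in particular the field name "scenario" and the fixed random seed are assumptions.

```python
import random

def random_split(samples, seed=42):
    """80/20 train-test split; the training part is split 80/20 again into train/validation."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    train_val, test = shuffled[:cut], shuffled[cut:]
    val_cut = int(0.8 * len(train_val))
    return train_val[:val_cut], train_val[val_cut:], test

def scenario_split(samples, test_scenario, seed=42):
    """Hold out one scenario entirely for testing; split the rest 80/20 into train/validation."""
    rng = random.Random(seed)
    test = [s for s in samples if s["scenario"] == test_scenario]
    rest = [s for s in samples if s["scenario"] != test_scenario]
    rng.shuffle(rest)
    cut = int(0.8 * len(rest))
    return rest[:cut], rest[cut:], test
```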
We applied classical machine learning and neural network approaches to the task. The classical techniques are: decision trees, random forests, support vector machines, logistic regression, and Naïve Bayes. As baseline for the classification accuracy we use the so-called Zero-Rule classifier (ZeroR); it always predicts the majority class of the training set, i.e. teaching in this case.
We transform the words to bag-of-words vectors and use tri- and quadrigrams as additional features. The measured accuracy of each classifier on the random and scenario-based data is depicted in Table 3; the validation set accuracy is given in parentheses and the test set accuracy without.
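A minimal scikit-learn sketch of such a baseline is given below. It does not reproduce the exact feature configuration of the study; for instance, whether the tri- and quadrigrams range over words or characters is not specified here, so the sketch uses word n-grams.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union

# Bag-of-words features plus tri- and quadrigrams as additional features.
features = make_union(
    CountVectorizer(ngram_range=(1, 1)),   # bag of words
    CountVectorizer(ngram_range=(3, 4)),   # word tri- and quadrigrams
)
classifier = make_pipeline(features, LogisticRegression(max_iter=1000))

# texts: list of descriptions; labels: "teaching" vs. "non-teaching"
# classifier.fit(train_texts, train_labels)
# print(classifier.score(test_texts, test_labels))
```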
On the random set all classifiers exceed the baseline. Thus, the (slightly) imbalanced dataset does not seem to affect the classifiers much. Logistic regression performs surprisingly well. However, on the scenario-based split the accuracy of all classifiers decreases drastically. While the accuracies on the validation set remain stable, these classifier techniques are unable to generalize to unseen input. The logistic regression remains the best classifier. However, its accuracy decreases to 71.9%.
These results reinforced our intuition that deep learning is more appropriate for this task. We implemented a broad range of neural network architectures: artificial neural networks, convolutional networks, and recurrent networks, including LSTMs and GRUs and their bidirectional variants. We experimented with additional layers, which we systematically added to the networks, such as dropout (DO), dense (D), or global max pooling (GMax). We altered all hyper-parameters within reasonable ranges of values. We present only the best performing configurations, i.e. architecture and hyper-parameter combinations, in Table 4. Detailed information on the tested hyper-parameter values and further results may be found in Appendices B and C. The words from the input are represented as fastText word embeddings (Bojanowski et al., 2017; Joulin et al., 2017); we use the publicly available 300-dimensional pre-trained embeddings. Moreover, we use Google's pre-trained language model BERT (base-uncased), which we equipped with a flat binary output layer.
The results attest that deep learning approaches clearly outperform the best classical technique (logistic regression). In particular, the accuracies show smaller differences between random and scenario-based split. This suggests that the classification is more robust. The best accuracy on the scenario test set is achieved by a bidirectional GRU: 93.2%. Using BERT, the accuracy increases by more than 4% with a peak at 97.7% using 300 training epochs. However, the ten-epochs version is a feasible choice, since the accuracy loss is negligible and the training savings are immense.
Second Stage: Semantic Structures
The second stage, detecting the semantic parts in teaching efforts, is a typical sequence-to-sequence labeling task with the labels declaration, specification, and miscellaneous. Even though these semantic structures correspond to phrases from a grammatical point of view, we decided to use per-word labels. For this task we only use neural network approaches and BERT. The remaining set-up is similar to the first stage. We again use fastText embeddings and vary the network architectures and hyper-parameters. Except for a ternary output layer, we use the same configuration for BERT as in the first stage.
The results for both the random and the scenario-based split are reported in Table 5. The recurrent networks are the clear choice for this task; accuracy values are consistently high. Most encouragingly, the decline on the scenario data is negligible (less than 1%). Apparently, the models generalize well and are thus resilient to a change in vocabulary. For the second stage the use of BERT is of no advantage; the results even fall behind the best RNN configurations.
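As an illustration of the kind of recurrent tagger described here, the following Keras sketch labels every word of a (padded) utterance with one of the three classes. Layer sizes, dropout rate, and the padding length are illustrative assumptions, not the tuned configuration from the paper.

```python
from tensorflow.keras import layers, models

MAX_LEN, EMB_DIM, NUM_LABELS = 120, 300, 3  # padded length (assumption), fastText dim, label count

model = models.Sequential([
    # Input: pre-computed fastText vectors, one 300-dimensional vector per word.
    layers.Bidirectional(layers.LSTM(128, return_sequences=True),
                         input_shape=(MAX_LEN, EMB_DIM)),
    layers.Dropout(0.3),
    # One softmax over the three labels per time step (word).
    layers.TimeDistributed(layers.Dense(NUM_LABELS, activation="softmax")),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=32, validation_data=(X_val, y_val), epochs=...)
```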
Third Stage: Method Synthesis
During stage three we first transfer the natural language utterances into a model that represents both method definitions and scripts. Afterwards, we synthesize methods (or scripts) from this model. We create a method signature and map instructions in the body to API calls; to synthesize scripts we only map the instructions and inject control structures. Before we can transfer natural language utterances to the semantic model, we must perform a few NLP pre-processing steps that enrich the input with syntactic and semantic information. To obtain parts of speech (PoS), we apply a joint tagging approach; we consolidate the PoS tags produced by the Stanford Log-linear Part-Of-Speech Tagger (Toutanova et al., 2003) and SENNA (Collobert et al., 2011). The Stanford Tagger also provides us with word lemmas. Then we detect individual events in terms of clauses. Since our approach is supposed to cope with spoken language, we are unable to make use of punctuation. Instead, we split the input into a continuous sequence of instructions based on heuristics that make use of PoS tags and keywords. However, the instructions do not necessarily span complete clauses. Thus, we cannot apply common parsers. Instead, we use the shallow parser BIOS, which provides us with chunks. To obtain semantic roles for each instruction, we again use SENNA. Since we look up words in WordNet (Fellbaum, 1998), we can also make use of synonyms.
We use ontologies to model the target systems, i.e. APIs. An ontology represents the classes, methods, parameters, data types, and values (resp. value ranges) of an API (similar to the ontologies used by Landhäußer et al. (2017) and Atzeni and Atzori (2018a,b)). The basic ontology structure is depicted in Table 6. If the system is supposed to interact with an environment, we employ additional ontologies that model the environment, including objects and their states (see Table 7). Environment ontologies are merged into system ontologies by copying concepts to the respective placeholders.
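The role of the two ontologies can be illustrated with a small data model. The class and instance names below (e.g. Method, EnvironmentObject, Mug, Dishwasher) are hypothetical stand-ins for the actual ontology resources.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Parameter:
    name: str
    data_type: str                      # e.g. "Graspable" or "Location"

@dataclass
class Method:
    name: str                           # e.g. "grasp"
    parameters: List[Parameter] = field(default_factory=list)

@dataclass
class EnvironmentObject:
    name: str                           # e.g. "Mug"
    concept: str                        # e.g. "Graspable"
    states: List[str] = field(default_factory=list)

# System ontology: API methods with typed formal parameters.
api = [Method("grasp", [Parameter("grasp.what", "Graspable")])]
# Environment ontology: kitchen objects and their states, merged into the system ontology.
kitchen = [EnvironmentObject("Mug", "Graspable"),
           EnvironmentObject("Dishwasher", "Openable", ["open", "closed"])]
```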
To bridge the semantic gap between natural and programming language we introduce a semantic model, as depicted in Figure 2. The model resembles the basic structure of method definitions. However, the leaves are composed of natural language phrases. To determine the phrases that will make up the model elements, we first smooth the classification results provided by the second stage. fu SE maps all phrases of an instruction to the same second-level model element, i.e. either the method signature or an instruction of the body. Therefore, we unify the second-stage classification labels for each instruction using majority decision. Afterwards, we map phrases to leaf elements. Roughly speaking, we use the roles provided by semantic role labeling (SRL) and map predicates to names and arguments to parameters; SENNA uses the semantic role label set defined in the CoNLL-2004 and CoNLL-2005 shared tasks (Carreras and Màrquez, 2004, 2005). If we detect a coreference, we substitute the referring expression with the referent, e.g. it with the cup. We also add a lemmatized variant of the phrase and all synonyms. Note that the parameters are a list of phrases. The first step to create method definitions is signature synthesis. To construct a meaningful name, we heuristically clean up the phrase, e.g. remove auxiliary verbs and stop words, and concatenate the remaining words. The parameters are either mapped to data types to infer formal parameters or, if no mapping is found, they are attached to the name. For instance, assuming that the declarative phrase is "serving wine means", fu SE extracts serve as the first part of the name. Then it tries to map wine to an ontology individual (as discussed later). Assume it finds the individual RedWineBottle, which is an instance of the concept Graspable in the environment ontology. If the system ontology supports the data type Graspable, fu SE synthesizes the signature serve(serve.what : Graspable). Otherwise, the method signature serveWine() is created.
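A much simplified sketch of this signature synthesis heuristic is shown below. The stop-word list, the lookup function concept_of, and the parameter naming scheme are assumptions made for illustration.

```python
STOP = {"is", "are", "to", "a", "an", "the", "means", "will", "you"}

def synthesize_signature(predicate_words, argument_phrases, concept_of, api_types):
    """predicate_words: lemmatized words of the declarative predicate, e.g. ["serve"].
    concept_of(phrase): ontology lookup returning a concept name (str) or None."""
    words = [w for w in predicate_words if w.lower() not in STOP] or list(predicate_words)
    name = words[0] + "".join(w.capitalize() for w in words[1:])
    params = []
    for phrase in argument_phrases:
        concept = concept_of(phrase)
        if concept is not None and concept in api_types:
            params.append((words[0] + ".what", concept))  # formal parameter with inferred type
        else:
            # No mapping found: attach the phrase to the method name instead.
            name += "".join(w.capitalize() for w in phrase.split())
    return name, params

# synthesize_signature(["serve"], ["wine"], lookup, {"Graspable", "Location"})
# -> ("serve", [("serve.what", "Graspable")]) if "wine" maps to a Graspable individual,
#    otherwise ("serveWine", []).
```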
The instructions in the method body are mapped to API calls. Therefore, we first query the ontologies for each leaf element individually. For the queries we use three sets of words created from the original phrase, the lemmatized version, and the synonyms. We then build the power sets and all permutations of each set, before we concatenate the words to construct a query set. For instance, for the phrase is closed, we produce the query strings isclosed, closedis, beclose, closebe, closed, is, and so on. The ontology search returns all individuals with a Jaro-Winkler score (Winkler, 1990) or a fuzzy score above .15. We decided for these comparatively low thresholds, since we see them as lightweight filters that let pass numerous generally valid candidates. Since an individual may be returned more than once with different scores, we set the score of the individual to the maximum of its scores. Afterwards, we construct API calls from the model structure and rate each candidate. We start with the method name candidates. For each candidate we query the ontology for formal parameters. Then, we try to satisfy the parameters with the candidates returned by the individual ontology search. Note that we perform type checking for the parameters (including inheritance if applicable). For instance, for the instruction take the cup we may have found the individual grasp as a candidate for the method name and the parameter candidates Mug (type Graspable) and Cupboard (type Location). The ontology indicates that the method grasp has one parameter of type Graspable. The type check then ensures that fu SE creates the call candidate grasp(Mug) but not grasp(Cupboard). The score is composed of the individual scores of the method names and parameters, the share of mapped words of the query string relative to all words in the query, the ratio of mapped parameters to (expected) formal parameters, and the number of additional (superfluous) parameters. In Appendix D we give a more formal introduction to our scoring approach.
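The query construction and candidate scoring can be sketched as follows. The scoring weights are illustrative placeholders; Appendix D of the paper gives the actual formula.

```python
from itertools import chain, combinations, permutations

def query_strings(words):
    """All concatenations of all permutations of all non-empty subsets of the words."""
    subsets = chain.from_iterable(combinations(words, k) for k in range(1, len(words) + 1))
    return {"".join(p) for s in subsets for p in permutations(s)}

# query_strings(["is", "closed"]) == {"is", "closed", "isclosed", "closedis"}

def score_candidate(name_score, param_scores, mapped_words, query_words,
                    expected_params, superfluous_params, weights=(0.4, 0.3, 0.2, 0.1)):
    """Toy aggregate score for one API call candidate; the weights are assumptions."""
    w_name, w_cover, w_param, w_extra = weights
    coverage = mapped_words / max(1, query_words)          # share of mapped query words
    param_ratio = (sum(param_scores) / expected_params) if expected_params else 1.0
    return (w_name * name_score + w_cover * coverage
            + w_param * param_ratio - w_extra * superfluous_params)
```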
The result of the scoring process is a ranked list of candidates for each instruction. For the time being, we simply use the top-ranked candidates to synthesize the method body. However, re-ranking the candidates based on other semantic resources is promising future work. In a last step, we inject control structures, i.e. conditional branching, various types of loops, and concurrency (Weigelt et al., 2018b,c). The approach is rule-based. We use key phrases, such as in case, until, and at the same time. Proceeding from these anchor points we look for structures that fit into the respective control structure. Here, we apply heuristics on the syntax (based on PoS tags and chunks) and coreference. Utterances that were labeled as non-teaching in the first stage also run through the third stage, except for signature synthesis. Thus, we only construct scripts for this type of utterances.
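The keyword-based detection of control structure anchors can be illustrated with the following sketch; the key phrase list is abbreviated and the mapping to structure types is an assumption.

```python
CONTROL_KEYPHRASES = {
    "if": "conditional", "in case": "conditional",
    "until": "loop", "while": "loop", "as long as": "loop",
    "at the same time": "concurrency", "meanwhile": "concurrency",
}

def detect_control_anchors(instructions):
    """Return (index, structure type) for every instruction containing a key phrase."""
    anchors = []
    for i, instruction in enumerate(instructions):
        text = instruction.lower()
        tokens = text.split()
        for phrase, kind in CONTROL_KEYPHRASES.items():
            # Single-word phrases are matched against tokens to avoid substring hits.
            hit = phrase in tokens if " " not in phrase else phrase in text
            if hit:
                anchors.append((i, kind))
                break
    return anchors

# detect_control_anchors(["take the cup", "wait until the machine stops", "open the door"])
# -> [(1, "loop")]
```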
We determine the quality of the approach for the third stage based on utterances from scenarios one, two, and three, since we used scenario four during development. The assessment is partly manual. Hence, we randomly drew 25 utterances from each scenario to reduce the effort. For each description we used the manual labels of the first-stage and second-stage classifications and prepared a gold standard for API calls in the method body. Table 9 depicts the dataset. We did not prepare solutions for the signatures, since plenty of valid solutions are imaginable. Thus, we decided to review the signatures manually afterwards. Of the 52 synthesized method names we assessed eight as inappropriate. A name is inappropriate if either the name is off-topic or it contains unrelated terms, e.g. askSpeaker or prepareCoffeeFriend for the scenario How to prepare coffee. Moreover, fu SE correctly mapped 23 parameters without any false positives.
The API ontology used in our setting (household robot) comprises 92 methods, 59 parameters, and 20 data types. To represent the environment (a kitchen) of the robot, we used another ontology with 70 objects of six types, and six states. Table 8 details the results for the method body synthesis. Besides precision, recall, and F1, it shows the average rank at which the correct element is to be found. Since the semantic role labeling introduces a vast amount of errors on spoken utterances and our approach heavily depends on it, we also determine recall and F1 excluding SRL errors. The results are encouraging. We achieve an F1 value of 76.7% for the individuals and 62.0% for entire calls; in both cases the precision is slightly ahead of the recall. If we exclude SRL errors, the overall performance increases (about 7% for individuals and 5% for calls). Besides the SRL, missing and inappropriate synonyms are a major source of errors. If WordNet lacks a synonym for an important word in the utterance, fu SE 's API mapping may be unable to determine the correct ontology individual. Conversely, if WordNet provides an inappropriate synonym, fu SE may produce an incorrect (superfluous) mapping. In other cases, our language model is unable to capture the semantics of the utterance properly. For example, fu SE creates two method calls for the phrase "make sure you close it": close(...) and make(...). It may also produce superfluous mappings for explanatory phrases, such as "the machine fills cups", if the second stage did not classify them as miscellaneous. Regarding the composition of API calls (methods plus arguments), the majority of errors is introduced by the arguments. In addition to the afore-mentioned error sources, arguments are often ambiguous. For instance, the phrase "open the door" leaves it open to interpretation which door was intended to be opened. Even though fu SE makes use of an elaborated context model, some ambiguities are impossible to resolve (see section 5). A related issue is the incorrect resolution of coreferences; each mistake leads to a misplaced argument. Most of these error sources can be eliminated if the pre-processing improves. However, many difficulties simply arise from erroneous or ambiguous descriptions. Still, fu SE interprets most of them correctly. Most encouragingly, the average rank of the correct element is near 1. Thus, our scoring mechanism succeeds in placing the right elements on top of the list.
Evaluation
To measure the performance of fu SE on unseen data, we set up an end-to-end evaluation. We created two new scenarios. They take place in the kitchen setting again, but involve different actions and objects. In the first, subjects are supposed to teach the robot how to start the dishwasher and in the second, how to prepare cereals. Once more we used Prolific to collect the data and set the number of participants to 110. However, we accepted only 101 submissions, i.e. 202 descriptions. We randomly drew 50 descriptions from each scenario. Since the evaluation of the overall approach entails the same output as the third stage, we prepared the gold standard as in subsection 3.4 and used the same ontologies. Table 11 details the dataset used in the end-to-end evaluation. Additionally, we provide five exemplary descriptions from the dataset in Table 14 (Appendix A).
In the end-to-end evaluation our approach synthesized 73 method signatures; five were missed due to an incorrect first-stage classification. Out of 73 synthesized methods we assessed seven to be inappropriate. Additionally, 36 parameters were mapped correctly and no false positives were created. Except for the missing method signatures the results are in line with the third-stage evaluation.
The results for the method body synthesis improve upon the third-stage evaluation; excluding SRL errors again increases recall and F1. However, the effect is smaller here. Moreover, the average rank is also closer to the optimum (1.0) in both cases. Since the first two stages of fu SE are based on neural networks, it is difficult to say why the results in the end-to-end evaluation improve. However, we believe the main cause is the introduction of a new test dataset, which has two consequences. First, the models used in the first two stages are learned on all four scenarios instead of three, i.e. the models are trained on a larger dataset, which (presumably) makes them more robust. Second, the new task may be simpler to describe. Consequently, the descriptions comprise simpler wordings and become easier to handle. In summary, the results show that fu SE generalizes to different settings (at least in the same domain) and is marginally degraded by error propagation.
To assess how well fu SE generalizes to truly spoken utterances, we evaluated on another dataset. It is a collection of recordings from multiple recent projects. The setting (instructing a humanoid robot in a kitchen setting) is the same. However, none of the scenarios involved teaching new functionality. Thus, we can only measure fu SE 's ability to construct scripts. The descriptions in this dataset comprise control structures to a much larger extent. Altogether the dataset comprises 234 recordings and manual transcriptions. The 108 subjects were mostly undergraduate and graduate students.
On the transcripts we assess the mapping of methods and parameters individually. The results for both and entire calls are depicted in Table 12. Even though the spoken samples comprise a vast number of disfluencies and grammatical flaws, fu SE maps more calls correctly. This counter-intuitive effect may be explained by the lower complexity and briefness of the spoken descriptions. Regarding the control structures, 27.4% were injected correctly. Note that correctly means an appropriate condition plus a block with correct extent. If we lower the standards for condition correctness, the share of correct structures is 71.23%.
Conclusion
We have presented fuSE, a system for programming in natural language. More precisely, we aim to enable laypersons to teach an intelligent system new functionality with nothing but spoken instructions. Our approach is three-tiered. First, we classify whether a natural language description entails an explicitly stated intent to teach new functionality. If an intent is spotted, we use a second classifier to separate the input into semantically disjoint parts; we identify declarative and specifying parts and filter out superfluous information. Finally, we synthesize method signatures from the declarative and method bodies from the specifying parts. Method bodies contain instructions and control structures; instructions are mapped to API calls. We implemented the first two steps using classical machine learning and neural networks. Teaching intents are identified with an accuracy of 97.7% (using BERT). The classification of the semantics is correct in 97.6% of the cases (using a BiLSTM).
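The three-tier process summarized above can be sketched schematically as follows; the function names and signatures are illustrative placeholders, not fuSE's actual API.

def detect_teaching_intent(description: str) -> bool:
    """Stage 1: binary classification, e.g. with a fine-tuned BERT model."""
    ...

def split_semantic_parts(description: str) -> dict:
    """Stage 2: label parts as declarative, specifying, or miscellaneous
    (e.g. with a BiLSTM classifier) and drop the superfluous ones."""
    ...

def synthesize_method(parts: dict, api_ontology, env_ontology):
    """Stage 3: build a method signature from the declarative part and a body
    of API calls and control structures from the specifying part."""
    ...

def process(description: str, api_ontology, env_ontology):
    if not detect_teaching_intent(description):
        return None  # no new functionality is taught; only a script may be built
    parts = split_semantic_parts(description)
    return synthesize_method(parts, api_ontology, env_ontology)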
We evaluated fuSE on 100 descriptions obtained from a user study. The results are promising; fuSE correctly synthesized 84.6% of the method signatures. The mapping of instructions in the body to API calls achieved an F1-score of 66.9%. In a second evaluation on a speech corpus the F1-score for API calls is 79.2%.

We plan to evaluate fuSE in other domains. It will be interesting to see if we can reuse (or transfer) the machine learning models as well as the rest of the approach. Future extensions of fuSE will include the integration of a dialog component. We may query the user in case of ambiguous statements or missing parameters. We have implemented an extensible dialog module and shown that it can be used to resolve ambiguous references, word recognition errors, and missing conditions (Weigelt et al., 2018a). However, we still have to figure out how to query users properly if an API mapping is ambiguous or parameters are missing. Another improvement concerns the analysis of verb references; humans often refer to previous actions, which may cause superfluous instructions. We will also implement a sanity check that considers the feasibility and meaningfulness of the sequence of actions in the method body. The latter may involve a feedback mechanism via the dialog component. Giving feedback on newly learned method definitions, which may be lengthy and therefore unwieldy to repeat as a whole, is an interesting challenge.
A Dataset Examples
The dataset includes descriptions of varying quality. Some texts have syntactical flaws such as typos and grammar mistakes. They also vary in terms of descriptiveness and style; the latter ranges from full sentences to notes. Table 13 shows six examples from the preliminary study (scenarios one to four) and Table 14 five examples from the end-to-end evaluation (scenarios five and six). Most of the descriptions contain errors. For instance, description 2180 contains typos, such as "ring some beverage".
B Architectures and Hyper-parameters
We applied a broad range of machine learning approaches to the classification tasks. Table 15 shows the types, architectures, and hyper-parameters we tested in the process. We also experimented with self-trained and pre-trained fastText embeddings. Table 16 shows representative configurations for the first stage of fuSE (binary classification); for the neural networks we altered the hyper-parameters systematically to give an intuition of their effects. There are general trends: classifiers perform better on randomly split data, a batch size of 100 is better than 300, and pre-trained embeddings outperform the self-trained ones in almost all cases. Overall, BERT-based classifiers achieve the best results. However, some neural network configurations come close (e.g., RNN6.0); classical machine learning techniques are inadequate. For the second stage (ternary classification) we show interesting results in Table 17. The trends are as follows: the preferable batch size is 32, pre-trained embeddings again outperform the self-trained ones, and RNNs perform best.
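As an illustration, one of the architectures listed above ("BiLSTM(128), D(64)" with frozen pre-trained fastText embeddings) could be assembled roughly as in the following Keras sketch; the vocabulary, sequence length, and embedding matrix are assumed inputs, and the exact training setup used for fuSE may differ.

import numpy as np
from tensorflow.keras import layers, models, initializers

def build_bilstm_classifier(embedding_matrix: np.ndarray, n_classes: int = 2):
    vocab_size, emb_dim = embedding_matrix.shape
    model = models.Sequential([
        layers.Embedding(vocab_size, emb_dim,
                         embeddings_initializer=initializers.Constant(embedding_matrix),
                         trainable=False),
        layers.Bidirectional(layers.LSTM(128)),         # BiLSTM(128)
        layers.Dense(64, activation="relu"),            # D(64)
        layers.Dense(n_classes, activation="softmax"),  # output (dense) layer
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then use, e.g., batch_size=100 and early stopping on the
# validation loss, as noted in the footnotes.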
C Configurations and Results
D Call Candidate Scoring
In subsection 3.4 we only discussed the rationale behind our call candidate scoring mechanism. In the following, we give a formal definition. A call candidate is an API method with arguments (extracted from the natural language input). The arguments are of primitive, composite (strings or enumerations), or previously defined types (e.g., objects from the environment), and they adhere to the formal definition of the API method. For each call candidate $c$, fuSE calculates the score $S(c)$ as follows:

$$S(c) = \varphi \cdot P(c) \cdot S_M(c) + (1 - \varphi) \cdot WS_P(c) \qquad (1)$$

The score is composed of two components: the method score $S_M(c)$ and the weighted parameter score $WS_P(c)$. The impact of the latter on the final score can be adjusted with the weight $\varphi$. Further, $S_M(c)$ is scaled by the perfect match bonus $P(c)$:

$$P(c) = \begin{cases} \tau & \text{if } M(c) > 0.9 \\ 1 & \text{otherwise} \end{cases} \qquad (2)$$

The perfect match bonus $P(c)$ allows us to prefer call candidates with a method name score $M(c)$ above 0.9. The scaling factor $\tau$ is configurable ($\tau \geq 1$). The method score $S_M(c)$ is computed as follows:

$$S_M(c) = M(c) - \frac{\beta}{|I_A(c)|} \cdot \left(1 - \frac{|I_F(c)|}{|I_A(c)|}\right) \qquad (3)$$

The method name score $M(c)$ is the maximal similarity between the natural language chunk that represents the action (or event) and the (API) method name. We use Jaro-Winkler and the fuzzy score as similarity measures. To obtain the method score $S_M(c)$, the method name score $M(c)$ is reduced by a subtrahend that indicates how well the method name covers the words of the original natural language chunk. The subtrahend is composed of two factors. The second factor is one minus the ratio of the number of chunk words that can be found in the method name ($|I_F(c)|$) to the total number of words in the chunk ($|I_A(c)|$); i.e., this factor is the share of unmapped words. The first factor scales it by the configurable parameter $\beta$ divided by the length of the chunk. The rationale is as follows. In short chunks each word is important; therefore, unmapped words are strongly penalized. With an increasing number of words in the chunk it becomes increasingly unlikely that all words can be mapped, and in longer chunks many words are semantically irrelevant; therefore, we reduce the subtrahend with the length of the chunk. The weighted parameter score $WS_P(c)$ in Equation 1 is calculated as follows:

$$WS_P(c) = S_P(c) - \omega \cdot Pen(c) \qquad (4)$$

The score is composed of the parameter score $S_P(c)$ and a penalty value $Pen(c)$; the latter is weighted by the configurable factor $\omega$. The parameter score $S_P(c)$ is calculated as follows:

$$S_P(c) = \left(\sum_{p_i \in P_M} P_i(c)\right) \cdot \frac{|P_M|}{|P_O(c)|} \qquad (5)$$

$P_M$ is the set of all parameters $p_i$ (extracted from the natural language input) that were mapped to formal method parameters; each $p_i$ has a similarity score $P_i(c)$. Thus, $S_P(c)$ is the sum of the similarity scores of all mapped parameters, multiplied by the ratio of mapped parameters ($P_M$) to the formal parameters expected according to the ontology ($P_O(c)$). To calculate $WS_P(c)$ (see Equation 4), $S_P(c)$ is reduced by the penalty value $Pen(c)$, which is calculated as follows:

$$Pen(c) = \frac{|P_E| - |P_M|}{|P_E|} \qquad (6)$$

$P_E$ is the set of parameters that were extracted from the natural language input (see Figure 2). Thus, $Pen(c)$ is the number of extracted parameters that were not mapped to a formal method parameter, normalized by the total number of extracted (natural language) parameters.

For the evaluation of the third stage of fuSE and the end-to-end evaluation we set the method score weight $\varphi$ to 0.6, the perfect match multiplier $\tau$ to 1.5, the search string coverage weight $\beta$ to 0.5, and the penalty factor $\omega$ to 0.
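A compact sketch of Equations (1)-(6) with these default weights is given below; the Candidate class is a simplified stand-in for fuSE's internal candidate representation, not its actual implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name_score: float                 # M(c): best chunk/method-name similarity
    chunk_words: int                  # |I_A(c)|: words in the NL chunk
    words_in_name: int                # |I_F(c)|: chunk words found in the method name
    mapped_param_scores: List[float]  # P_i(c) for each mapped parameter (P_M)
    expected_params: int              # |P_O(c)|: formal parameters in the ontology
    extracted_params: int             # |P_E|: parameters extracted from the input

def score(c: Candidate, phi=0.6, tau=1.5, beta=0.5, omega=0.0) -> float:
    p = tau if c.name_score > 0.9 else 1.0                        # Eq. (2)
    unmapped_share = 1.0 - c.words_in_name / c.chunk_words
    s_m = c.name_score - (beta / c.chunk_words) * unmapped_share  # Eq. (3)
    n_mapped = len(c.mapped_param_scores)
    s_p = sum(c.mapped_param_scores) * (n_mapped / c.expected_params
                                        if c.expected_params else 0.0)  # Eq. (5)
    pen = ((c.extracted_params - n_mapped) / c.extracted_params
           if c.extracted_params else 0.0)                        # Eq. (6)
    ws_p = s_p - omega * pen                                      # Eq. (4)
    return phi * p * s_m + (1.0 - phi) * ws_p                     # Eq. (1)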
Figure 2: Exemplary semantic model for an utterance.

Table 1: The number of descriptions, words, and unique words per scenario and in the entire dataset.
Table 2: The distribution of binary and ternary labels in the dataset. The respective share is given in parentheses.

Table 3: Classification accuracy achieved by classical machine learning techniques on validation (in parentheses) and test set. The best results are printed in bold type.
  classifier            random         scenario
  Decision Tree         (.893) .903    (.861) .719
  Random Forest         (.917) .909    (.893) .374
  SVM                   (.848) .861    (.870) .426
  Naïve Bayes           (.771) .801    (.765) .300
  Logistic Regression   (.927) .947    (.891) .719
  baseline (ZeroR)      .573           .547
The label specification distinctly dominates (76%) the others. The entire dataset is publicly accessible (open access), including raw data, labeled data, meta-data, and scenario descriptions: http://dx.doi.org/10.21227/zecn-6c61.
[…] Common Crawl dataset 4 by Facebook Research (Mikolov et al., […])

Table 4: Classification accuracy for neural networks on validation (in parentheses) and test set (best in bold).
  network architecture                       random         scenario
  C(128,3), Max(2), C(64,3), GMax, D(10)     (.952) .971    (.962) .874
  C(128,5), Max(2), C(128,5), GMax, D(10)    (.954) .966    (.977) .862
  BiGRU(32), DO(.2), D(64), DO(.2)           (.952) .959    (.958) .932
  BiLSTM(128), D(64)                         (.956) .959    (.962) .919
  BERT, 5 epochs                             (.973) .981    (.991) .969
  BERT, 10 epochs                            (.976) .982    (.992) .973
  BERT, 300 epochs                           (.962) .982    (.992) .977
  baseline (Log. Reg.)                       (.927) .947    (.891) .719
[…] 5. The bidirectional architectures, be it GRU or LSTM, are […]
5 Again, we only present the best configurations here. For more configurations, refer to Table 16 in Appendix C.

Table 5: Classification accuracy achieved by neural networks on validation (in parentheses) and test set for the second stage. The best results are printed in bold type.
  network architecture      random         scenario
  BiLSTM(128)               (.987) .985    (.981) .976
  BiGRU(128)                (.985) .985    (.982) .968
  BiLSTM(128), DO(.2)       (.988) .988    (.981) .975
  BiLSTM(256), DO(.2)       (.987) .985    (.982) .975
  BERT, 5 epochs            (.979) .982    (.979) .965
  BERT, 10 epochs           (.983) .985    (.983) .972
  BERT, 300 epochs          (.981) .983    (.985) .973
  baseline (ZeroR)          .759           .757
Table 6: Domain ontology structure for systems.

Table 7: Domain ontology structure for environments.
  class       description
  Thing       Top concept of the ontology
  Object      Objects in environment
  Graspable   Graspable objects, e.g., cup
  Openable    Openable objects, e.g., bottle
  …
  State       Object states, e.g., opened

[…] employ SENNA 7. Word senses are disambiguated using the tool Babelfy (Moro et al., 2014). Since Babelfy is linked to WordNet […]
[…] [to make] [coffee] you have [to locate] [the cup] …
method: signature (name, parameters); body (inst1: name, parameters; inst2 …)
Table 8: The results of the evaluation of the API call mapping for individual elements, i.e. names and parameters, and entire calls. The values in parentheses denote the results obtained excluding SRL errors.

Table 9: The dataset used to evaluate the third stage.
          total   teach   non-teach   API calls
  sc. 1   25      18      7           77
  sc. 2   25      19      6           97
  sc. 3   25      15      10          123
  total   75      52      23          297
Table 10: The results of the end-to-end evaluation, divided into individual elements, i.e. names and parameters, and entire calls. The values in parentheses denote the results obtained excluding SRL errors.

Table 11: The end-to-end evaluation dataset.
          total   teach   non-teach   API calls
  sc. 5   50      44      6           158
  sc. 6   50      34      16          315
  total   100     78      22          473
Table 12: Evaluation results for the speech corpus.
               pre.    rec.    F1
  methods      .924    .884    .904
  parameters   .828    .951    .885
  API calls    .735    .859    .792
Table 13: Six exemplary submissions taken from the preliminary study dataset (scenarios one to four).
  ID 302 (scen. 1): Look directly at the person. Wave your hand. Say 'hello'.
  ID 1000 (scen. 2): You have to place the cup under the dispenser and press the red button to make coffee.
  ID 1346 (scen. 2): Making coffee means you have to press the red button, put a cup underneath the hole and then pouring the coffee that comes out into your cup
  ID 2180 (scen. 3): To ring a beverage, open the fridge and select one of te beverages inside, pour it into one of the glasses on the kitchen counter and hand the glass over to the person.
  ID 2511 (scen. 4): collect cutlery from cupboard, bring them to the table and place down neatly
  ID 2577 (scen. 4): To set the table for two, Go to the cupboard and take two of each; plates, glasses, knives, and forks. Take them to the kitchen table and set two individual places.

Table 14: Five exemplary submissions taken from the end-to-end evaluation dataset (scenarios five and six).
  ID E 10 (scen. 5): Hey, Amar. We're going to start the dishwasher so what we have to do is first make sure the dishwasher is closed and then press the blue button twice to start the dishwasher.
  ID E 79 (scen. 5): Hi Armar. Turning on the Dishwasher means you have to go to the dishwasher. Close the dishwasher, then press the blue button 2 times.
  ID E 81 (scen. 5): Hi Armar, to use the dishwasher you need to check first if it is closed, if not, close it by pushing the front door. If it is closed, look for the blue button on the dishwasher. Once you find it, press it a first time and then a second time. That's how you start the dishwasher.
  ID E 117 (scen. 6): hi armar, you get your cereal ready you need to go to the fridge and open the door by pulling it. you will find the milk bottle inside the fridge door. lift it out and carry it to the kitchen table. place it next to your bowl and cereal box. start by filling the bowl with your cereal then pour in some milk.
  ID E 158 (scen. 6): Hi Armar, you have to go to the fridge, open it, grab the milk, close it and carry the milk to the kitchen table. Then place it next to the bowl and the cereal box. Fill the bowl with the cereals and then pour the mil in the bowl. That is how you prepare some cereals
3 We determined all values empirically with the help of examples from scenario 4.

Table 15: Overview of the types, architectures, and hyper-parameters of neural networks used in the two classification tasks (step one and two of fuSE). The rows cover ANN, CNN, RNN (LSTM, GRU, BiLSTM, BiGRU), and BERT models; the columns list the architecture, additional layers (Flatten, Max, GMax, Dense, Dropout), number of units, epochs, batch sizes, dropout values, and learning rates that were explored.

Table 16: Classification accuracy obtained on the validation (in parentheses) and the test set for the first stage (binary classification). The best results (per classifier category) are printed in bold type. The basic structure of each neural network includes an embedding layer and an output layer (dense layer). The table lists representative configurations (ANN, CNN1.0-CNN2.4, RNN4.1-RNN7.1) with their architectures, batch sizes, and accuracies on the random and scenario splits.
Note that we do not discuss the influence of varying epoch numbers, since we used early stopping, i.e. the training stops when the validation loss stops decreasing.
4 Common Crawl: https://commoncrawl.org/
http://www.surdeanu.info/mihai/bios/
https://commons.apache.org/proper/commons-text/apidocs/org/apache/commons/text/similarity/FuzzyScore.html
Table 17: Classification accuracy obtained on the validation (in parenthesis) and the test set for the second stage (ternary classification). The best results are printed in bold type. The basic structure of each model includes an embedding layer and an output layer (dense layer).
Mattia Atzeni and Maurizio Atzori. 2018a. Towards Semantic Approaches for General-Purpose End-User Development. In 2018 Second IEEE International Conference on Robotic Computing (IRC), pages 369-376.
Mattia Atzeni and Maurizio Atzori. 2018b. Translating Natural Language to Code: An Unsupervised Ontology-Based Approach. In 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pages 1-8.
Amos Azaria, Jayant Krishnamurthy, and Tom M. Mitchell. 2016. Instructable intelligent personal agent. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 2681-2689, Phoenix, Arizona. AAAI Press.
Bruce W. Ballard and Alan W. Biermann. 1979. Programming in Natural Language: "NLC" As a Prototype. In Proceedings of the 1979 Annual Conference (ACM), ACM '79, pages 228-237, New York, NY, USA. ACM.
Andrew Begel. 2004. Spoken Language Support for Software Development. In Proceedings of the 2004 IEEE Symposium on Visual Languages - Human Centric Computing, VLHCC '04, pages 271-272, USA. IEEE Computer Society.
Andrew Begel and Susan L. Graham. 2005. Spoken Programs. In Proceedings of the 2005 IEEE Symposium on Visual Languages and Human-Centric Computing, VLHCC '05, pages 99-106, USA. IEEE Computer Society.
Alan W. Biermann and Bruce W. Ballard. 1980. Toward Natural Language Computation. Comput. Linguist., 6(2):71-86.
Alan W. Biermann, Bruce W. Ballard, and Anne H. Sigmon. 1983. An Experimental Study of Natural Language Programming. International Journal of Man-Machine Studies, 18(1):71-87.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Giovanni Campagna, Rakesh Ramesh, Silei Xu, Michael Fischer, and Monica S. Lam. 2017. Almond: The Architecture of an Open, Crowdsourced, Privacy-Preserving, Programmable Virtual Assistant. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, pages 341-350, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.
Giovanni Campagna, Silei Xu, Mehrad Moradshahi, Richard Socher, and Monica S. Lam. 2019. Genie: A generator of natural language semantic parsers for virtual assistant commands. In Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019, pages 394-410, Phoenix, AZ, USA. Association for Computing Machinery.
Xavier Carreras and Lluís Màrquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pages 89-97, Boston, Massachusetts, USA. Association for Computational Linguistics.
Xavier Carreras and Lluís Màrquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 152-164, Ann Arbor, Michigan. Association for Computational Linguistics.
Bo Chen, Le Sun, and Xianpei Han. 2018. Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 766-777, Melbourne, Australia. Association for Computational Linguistics.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. J. Mach. Learn. Res., 12:2493-2537.
Alain Désilets, David C. Fox, and Stuart Norton. 2006. VoiceCode: An Innovative Speech Interface for Programming-by-voice. In CHI '06 Extended Abstracts on Human Factors in Computing Systems, CHI EA '06, pages 239-242, New York, NY, USA. ACM.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731-742, Melbourne, Australia. Association for Computational Linguistics.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.
Sebastian Weigelt, Tobias Hey, and Mathias Landhäußer. 2018a. Integrating a Dialog Component into a Framework for Spoken Language Understanding. In Proceedings of the 6th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering, RAISE '18, pages 1-7, New York, NY, USA. ACM.
Sebastian Weigelt, Tobias Hey, and Vanessa Steurer. 2018b. Detection of Conditionals in Spoken Utterances. In 2018 IEEE 12th International Conference on Semantic Computing (ICSC), pages 85-92.
Sebastian Weigelt, Tobias Hey, and Vanessa Steurer. 2018c. Detection of Control Structures in Spoken Utterances. International Journal of Semantic Computing, 12(3):335-360.
Sebastian Weigelt, Tobias Hey, and Walter F. Tichy. 2017. Context Model Acquisition from Spoken Utterances. In Proceedings of The 29th International Conference on Software Engineering & Knowledge Engineering, pages 201-206, Pittsburgh, PA.
Sebastian Weigelt, Vanessa Steurer, Tobias Hey, and Walter F. Tichy. 2020. Roger that! Learning How Laypersons Teach New Functions to Intelligent Systems. In 2020 IEEE 14th International Conference on Semantic Computing (ICSC), pages 93-100.
William E. Winkler. 1990. String Comparator Metrics and Enhanced Decision Rules in the Fellegi-Sunter Model of Record Linkage.
Terry Winograd. 1972. Understanding natural language. Cognitive Psychology, 3(1):1-191. |
6,087,019 | Détection de conflits dans les communautés épistémiques en ligne (Conflicts detection in online epistemic communities) | Conflicts in online epistemic communities can be a blocking factor when producing knowledge. We present a way to automatically detect conflict in Wikipedia discussions, based on subjectivity and connotation marks. Two rules are evaluated: a local rule that uses the structure of the discussion threads, connotation and subjectivity marks, and a global rule that takes the whole thread into account and only subjectivity. We show that the two rules produce similar results but that the simplicity of the global rule makes it a preferred approach to detect conflicts. Keywords: wikipedia, conflict, syntax, semantics, interaction. | [
59863706,
232021594,
15631550
] | Détection de conflits dans les communautés épistémiques en ligne
Actes de la conférence conjointe JEP-TALN-RECITAL 2012, Grenoble. Copyright TALN 2012.
Keywords: wikipedia, conflict, syntax, semantics, interaction.
(1) UMR 7503 LORIA, CNRS, Campus scientifique, 54506 Vandoeuvre-lès-Nancy; (2) UMR 5191 ICAR, CNRS, 5 parvis René Descartes, 69342 Lyon Cedex 07; (3) UMR 5141 LTCI, CNRS, 46 rue Barrault, 75634 Paris Cedex 13; (4) CNAM-CRTD.
1 Problem statement

Online epistemic communities are communities that bring individuals together in order to collectively design resources: ontologies, articles, specifications, etc. (Barcellini et al., 2008). We take the perspective of studying these communities and propose an automatic tool for detecting conflictual discussion threads. The objective is twofold. First, a conflict detection tool makes it possible to automatically spot the discussion threads that may be blocking or, on the contrary, productive for the design activity. Second, the tool makes it possible to automatically detect the most conflict-prone individuals. In both cases, this automatic detection is a useful tool for community managers.

The construction of a conflict detection tool is part of the CCCP-Prosodie project 1, within which the Astronomy community of Wikipedia was studied. In particular, the article dedicated to the celestial body Pluto, which lost its planet status in 2006, was the subject of intense discussions about its renaming. In (Fréard et al., 2010), a manual annotation of the contributions of the participants of the discussion page about Pluto is carried out, and the community is studied from the angle of the evolution of the conflict around the renaming thanks to this manual annotation. However, the annotation categories proposed by (Fréard et al., 2010), themselves inspired by (Baker et al., 2009), are difficult to reproduce automatically (dialogue act, categorization of the propositional content, contributor's level of expertise), particularly on the free text of Wikipedia discussion pages.

In this article we propose to explore the features accessible to an automatic method that make it possible to characterize conflict, and we then propose a method for detecting these conflicts. We first discuss the definition of conflict in Section 2, propose several methods for detecting conflict in Section 3, and conclude with an evaluation of the different methods in Section 4.
2 What is a conflict?

Conflict in argumentation

The notion of conflict (or conflict of avowed opinions) lies at the basis of dialectical models of argumentation in dialogue (Barth and Krabbe, 1982; Mackenzie, 1985), which analyze the argumentative process as a game of attacks and defenses aimed at determining whether a proposition (or thesis) is tenable under the fire of criticism. According to the model of Barth & Krabbe (op. cit.), conflict is defined by the distribution of roles with respect to a thesis: the proponent, who must defend the thesis without retracting, and the opponent, who has the right to attack or question the proponent's claims without, however, going back on their own concessions. In practice, one observes that a conflict is declared once three dialogue acts (or argumentation moves) have been produced: (1) the assertion of a proposition p, (2) the attack on or questioning of p, (3) the defense of p by a justification or a counter-attack.

Within the framework of a critical discussion (or rational argumentation), arguments should bear only on the statements and the objects of discourse. In reality, however, and in Wikipedia in particular, participants frequently resort to fallacious reasoning (van Eemeren and Grootendorst, 1992), such as arguments from authority or ad hominem attacks, in order to discredit the interlocutor, downplay the import of what they say, or even exclude them from the debate (the case of trolls). It is therefore necessary to extend the notion of conflict so as to include personal conflicts, which concern a person's legitimacy to take part in the debate, and meta-argumentative conflicts, which concern whether or not a participant has respected the rules of debate instituted by the community.
A discursive approach to conflict

Implementing the dialectical model above, all the more so in its extended version, is largely problematic, as it requires mobilizing domain knowledge (extended to persons and community rules) and discourse analysis techniques (van Eemeren et al., 1993; Asher and Lascarides, 2003).

Without abandoning the dialectical model, however, we propose to exploit surface cues: polarity (the positive or negative connotation of utterances) and subjectivity (direct or indirect mentions of the speakers in the discourse). While connotation is a relatively weak cue in the general case and is not enough to reliably mark an opinion, let alone an argument, it becomes fully meaningful in an argumentative context. The use of a negative connotation can mark attacks, while positive connotations can mark concessions and defenses. Subjectivity, for its part, is on the one hand a mark of the speaker's involvement in the discourse and, on the other hand, serves to demarcate the theses in a mixed conflict (what you say vs. what I say). It is moreover an essential cue for spotting personal or meta-argumentative conflicts.

The excerpt in Figure 1, taken from the Wikipedia discussion page on Pluto, illustrates a conflict between two participants, M and R. It is notable that every message carries a negative connotation (neg) and that both participants engage personally by using the first or the second person: M attacks R with the second person, and R defends himself with the first person.
FIGURE 1 - Excerpt of a conflict alternating subjective markers in a negative context.
M (2nd, neg): "Pluto is indeed the only dwarf planet, right?". Thank you for showing with such dazzling clarity that, in the end, you do not know much about astronomy.
R (1st, neg): I meant "Pluto is indeed the only dwarf planet to be called that (Pluton), right?". Thank you for not taking me for more ignorant than I am.
M (2nd, neg): No. Ceres and Eris, the two other dwarf planets, are named according to the same convention [...] Do you understand?
R (1st, neg): Actually, my question was purely rhetorical; I was not talking about the numbered designation but simply about the name Pluton. [...]
This example leads us to consider a naive conflict detection rule in which a conflict is defined as a negative situation where the participants alternate subjective marks. This rule is implemented and evaluated in Section 4.
3 Implementation

We propose to explore the relevance of the subjectivity and polarity dimensions for conflict detection in two steps: first we propose a low-level annotation of the participants' utterances in terms of subjectivity and polarity, then a high-level annotation to determine whether or not a discussion thread carries a conflict.

Annotating subjectivity and polarity
Annotating the subjectivity and polarity features consists in associating a list of features with each utterance and each message. Acquiring the subjectivity dimension can be complicated if one considers the inherent subjectivity of adjectives (compare for instance "absurde" and "incorrect"). We therefore restrict ourselves to the presence of pronominal marks. Every first-person (respectively second-person) personal pronoun (je, nous), possessive pronoun (le mien), and possessive determiner (mon, nôtre) is annotated subjective-1st (respectively subjective-2nd); an utterance may contain several of these marks.
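A minimal sketch of this pronoun-based subjectivity annotation might look as follows; the French word lists are illustrative and not the exhaustive sets used in the paper.

import re

FIRST_PERSON = {"je", "j", "me", "moi", "nous", "mon", "ma", "mes",
                "notre", "nos", "mien", "mienne", "miens", "miennes"}
SECOND_PERSON = {"tu", "t", "te", "toi", "vous", "ton", "ta", "tes",
                 "votre", "vos", "tien", "tienne", "tiens", "tiennes"}

def subjectivity_marks(utterance: str) -> list:
    tokens = re.findall(r"[a-zàâäçéèêëîïôöûùüÿœ]+", utterance.lower())
    marks = []
    for tok in tokens:
        if tok in FIRST_PERSON:
            marks.append("subjective-1st")
        elif tok in SECOND_PERSON:
            marks.append("subjective-2nd")
    return marks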
Recognizing the polarity of an utterance is a complex problem, because the polarity of a word changes depending on the co-text (syntactic negation, the syntactic role of a word, etc.) or the context of enunciation (the point of view of the speaker). Many approaches have tackled the general problem in the context of opinion or sentiment detection (Pang et al., 2002; Wilson et al., 2009; Pak and Paroubek, 2011). We can nevertheless assume that within an argumentative debate the problem is less difficult. For example, the utterance "the Gavin elementary school was condemned in April 2004", taken from (Wilson et al., 2009), illustrates the use of a negative word that expresses a neutral opinion in the general case but that can be considered an attack in a debate. The problem is thus simpler but not trivial, since syntactic negation still comes into play: even in a debate, "ce n'est pas absurde" ("this is not absurd") cannot be considered negative. We therefore propose two polarity annotation methods: a purely lexical method, and a syntactic method that makes it possible to filter out cases such as "ce n'est pas absurde". These two methods are evaluated and compared in Section 4.

Lexical analysis. The lexical analysis relies only on the polarity of words, without considering the influence of the co-text; the problem boils down to obtaining this polarity. For English there are polarity lexicons such as WordNet-Affect (Valitutti et al., 2004) or SentiWordNet (Baccianella et al., 2010). For French, one can cite the lexicon of (Mathieu, 2004), but it is limited to 950 words. We therefore manually annotated, in terms of polarity (positive, negative, or neutral), the lexicon of the Français fondamental, which contains the 8,000 most frequent lemmas of French (Gougenheim et al., 1964). Since this lexicon is relatively limited in size, we extended it automatically using EuroWordNet-FR (Vossen, 1998), considering that the hyponyms of a connoted lemma were connoted as well. Given the lexicon, annotating an utterance simply amounts to collecting the polarities of the words it contains. The purely lexical analysis thus considers utterances such as "ce n'est pas absurde" as negative utterances.
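The purely lexical polarity annotation thus reduces to a dictionary lookup, as in the following sketch; the small POLARITY dictionary stands in for the hand-annotated Français fondamental lexicon extended with EuroWordNet-FR hyponyms.

POLARITY = {"merci": "positive", "aider": "positive",
            "insulte": "negative", "absurde": "negative"}

def lexical_polarity_marks(lemmas: list) -> list:
    """One mark per polarized lemma; neutral lemmas contribute nothing."""
    return [POLARITY[lemma] for lemma in lemmas if lemma in POLARITY]

# Note that this rule still labels "ce n'est pas absurde" as negative,
# which is precisely the limitation addressed by the syntactic analysis.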
Deep syntactico-semantic analysis. The deep analysis relies on the LLP2 syntactic parser (Lopez, 2000), which we have used in several projects and whose ability to adapt to free text we wished to assess. The grammar is an LTAG grammar (Joshi and Schabes, 1997) comprising 1,500 trees, anchored with the LEFFF (Sagot, 2010), a lexicon of about 530,000 inflected forms. The partial derivations of the LLP2 parser undergo successive rewritings (Bedaride and Gardent, 2009) in order to produce a semantic form that is easier to manipulate (120 rules). The semantic form is then annotated for polarity using rules that rely on the previously built polarity lexicon. For example, an utterance is annotated as negative if it contains a negative verb, a negative modifier (adverb or adjective), or a positive verb that is syntactically negated: "je n'aime pas" ("I do not like") will be negative because aimer (to like) is positive. The most complex rules also rely on the EuroWordNet taxonomy; for example, if an utterance contains a verb that is a hyponym of "penser" ("to think"), syntactically negated, whose subordinate clause is polarized, the utterance is annotated with the opposite of the polarity of the subordinate clause. Typically, "je ne pense pas que cela soit absurde" ("I do not think this is absurd") will be positive, because the subordinate clause is negative and the main clause is syntactically negated. In total, the polarity annotation contains 55 rules of this type.

Whether the lexical or the syntactic method is used, an utterance is annotated as a list of subjectivity and polarity marks. An utterance may thus contain several marks of the same type, but also both negative and positive marks. A message is annotated by taking the union of the marks of the utterances that compose it. For example, with the lexical annotation, the message "Merci [...] Maintenant que vous avez fait votre critique et répandu vos insultes, qu'attendez-vous pour nous aider à améliorer les articles en utilisant votre immense culture orthographique, historique et philosophique ?" ("Thank you [...] Now that you have made your criticism and spread your insults, what are you waiting for to help us improve the articles using your immense orthographic, historical and philosophical culture?") is annotated as: [positive, subjective-2nd, subjective-2nd, negative, subjective-2nd, negative, positive, positive].

Annotating conflicts

Annotating conflicts consists in determining, for a discussion thread, whether or not it contains a conflict. To make this decision automatically, we propose to compare two types of rules, both of which rely on the subjectivity and polarity features automatically acquired by the lexical or syntactic annotation.
The first type of rule we wish to test is a dialectical rule inspired by the example in Figure 1, which relies on the hierarchical structure of messages on Wikipedia, similar to the structure found in a mailing list. A conflict is then defined as the presence of two messages in the hierarchy, both negative, that alternate subjective marks, that is, a message relying on the first person followed further on by a message relying on the second person, or vice versa. A discussion thread is annotated as conflict if it contains such a sequence and no_conflict otherwise. Owing to its dependence on the local structure of a thread, we will refer to this rule as the "local" rule.
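A sketch of this local rule, assuming each message has already been reduced to its list of marks and that the thread is the linear sequence of messages, could be:

def is_conflict_local(thread: list) -> bool:
    seen_negative_first, seen_negative_second = False, False
    for marks in thread:
        negative = "negative" in marks
        if negative and "subjective-1st" in marks and seen_negative_second:
            return True   # a negative 2nd-person message answered in the 1st person
        if negative and "subjective-2nd" in marks and seen_negative_first:
            return True   # a negative 1st-person message answered in the 2nd person
        if negative and "subjective-1st" in marks:
            seen_negative_first = True
        if negative and "subjective-2nd" in marks:
            seen_negative_second = True
    return False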
The second type of rule is a rule learned from an annotated corpus with a decision tree classifier (C4.5). Two annotators manually annotated the 153 discussion threads of the talk page associated with the Astrologie article, one of the articles in the Wikipedia category of controversial articles. The Astrologie talk page contains 982 messages and gathers 88 authors. Each thread is annotated according to the presence or absence of conflict. The inter-annotator agreement obtained with the kappa measure is rather good, κ = 0.644 (p < 0.0001). After adjudication (κ = 1.0), the Astrologie talk page contains about 20% conflictual threads (30 conflictual threads versus 123 non-conflictual ones). The learning features come from the automatic annotation of polarity and subjectivity: a discussion thread is represented as a five-dimensional vector ⟨n, p, s1, s2, t⟩ where n is the negativity rate of the thread, p the positivity rate, s1 and s2 respectively the rates of first- and second-person subjective marks, and t the size of the thread in number of messages. The rates are computed as the ratio of the number of occurrences of a marker in the thread to the number of words in the thread, and are consequently much smaller than 1. The statistical method then builds a decision tree for the classes conflict and no_conflict from these features. The resulting decision tree relies only on the size of the thread and on the second-person subjectivity rate: if a thread contains strictly more than 4 messages and its s2 rate is above 0.00274, then the thread is considered conflictual. Owing to its application to the whole of a thread, we will refer to this rule as the "global" rule.
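The learned global rule is equally simple to state in code; the sketch below assumes the same per-message mark lists and the thread's total word count.

def is_conflict_global(messages: list, word_count: int) -> bool:
    s2_rate = sum(marks.count("subjective-2nd") for marks in messages) / word_count
    return len(messages) > 4 and s2_rate > 0.00274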
In the next section we evaluate the two types of conflict annotation rules as well as the relevance of the syntactic analysis compared to a simple lexical analysis.

4 Evaluation and results
We chose to test the different methods and rules on the talk pages associated with the Communisme and Jésus de Nazareth articles, two controversial articles. Together, the two pages contain 320 discussion threads, 2,659 messages, and 256 authors. Only the annotation of the Communisme page was done by two annotators, and it yields a good inter-annotator agreement of κ = 0.79 (p < 0.0001). Together, the two pages contain more conflicts than the Astrologie page, since about 38% of the threads are conflictual (122 conflictual threads versus 198 non-conflictual ones). The test corpus is automatically annotated in terms of polarity and subjectivity with the lexical method and the syntactic method, and we evaluate the ability of the different conflict annotation rules to correctly reproduce the manual annotation.
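For reference, the reported agreement corresponds to the standard Cohen's kappa computation, sketched here for two annotators and the binary conflict / no-conflict labels.

def cohen_kappa(labels_a: list, labels_b: list) -> float:
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in set(labels_a) | set(labels_b))
    return 1.0 if expected == 1.0 else (observed - expected) / (1.0 - expected)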
We summarize all the results in Table 1 and give, for each combination of techniques (local or global for conflict annotation, lexical or syntactic for polarity annotation), the kappa, the precision, and the recall for each category (the presence of conflict C or its absence C̄). The global/syntactic combination is not shown, since it yields results similar to the global/lexical combination, given that the global rule does not rely on polarity. The first axis of comparison, between the local rule and the global rule, shows that the global rule produces slightly better results. The second axis of comparison, between the lexical and the syntactic approach, suggests that the syntactic method brings nothing: while it gains about 2 points of recall for the presence of conflict, it loses 7 points of precision. Our study does not allow us to conclude on the importance of polarities in establishing a conflict, given that the three methods produce results that are not significantly different and that we do not know the quality of the a priori polarity annotation. On the other hand, this study highlights the value of subjectivity. It is indeed a much less complex problem than that of polarities, yet it offers very good results. The extreme simplicity of the global rule (it can be restricted to spotting second-person pronouns) then suggests that subjectivity is a research direction that should be favored for the automatic detection of conflicts.

5 Discussion and perspectives

We have evaluated two rules for detecting conflict in discussion threads, relying on the same surface features, subjectivity and polarity: one operating at the local level (two contributions in the same thread, linked in a proposition/reply relation), the other at the global level (the features of all the contributions are aggregated at the thread level). The evaluation showed that, despite their difference in nature, these rules obtain comparable results of quite acceptable quality. Moreover, the global rule obtains a good score despite the fact that it does not exploit polarity; the length of the thread and the rate of second-person subjectivity are sufficient cues. According to this rule, it suffices to take into account the number of contributions (a conflict is rarely settled in fewer than 5 interventions, since at least 3 are needed for it to be declared) and a minimal proportion of second-person subjective marks, that is, the lasting presence of two voices, of a "diffuse dialogism" (since the structure of the contributions is not taken into account). We believe that the main reason why the global method is so effective lies in the epistemic activity of the Wikipedia community: the talk page is a goal-oriented space of speech dedicated to the design task. In that case, the use of the second person is markedly less frequent than in non-goal-oriented discussions.

This experiment showed that subjectivity is an interesting avenue to exploit for conflict detection. We can take advantage of the simplicity of the global rule, which is domain-independent and easily transferable to other languages, to assess whether it remains effective on discussions from the English-language Wikipedia. In addition, it is possible to study another facet of subjectivity through subjectively marked terms ("absurde" versus "incorrect") and to check whether their presence or absence helps improve conflict detection. Finally, it may be interesting to test the role of subjectivity in other, less goal-oriented types of interaction, for example in general-purpose forums.

References
Asher, N. and Lascarides, A. (2003). Logics of Conversation. Cambridge University Press, Cambridge.
Baccianella, S., Esuli, A., and Sebastiani, F. (2010). SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
Baker, M., Détienne, F., Lund, K., and Séjourné, A. (2009). Etude des profils interactifs dans la conception collective en architecture. In Méthodologies d'analyse de situations coopératives de conception : le corpus MOSAIC. Presses Universitaires de Nancy.
Barcellini, F., Détienne, F., and Burkhardt, J.-M. (2008). User and developer mediation in an open source software community: boundary spanning through cross participation in online discussions. International Journal of Human Computer Studies, 66(7):558-570.
Barth, E. M. and Krabbe, E. C. (1982). From Axiom to Dialogue. de Gruyter, Berlin.
Bedaride, P. and Gardent, C. (2009). Semantic normalisation: a framework and an experiment. In 8th International Workshop on Computational Semantics, Tilburg, Netherlands.
Fréard, D., Denis, A., Détienne, F., Baker, M., Quignard, M., and Barcellini, F. (2010). The role of argumentation in online epistemic communities: the anatomy of a conflict in Wikipedia. In Proceedings of the European Conference on Cognitive Ergonomics.
Gougenheim, G., Michea, R., Rivenc, P., and Sauvageot, A. (1964). L'élaboration du français fondamental : étude sur l'établissement d'un vocabulaire et d'une grammaire de base. Didier, Paris.
Joshi, A. and Schabes, Y. (1997). Tree-adjoining grammars. In Rozenberg, G. and Salomaa, A., editors, Handbook of Formal Languages, volume 3, pages 69-124. Springer, Berlin, New York.
Lopez, P. (2000). Extended partial parsing for lexicalized tree grammars. In Proceedings of the Sixth International Workshop on Parsing Technologies (IWPT 2000), pages 159-170.
Mackenzie, J. D. (1985). No Logic before Friday. Synthese, 63:329-341.
Mathieu, Y. Y. (2004). A semantic lexicon for emotions and feelings. In AAAI Spring Symposium on Exploring Attitude and Affect in Text, AAAI Technical Report Series, pages 89-93, Menlo Park, USA. AAAI Press.
Pak, A. and Paroubek, P. (2011). Classification en polarité de sentiments avec une représentation textuelle à base de sous-graphes d'arbres de dépendance. In Proceedings of TALN 2011, Montpellier.
Pang, B., Lee, L., and Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79-86.
Sagot, B. (2010). The LEFFF, a freely available and large-coverage morphological and syntactic lexicon for French. In Proceedings of LREC 2010, La Valette, Malta.
Valitutti, A., Strapparava, C., and Stock, O. (2004). Developing affective lexical resources. PsychNology Journal, pages 61-83.
van Eemeren, F. H. and Grootendorst, R. (1992). Communication, Argumentation, Fallacies. Erlbaum, Mahwah, N.J.
van Eemeren, F. H., Grootendorst, R., Jackson, S., and Jacobs, S. (1993). Reconstructing Argumentative Discourse. Studies in Rhetoric and Communication. University of Alabama Press, London.
Vossen, P. (1998). EuroWordNet: a multilingual database with lexical semantic networks for European Languages. Kluwer, Dordrecht.
Wilson, T., Wiebe, J., and Hoffmann, P. (2009). Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis. Computational Linguistics, 35(3):399-433. |
6,183,518 | Projecting Parameters for Multilingual Word Sense Disambiguation | We report in this paper a way of doing Word Sense Disambiguation (WSD) that has its origin in multilingual MT and that is cognizant of the fact that parallel corpora, wordnets and sense annotated corpora are scarce resources. With respect to these resources, languages show different levels of readiness; however a more resource fortunate language can help a less resource fortunate language. Our WSD method can be applied to a language even when no sense tagged corpora for that language is available. This is achieved by projecting wordnet and corpus parameters from another language to the language in question. The approach is centered around a novel synset based multilingual dictionary and the empirical observation that within a domain the distribution of senses remains more or less invariant across languages. The effectiveness of our approach is verified by doing parameter projection and then running two different WSD algorithms. The accuracy values of approximately 75% (F1-score) for three languages in two different domains establish the fact that within a domain it is possible to circumvent the problem of scarcity of resources by projecting parameters like sense distributions, corpus-co-occurrences, conceptual distance, etc. from one language to another. | [
6012701,
1487550,
1580335
] | Projecting Parameters for Multilingual Word Sense Disambiguation
August 2009. 2009
Mitesh M Khapra miteshk@cse.iitb.ac.in
Department of Computer Science and Engineering
Indian Institute of Technology
Bombay, Powai, Mumbai - 400076, Maharashtra, India
Sapan Shah
Department of Computer Science and Engineering
Indian Institute of Technology
Bombay, Powai, Mumbai - 400076, Maharashtra, India
Piyush Kedia
Department of Computer Science and Engineering
Indian Institute of Technology
Bombay, Powai, Mumbai - 400076, Maharashtra, India
Pushpak Bhattacharyya
Department of Computer Science and Engineering
Indian Institute of Technology
Bombay, Powai, Mumbai - 400076, Maharashtra, India
Projecting Parameters for Multilingual Word Sense Disambiguation
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), Singapore, August 2009. ACL and AFNLP.
We report in this paper a way of doing Word Sense Disambiguation (WSD) that has its origin in multilingual MT and that is cognizant of the fact that parallel corpora, wordnets and sense annotated corpora are scarce resources. With respect to these resources, languages show different levels of readiness; however a more resource fortunate language can help a less resource fortunate language. Our WSD method can be applied to a language even when no sense tagged corpora for that language is available. This is achieved by projecting wordnet and corpus parameters from another language to the language in question. The approach is centered around a novel synset based multilingual dictionary and the empirical observation that within a domain the distribution of senses remains more or less invariant across languages. The effectiveness of our approach is verified by doing parameter projection and then running two different WSD algorithms. The accuracy values of approximately 75% (F1-score) for three languages in two different domains establish the fact that within a domain it is possible to circumvent the problem of scarcity of resources by projecting parameters like sense distributions, corpus-co-occurrences, conceptual distance, etc. from one language to another.
Introduction
Efforts are currently under way in India to build large-scale Machine Translation and Cross Lingual Search systems in consortium mode. These efforts are large, in the sense that 10-11 institutes and 6-7 languages spanning the length and breadth of the country are involved. The approach taken for translation is transfer based, which needs to tackle the problem of word sense disambiguation (WSD) (Sergei et. al., 2003). Since the 90s, machine learning based approaches to WSD using sense marked corpora have gained ground (Eneko Agirre & Philip Edmonds, 2007). However, the creation of sense marked corpora has always remained a costly proposition. Statistical MT has obviated the need for elaborate resources for WSD, because WSD in SMT happens implicitly through parallel corpora (Brown et. al., 1993). But parallel corpora too are a very costly resource.
The above situation brings out the challenges involved in Indian language MT and CLIR. Lack of resources coupled with the multiplicity of Indian languages severely affects the performance of several NLP tasks. In the light of this, we focus on the problem of developing methodologies that reuse resources. The idea is to do the annotation work for one language and find ways of using them for another language.
Our work on WSD takes place in a multilingual setting involving Hindi (national language of India; 500 million speaker base), Marathi (20 million speaker base), Bengali (185 million speaker base) and Tamil (74 million speaker base). The wordnet of Hindi and sense marked corpora of Hindi are used for all these languages. Our methodology rests on a novel multilingual dictionary organization and on the idea of "parameter projection" from Hindi to the other languages. Also the domains of interest are tourism and health.
The roadmap of the paper is as follows. Section 2 describes related work. In section 3 we introduce the parameters essential for domain-specific WSD. Section 4 builds the case for parameter projection. Section 5 introduces the Multilingual Dictionary Framework which plays a key role in parameter projection. Section 6 is the core of the work, where we present parameter projection from one language to another. Section 7 describes two WSD algorithms which combine various parameters for domain-specific WSD. Experiments and results are presented in sections 8 and 9. Section 10 concludes the paper.
Related work
Knowledge based approaches to WSD such as Lesk's algorithm (Michael Lesk, 1986), Walker's algorithm (Walker D. & Amsler R., 1986), conceptual density (Agirre Eneko & German Rigau, 1996) and the random walk algorithm (Mihalcea Rada, 2005) essentially do Machine Readable Dictionary lookup. However, these are fundamentally overlap based algorithms which suffer from overlap sparsity, dictionary definitions being generally small in length.
Supervised learning algorithms for WSD are mostly word specific classifiers, e.g., WSD using SVM (Lee et. al., 2004), Exemplar based WSD (Ng Hwee T. & Hian B. Lee, 1996) and decision list based algorithm (Yarowsky, 1994). The requirement of a large training corpus renders these algorithms unsuitable for resource scarce languages.
Semi-supervised and unsupervised algorithms do not need large amounts of annotated corpora, but are again word specific classifiers, e.g., the semi-supervised decision list algorithm (Yarowsky, 1995) and Hyperlex (Véronis Jean, 2004). Hybrid approaches like WSD using Structural Semantic Interconnections (Roberto Navigli & Paolo Velardi, 2005) use combinations of more than one knowledge source (wordnet as well as a small amount of tagged corpora). This allows them to capture important information encoded in wordnet (Fellbaum, 1998) as well as draw syntactic generalizations from minimally tagged corpora.
At this point we state that no single existing solution to WSD completely meets our requirements of multilinguality, high domain accuracy and good performance in the face of not-so-large annotated corpora.
Parameters for WSD
We discuss a number of parameters that play a crucial role in WSD. To appreciate this, consider the following example:
The river flows through this region to meet the sea.
The word sea is ambiguous and has three senses as given in the Princeton Wordnet (PWN):
S1: (n) sea (a division of an ocean or a large body of salt water partially enclosed by land)
S2: (n) ocean, sea (anything apparently limitless in quantity or volume)
S3: (n) sea (turbulent water with swells of considerable size) "heavy seas"
Our first parameter is obtained from Domain specific sense distributions. In the above example, the first sense is more frequent in the tourism domain (verified from manually sense marked tourism corpora). Domain specific sense distribution information should be harnessed in the WSD task.
The second parameter arises from the dominance of senses in the domain. Senses are expressed by synsets, and we define a dominant sense as follows: a synset node in the wordnet hypernymy hierarchy is called Dominant if the synsets in the sub-tree below the synset are frequently occurring in the domain corpora.
A few dominant senses in the Tourism domain are {place, country, city, area}, {body of water}, {flora, fauna}, {mode of transport} and {fine arts}. In disambiguating a word, that sense which belongs to the sub-tree of a domain-specific dominant sense should be given a higher score than other senses. The value of this parameter (θ) is decided as follows:
θ = 1, if the candidate synset is a dominant synset;
θ = 0.5, if the candidate synset belongs to the sub-tree of a dominant synset;
θ = 0.001, if the candidate synset is neither a dominant synset nor belongs to the sub-tree of a dominant synset.
Our third parameter comes from Corpus co-occurrence. Co-occurring monosemous words as well as already disambiguated words in the context help in disambiguation. For example, the word river appearing in the context of sea is a monosemous word. The frequency of co-occurrence of river with the "water body" sense of sea is high in the tourism domain. Corpus co-occurrence is calculated by considering the senses which occur in a window of 10 words around a sense.
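The following is an illustrative sketch (not the authors' code) of how such co-occurrence counts could be collected from a sense-marked corpus; the input format, a list of (word, sense_id) pairs per sentence, is a hypothetical simplification.

```python
# Illustrative sketch: counting sense co-occurrences within a 10-token
# window over a sense-marked corpus. The corpus format (lists of
# (word, sense_id) pairs, None for untagged tokens) is hypothetical.
from collections import Counter

def sense_cooccurrence(corpus, window=10):
    counts = Counter()
    for sentence in corpus:
        senses = [s for _, s in sentence if s is not None]
        for i, s_i in enumerate(senses):
            for s_j in senses[i + 1:i + 1 + window]:
                counts[frozenset((s_i, s_j))] += 1
    return counts

corpus = [[("river", "river.n.01"), ("flows", None), ("sea", "sea.n.01")]]
print(sense_cooccurrence(corpus))
```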
Our fourth parameter is based on the semantic distance between any pair of synsets in terms of the shortest path length between two synsets in the wordnet graph. An edge in the shortest path can be any semantic relation from the wordnet relation repository (e.g., hypernymy, hyponymy, meronymy, holonymy, troponymy etc.).
For nouns we do something additional over and above the semantic distance. We take advantage of the deeper hierarchy of noun senses in the wordnet structure. This gives rise to our fifth and final parameter which arises out of the conceptual distance between a pair of senses. Conceptual distance between two synsets S1 and S2 is calculated using Equation (1), motivated by Agirre Eneko & German Rigau (1996).
ConceptualDistance(S1, S2) = (length of the path between S1 and S2 in terms of the hypernymy hierarchy) / (height of the lowest common ancestor of S1 and S2 in the wordnet hierarchy)   (1)
The conceptual distance is proportional to the path length between the synsets, as it should be. The distance is also inversely proportional to the height of the common ancestor of two sense nodes, because as the common ancestor becomes more and more general the conceptual relatedness tends to get vacuous (e.g., two nodes being related through entity which is the common ancestor of EVERYTHING, does not really say anything about the relatedness).
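A minimal sketch of Equation (1) is given below, assuming NLTK's WordNet interface; the path length and the "height" of the lowest common ancestor are approximated here with shortest_path_distance and min_depth, which is only an approximation of the hypernymy-only path used in the paper.

```python
# Hypothetical sketch of Equation (1) using NLTK's WordNet API.
from nltk.corpus import wordnet as wn

def conceptual_distance(s1, s2):
    """Path length between two synsets divided by the depth of their lowest
    common ancestor (a more general ancestor yields a larger distance)."""
    path_len = s1.shortest_path_distance(s2)
    lca = s1.lowest_common_hypernyms(s2)
    if path_len is None or not lca:
        return float("inf")               # no connecting path
    height = max(lca[0].min_depth(), 1)   # avoid division by zero at the root
    return path_len / height

sea = wn.synset("sea.n.01")
river = wn.synset("river.n.01")
print(conceptual_distance(sea, river))
```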
To summarize, our various parameters used for domain-specific WSD are:
Wordnet-dependent parameters: belongingness-to-dominant-concept, conceptual-distance, semantic-distance.
Corpus-dependent parameters: sense distributions, corpus co-occurrence.
In section 7 we show how these parameters are used to come up with a scoring function for WSD.
Building a case for Parameter Projection
Wordnet-dependent parameters depend on the graph based structure of Wordnet whereas the Corpus-dependent parameters depend on various statistics learnt from a sense marked corpora. Both the tasks of (a) constructing a wordnet from scratch and (b) collecting sense marked corpora for multiple languages are tedious and expensive. An important question being addressed in this paper is: whether the effort required in constructing semantic graphs for multiple wordnets and collecting sense marked corpora can be avoided? Our findings seem to suggest that by projecting relations from the wordnet of a language and by projecting corpus statistics from the sense marked corpora of the language we can achieve this end. Before we proceed to discuss the way to realize parameter projection, we present a novel dictionary which facilitates this task.
Synset based multilingual dictionary
Parameter projection as described in section 4 rests on a novel and effective method of storage and use of dictionary in a multilingual setting proposed by Mohanty et. al. (2008). For the purpose of current discussion, we will call this multilingual dictionary framework MultiDict. One important departure from traditional dictionary is that synsets are linked, and after that the words inside the synsets are linked. The basic mapping is thus between synsets and thereafter between the words. Table 1 shows the structure of MultiDict, with one example row standing for the concept of boy. The first column is the pivot describing a concept with a unique ID. The subsequent columns show the words expressing the concept in respective languages (in the example table above, English, Hindi and Marathi). Thus to express the concept "04321: a youthful male person", there are two lexical elements in English, which constitute a synset. Correspondingly, the Hindi and Marathi synsets contain 3 words each.
It may be noted that the central language whose synsets the synsets of other languages link to is Hindi. This way of linking synsets-more popularly known as the expansion approach-has several advantages as discussed in (Mohanty et. al., 2008). One advantage germane to the point of this paper is that the synsets in a particular column automatically inherit the various semantic relations of the Hindi wordnet (Dipak Narayan et. al., 2000), which saves the effort involved in reconstructing these relations for multiple languages.
After the synsets are linked, cross linkages are set up manually from the words of a synset to the words of a linked synset of the central language. The average number of such links per synset per language pair is approximately 3. These crosslinkages actually solve the problem of lexical choice in translating from text of one language to another.
Thus for the Marathi word मु लगा {mulagaa} denoting "a youthful male person", the correct lexical substitute from the corresponding Hindi synset is लड़का {ladakaa} (Figure 1). One might argue that any word within the synset could serve the purpose of translation. However, the exact lexical substitution has to respect native speaker acceptability. We put these cross linkages to another use, as described later.
Since it is the MultiDict which is at the heart of parameter projection, we would like to summarize the main points of this section. (1) By linking with the synsets of Hindi, the cost of building wordnets of other languages is partly reduced (semantic relations are inherited). The wordnet parameters of Hindi wordnet now become projectable to other languages.
(2) By using the cross linked words in the synsets, corpus parameters become projectable (vide next section).
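Below is a hypothetical rendering of the MultiDict organization as a small Python structure; the field names, concept IDs and the helper function are illustrative, not taken from the paper.

```python
# Hypothetical data structure for the synset-based multilingual dictionary
# (MultiDict): synsets of all languages are linked through a Hindi pivot
# concept, and individual words are manually cross-linked to a word of the
# pivot synset. Names and IDs are illustrative only.
multidict = {
    "04321": {  # pivot concept: "a youthful male person"
        "english": ["male child", "boy"],
        "hindi":   ["ladakaa", "baalak", "bachchaa", "choraa"],
        "marathi": ["mulagaa", "poragaa", "pora"],
    }
}

# manual cross-linkages: (language, word) -> preferred word of the linked Hindi synset
cross_links = {
    ("marathi", "mulagaa"): "ladakaa",
    ("english", "boy"): "ladakaa",
}

def hindi_substitute(language, word):
    """Return the cross-linked Hindi word used for lexical choice / projection."""
    return cross_links.get((language, word))

print(hindi_substitute("marathi", "mulagaa"))  # -> ladakaa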
6 Parameter projection using MultiDict
P(Sense|Word) parameter
Suppose a word (say, W) in language L1 (say, Marathi) has k senses. For each of these k senses we are interested in finding the parameter P(Si|W), the probability of sense Si given the word W, expressed as:
P(Si|W) = #(Si, W) / Σj #(Sj, W)
where "#" indicates "count-of". The probability P({water body}|saagar) for Marathi is
#({water body}, saagar) / (#({water body}, saagar) + #({abundance}, saagar))
We propose that this can be approximated by the counts from Hindi sense marked corpora by replacing saagar with the cross linked Hindi words samudra and saagar, as per Figure 2:
#({water body}, samudra) / (#({water body}, samudra) + #({abundance}, saagar))
Thus, the following formula is used for calculating the sense distributions of Marathi words using the sense marked Hindi corpus from the same domain:
P(Si|W) ≈ #(Si, cross_linked_hindi_word(W, Si)) / Σj #(Sj, cross_linked_hindi_word(W, Sj))   (2)
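A minimal sketch of Equation (2) follows; the count table and the cross-link table are hypothetical toy data, with values chosen to mirror the saagar example above.

```python
# Sketch of Equation (2): projecting a Marathi word's sense distribution
# from Hindi sense-marked counts of its cross-linked Hindi words.
# Both lookup tables below are illustrative toy data.
hindi_sense_counts = {          # #(sense, hindi_word) in the Hindi domain corpus
    ("water_body", "samudra"): 120,
    ("abundance", "saagar"): 0,
}
cross_linked_hindi = {          # (marathi_word, sense) -> cross-linked Hindi word
    ("saagar", "water_body"): "samudra",
    ("saagar", "abundance"): "saagar",
}

def projected_sense_distribution(marathi_word, senses):
    counts = {
        s: hindi_sense_counts.get((s, cross_linked_hindi[(marathi_word, s)]), 0)
        for s in senses
    }
    total = sum(counts.values()) or 1
    return {s: c / total for s, c in counts.items()}

print(projected_sense_distribution("saagar", ["water_body", "abundance"]))
```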
Note that we are not interested in the exact sense distribution of the words, but only in the relative sense distribution. To prove that the projected relative distribution is faithful to the actual relative distribution of senses, we obtained the sense distribution statistics of a set of Marathi words from a sense tagged Marathi corpus (we call the sense marked corpora of a language its self corpora). These sense distribution statistics were compared with the statistics for these same words obtained by projecting from a sense tagged Hindi corpus using Equation (2). The results are summarized in Table 2. The fourth row of Table 2 shows that whenever सागर (saagar) (sea) appears in the Marathi tourism corpus there is a 100% chance that it will appear in the "water body" sense and 0% chance that it will appear in the sense of "abundance". Column 5 shows that the same probability values are obtained using projections from the Hindi tourism corpus. Taking another example, the third row shows that whenever ठिकाण (thikaan) (place, home) appears in the Marathi tourism corpus there is a much higher chance of it appearing in the sense of "place" (96.2%) than in the sense of "home" (3.7%). Column 5 shows that the relative probabilities of the two senses remain the same even when using projections from the Hindi tourism corpus (i.e. by using the corresponding cross-linked words in Hindi). To quantify these observations, we calculated the average KL divergence and Spearman's correlation co-efficient between the two distributions. The KL divergence is 0.766 and Spearman's correlation co-efficient is 0.299. Both these values indicate that there is a high degree of similarity between the distributions learnt using projection and those learnt from the self corpus.
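The comparison itself can be sketched as follows, assuming SciPy; the two vectors here are toy values taken from Table 2, not the full data used for the reported 0.766 / 0.299 figures.

```python
# Sketch of the reported comparison between self-corpus and projected
# sense distributions, assuming scipy; toy values from Table 2.
from scipy.stats import entropy, spearmanr

self_dist      = [0.684, 0.315, 0.164, 0.835, 0.962, 0.037, 1.00, 0.0]
projected_dist = [0.714, 0.285, 0.209, 0.770, 0.878, 0.120, 1.00, 0.0]

kl = entropy(self_dist, projected_dist)        # KL(self || projected)
rho, _ = spearmanr(self_dist, projected_dist)  # rank correlation
print(kl, rho)
```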
Co-occurrence parameter
Similarly, within a domain, the statistics of co-occurrence of senses remain the same across languages. For example, the co-occurrence of the Marathi synsets {आकाश (akash) (sky), अंबर (ambar) (sky)} and {मेघ (megh) (cloud), अभ्र (abhra) (cloud)} in the Marathi corpus remains more or less the same as (or proportional to) the co-occurrence between the corresponding Hindi synsets in the Hindi corpus.
Our algorithms for WSD
We describe two algorithms to establish the usefulness of the idea of parameter projection. The first algorithm, called Iterative WSD (IWSD), is greedy; the second, based on the PageRank algorithm, is exhaustive. Both use scoring functions that make use of the parameters detailed in the previous sections.
Iterative WSD (IWSD)
We have been motivated by the Energy expression in Hopfield network (Hopfield, 1982) in formulating a scoring function for ranking the senses. Hopfield Network is a fully connected bidirectional symmetric network of bi-polar (0/1 or +1/-1) neurons. We consider the asynchronous Hopfield Network. At any instant, a randomly chosen neuron (a) examines the weighted sum of the input, (b) compares this value with a threshold and (c) gets to the state of 1 or 0, depending on whether the input is greater than or less than or equal to the threshold. The assembly of 0/1 states of individual neurons defines a state of the whole network. Each state has associated with it an energy, E, given by the following expression
E = −Σ(i=1..N) Σ(j>i) Wij·xi·xj + Σ(i=1..N) θi·xi   (3)
where N is the total number of neurons in the network, xi and xj are the activations of neurons i and j respectively, and Wij is the weight of the connection between neurons i and j. Energy is a fundamental property of Hopfield networks, providing the necessary machinery for discussing convergence, stability and such other considerations.
The energy expression as given above cleanly separates the influence of self-activations of neurons and that of interactions amongst neurons to the global macroscopic property of energy of the network. This fact has been the primary insight for equation (4) which was proposed to score the most appropriate synset in the given context. The correspondences are as follows: the component θi·xi of the energy, due to the self-activation of a neuron, can be compared to the corpus specific sense of a word in a domain. The other component, Wij·xi·xj, coming from the interaction of activations, can be compared to the score of a sense due to its interaction in the form of corpus co-occurrence, conceptual distance, and wordnet-based semantic distance with the senses of other words in the sentence. The first component thus captures the rather static corpus sense, whereas the second expression brings in the sentential context.
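A minimal sketch of such a scoring function, and of the greedy procedure of Algorithm 1 below, is given here. The exact form of Equation (4) is not reproduced in this text, so the combination of terms and all helper functions (cooccurrence, conceptual_distance, semantic_distance) are hypothetical placeholders.

```python
# Illustrative scoring sketch combining a corpus-bias term (theta) for each
# candidate sense with interaction terms against already-disambiguated
# senses in the sentence. Not the authors' exact Equation (4).
def score(candidate_sense, theta, context_senses, cooccurrence,
          conceptual_distance, semantic_distance):
    corpus_bias = theta[candidate_sense]
    interaction = sum(
        cooccurrence(candidate_sense, s)
        * 1.0 / max(conceptual_distance(candidate_sense, s), 1e-6)
        * 1.0 / max(semantic_distance(candidate_sense, s), 1e-6)
        for s in context_senses
    )
    return corpus_bias + interaction

def iterative_wsd(sentence_words, senses_of, theta, **measures):
    """Greedy IWSD: fix monosemous words first, then disambiguate the rest
    in increasing order of polysemy, scoring against already-fixed senses."""
    fixed = {w: senses_of(w)[0] for w in sentence_words if len(senses_of(w)) == 1}
    for w in sorted((w for w in sentence_words if w not in fixed),
                    key=lambda w: len(senses_of(w))):
        fixed[w] = max(senses_of(w),
                       key=lambda s: score(s, theta, fixed.values(), **measures))
    return fixed
```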
Algorithm 1: performIterativeWSD(sentence)
1. Tag all monosemous words in the sentence.
2. Iteratively disambiguate the remaining words in the sentence in increasing order of their degree of polysemy.
3. At each stage select that sense for a word which maximizes the score given by Equation (4).
Algorithm 1: Iterative WSD
IWSD is clearly a greedy algorithm. It bases its decisions on already disambiguated words, and ignores words with a higher degree of polysemy. For example, while disambiguating bisemous words, the algorithm uses only the monosemous words.
Modified PageRank algorithm
Rada Mihalcea (2005) proposed the idea of using the PageRank algorithm to find the best combination of senses in a sense graph. The nodes in a sense graph correspond to the senses of all the words in a sentence and the edges depict the strength of interaction between senses. The score of each node in the graph is then calculated using the following recursive formula:
Score(Si) = (1 − d) + d · Σ(Sj ∈ In(Si)) [ Wij / Σ(Sk ∈ Out(Sj)) Wjk ] · Score(Sj)
Instead of calculating Wij based on the overlap between the definitions of senses Si and Sj as proposed by Rada Mihalcea (2005), we calculate the edge weights using the following formula:
Wij = CorpusCooccurrence(Si, Sj) · 1/ConceptualDistance(Si, Sj) · 1/SemanticGraphDistance(Si, Sj) · θi · θj,   with d = 0.85
This formula helps capture the edge weights in terms of the corpus bias as well as the interaction between the senses in the corpus and wordnet. It should be noted that this algorithm is not greedy. Unlike IWSD, this algorithm allows all the senses of all words to play a role in the disambiguation process.
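A minimal sketch of the modified PageRank over a sense graph follows; the precomputed edge-weight table W, the iteration count and the toy sense graph are illustrative assumptions.

```python
# Sketch of weighted PageRank over a sense graph. Edge weights W[(i, j)]
# are assumed to be precomputed with the formula above; values are toy data.
def sense_pagerank(nodes, W, d=0.85, iterations=30):
    score = {n: 1.0 for n in nodes}
    out_sum = {n: sum(W.get((n, m), 0.0) for m in nodes) or 1.0 for n in nodes}
    for _ in range(iterations):
        new = {}
        for i in nodes:
            incoming = sum(
                W.get((j, i), 0.0) / out_sum[j] * score[j]
                for j in nodes if (j, i) in W
            )
            new[i] = (1 - d) + d * incoming
        score = new
    return score

nodes = ["sea#1", "sea#2", "river#1"]
W = {("river#1", "sea#1"): 2.0, ("sea#1", "river#1"): 2.0,
     ("river#1", "sea#2"): 0.2, ("sea#2", "river#1"): 0.2}
print(sense_pagerank(nodes, W))
```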
Experimental Setup:
We tested our algorithm on tourism corpora in 3 languages (viz., Marathi, Bengali and Tamil) and health corpora in 1 language (Marathi) using projections from Hindi. The corpora for both domains were manually sense tagged. A 4-fold cross validation was done for all the languages in both domains. The size of the corpus for each language is described in Table 4.
Results and Discussions
Table 6 shows the results of disambiguation (precision, recall and F-score). We give values for two algorithms in the tourism domain: IWSD and PageRank. In each case, figures are given both with and without parameter projection. The wordnet baseline figures too are presented for the sake of grounding the results. Note the lines of numbers in bold, and compare them with the numbers in the preceding line. This shows the fall in accuracy when one uses the parameter projection approach in place of self corpora. For example, consider the F-score given by IWSD for Marathi. It degrades from about 81% to 72% when using parameter projection in place of self corpora. Still, the value is much higher than the baseline, viz., the wordnet first sense (a typically reported baseline).
Coming to PageRank for Marathi, the fall in accuracy is about 8%. Appendix A shows the corresponding figure for Tamil with IWSD as 10%. Appendix B reports the fall to be 11% for a different domain (Health) for Marathi (using IWSD).
In all these cases, even after degradation the performance is far above the wordnet baseline. This shows that one could trade accuracy with the cost of creating sense annotated corpora.
Conclusion and Future Work:
Based on our study for 3 languages and 2 domains, we conclude the following:
(i) Domain specific sense distributions, if obtainable, can be exploited to advantage.
(ii) Since sense distributions remain the same across languages, it is possible to create a disambiguation engine that will work even in the absence of a sense tagged corpus for some resource deprived language, provided (a) there are aligned and cross linked sense dictionaries for the language in question and another resource rich language, and (b) the domain in which disambiguation needs to be performed for the resource deprived language is the same as the domain for which sense tagged corpora are available for the resource rich language.
(iii) Provided the accuracy reduction is not drastic, it may make sense to trade high accuracy for the effort in collecting sense marked corpora.
It would be interesting to test our algorithm on other domains and other languages to conclusively establish the effectiveness of parameter projection for multilingual WSD.
It would also be interesting to analyze the contribution of corpus and wordnet parameters independently.
Figure 1: Cross-linked synset members for the concept: a youthful male person.
Figure 2: Two senses of the Marathi word सागर (saagar), viz., {water body} and {abundance}, and the corresponding cross-linked words in Hindi. ("#" indicates "count-of".)
Concepts | L1 (English) | L2 (Hindi) | L3 (Marathi)
04321: a youthful male person | {male child, boy} | {लड़का ladkaa, बालक baalak, बच्चा bachchaa} | {मुलगा mulgaa, पोरगा porgaa, पोर por}
Table 1: Multilingual Dictionary Framework
1 Sense_8231 shows the same word saagar for both Marathi and Hindi. This is not uncommon, since Marathi and Hindi are sister languages.
Figure 1 (schematic): Marathi synset {मुलगा mulagaa/MW1, पोरगा poragaa/MW2, पोर pora/MW3}, Hindi synset {लड़का ladakaa/HW1, बालक baalak/HW2, बच्चा bachcha/HW3, छोरा choraa/HW4} and English synset {male-child/HW1, boy/HW2}, with cross-linkages from Marathi and English words to words of the Hindi synset.
Figure 2 (schematic): Sense_2650: Marathi saagar (sea) {water body} cross-linked to Hindi samudra (sea) {water body}; Sense_8231: Marathi saagar (sea) {abundance} cross-linked to Hindi saagar (sea) {abundance}.
Sr. No | Marathi Word | Synset | P(S|word) as learnt from sense tagged Marathi corpus | P(S|word) as projected from sense tagged Hindi corpus
1 | किंमत (kimat) | {worth} | 0.684 | 0.714
  |               | {price} | 0.315 | 0.285
2 | रस्ता (rasta) | {roadway} | 0.164 | 0.209
  |               | {road, route} | 0.835 | 0.770
3 | ठिकाण (thikan) | {land site, place} | 0.962 | 0.878
  |               | {home} | 0.037 | 0.12
4 | सागर (saagar) | {water body} | 1.00 | 1.00
  |               | {abundance} | 0 | 0
Table 2: Comparison of the sense distributions of some Marathi words learnt from Marathi sense tagged corpus with those projected from Hindi sense tagged corpus.
Table 3: Comparison of the corpus co-occurrence statistics learnt from Marathi and Hindi Tourism corpus.
Table 3 shows a few examples depicting the similarity between co-occurrence statistics learnt from the Marathi tourism corpus and the Hindi tourism corpus. Note that we are talking about co-occurrence of synsets and not words. For example, the second row shows that the probability of co-occurrence of the synsets {cloud} and {sky} is almost the same in the Marathi and Hindi corpus.
Table 4: Size of manually sense tagged corpora for different languages.
Table 5 shows the number of synsets in MultiDict for each language.
Language | # of synsets in MultiDict
Hindi | 29833
Marathi | 16600
Bengali | 10732
Tamil | 5727
Table 5: Number of synsets for each language

Algorithm | Marathi P% | Marathi R% | Marathi F% | Bengali P% | Bengali R% | Bengali F%
IWSD (training on self corpora; no parameter projection) | 81.29 | 80.42 | 80.85 | 81.62 | 78.75 | 79.94
IWSD (training on Hindi and reusing parameters for another language) | 73.45 | 70.33 | 71.86 | 79.83 | 79.65 | 79.79
PageRank (training on self corpora; no parameter projection) | 79.61 | 79.61 | 79.61 | 76.41 | 76.41 | 76.41
PageRank (training on Hindi and reusing parameters for another language) | 71.11 | 71.11 | 71.11 | 75.05 | 75.05 | 75.05
Wordnet Baseline | 58.07 | 58.07 | 58.07 | 52.25 | 52.25 | 52.25
Table 6: Precision, Recall and F-scores of IWSD, PageRank and Wordnet Baseline. Values are reported with and without parameter projection.
Appendix A: Results for Tamil (Tourism Domain)
Table 7: Tamil Tourism corpus using parameters projected from Hindi.

Appendix B: Results for Marathi (Health Domain)
Algorithm | P% | R% | F%
IWSD (training on Marathi) | 84.28 | 81.25 | 82.74
IWSD (training on Hindi and reusing for Marathi) | 75.96 | 67.75 | 71.62
Wordnet Baseline | 60.32 | 60.32 | 60.32
Table 8: Marathi Health corpus, parameters projected from Hindi.
Agirre, Eneko and German Rigau. 1996. Word sense disambiguation using conceptual density. In Proceedings of the 16th International Conference on Computational Linguistics (COLING), Copenhagen, Denmark.
Narayan, Dipak, Debasri Chakrabarti, Prabhakar Pande and P. Bhattacharyya. 2002. An Experience in Building the Indo WordNet - a WordNet for Hindi. In First International Conference on Global WordNet, Mysore, India.
Agirre, Eneko and Philip Edmonds. 2007. Word Sense Disambiguation: Algorithms and Applications. Springer Publications.
Fellbaum, C. 1998. WordNet: An Electronic Lexical Database. The MIT Press.
Hindi Wordnet.
Hopfield, J. J. 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the USA, 79(8):2554-2558.
Lee, Yoong K., Hwee T. Ng and Tee K. Chia. 2004. Supervised word sense disambiguation with support vector machines and multiple knowledge sources. In Proceedings of Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, Barcelona, Spain, 137-140.
Lin, Dekang. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL), Madrid, 64-71.
Lesk, Michael. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Systems Documentation, Toronto, Ontario, Canada.
Mihalcea, Rada. 2005. Large vocabulary unsupervised word sense disambiguation with graph-based algorithms for sequence data labeling. In Proceedings of the Joint Human Language Technology and Empirical Methods in Natural Language Processing Conference (HLT/EMNLP), Vancouver, Canada, 411-418.
Ng, Hwee T. and Hian B. Lee. 1996. Integrating multiple knowledge sources to disambiguate word senses: An exemplar-based approach. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), Santa Cruz, U.S.A., 40-47.
Brown, Peter F., Vincent J. Della Pietra, Stephen A. Della Pietra and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19:263-311.
Mohanty, Rajat, Pushpak Bhattacharyya, Prabhakar Pande, Shraddha Kalele, Mitesh Khapra and Aditya Sharma. 2008. Synset Based Multilingual Dictionary: Insights, Applications and Challenges. In Global Wordnet Conference, Szeged, Hungary, January 22-25.
Resnik, Philip. 1997. Selectional preference and sense disambiguation. In Proceedings of the ACL Workshop on Tagging Text with Lexical Semantics: Why, What and How?, Washington, U.S.A., 52-57.
Navigli, Roberto and Paolo Velardi. 2005. Structural Semantic Interconnections: A Knowledge-Based Approach to Word Sense Disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Nirenburg, Sergei, Harold Somers and Yorick Wilks. 2003. Readings in Machine Translation. Cambridge, MA: MIT Press.
Véronis, Jean. 2004. HyperLex: Lexical cartography for information retrieval. Computer Speech & Language, 18(3):223-252.
Walker, D. and R. Amsler. 1986. The Use of Machine Readable Dictionaries in Sublanguage Analysis. In Analyzing Language in Restricted Domains, Grishman and Kittredge (eds), LEA Press, 69-83.
Yarowsky, David. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL), Las Cruces, U.S.A., 88-95.
Yarowsky, David. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL), Cambridge, MA, 189-196. |
6,077,317 | CONTROL STRUCTURES FOR ACTIONS IN PROCEDURAL TEXTS AND PT-CHART | This paper describes a partial taxonomy of control structures for actions in procedural texts. On the basis of the taxonomy, we examine natural language expressions for control structures in Japanese procedural texts and present PT (Procedural Text) -chart which represents the structure of a procedural text. | [] | CONTROL STRUCTURES FOR ACTIONS IN PROCEDURAL TEXTS AND PT-CHART
Yoshio Momouchi
DIVISION OF INFORMATION ENGINEERING FACULTY OF ENGINEERING
HOKKAIDO UNIVERSITY SAPPORO 060
JAPAN
CONTROL STRUCTURES FOR ACTIONS IN PROCEDURAL TEXTS AND PT-CHART
This paper describes a partial taxonomy of control structures for actions in procedural texts. On the basis of the taxonomy, we examine natural language expressions for control structures in Japanese procedural texts and present PT (Procedural Text) -chart which represents the structure of a procedural text.
Introduction
Cookbooks, route instructions, machine assembly instructions and algorithm descriptions, which are written in natural languages, and sorting programs, which are written in programming languages, are examples of procedural texts. There are many points at which natural language procedural texts and programs may be considered on common ground. A procedure is composed of a sequence of actions intended to achieve a goal. Control structures determine patterns of behavior for action sequences. An action is something an actor does or can do. An action is enabled by certain states. An action acts on objects and causes the change of states of objects.
In programs, such concepts as objects, states, actions and control structures are defined explicitly. How are these concepts identified in natural language texts? What expressions represent these concepts? We think it most necessary that we have a right understanding of these concepts to understand procedural texts. In Japanese procedural texts, actions are mostly expressed by verbs and control structures are expressed by nouns, adverbs, auxiliary verbs, postpositional words and so on. The primary verb is a procedure call in programs. Other syntactic structures are used to embed the procedure call information. Control structures in natural language procedural texts are more complex and richer than in programs. The control information is embedded within a broad range of syntactic structures. We classify control structures into two groups, temporal and behavioral control structures, which are respectively associated with temporal and behavioral aspects of action sequences, and examine Japanese language expressions for control structures. In programs, several proposals, e.g. BNF notation and flowchart techniques, have been made for techniques supporting the formal description of the structure or development of a program. Two trials which use BNF notation and PT-chart are made for a formal description of the structure of a natural language procedural text. A procedural text has the static (organization) structure and the dynamic (control) structure. It is highly desirable that a formal representation framework of a procedural text has the power to represent both structures. PT-chart is a graphical representation which has that power. L.A. Miller considered syntactic and semantic structures in English recipe texts. He showed that all of the action-encoding mechanisms are based on the concept of a procedure call in the sense of computer programs, that various syntactic structures are used to embed the call information, and that any procedure may be decomposable into successively less complex procedures. He also examined types of verb qualifications in the texts and classified cases controlling the action into seven groups. On the basis of his studies, we consider further details of text structures, control structures and action-encoding mechanisms in procedural texts, especially in Japanese. This research is an underlying study for the semantic analysis of procedural texts.
Procedural text
We chose texts of a cookbook, "Family Cooking" by K. Okamoto, as procedural texts. We examined texts of 69 Japanese cookings which are a subset from "Family Cooking". We call them Data. Examples from Data are given in Japanese enclosed by brackets [] and some of them have English translations enclosed by quotation marks ' '. In cookings, the objects which actions act on and the actors who do actions are as follows:
objects: ingredients, seasonings, cookwares
actors: men and/or women
The framework of a cooking text is roughly constructed by the following components in order of the description. But the construction of cooking, seasoning, finishing and dishing is most common. We call a unit delimited by a period in a text a sentence. Sentences are classified into two groups depending on the finishing style. Conditions are grouped into the following three:
precondition: condition which a state satisfies before an action
ongoing condition: condition which a state satisfies during an action
postcondition: condition which a state satisfies after an action
If a precondition is satisfied, an actor can do an action. In the above examples, a start condition is a precondition, a continuation termination condition is a postcondition, an ongoing behavior condition is an ongoing condition and a result condition is a postcondition. In Japanese procedural texts, language expressions for actors are always omitted and more than one action is often described in one sentence.
Control structures
Language expressions which are used to embed the action call information are the following in Japanese procedural texts:
(A) verb
(B) participial adjective in a noun phrase
(C) constituent of a noun phrase or compound noun
In Japanese, a participial adjective is placed in pre-nominal position. A participial adjective indicates an action that should take place some time prior to the cooking step in which it occurs. A constituent of a noun phrase or compound noun also indicates an action. The primary verb is an action call, in the sense of computer programs. The mechanism to control a sequence of actions is called a control structure. Many various control structures are found in procedural texts, and so it is very important to consider control structures systematically. A partial taxonomy of control structures and Japanese expressions for them is given in the following:
[I] Temporal control structure
PT-chart
We describe a formalized framework of a procedural text. The fundamental style of the following descriptions is borrowed from the BNF notation of Algol 60. The braces {} denote none or more repetitions of the enclosed material. The brackets [] denote none or one occurrence of the enclosed material.
Point node, which represents a junction
Name node, which represents a name
Condition node, which represents a condition
Declaration node, which represents a declaration
Action node, which represents actions
Data node, which represents data
Statement node, which represents a statement which is mentioned at this point of a procedure
Discussions
Procedural texts of cookings are mostly composed of descriptions of actions that act on concrete objects, ingredients or cookwares, and descriptions of states of them. So we can regard the semantics of a procedural text as a state transformation from an initial state before a cooking to a final state after a cooking. It is therefore related to algorithmic logics, which are considered as logics of programs. This opens a way to consider logics for general procedural texts. In such logics, a problem is how to incorporate a procedure into the logic. By studies of algorithmic logics, three methods have been found.
(1) A method which regards a procedure as a logical formula. The meaning of a procedure a as a logical formula is 'a terminates'. Suppose that p is a logical formula and a is a procedure; then a;p is also a logical formula. The meaning of the formula is 'a terminates and p is true'.
(2) A method which regards a procedure as a modal operator. Suppose that p is a logical formula and a is a procedure; then <a>p is also a logical formula. <a> is a modal operator. The meaning of the formula is 'if a terminates, then p is true'. We can define another modal operator [a] as [a]p = ¬<a>¬p.
(3) A method which regards properties of a procedure as axioms. The axioms give the semantics of a procedure.
Research remains to implement a computer program for processing natural language procedural texts. Natural language procedural texts may have many incomplete, ambiguous expressions. The program must process them using context or knowledge. In the transformation from natural language texts to complete, unambiguous texts which are written in a formal language, the program must also understand descriptions of objects, actions, states and control structures. Programs for processing natural language algorithm descriptions are considered by several researchers. 6,7 They discuss that the right understanding of control structures in natural language algorithm descriptions is an important function of the program. So the identification of control structures in procedural texts is an underlying study for implementing the computer program.
(a) name (of a cooking)
(b) remarks
(c) ingredients: quantity, notes
(d) preparations (sub): preparation for each ingredient
(e) preparations (main): cooking, seasoning, finishing, dishing
(a), (b), (c) and (d) are included in all descriptions of 69 cookings and the contents are almost the same. (e) has many different contents.
(1) Start
(a) Start time
(i) Time point
[... kara orosu magiwa ni naganegi o kuwae] 'add the spring onions just before one takes the pot off the fire'
(ii) Time interval
[siagari tikaku natta koro sisitoogarasi o kuwae] 'add the green pepper about the time of finishing'
(b) Start condition
(i) Independent of preceding actions
[futoi mono wa haNgetugiri ni suru] 'cut radishes in semicircle slices if thick'
(ii) Dependent on preceding actions (I)
[siru ga fukiagare ba hi o tomeru] 'if the soup boils over, put out the fire'
(iii) Dependent on preceding actions (II)
[daikoN ga yawarakani narikakeru koro gohaN o kuwaeru] 'add the boiled rice about the time when the radish becomes tender'
(2) Noun phrase or compound noun, which doesn't include a verb
(a) context has an expression for an action to cause a state
[... ni site oku ..... yuzu no wagiri] 'leave it cut in round slices ..... a citron cut in round slices'
(b) context doesn't have an expression for an action to cause a state
[nure-fukiN] 'wet dishcloth'
(__: action description)
In (1-b) and (2-b), it is necessary to call an action to cause a state. (1-b) corresponds to (B) and (2-b) corresponds to (C).
<procedural text> ::= (<name> [<declaration>] <condition> <action>)
<action> ::= [<statement-list>] <action body> [<statement-list>]
<action body> ::= <simple action> | <sequential action> | <selective action> | <repeated action> | <parallel action>
<simple action> ::= (<name> <condition> (<act>))
<sequential action> ::= (<name> <condition> (<action> {; <action>}))
<selective action> ::= (<name> <condition> (<action> {, <action>}))
<repeated action> ::= (<name> <condition> (<action> {. <action>}))
<parallel action> ::= (<name> <condition> (<action> : <action> {: <action>}))
<condition> ::= ([<temporal control>] [<behavioral control>] [<modal condition>])
<temporal control> ::= <sequence> | <selection> | <repetition> | <parallel> | <start> | <continuation>
<behavioral control> ::= <ongoing>
<declaration> ::= (<declaration unit> {<declaration unit>})
<declaration unit> ::= <data declaration> | <action declaration>
<data declaration> ::= (<name> <data>)
<action declaration> ::= (<procedural text>)
On the basis of the above descriptive framework, we construct a representation framework, PT-chart. PT-chart has the power to represent both the organization and control structures of a procedural text. PT-chart has a tree structure composed of nodes and edges. A pattern is a subchart of PT-chart. Nodes are Point, Name, Condition, Declaration, Statement, Action and Data. Edges are Line, Arrow, Sequence, Selection, Repetition and Parallel. Some important pattern types, which mainly have the double purpose of the organization structure and control structure, are the Sequence, Selection, Parallel, Repetition and Declaration patterns.
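The grammar above could be rendered, for illustration only, as a small set of Python dataclasses; the class and field names below are hypothetical and not taken from the paper.

```python
# Hypothetical rendering of PT-chart action patterns as Python dataclasses.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Condition:
    temporal: Optional[str] = None    # sequence / selection / repetition / parallel / start / continuation
    behavioral: Optional[str] = None  # ongoing condition
    modal: Optional[str] = None

@dataclass
class Action:
    name: str
    condition: Condition = field(default_factory=Condition)
    kind: str = "simple"              # simple / sequential / selective / repeated / parallel
    body: List["Action"] = field(default_factory=list)
    act: Optional[str] = None         # the primary verb (procedure call) of a simple action

# "Chestnut Rice": a sequential action whose sub-actions run in order
chestnut_rice = Action(
    name="chestnut rice", kind="sequential",
    body=[Action(name="preparation", act="soak and soften the chestnuts"),
          Action(name="cooking", act="boil the rice over a very high flame")])
```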
'put the stock into the pot and put it over the fire'
[sayuu ni hiraki, ago no bubuN o hootyoo de tatakikiru] 'open it right and left and chop off the chin part with a kitchen knife'
[itido kireini mizuaraisita noti, mizuke o fukitori] 'after one washes it, wipe out the moisture'
(2) Continuation
(a) Continuation time interval
(i) Definite time interval
[30 puNkaN nekaseru]
'ferment it for 30 minutes'
(ii) Indefinite time interval
[sibaraku oku]
'let it stand for a while'
(iii) Approximate time interval
[yaku 2 fuNkaN yuderu]
'boil it for about two minutes'
(iv) Upper limit
[saikoo 1 jikaN kurai yuderu]
'boil it for less than one hour'
(v) Lower limit
[saitei 1 jikaN wa tukekoNde oku]
'leave it pickled for more than one hour'
(b) End time
[nabe ni ireru tyokuzeN made mizu ni tukekoNde
oku]
'leave it soaked in water until one puts it into
the pot'
(c) Continuation condition
[katai aida niru]
'boil it while it is hard'
(d) End condition
[azayakana midoriiro ni naru made sizukani
yuderu]
'boil it gently until it becomes a bright green
color'
(3) Repetition
(a) Continuous repetition
(i) Definite times
[kore o 2,3 kai kurikaesu]
'repeat this two or three times'
(ii) Indefinite times
[kawa ni fuooku de suukasyo ana o akete oku]
'open the several holes in the face of it with
a fork'
(b) Intermittent repetition
[tokidoki mizu o kaenagara]
'making water afresh occasionally'
(4) Selection
(a) Obligatory selection
[utuwa ni moru ka, aruiwa oobati ni ikki ni
akete]
'fill it in a dish or empty it into a big pot at
a breath'
(b) Optional selection
[konomi de gomasio o furikakeru]
'sprinkle salt with parched sesame over it
according to one's preference'
(5) Sequence
(a) Successive sequence
[nabe ni nidasijiru (nibaNdasi) o irete hi ni
kakeru]
(b) Sequence with time delay
(i) Definite time delay
[1 jikaN go ni hi ni kakeru]
'put it over the fire after one hour'
(ii) Indefinite time delay
[kurogoma to sio wa betubetu ni iri, ato de
mazeawasu]
'parch sesame and salt separately, then later
mix them'
(iii) Order
[saigo ni yaku 20 byookaN tuyobi ni site]
'lastly,
cook it over the hot fire for about 20
seconds'
(6) Parallel
(a) Simultaneous parallel (one action)
(i) Action by one actor
[goboo o kaiteNsasenagara raNgiri ni si]
'cut the burdock turning over it'
(ii) Action by one actor and continued action
[gutugutu ninagara taberu]
'eat simmering it'
(b) Concurrent parallel (some actions)
A good example is doing actions for ingredients
in sub preparations. Some actors can do actions
for ingredients concurrently.
There are two cases.
(i) an action is independent of another action
(ii) an action is dependent on another action
[II] Behavioral control structure
(1) Ongoing condition
[sizukani arumihaku o hagasu]
'take the aluminum foil off gently'
(__: expression for control structure)
There is not always one to one correspondence
between a control structure and a language ex-
pression. We have examined Japanese procedural
texts. We have a table of relations between
control structures and corresponding Japanese
expressions.
Start: koro, toki, noti, syuNkaN, magiwa ni, tokoro de, toki wa, mae ni, uti ni, to dooji ni, no baai wa, tara, nara, kara, ba, wa
Continuation: kaN, kaN ijoo, kaN hodo, yaku~kaN, saitei~kaN, 1 tyuuya, 1 baN, made, kurai, kurai ni, sibaraku, tuzukeru, te iru, te oku
Repetition: kore o ~kai kurikaesu, tugitugi to, tokidoki, 2,3 do, mooitido, suukasyo, zutu
Selection: ka, aruiwa, baai ni yotte wa, konomi de
Sequence: te, kara, ato, ato de, ato kara, tugi ni, tuzuite, saigo ni, saisyo wa, saigo wa, mazu, tadati ni, sugu, saki ni
Parallel: te, kara, nagara, yoo ni, mama
Ongoing: sizukani, yukkuri to, yoo ni, mama
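The marker table can be read, for illustration, as a simple lookup from control structures to Japanese surface expressions; the sketch below uses only a few of the markers listed above, and substring matching over romanized clauses is a hypothetical simplification (real text would need morphological analysis).

```python
# Sketch of the control-structure marker table as a lookup; markers and
# matching strategy are illustrative only.
MARKERS = {
    "start": ["koro", "toki", "tara", "ba"],
    "continuation": ["made", "sibaraku", "te oku"],
    "repetition": ["kurikaesu", "tokidoki"],
    "selection": ["ka", "aruiwa", "konomi de"],
    "sequence": ["ato de", "tugi ni", "saigo ni"],
    "parallel": ["nagara", "mama"],
    "ongoing": ["sizukani", "yukkuri to"],
}

def candidate_control_structures(romanized_clause):
    return [label for label, markers in MARKERS.items()
            if any(m in romanized_clause for m in markers)]

print(candidate_control_structures("gutugutu ninagara taberu"))  # -> ['parallel']
```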
Conditions (states) which must be true in order
for the action to start and/or proceed are
described by several different expressions as
known from examples.
There are some cases in which a condition is
expressed by a part of a noun phrase or compound noun.
(1) Participial adjective in a noun phrase
[marumete oita nikudaNgo]
'a hand-rolled quenelle'
[5,6 cm nagasa ni kirisoroeta fuki]
'a butterbur cut the pieces to 5,6 cms in length'
[saki no togatta hotyoo]
'a sharp-pointed knife'
(2) Constituent of a noun phrase
[yuzu no wagiri]
'a round slice of a citron'
[unagi no kabayaki]
'broiled eels'
(3) Constituent of a compound noun
[ajituke-siitake]
'a seasoned mushroom'
[arai-gome]
'washed rice'
[nettou]
'boiling water'
(__:state description)
States result from doing actions. But the above
expressions about states don't always call
actions. If an expression for calling the action
to cause the state is included in the preceding
context, the action needn't be called. Relations
between the context and the action to cause the
state are grouped as follows;
(1) Noun phrase, which includes a verb (in the
case of Japanese)
(a) context has an expression for an action to
cause a state
[mizu de sarasu ..... mizu ni sarasite atta udo]
'bleach in the water ..... an udo bleached in the
water'
(b) context doesn't have an expression for an
action to cause a state
[aratta mame]
'washed beans'
Declaration pattern: declaration of procedures and/or data (procedures => actions)
Sequence pattern: doing procedures 2, 3, ..., n in that order under a condition C1
Selection pattern: doing one of procedures 2, 3, ..., n under a condition C1
Parallel pattern: doing all procedures 2, 3, ..., n concurrently under a condition C1
Repetition pattern: repeating procedures 2, 3, ..., n in that order under a condition C1
Name or condition node may be omitted in each pattern.
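To make these pattern definitions concrete, the sketch below represents PT-chart nodes as a small class hierarchy (the class and field names are our own, not from the paper); each composite pattern holds an optional condition and its sub-procedures.

```python
# Minimal sketch (hypothetical class names): PT-chart patterns as data
# structures. Each composite node carries an optional condition C1 and its
# child procedures 2, 3, ..., n, mirroring the definitions above.
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Action:
    text: str                                  # one elementary cooking action

@dataclass
class Pattern:
    condition: Optional[str] = None            # condition node C1 (may be omitted)
    children: List[Union["Pattern", Action]] = field(default_factory=list)

class Sequence(Pattern): pass                  # do children in order
class Selection(Pattern): pass                 # do exactly one child
class Parallel(Pattern): pass                  # do all children concurrently
class Repetition(Pattern): pass                # repeat the children

# A small fragment, loosely based on the 'Chestnut Rice' example below:
recipe = Sequence(children=[
    Action("soak and soften the chestnuts in warm water"),
    Action("put the rice, syoyu, salt, sake, chestnuts and water in the pot"),
    Action("boil the rice over a very high flame"),
    Action("let the rice stand"),
])
```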
[Figure: graphical notation of the Declaration, Sequence, Selection, Parallel, and Repetition patterns.]
[Figure: An example of a PT-chart for 'Chestnut Rice'. Ingredients: rice 7 cups, chestnuts 32, syoyu 4 tablespoons, salt 1 teaspoon, sake 2 teaspoons, water 8 cups. The chart decomposes the recipe into Preparation, Seasoning, and Finishing steps, including: soak and soften the chestnuts in warm water; put the rice, syoyu, salt, sake, chestnuts and water in the pot; set the pot on the stove; boil the rice over a very high flame; let the rice stand.]
The author would like to express his thanks to Dr.E.Miyamoto and Mr.T.Maeda of Hokkaido University, and Mr.H.Sawamura of IIAS-SIS for their helpful discussions and to Dr.L.A.Miller for sending papers to him.
Miller, L.A.: 'Natural language procedures: guides for programming language design', paper presented at the Comp. Prog. Symp., Sixth Cong. of IEA (1976)
Chodorow, M.S. and L.A. Miller: 'The interpretation of temporal order in coordinate conjunction', IBM Thomas J. Watson Res. Center Res. Report RC6199 (1976)
Miller, L.A.: 'Naive programmer problems with specification of transfer-of-control', Proc. of NCC (1975)
Sawamura, H.: 'Algorithmic logics', Information Processing Vol. 20, No. 8, IPSJ (1979) (in Japanese)
Ferstl, O.: 'Flowcharting by stepwise refinement', SIGPLAN Notices Vol. 13, No. 1 (1978)
Wile, D., R. Balzer and N. Goldman: 'Automated derivation of program control structure from natural language program descriptions', SIGART Newsletter 64, ACM (1977)
Hobbs, J.R.: 'What the nature of natural language tells us about how to make natural-language-like programming languages more natural', SIGART Newsletter 64, ACM (1977) |
18,064,933 | A COMPARATIVE STUDY OF JAPANESE AND ENGLISH SUBLANGUAGE PATTERNS | As part of a project to develop a Japanese-English machine translation system for technical texts within a limited domain, we conducted a study to investigate the roles that sublanguage techniques(Harris, 1968) and operatorargument grammar(Harris, 1982)would play in the analysis and transfer stages of the system. The data consisted of fifty sentences from the Japanese and English versions of the FOCUS Query Language Primer, which were decomposed into elementary sentence patterns. A total of 187 pattern instances were found for Japanese and 191 for English. When the elements of these elementary sentences were classified and compared with their counterparts in the other language, we identified 43 word classes in Japanese and 43 corresponding English word classes. These word classes formed 32 sublanguage patterns in each language, 29 of which corresponded to patterns in the other language. This paper examines in detail these correspondences as well as the mismatches between sublanguage patterns in Japanese and English.The high level of agreement found between sublanguage categories and patterns in Japanese and English suggests that these categories and patterns can facilitate analysis and transfer. Moreover, the use of operator-argument grammar, which incorporates operator trees as an intermediate representation, substantially reduces the amount of structural transfer needed in the system. A pilot implementation is underway. | [
876283
] | A COMPARATIVE STUDY OF JAPANESE AND ENGLISH SUBLANGUAGE PATTERNS
Virginia Teller
Graduate Center
Hunter College
The City University of New York
Monmouth College
New York University
Michiko Kosaka
Graduate Center
Hunter College
The City University of New York
Monmouth College
New York University
Ralph Grishman
Graduate Center
Hunter College
The City University of New York
Monmouth College
New York University
A COMPARATIVE STUDY OF JAPANESE AND ENGLISH SUBLANGUAGE PATTERNS
As part of a project to develop a Japanese-English machine translation system for technical texts within a limited domain, we conducted a study to investigate the roles that sublanguage techniques(Harris, 1968) and operatorargument grammar(Harris, 1982)would play in the analysis and transfer stages of the system. The data consisted of fifty sentences from the Japanese and English versions of the FOCUS Query Language Primer, which were decomposed into elementary sentence patterns. A total of 187 pattern instances were found for Japanese and 191 for English. When the elements of these elementary sentences were classified and compared with their counterparts in the other language, we identified 43 word classes in Japanese and 43 corresponding English word classes. These word classes formed 32 sublanguage patterns in each language, 29 of which corresponded to patterns in the other language. This paper examines in detail these correspondences as well as the mismatches between sublanguage patterns in Japanese and English.The high level of agreement found between sublanguage categories and patterns in Japanese and English suggests that these categories and patterns can facilitate analysis and transfer. Moreover, the use of operator-argument grammar, which incorporates operator trees as an intermediate representation, substantially reduces the amount of structural transfer needed in the system. A pilot implementation is underway.
Introduction
For a pair of disparate languages --Japanese and English --we are developing a machine translation system based on a sublanguage analysis of technical texts within a restricted domain. As developed by Harris (1968), the sublanguage approach to linguistic analysis entails delimiting a circumscribed domain of discourse, selecting sample texts in the domain, and identifying the word classes and patterns of word class co-occurrences that are specific to the sublanguage. Sublanguage patterns provide important benefits in both the analysis and transfer stages of a machine translation system. During analysis they serve to block incorrect parses and aid in the recovery of elided material. This recovery is particularly important in a language like Japanese, where zeroing is far more widespread than in English. The use of sublanguage patterns in the transfer phase rests on the premises that (1) there is a correspondence between the sublanguage categories and patterns in the source language (Japanese) and the target language (English); and (2) these categories and patterns are the appropriate units for lexical disambiguation. In addition, the operator-argument grammar framework (Harris, 1982) that we have adopted, which incorporates operator trees as an intermediate representation, further explicates the underlying relationships among sublanguage word classes and substantially reduces the amount of structural transfer needed in the system.
The sublanguage approach has found several computational applications, in North America, particularly in the work of the Linguistic String Project at New York University (e.g. Sager, 1981) and the TAUM group at the University of Montreal, where sublanguage grammars have been used in machine translation projects (Lehrberger, 1982;Isabelle & Bourbeau, 1985;Kittredge, 1987). To date, however, these techniques have not been tested on languages as dissimilar as Japanese and English, and the correctness of the premises outlined above is far from assured. The close correspondence between French and English sublanguage patterns found by the TAUM group is not guaranteed to carry over to Japanese and English. The relationships could just as easily be one-to-many or many-to-one. We have investigated this question with the goal of using sublanguage categories and patterns to facilitate the computer analysis of source texts in Japanese in the sublanguage domain of computer manuals intended as instructional material. Our efforts have concentrated on the FOCUS Query Language Primer, which has been published in both Japanese and English.
On the basis of a comparative linguistic analysis of Japanese and English using Harris's operator-argument framework, we have proposed a novel design for a machine translation system (Kosaka, Teller & Grishman, 1988). A central claim of our proposal is that this model, which is essentially a transfer system without a component for structural transfer, offers a middle road between the transfer and interlingua approaches to machine translation. Since the strength and validity of this claim rest squarely on the results of our linguistic analysis, most of this paper is devoted to a detailed description of that analysis and an assessment of the significance and implications of the results for machine translation.
Comparative linguistic analysis
2.1 Method. The distributional analysis of sublanguage texts according to the principles of operator-argument grammar produces a set of sublanguage word classes and a set of word class co-occurrence patterns. The co-occurrence constraints embodied in these patterns are viewed as a manifestation of the underlying semantic constraints of the domain. The patterns that emerged from our study were obtained in a two-step process. First, each sentence in the sample texts was decomposed into its constituent elementary sentences. This process regularizes surface representations in order to arrive at canonical representations that accord with information content. For example, the sentence IN-GROUPS-OF and TOP can be used with ACROSS contains four elementary sentences: 1
(1) a. U uses IN-GROUPS-OF with ACROSS.
b. U uses TOP with ACROSS. c. S1 and S2. d. can S.
In the second step these elementary sentences were classified into operator-argument co-occurrence patterns. The operators that occur in a sublanguage fall into four classes. Zero-order operators, which include most nouns, accept no arguments. First-order operators, which take zero-order operators as arguments, comprise the operators that appear in base sublanguage relationships (kernel sentences). These operators include the verbs in subjectverb-object patterns, and their arguments are the subject and object word classes permitted by the sublanguage. The class of second-order operators contains certain modifiers such as modals as well as disjunction, coordinate and subordinate conjunctions, etc. whose arguments are first-order operators (i.e. kernel sentences). Operators that produce paraphrases (e.g. passive, nominalization) also belong to this class. The fourth class, meta-operators, consists in our corpus of verbs that belong to the sublanguage of instructional material rather than the domain of computer manuals. These include "mental" verbs such as hope, learn, observe and understand as well as discuss, explain, introduce and present, which take human subjects.
Co-occurrence patterns in operator-argument format are labeled with the word class of the operator followed by the word class of the arguments. Sentences (la) and (1b), for example, are instances of the kernel pattern USE_WITH-USER-OPTION-PHRASE, which consists of a first-order operator with zero-order arguments. (Note that the pattern specifies USER as the subject argument that is missing but understood in the elementary sentence.) Sentence (1c) falls into the class AND/OR-Sl-S2-(Sn), and (1d) is a member of MODAL-S. Both of these patterns contain second-order operators with kernel sentences as arguments. Six word classes are also illustrated: OPTION (with members IN-GROUPS-OF and TOP), PHRASE (with member ACROSS), USER, USE-WITH, AND/OR, and MODAL.
Fifty sentences were selected for analysis from a twenty page section of the Japanese and English versions of the FOCUS manual. These source and target texts gave us an independent standard by which to judge our techniques and results, thereby eliminating the need for translation on our part and the possibility of bias that could be introduced if we translated a particular text ourselves.
Working independently, two linguists listed the co-occurrence patterns for each sentence. These included elementary sentences with higher order operators such as coordinate and subordinate conjunctions as well as subject-verb-object structures and prepositional/postpositional phrases that exhibited selectional restrictions specific to the domain.
Results.
A total of 187 pattern instances were found for Japanese and 191 for English. When the elements of these elementary sentences were classified and compared with their counterparts in the other language, we identified 43 word classes in Japanese and 43 corresponding English word classes. These word classes formed 32 patterns in Japanese and 32 in English that occurred more than once. Twenty-nine of the Japanese patterns correspond to English patterns in the sense that they have identical argument structures and convey the same meaning. This was an encouraging outcome given the possible number of combinations of 43 word classes that could appear in kernel patterns consisting of two, three, and even four elements.
Table 2 shows the 32 elementary sentence patterns that occurred more than once in Japanese. 2 The numbers in brackets give the frequency in Japanese, the frequency in English, and the number of matching occurrences for each pattern. A matching occurrence is one where corresponding Japanese and English sentences contain instances of the same sublanguage pattern.
The kernel sentence patterns define a set of base relationships among word classes that constitute a partial description of the domain knowledge. These patterns, together with the higher order and meta-operator patterns, embody a set of semantic constraints that can be stated as selectional restrictions on word class co-occurrences, for example, on the subject and object word classes allowed with a particular class of verbs. During the analysis phase of machine translation sublanguage patterns serve to reduce the ambiguity of the source language text and block incorrect parses proposed on the basis of syntactic information alone. As discussed in Kosaka, Teller and Grishman (1988), these patterns also make it possible to resolve ellipsis. Our intention in performing a linguistic analysis of both English and Japanese, however, was to determine the degree to which sublanguage patterns could play a role during the transfer phase as well, by providing a principled basis for translating source language vocabulary and semantic content into equivalent terms in the target language. For this purpose the numbers in brackets in Table 2 are of crucial importance. These numbers indicate the strength of the match between the patterns found in Japanese and those in English and hence the degree of similarity that can be expected between operator-argument trees in the two languages. 3
Table 4. Instances of the IN2-USE2-FIELD kernel sentence pattern. Page references are given first for the Japanese, then the English text. An asterisk indicates a matching pattern occurred in the other language.
JAPANESE:
*p51/p57 s5: mokuteki-field-tyuu de no siyoo. *p51/p57 s7: mokuteki-field-tyuu de no siyoo. *p51/p57 s7: mokuteki-field-tyuu de no siyoo.
ENGLISH: *p51/p57 s5: use in object list. *p51/p57 s7: use in object list. *p51/p57 s7: use in object list.
Within the domain of texts we have examined so far, the correspondences between Japanese and English sublanguage patterns are excellent. The highest agreement lies in the group of 18 kernel sentence patterns. Mismatches are more common in the patterns with higher order and meta-operators. The largest proportion of discrepancies is accounted for by just two patterns, USE2-USER-TABLE and META-HUMAN-X (where X stands for NP or S). Tables 3 and 4 illustrate nearly perfect matches between Japanese and English for two major syntactic relations. The CREATE-TABLE-REPORT kernel sentence pattern describes a subject-verb-object structure, while IN2-USE2-FIELD incorporates a postpositional/prepositional phrase. In the following section an explanation is given for each entry in Table 2 where there is a discrepancy of more than one between the number of matching occurrences (the third item in brackets) and the frequencies in Japanese and English (the first and second numbers in brackets). The commentary is keyed to superscript references in the table.
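As a rough illustration of how kernel patterns can act as selectional restrictions during analysis (the inventory below is a toy subset, not the full pattern set), each pattern can be treated as a licensed (operator, subject class, object class) triple:

```python
# Sketch (hypothetical inventory): kernel sentence patterns as allowed
# (operator, subject class, object class) triples; a parse whose triple is
# not licensed by the sublanguage can be rejected.
KERNEL_PATTERNS = {
    ("CREATE", "TABLE", "REPORT"),
    ("USE2", "USER", "TABLE"),
    ("WRITE", "USER", "TABLE"),
}

def licensed(operator_class, subject_class, object_class):
    return (operator_class, subject_class, object_class) in KERNEL_PATTERNS

print(licensed("CREATE", "TABLE", "REPORT"))   # True
print(licensed("CREATE", "REPORT", "TABLE"))   # False -> block this parse
```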
Commentary on the operator-argument patterns
[1] The absence of the pattern DISPLAY-VERB-VALUE in English is explained by a construction with allow for which there was no equivalent in Japanese. An example is the sentence SUM and COUNT allow you to display results, which decomposes into four elementary sentence patterns: SUM allows S; COUNT allows S; you display results; S1 and S2. The first argument in the English version of the pattern with DISPLAY is USER (i.e. you) instead of VERB (i.e. SUM, COUNT). In the case of the three DISPLAY-FOCUS-VALUE mismatches, the English text used the verbs appear, print, and report, which are not members of the DISPLAY class.
[2] Although there was no equivalent for the SPEC-USER-PHRASE pattern in English, an acceptable English translation can be generated from the Japanese pattern after lexical transfer.
[3] Instances where a subordinate clause in Japanese was expressed as a prepositional phrase in English account for the discrepancies between Japanese and English in the patterns with the USE class of verbs. In the next section an example of the USE1-USER-DBASE pattern is examined in which the preposition from is used in English. The preposition most commonly found instead of the USE2-USER-TABLE pattern is with.
[4] Omitted from the analysis are five quantifier word classes and five sublanguage patterns with quantifier operators. Since quantifiers can modify almost any NP, they impose very little selectional specificity on their arguments and therefore play a limited role in providing sublanguage co-occurrence restrictions. Although the match between Japanese and English quantifier patterns was excellent ([13,12,11] in terms of bracketed numbers), the analysis of quantifiers in Japanese is complicated and lies beyond the scope of this paper.
[5] Differences in the distribution of conjunctions arose in situations where either the Japanese or the English text conjoined two sentences that appeared as two separate sentences in the other language.
[6] The mismatches observed in the pattern MODAL-S were due primarily to cases where Japanese used modals for politeness, the result being sentences that convey quite different meanings in Japanese and English.
[7] The pattern PERFORM-FOCUS-COMPUTE appeared in one Japanese sentence where the English version used the verb involve instead of perform and in one sentence with a completely different structure in English.
[8] The symbol X stands for 'NP or S'. The meta-operators in our sample revealed a particularly interesting source of deviation between Japanese and English. In many cases sentences containing members of this class of operators express very different meanings in the two languages. Example (2b) below gives a literal rendering in English of sentence (2a) from the FOCUS manual, while (3) gives the actual sentence that appeared in the English version: (2) a. Kokomade de kaki-no TABLE command no kakukoomoku-ga rikai-dekita-to omoimasu. b. By now we hope that you are able to understand each component of the following TABLE command. (3) At this point the following components of the TABLE command have been introduced. The meta-operators appear to be one of the parameters that contribute to stylistic differences in expression between the two languages.
Implications for machine translation
Within the subdomain of texts we have examined so far, the correspondences between Japanese and English are not limited to sublanguage word classes and co-occurrence patterns but extend to the overall structure of operator trees as well. For example, the Japanese nominalization (4a) and its matching English infinitive clause (4b) are represented in our operator tree system as shown in Figures 1a and 1b, respectively:
(4) a. Mokuteki field tyuu de no ROW-TOTAL, COLUMN-TOTAL no siyoo. b. Use ROW-TOTAL and COLUMN-TOTAL in the object list.
Structurally the trees are identical, but the nominalization operator appears in Japanese where English uses an infinitive. Identical operator-argument patterns also appear in the two trees -the kernel sentence patterns USE2-USER-TABLE and IN2-USE2-FIELD and the paraphrastic operator pattern AND/OR-S1-S2-(Sn). Although the USE2 operator siyoo/use allows two arguments, the O's indicate that no argument appears in subject position in either Japanese or English.
The strong similarities between Japanese and English operator trees suggest that, with operator trees as an intermediate representation, it may be possible to construct a system to translate Japanese into English without the structural transfer usually associated with such systems. A successful translation of (4a) into (4b) can be achieved solely on the basis of lexical transfer. No restructuring is necessary at the operator-argument level of analysis.
As for the data not accounted for in this manner, several options are available to the designer of an MT system. When no match is found for a source language pattern, the system could fail to produce a translation, restructure the tree into a comparable target language pattern, or proceed with lexical transfer without restructuring. We have adopted the last of these options. Our strategy has been to assess whether an acceptable English sentence could, in principle, be generated from the Japanese operator tree. Although devices could be introduced to map the Japanese sublanguage patterns with no equivalent in English into different, and possibly more appropriate, operator-argument structures, we prefer to maintain the position of avoiding such structural change as long as the Japanese operator tree can be used as the basis for a grammatical English sentence. This tactic has proven successful in the majority of discrepancies we have encountered so far.
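To make the lexical-transfer-only strategy concrete, the sketch below is our own illustration (the tree encoding, dictionary entries, and function name are hypothetical): each node's lexeme is replaced by an English equivalent keyed by its sublanguage word class, while the operator-argument structure is left untouched.

```python
# Minimal sketch (hypothetical dictionary and tree encoding): transfer by
# replacing each node's lexeme with its English equivalent for the node's
# sublanguage word class, leaving the operator-argument structure untouched.
LEXICON = {  # (Japanese lexeme, word class) -> English lexeme; toy entries
    ("siyoo", "USE2"): "use",
    ("sakusei-suru", "CREATE"): "produce",
    ("report", "REPORT"): "report",
    ("TABLE-command", "TABLE"): "TABLE command",
}

def transfer(node):
    """node = (lexeme, word_class, [child nodes]); returns the same tree shape."""
    lexeme, word_class, children = node
    english = LEXICON.get((lexeme, word_class), lexeme)  # fall back to source form
    return (english, word_class, [transfer(child) for child in children])

tree = ("sakusei-suru", "CREATE",
        [("TABLE-command", "TABLE", []), ("report", "REPORT", [])])
print(transfer(tree))
# ('produce', 'CREATE', [('TABLE command', 'TABLE', []), ('report', 'REPORT', [])])
```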
Figure 1. Operator trees for examples (4a) and (4b). Dashed lines indicate an adjunct relationship to the parent node. Solid lines indicate an operator-argument relationship.
Table 1 lists the 43 word classes that emerged from our analysis. Approximately half of the classes contain only one member, owing to the relatively small size of the sample texts. The largest class, TABLE, which consists of all the keywords and fields in the FOCUS TABLE command, comprises over a dozen members. Examples of robust classes include the verb class COMPUTE with members {gookei-suru, group-wake-suru, keisan-suru, sansyutu-suru, syuturyoku-suru, ...} in Japanese and {count, compute, generate, group, sum, ...} in English, and the noun class VALUE, whose Japanese and English members are {atai, gookei, kekka, suuti} and {result, summary, total, value}, respectively. Words that occur in different contexts are considered homographs and are assigned to more than one word class. Examples are syuturyoku-suru and generate, which belong to two verb classes: CREATE (as in generate a report) and COMPUTE (as in generate subtotals). There are also two classes labeled IN and three with the label USE. These are operators that appear in two or more patterns with different argument structures.

Table 1. Japanese-English word classes (11 zero-order, 17 first-order, 12 second-order, and 3 meta classes): AFTER, AND/OR, BEFORE, COMBINE, COMPONENT, COMPUTE, CREATE, DBASE, DISPLAY, FIELD, FIT, FOCUS, FORMAT, GROUP, HUMAN, IF, IN1, IN2, IN-ORDER-TO, MEAN, META, MODAL, NEG, OPTION, PERFORM, PHRASE, PRINT, RELATE, REPORT, REQUIRE, SAME, SPECIFY, TABLE, USE1, USE2, USE3, USE-WITH, USEFUL, USER, VALUE, VERB, WHEN, WRITE.

Table 2. Japanese-English sublanguage patterns. The frequencies in Japanese and English and the number of matching occurrences for each pattern are given in brackets. Superscripts refer to numbered commentary in the text.
Table 3. Instances of the CREATE-TABLE-REPORT kernel sentence pattern. Page references are given for Japanese and then English, followed by the sentence number. An asterisk indicates a matching pattern occurred in the other language.
JAPANESE:
*p50/p56 s2: O-ga report-o sakusei-suru.
*p55/p61 s4: TABLE-command-ga report-o sakusei-suru.
*p56/p62 s4: TABLE-command-ga report-o sakusei-sita.
*p62/p68 s1: TABLE-command-ga report-o sakusei-suru.
p66/p72 s4: O-ga report-o syuturyoku-suru.
ENGLISH:
*p50/p56 s1: U create report.
*p55/p61 s4: TABLE-commands produce reports.
*p56/p62 s4: TABLE-commands generate reports.
*p62/p68 s1: TABLE-commands produce reports.
1 IN-GROUPS-OF, TOP, and ACROSS are keywords in the FOCUS query language. The symbol U stands for "unspecified", that is, a missing or zeroed argument.
2 The pattern AFTER-S1-S2 is included because of the importance of ato-de/after as a subordinate conjunction, even though there was only a single instance in the corpus.
3 Not shown are the English elementary sentence patterns that had no equivalents in Japanese; only three of these occurred more than once. Certain types of prepositional phrases are also omitted from the analysis.
Sentences (5a) and (5b) below illustrate how the operator-argument level of intermediate representation allows a graceful recovery from an apparent mismatch in sublanguage patterns:
(5) a. JINJI database o siyoo site tugi no yoo na report o sakusei suru TABLE command o kakinasai.
b. Write TABLE commands that will produce the following reports from the EXPERSON data base.
Although, as shown in Figures 2a and 2b, the Japanese and English versions share instances of the kernel sentence patterns WRITE-USER-TABLE and CREATE-TABLE-REPORT, the Japanese sentence also contains the pattern USE1-USER-DBASE, which is lacking in English. In addition, the subordinate conjunction -te introduces a clause in Japanese which is expressed in English as a prepositional phrase with from. Rather than restructuring the Japanese operator tree into one that resembles the English version, we can obtain an acceptable translation using only lexical transfer. The result, shown in Figure 2c, will produce a sentence like (6):
(6) Using the EXPERSON database, write TABLE commands that will produce the following reports.

Implementation

The results of our study indicate that the sublanguage approach is worth pursuing for the analysis and lexical transfer stages of a machine translation system. In parallel with our sublanguage studies we have begun implementation of a pilot MT system. We have taken a previously developed question-answering system and incorporated a small, core Japanese grammar and regularization component capable of parsing and producing operator trees for simple sentences. The parser has been coupled to a semantic analyzer that utilizes sublanguage patterns and an existing retrieval component to produce a Japanese version of the question answerer. We then added a lexical transfer component based on the same sublanguage patterns and an English sentence generator to complete the pilot translation system. Relativization and quantification are among the features of the current implementation, as shown by the following examples of input (7) and output (8):
(7) a. Jane ga A o totta kamoku wa nan desoo ka?
b. Subete no gakusei wa V11 o totta ka?
(8) a. What is the course that Jane got an A in?
b. Did all students take V11?
These examples illustrate two functions that the sublanguage patterns perform in the translation process. First, the Japanese verb totta participates in two sublanguage patterns in this domain, and the two corresponding English patterns involve different verbs (take and receive); in this way the patterns guide lexical transfer. Second, since Japanese provides no overt marker of which argument is omitted in a relative clause, the sublanguage pattern is required in order to identify the omitted argument in Japanese and pair it with the corresponding argument in the English pattern. In examples such as these, where the missing argument in English is marked by a preposition, the preposition must be generated as part of the English relative clause.
Figure 2. Operator trees for sentences (5a), (5b) and (6). Dashed lines indicate an adjunct relationship to the parent node. Solid lines indicate an operator-argument relationship.
Harris, Z. 1968. Mathematical structures of language. New York: Wiley Interscience.
Harris, Z. 1982. A grammar of English on mathematical principles. New York: Wiley.
Isabelle, P. and Bourbeau, L. 1985. TAUM-AVIATION: Its technical features and some experimental results. Computational Linguistics 11:18-27.
Kittredge, R. 1987. The significance of sublanguage for automatic translation. In Nirenburg (Ed.), pp. 59-67.
Kittredge, R. and Lehrberger, J., Eds. 1982. Sublanguage: Studies of language in restricted semantic domains. Berlin-New York: Walter de Gruyter.
Kosaka, M., Teller, V. and Grishman, R. 1988. A sublanguage approach to Japanese-English machine translation. To appear in Proceedings of the International Conference on New Directions in Machine Translation. Dordrecht: Foris.
Lehrberger, J. 1982. Automatic translation and the concept of sublanguage. In Kittredge and Lehrberger (Eds.), pp. 81-107.
Nirenburg, S., Ed. 1987. Machine translation: Theoretical and methodological issues. Cambridge: Cambridge University Press.
Sager, N. 1981. Natural language information processing: A computer grammar of English and its applications. Reading, MA: Addison-Wesley. |
8,162,001 | Predicting the Semantic Orientation of Adjectives | We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus. | [
6713452,
10986188,
3166885,
7003847
] | Predicting the Semantic Orientation of Adjectives
Vasileios Hatzivassiloglou
Department of Computer Science 450 Computer Science Building
Columbia University New York
N.Y. 10027USA
Kathleen R Mckeown
Department of Computer Science 450 Computer Science Building
Columbia University New York
N.Y. 10027USA
Predicting the Semantic Orientation of Adjectives
We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus.
Introduction
The semantic orientation or polarity of a word indicates the direction the word deviates from the norm for its semantic group or lexical field (Lehrer, 1974).
It also constrains the word's usage in the language (Lyons, 1977), due to its evaluative characteristics (Battistella, 1990). For example, some nearly synonymous words differ in orientation because one implies desirability and the other does not (e.g., simple versus simplistic). In linguistic constructs such as conjunctions, which impose constraints on the semantic orientation of their arguments (Anscombre and Ducrot, 1983; Elhadad and McKeown, 1990), the choices of arguments and connective are mutually constrained, as illustrated by:
The tax proposal was {simple and well-received / simplistic but well-received / *simplistic and well-received} by the public.
In addition, almost all antonyms have different semantic orientations. 1 If we know that two words relate to the same property (for example, members of the same scalar group such as hot and cold) but have different orientations, we can usually infer that they are antonyms. Given that semantically similar words can be identified automatically on the basis of distributional properties and linguistic cues (Brown et al., 1992; Pereira et al., 1993; Hatzivassiloglou and McKeown, 1993), identifying the semantic orientation of words would allow a system to further refine the retrieved semantic similarity relationships, extracting antonyms.
Unfortunately, dictionaries and similar sources (thesauri, WordNet (Miller et al., 1990)) do not include semantic orientation information. 2 Explicit links between antonyms and synonyms may also be lacking, particularly when they depend on the domain of discourse; for example, the opposition bear-bull appears only in stock market reports, where the two words take specialized meanings.
In this paper, we present and evaluate a method that automatically retrieves semantic orientation information using indirect information collected from a large corpus. Because the method relies on the corpus, it extracts domain-dependent information and automatically adapts to a new domain when the corpus is changed. Our method achieves high precision (more than 90%), and, while our focus to date has been on adjectives, it can be directly applied to other word classes. Ultimately, our goal is to use this method in a larger system to automatically identify antonyms and distinguish near synonyms.
2
Overview of Our Approach
Our approach relies on an analysis of textual corpora that correlates linguistic features, or indicators, with 1 Exceptions include a small number of terms that are both negative from a pragmatic viewpoint and yet stand in all antonymic relationship; such terms frequently lexicalize two unwanted extremes, e.g., verbose-terse.
2 Except implicitly, in the form of definitions and usage examples. semantic orientation. While no direct indicators of positive or negative semantic orientation have been proposed 3, we demonstrate that conjunctions between adjectives provide indirect information about orientation. For most connectives, the conjoined adjectives usually are of the same orientation: compare fair and legitimate and corrupt and brutal which actually occur in our corpus, with ~fair and brutal and *corrupt and legitimate (or the other cross-products of the above conjunctions) which are semantically anomalous. The situation is reversed for but, which usually connects two adjectives of different orientations.
The system identifies and uses this indirect information in the following stages:
1. All conjunctions of adjectives are extracted from the corpus along with relevant morphological relations.
2. A log-linear regression model combines information from different conjunctions to determine if each two conjoined adjectives are of same or different orientation. The result is a graph with hypothesized same-or different-orientation links between adjectives.
3. A clustering algorithm separates the adjectives into two subsets of different orientation. It places as many words of same orientation as possible into the same subset.
4. The average frequencies in each group are compared and the group with the higher frequency is labeled as positive.
In the following sections, we first present the set of adjectives used for training and evaluation. We next validate our hypothesis that conjunctions constrain the orientation of conjoined adjectives and then describe the remaining three steps of the algorithm. After presenting our results and evaluation, we discuss simulation experiments that show how our method performs under different conditions of sparseness of data.
Data Collection
For our experiments, we use the 21 million word 1987 Wall Street Journal corpus 4, automatically annotated with part-of-speech tags using the PARTS tagger (Church, 1988). In order to verify our hypothesis about the orientations of conjoined adjectives, and also to train and evaluate our subsequent algorithms, we need a set of adjectives with predetermined orientation labels. We constructed this set by taking all adjectives appearing in our corpus 20 times or more, then removing adjectives that have no orientation. These are typically members of groups of complementary, qualitative terms (Lyons, 1977), e.g., domestic or medical.
3 Certain words inflected with negative affixes (such as in- or un-) tend to be mostly negative, but this rule applies only to a fraction of the negative words. Furthermore, there are words so inflected which have positive orientation, e.g., independent and unbiased.
We then assigned an orientation label (either + or -) to each adjective, using an evaluative approach. The criterion was whether the use of this adjective ascribes in general a positive or negative quality to the modified item, making it better or worse than a similar unmodified item. We were unable to reach a unique label out of context for several adjectives which we removed from consideration; for example, cheap is positive if it is used as a synonym of inexpensive, but negative if it implies inferior quality.
The operations of selecting adjectives and assigning labels were performed before testing our conjunction hypothesis or implementing any other algorithms, to avoid any influence on our labels. The final set contained 1,336 adjectives (657 positive and 679 negative terms). Figure 1 shows randomly selected terms from this set.
To further validate our set of labeled adjectives, we subsequently asked four people to independently label a randomly drawn sample of 500 of these adjectives. They agreed with us that the positive/negative concept applies to 89.15% of these adjectives on average. For the adjectives where a positive or negative label was assigned by both us and the independent evaluators, the average agreement on the label was 97.38%. The average inter-reviewer agreement on labeled adjectives was 96.97%. These results are extremely significant statistically and compare favorably with validation studies performed for other tasks (e.g., sense disambiguation) in the past. They show that positive and negative orientation are objective properties that can be reliably determined by humans.
To extract conjunctions between adjectives, we used a two-level finite-state grammar, which covers complex modification patterns and noun-adjective apposition. Running this parser on the 21 million word corpus, we collected 13,426 conjunctions of adjectives, expanding to a total of 15,431 conjoined adjective pairs. After morphological transformations, the remaining 15,048 conjunction tokens involve 9,296 distinct pairs of conjoined adjectives (types). Each conjunction token is classified by the parser according to three variables: the conjunction used (and, or, but, either-or, or neither-nor), the type of modification (attributive, predicative, appositive, resultative), and the number of the modified noun (singular or plural).
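A greatly simplified stand-in for this extraction step (a three-token window over POS-tagged text with a hypothetical tag set, not the paper's two-level finite-state grammar) might look like:

```python
# Highly simplified stand-in for the paper's finite-state extractor: find
# adjective pairs joined by a conjunction in POS-tagged text. Real coverage
# (complex modification, apposition) needs the full grammar.
CONJS = {"and", "or", "but", "nor"}

def extract_pairs(tagged):
    """tagged: list of (word, pos) tuples with 'JJ' marking adjectives."""
    pairs = []
    for (w1, t1), (c, _), (w2, t2) in zip(tagged, tagged[1:], tagged[2:]):
        if t1 == "JJ" and t2 == "JJ" and c.lower() in CONJS:
            pairs.append((w1.lower(), c.lower(), w2.lower()))
    return pairs

sample = [("fair", "JJ"), ("and", "CC"), ("legitimate", "JJ"),
          ("policies", "NNS")]
print(extract_pairs(sample))   # [('fair', 'and', 'legitimate')]
```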
Validation of the Conjunction Hypothesis
Using the three attributes extracted by the parser, we constructed a cross-classification of the conjunctions in a three-way table. We counted types and tokens of each conjoined pair that had both members in the set of pre-selected labeled adjectives discussed above; 2,748 (29.56%) of all conjoined pairs (types) and 4,024 (26.74%) of all conjunction occurrences (tokens) met this criterion. We augmented this table with marginal totals, arriving at 90 categories, each of which represents a triplet of attribute values, possibly with one or more "don't care" elements. We then measured the percentage of conjunctions in each category with adjectives of same or different orientations. Under the null hypothesis of same proportions of adjective pairs (types) of same and different orientation in a given category, the number of same-or different-orientation pairs follows a binomial distribution with p = 0.5 (Conover, 1980). We show in Table 1 the results for several representative categories, and summarize all results below:
• Our conjunction hypothesis is validated overall and for almost all individual cases. The results are extremely significant statistically, except for a few cases where the sample is small.
• Aside from the use of but with adjectives of different orientations, there are, rather surprisingly, small differences in the behavior of conjunctions between linguistic environments (as represented by the three attributes). There are a few exceptions, e.g., appositive and conjunctions modifying plural nouns are evenly split between same and different orientation. But in these exceptional cases the sample is very small, and the observed behavior may be due to chance. • Further analysis of different-orientation pairs in conjunctions other than but shows that conjoined antonyms are far more frequent than expected by chance, in agreement with (Justeson and Katz, 1991).
Prediction of Link Type
The analysis in the previous section suggests a baseline method for classifying links between adjectives: since 77.84% of all links from conjunctions indicate same orientation, we can achieve this level of performance by always guessing that a link is of the same-orientation type. However, we can improve performance by noting that conjunctions using but exhibit the opposite pattern, usually involving adjectives of different orientations. Thus, a revised but still simple rule predicts a different-orientation link if the two adjectives have been seen in a but conjunction, and a same-orientation link otherwise, assuming the two adjectives were seen connected by at least one conjunction.
Morphological relationships between adjectives also play a role. Adjectives related in form (e.g., adequate-inadequate or thoughtful-thoughtless) almost always have different semantic orientations. We implemented a morphological analyzer which matches adjectives related in this manner. This process is highly accurate, but unfortunately does not apply to many of the possible pairs: in our set of 1,336 labeled adjectives (891,780 possible pairs), 102 pairs are morphologically related; among them, 99 are of different orientation, yielding 97.06% accuracy for the morphology method. This information is orthogonal to that extracted from conjunctions: only 12 of the 102 morphologically related pairs have been observed in conjunctions in our corpus. Thus, we add to the predictions made from conjunctions the different-orientation links suggested by morphological relationships.
We improve the accuracy of classifying links derived from conjunctions as same or different orientation with a log-linear regression model (Santner and Duffy, 1989), exploiting the differences between the various conjunction categories. This is a generalized linear model (McCullagh and Nelder, 1989) with a linear predictor $\eta = w^{T}x$, where $x$ is the vector of the observed counts in the various conjunction categories for the particular adjective pair we try to classify and $w$ is a vector of weights to be learned during training. The response $y$ is non-linearly related to $\eta$ through the inverse logit function, $y = \frac{e^{\eta}}{1 + e^{\eta}}$. Note that $y \in (0, 1)$, with each of these endpoints associated with one of the possible outcomes.
We have 90 possible predictor variables, 42 of which are linearly independent. Since using all the 42 independent predictors invites overfitting (Duda and Hart, 1973), we have investigated subsets of the full log-linear model for our data using the method of iterative stepwise refinement: starting with an initial model, variables are added or dropped if their contribution to the reduction or increase of the residual deviance compares favorably to the resulting loss or gain of residual degrees of freedom. This process led to the selection of nine predictor variables.
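As an illustration of this kind of link classifier (not the authors' implementation: the feature layout, toy data, and use of scikit-learn are our own assumptions, and the stepwise selection of nine predictors is omitted), a logistic regression over conjunction-category counts can be fit as follows:

```python
# Illustrative only: a logistic-regression link classifier over conjunction
# category counts (x = counts per category for one adjective pair,
# y = 1 for same orientation, 0 for different). Made-up toy data; the paper's
# stepwise selection of nine predictors is not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

# rows: adjective pairs; columns: counts in hypothetical conjunction categories
# [and-attributive, and-predicative, but-attributive, but-predicative]
X = np.array([[3, 1, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 2, 1],
              [1, 0, 3, 0]])
y = np.array([1, 1, 0, 0])   # 1 = same orientation, 0 = different

model = LogisticRegression().fit(X, y)
# Probability that a new pair, seen once with 'and' and never with 'but',
# has the same orientation:
print(model.predict_proba([[1, 0, 0, 0]])[0, 1])
```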
We evaluated the three prediction models discussed above with and without the secondary source of morphology relations. For the log-linear model, we repeatedly partitioned our data into equally sized training and testing sets, estimated the weights on the training set, and scored the model's performance on the testing set, averaging the resulting scores. 5 Table 2 shows the results of these analyses. Although the log-linear model offers only a small improvement in pair classification over the simpler but prediction rule, it confers the important advantage of rating each prediction between 0 and 1. We make extensive use of this in the next phase of our algorithm.
5 When morphology is to be used as a supplementary predictor, we remove the morphologically related pairs from the training and testing sets.
Finding Groups of Same-Oriented Adjectives
The third phase of our method assigns the adjectives into groups, placing adjectives of the same (but unknown) orientation in the same group. Each pair of adjectives has an associated dissimilarity value between 0 and 1; adjectives connected by same-orientation links have low dissimilarities, and conversely, different-orientation links result in high dissimilarities. Adjective pairs with no connecting links are assigned the neutral dissimilarity 0.5.
The baseline and but methods make qualitative distinctions only (i.e., same-orientation, different-orientation, or unknown); for them, we define dissimilarity for same-orientation links as one minus the probability that such a classification link is correct and dissimilarity for different-orientation links as the probability that such a classification is correct. These probabilities are estimated from separate training data. Note that for these prediction models, dissimilarities are identical for similarly classified links.
The log-linear model, on the other hand, offers an estimate of how good each prediction is, since it produces a value y between 0 and 1. We construct the model so that 1 corresponds to same-orientation, and define dissimilarity as one minus the produced value.
Same- and different-orientation links between adjectives form a graph. To partition the graph nodes into subsets of the same orientation, we employ an iterative optimization procedure on each connected component, based on the exchange method, a non-hierarchical clustering algorithm (Späth, 1985). We define an objective function $\Phi$ scoring each possible partition $\mathcal{P}$ of the adjectives into two subgroups $C_1$ and $C_2$ as
$$\Phi(\mathcal{P}) = \sum_{i=1}^{2} \frac{1}{|C_i|} \sum_{x,y \in C_i} d(x,y)$$
where $|C_i|$ stands for the cardinality of cluster $i$, and $d(x,y)$ is the dissimilarity between adjectives $x$ and $y$. We want to select the partition $\mathcal{P}_{\min}$ that minimizes $\Phi$, subject to the additional constraint that for each adjective $x$ in a cluster $C$,
$$\frac{1}{|C|-1} \sum_{y \in C} d(x,y) < \frac{1}{|\bar{C}|} \sum_{y \in \bar{C}} d(x,y) \quad (1)$$
where $\bar{C}$ is the complement of cluster $C$, i.e., the other member of the partition. This constraint, based on Rousseeuw's (1987) silhouettes, helps correct wrong cluster assignments.
To find $\mathcal{P}_{\min}$, we first construct a random partition of the adjectives, then locate the adjective that will most reduce the objective function if it is moved from its current cluster. We move this adjective and proceed with the next iteration until no movements can improve the objective function. At the final iteration, the cluster assignment of any adjective that violates constraint (1) is changed. This is a steepest-descent hill-climbing method, and thus is guaranteed to converge. However, it will in general find a local minimum rather than the global one; the problem is NP-complete (Garey and Johnson, 1979). We can arbitrarily increase the probability of finding the globally optimal solution by repeatedly running the algorithm with different starting partitions.
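A compressed sketch of this partitioning step is given below (our own simplification, not the original code): it accepts any single move that lowers the objective, and it omits the constraint-(1) correction and the random restarts.

```python
# Sketch of the exchange-style partitioning (simplified: the silhouette-based
# correction of constraint (1) and the restarts are omitted). d is a symmetric
# dissimilarity matrix with 0.5 for unconnected pairs and 0 on the diagonal.
import numpy as np

def objective(d, labels):
    total = 0.0
    for c in (0, 1):
        members = np.where(labels == c)[0]
        if len(members):
            total += d[np.ix_(members, members)].sum() / len(members)
    return total

def exchange_partition(d, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=len(d))
    improved = True
    while improved:
        improved = False
        for i in range(len(d)):
            flipped = labels.copy()
            flipped[i] = 1 - flipped[i]
            if objective(d, flipped) < objective(d, labels):
                labels, improved = flipped, True
    return labels
```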
7
Labeling the Clusters as Positive or Negative
The clustering algorithm separates each component of the graph into two groups of adjectives, but does not actually label the adjectives as positive or negative. To accomplish that, we use a simple criterion that applies only to pairs or groups of words of opposite orientation. We have previously shown (Hatzivassiloglou and McKeown, 1995) that in oppositions of gradable adjectives where one member is semantically unmarked, the unmarked member is the most frequent one about 81% of the time. This is relevant to our task because semantic markedness exhibits a strong correlation with orientation, the unmarked member almost always having positive orientation (Lehrer, 1985; Battistella, 1990). We compute the average frequency of the words in each group, expecting the group with higher average frequency to contain the positive terms. This aggregation operation increases the precision of the labeling dramatically since indicators for many pairs of words are combined, even when some of the words are incorrectly assigned to their group.
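A minimal sketch of this labeling step, with hypothetical corpus frequencies:

```python
# Sketch: label the higher-frequency cluster as positive (frequencies are
# hypothetical corpus counts keyed by adjective).
def label_clusters(clusters, freq):
    means = [sum(freq[w] for w in c) / len(c) for c in clusters]
    return ("positive", "negative") if means[0] >= means[1] else ("negative", "positive")

freq = {"good": 850, "fair": 420, "corrupt": 90, "brutal": 60}
print(label_clusters([{"good", "fair"}, {"corrupt", "brutal"}], freq))
# ('positive', 'negative')
```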
8
Results and Evaluation
Since graph connectivity affects performance, we devised a method of selecting test sets that makes this dependence explicit. Note that the graph density is largely a function of corpus size, and thus can be increased by adding more data. Nevertheless, we report results on sparser test sets to show how our algorithm scales up.
We separated our sets of adjectives A (containing 1,336 adjectives) and conjunction- and morphology-based links L (containing 2,838 links) into training and testing groups by selecting, for several values of the parameter $\alpha$, the maximal subset of A, $A_\alpha$, which includes an adjective x if and only if there exist at least $\alpha$ links from L between x and other elements of $A_\alpha$. This operation in turn defines a subset of L, $L_\alpha$, which includes all links between members of $A_\alpha$. We train our log-linear model on $L - L_\alpha$ (excluding links between morphologically related adjectives), compute predictions and dissimilarities for the links in $L_\alpha$, and use these to classify and label the adjectives in $A_\alpha$. $\alpha$ must be at least 2, since we need to leave some links for training. Table 3 shows the results of these experiments for $\alpha$ = 2 to 5. Our method produced the correct classification from 78% of the time on the sparsest test set up to more than 92% of the time when a higher number of links was present. Moreover, in all cases, the ratio of the two group frequencies correctly identified the positive subgroup. These results are extremely significant statistically (P-value less than $10^{-16}$) when compared with the baseline method of randomly assigning orientations to adjectives, or the baseline method of always predicting the most frequent (for types) category (50.82% of the adjectives in our collection are classified as negative). Figure 2 shows some of the adjectives in set A4 and their classifications.

Graph Connectivity and Performance
A strong point of our method is that decisions on individual words are aggregated to provide decisions on how to group words into a class and whether to label the class as positive or negative. Thus, the overall result can be much more accurate than the individual indicators. To verify this, we ran a series of simulation experiments. Each experiment measures how our algorithm performs for a given level of precision P for identifying links and a given average number of links k for each word. The goal is to show that even when P is low, given enough data (i.e., high k), we can achieve high performance for the grouping.
As we noted earlier, the corpus data is eventually represented in our system as a graph, with the nodes corresponding to adjectives and the links to predictions about whether the two connected adjectives have the same or different orientation. Thus the parameter P in the simulation experiments measures how well we are able to predict each link independently of the others, and the parameter k measures the number of distinct adjectives each adjective appears with in conjunctions. P therefore directly represents the precision of the link classification algorithm, while k indirectly represents the corpus size.
To measure the effect of P and k (which are reflected in the graph topology), we need to carry out a series of experiments where we systematically vary their values. For example, as k (or the amount of data) increases for a given level of precision P for individual links, we want to measure how this affects overall accuracy of the resulting groups of nodes. Thus, we need to construct a series of data sets, or graphs, which represent different scenarios corresponding to a given combination of values of P and k. To do this, we construct a random graph by randomly assigning 50 nodes to the two possible orientations. Because we don't have frequency and morphology information on these abstract nodes, we cannot predict whether two nodes are of the same or different orientation. Rather, we randomly assign links between nodes so that, on average, each node participates in k links and 100 x P% of all links connect nodes of the same orientation. Then we consider these links as identified by the link prediction algorithm as connecting two nodes with the same orientation (so that 100 x P% of these predictions will be correct). This is equivalent to the baseline link classification method, and provides a lower bound on the performance of the algorithm actually used in our system (Section 5).
Because of the lack of actual measurements such as frequency on these abstract nodes, we also decouple the partitioning and labeling components of our system and score the partition found under the best matching conditions for the actual labels. Thus the simulation measures only how well the system separates positive from negative adjectives, not how well it determines which is which. However, in all the experiments performed on real corpus data (Section 8), the system correctly found the labels of the groups; any misclassifications came from misplacing an adjective in the wrong group. The whole procedure of constructing the random graph and finding and scoring the groups is repeated 200 times for any given combination of P and k, and the results are averaged, thus avoiding accidentally evaluating our system on a graph that is not truly representative of graphs with the given P and k.
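The sketch below outlines one such simulation trial under the stated assumptions (n = 50 nodes, link precision P, average degree k); it reuses the exchange_partition sketch from the clustering section above and scores the partition under the best matching of labels, as described here.

```python
# Sketch of one simulation trial: 50 nodes with random orientations; links are
# drawn so that a fraction P of them connect same-orientation nodes, and every
# link is then treated as a 'same orientation' prediction (dissimilarity 0).
# Reuses exchange_partition from the earlier clustering sketch.
import numpy as np

def simulate_trial(n=50, P=0.8, k=7, seed=0):
    rng = np.random.default_rng(seed)
    orientation = rng.integers(0, 2, size=n)
    d = np.full((n, n), 0.5)                 # neutral dissimilarity for no link
    np.fill_diagonal(d, 0.0)
    for _ in range(n * k // 2):              # roughly k links per node on average
        want_same = rng.random() < P
        while True:
            i, j = rng.choice(n, size=2, replace=False)
            if (orientation[i] == orientation[j]) == want_same:
                break
        d[i, j] = d[j, i] = 0.0              # link predicted as same orientation
    labels = exchange_partition(d)           # from the sketch in the clustering section
    agreement = (labels == orientation).mean()
    return max(agreement, 1 - agreement)     # score under the best label matching
```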
We observe (Figure 3) that even for relatively low P, our ability to correctly classify the nodes approaches very high levels with a modest number of links. For P = 0.8, we need only about 7 links per adjective for classification performance over 90% and only 12 links per adjective for performance over 99%. 8 The difference between low and high values of P is in the rate at which increasing data increases overall precision. These results are somewhat more optimistic than those obtained with real data (Section 8), a difference which is probably due to the uniform distributional assumptions in the simulation. Nevertheless, we expect the trends to be similar to the ones shown in Figure 3, and the results of Table 3 on real data support this expectation.
8 12 links per adjective for a set of n adjectives requires 6n conjunctions between the n adjectives in the corpus. In each figure, the last x coordinate indicates the (average) maximum possible value of k for this P, and the dotted line shows the performance of a random classifier.
10
Conclusion and Future Work
We have proposed and verified from corpus data constraints on the semantic orientations of conjoined adjectives. We used these constraints to automatically construct a log-linear regression model, which, combined with supplementary morphology rules, predicts whether two conjoined adjectives are of same or different orientation with 82% accuracy. We then classified several sets of adjectives according to the links inferred in this way and labeled them as positive or negative, obtaining 92% accuracy on the classification task for reasonably dense graphs and 100% accuracy on the labeling task. Simulation experiments establish that very high levels of performance can be obtained with a modest number of links per word, even when the links themselves are not always correctly classified.
As part of our clustering algorithm's output, a "goodness-of-fit" measure for each word is computed, based on Rousseeuw's (1987) silhouettes. This measure ranks the words according to how well they fit in their group, and can thus be used as a quantitative measure of orientation, refining the binary positive-negative distinction. By restricting the labeling decisions to words with high values of this measure we can also increase the precision of our system, at the cost of sacrificing some coverage.
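As a rough illustration of how such a ranking can be reproduced, the snippet below computes per-word silhouette values with scikit-learn; the distance matrix, labels, and word list are placeholders, and this is a sketch of the general technique rather than the system's actual implementation.

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def rank_by_fit(dist, labels, words):
    # dist: precomputed (n_words x n_words) dissimilarity matrix (placeholder)
    # labels: cluster assignment per word, e.g. 0 = one group, 1 = the other
    # Returns words sorted by silhouette value (Rousseeuw 1987), best fit first;
    # words with low values are candidates to withhold from labeling when
    # trading some coverage for higher precision.
    sil = silhouette_samples(dist, labels, metric="precomputed")
    order = np.argsort(-sil)
    return [(words[i], float(sil[i])) for i in order]
```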
We are currently combining the output of this system with a semantic group finding system so that we can automatically identify antonyms from the corpus, without access to any semantic descriptions. The learned semantic categorization of the adjectives can also be used in the reverse direction, to help in interpreting the conjunctions they participate in. We will also extend our analyses to nouns and verbs.
We thank Ken Church and the AT&T Bell Laboratories for making the PARTS part-of-speech tagger available to us. We also thank Dragomir Radev, Eric Siegel, and Gregory Sean McKinley who provided models for the categorization of the adjectives in our training and testing sets as positive and negative.
Figure 1: Randomly selected adjectives with positive and negative orientations.
Figure 2: Sample retrieved classifications of adjectives from set A4. Correctly matched adjectives are shown in bold.
Classified as positive: disturbing generous good honest important large mature patient peaceful positive proud sound stimulating straightforward strange talented vigorous witty
Classified as negative: ambiguous cautious cynical evasive harmful hypocritical inefficient insecure irrational irresponsible minor outspoken pleasant reckless risky selfish tedious unsupported vulnerable wasteful
Figure 3: Simulation results obtained on 50 nodes. In each figure, the last x coordinate indicates the (average) maximum possible value of k for this P, and the dotted line shows the performance of a random classifier.
Table 1: Validation of our conjunction hypothesis. The P-value is the probability that similarly extreme results would have been obtained if same- and different-orientation conjunction types were equally distributed.
formations, the remaining 15,048 conjunction tokens involve 9,296 distinct pairs of conjoined adjectives (types). Each conjunction token is classified by the parser according to three variables: the conjunction used
Table 2: Accuracy of several link prediction models.
891,780 possible pairs), 102 pairs are morphologically related; among them, 99 are of different orientation, yielding 97.06% accuracy for the morphology method. This information is orthogonal to that extracted from conjunctions: only 12 of the 102 morphologically related pairs have been observed in conjunctions in our corpus. Thus, we
Table 3: Evaluation of the adjective classification and labeling methods.
Acknowledgements
This work was supported in part by the Office of Naval Research under grant N00014-95-1-0745, jointly by the Office of Naval Research and the Advanced Research Projects Agency under grant N00014-89-J-1782, by the National Science Foundation under grant GER-90-24069, and by the New York State Center for Advanced Technology under contracts NYSSTF-CAT(95)-013 and NYSSTF-CAT(96)-013.
Jean-Claude Anscombre and Oswald Ducrot. 1983. L'Argumentation dans la Langue. Philosophie et Langage. Pierre Mardaga, Brussels, Belgium.
Edwin L. Battistella. 1990. Markedness: The Evaluative Superstructure of Language. State University of New York Press, Albany, New York.
Peter F. Brown, Vincent J. della Pietra, Peter V. de Souza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.
Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing (ANLP-88), pages 136-143, Austin, Texas, February. Association for Computational Linguistics.
W. J. Conover. 1980. Practical Nonparametric Statistics. Wiley, New York, 2nd edition.
Richard O. Duda and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York.
Michael Elhadad and Kathleen R. McKeown. 1990. A procedure for generating connectives. In Proceedings of COLING, Helsinki, Finland, July.
Michael R. Garey and David S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, California.
Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1993. Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st Annual Meeting of the ACL, pages 172-182, Columbus, Ohio, June. Association for Computational Linguistics.
Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1995. A quantitative evaluation of linguistic tests for the automatic prediction of semantic markedness. In Proceedings of the 33rd Annual Meeting of the ACL, pages 197-204, Boston, Massachusetts, June. Association for Computational Linguistics.
John S. Justeson and Slava M. Katz. 1991. Co-occurrences of antonymous adjectives and their contexts. Computational Linguistics, 17(1):1-19.
Adrienne Lehrer. 1974. Semantic Fields and Lexical Structure. North Holland, Amsterdam and New York.
Adrienne Lehrer. 1985. Markedness and antonymy. Journal of Linguistics, 21(3):397-429, September.
John Lyons. 1977. Semantics, volume 1. Cambridge University Press, Cambridge, England.
Peter McCullagh and John A. Nelder. 1989. Generalized Linear Models. Chapman and Hall, London, 2nd edition.
George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography (special issue), 3(4):235-312.
Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the ACL, pages 183-190, Columbus, Ohio, June. Association for Computational Linguistics.
Peter J. Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53-65.
Thomas J. Santner and Diane E. Duffy. 1989. The Statistical Analysis of Discrete Data. Springer-Verlag, New York.
Helmuth Späth. 1985. Cluster Dissection and Analysis: Theory, FORTRAN Programs, Examples. Ellis Horwood, Chichester, West Sussex, England. |
8,535,026 | A system for object-oriented dialogue in Swedish | [] | A system for object-oriented dialogue in Swedish
A system for object-oriented dialogue in Swedish
Lars Ahrenberg
Department of Computer and Information Science, Linköping University, S-581 83 LINKÖPING, SWEDEN
Proceedings of NODALIDA 1987, pages 96-106
Introduction
Two models for semantic interpretation that are currently being developed are constraint-based models (e.g. Fenstad et al. 1985, Halvorsen 1987)
First, I want to investigate and demonstrate the possibilities of integrating syntactic, semantic and pragmatic knowledge in the interpretation process while still having that knowledge in separate modules. Second, I want to investigate the possibilities of treating dialogue phenomena such as indexicality and coherence within such a system.
The results will be used in the design of a larger and more general system, LINLIN (the Linköping Natural Language Interface; see Ahrenberg et al., 1986; Ahrenberg 1987).
As application I have chosen a simple drawing system where the human partner can draw, manipulate and ask questions about geometrical figures on a screen. The reason for this choice is that a visible domain makes it quite obvious whether the system is interpreting inputs correctly or not.
The system is still under construction. The morphological and syntactic components are in operation while the semantic components are still to be integrated in the system and the pragmatic components do not yet exist. In this paper I therefore concentrate on the problem of expressing and distributing semantic constraints, i.e. the rules that express the contributions of lexical and grammatical elements to the interpretation of the expressions of which they are part. First, I give a short overview of the system's architecture.
System overview
The interaction with FALIN is restricted to simple sequences of the kind that can be expressed by finite automata. The basic sequences are, with the user's moves first:
Question/Answer, Instruction/Execution and Assertion/Acceptance. The system may also ask questions of the user in the process of interpretation and inform him/her of problems with the input.
The system will always try to classify an input in terms of the illocutionary categories that are allowed. This classification to a large extent determines what actions the system will execute and what information it will present to the user.
The analyzer and the knowledge bases that it has access to are illustrated in figure 1.
The morph dictionary consists of a stem dictionary and a set of affix dictionaries, all of them compiled into letter trees. All entries are in their surface form (cf. Karlsson, 1986). Fixed expressions comprising more than one graphical word such as i dag
In the interpretation process an input sentence is assigned three structures: a constituent structure (c-structure), a functional structure (f-structure) and a semantic
information about the input sentence regarded as a message. Thus, it is not a semantic structure in a strict sense, since it represents a contextually adequate interpretation of the input and contextual factors are used in its construction. Partial structures for sentence (2) are shown in figures 2a-2c.
(2) Rita en cirkel i övre högra hörnet. (Draw a circle in the upper right corner.)
The constraints on proper correspondences between c-structure and f-structure are stated in the lexical-functional grammar whereas the constraints on proper correspondences between f-structure and s-structure are included in the definitions of individual object types and attributes. Also functional attributes are assigned such constraints. I refer to these latter rules collectively as Syntactic/Semantic correspondences. The semantic structure associated with a constituent will normally not be constructed until the constituent is judged syntactically complete by the parser, i.e. when an inactive edge is proposed for introduction into the chart. Thus, a constituent such as en svart fråga (a black question) may be rejected by the analyzer on the grounds that descriptions of questions cannot contain descriptors referring to colour. Similarly, sentences such as (4) and (5)
(3) Circle29:
((TYPE &Circle#1)
(CENTRE Point13)
(RADIUS 6)
(COLOUR Black)
(RESULT-OF Draw4))
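For readers piecing the extracted fragments together, the structure in (3) is simply an attribute-value description of an object instance; a minimal way to mirror it in code (purely illustrative, not part of the FALIN system) is:

```python
# Hypothetical rendering of the object instance in (3): a unique internal
# name paired with a description made up of attribute-value descriptors.
circle29 = {
    "TYPE": "&Circle#1",
    "CENTRE": "Point13",
    "RADIUS": 6,
    "COLOUR": "Black",
    "RESULT-OF": "Draw4",
}
```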
Rules for syntactic/semantic correspondences
The relation between syntactic structure and semantic structure is perceived in different ways by different theories. Often some form of an isomorphism hypothesis is adopted. In formal semantics and other schools adopting a "rule-to-rule"-principle the correspondence is a derivational correspondence, not a structural one. This approach has also been used in natural language processors, e.g. in the Rosetta project (Appelo et al. 1987). Other natural language processors rely implicitly or explicitly on structural isomorphy between syntactic and semantic structures (e.g. Lytinen, 1987; Danieli et al., 1987). While I believe that simple one-to-one relations between syntactic and semantic elements are sufficient to handle simple language fragments, I also feel that there are limits to such a methodology. There are syntactic constituents that correspond to no semantic object (e.g. formal subjects and objects), there are those that correspond to more than one semantic object (e.g. locutionary and illocutionary contents) and there are cases where several syntactic constituents relate to one and the same semantic object (e.g. idioms, adjectival attributes). Such structural modifications are easily expressed by descriptor schemata. Moreover, semantic schemata can be associated with syntactic objects and, in the other direction, functional schemata can be associated with semantic objects. Also, descriptor schemata can be associated with contextual factors in very much the same way as they are associated with syntactic objects.
Another question is what syntactic constituents should be considered relevant for the correspondence rules. Halvorsen (1983)
Fenstad, J. E., Halvorsen, P-K, Langholm, T. and van Benthem, J. 1985. Equations, Schemata and Situations: A framework for linguistic semantics. Manuscript, CSLI, Stanford University.
employing object-oriented knowledge representation formalisms such as frame systems or semantic networks (e.g. Bobrow & Webber, 1980; Sondheimer et al. 1984, Hirst 1987). This paper describes a dialogue system for Swedish in which I wish to combine features of both models. A large part of its linguistic knowledge, including semantic and pragmatic knowledge, is expressed as constraints. The semantic objects associated with linguistic expressions in the interpretation process are elements of a semantic network. Moreover, constraints and object descriptions play a major role also in the treatment of context. The system, called FALIN, is being developed with the following purposes in mind:
Figure 1: An overview of FALIN's analyzer.
(today) or hur många (how many) are included in the stem dictionary. The morph dictionary can be searched in different modes, e.g. one may choose to look for only one analysis of a given string, or all of them, or one may include or exclude the possibility of analyzing a word as a compound. A morph in the dictionary is associated with a set of morphemes. With each morpheme there are associated a continuation class of suffix lexicons and, optionally, a flag guiding the continued search. A morpheme is either a stem or an affix. A stem morpheme carries information about syntactic category, morphosyntactic features and meaning. The meanings of a stem morpheme are collected in a lexeme set, where a lexeme identifies a unique semantic object as value of a semantic attribute. Basically, there is one lexeme for each sense of the morpheme. An affix morpheme is associated with morphosyntactic features and, possibly, information about category changes that it induces. Given a string such as cirklarna (the circles) the dictionary search will result in the structure (1a). The first element of this structure, N, indicates syntactic category and the second element, !Cirkel, identifies a lexeme set. The content of the lexeme set may be (1b) where each different item identifies a node in the network. At that node further information about this sense of the morpheme can be found. For instance, &Circle#1 may represent the geometrical concept of a circle whereas &Circle#2 may represent the sense of "study circle". Lexical-Functional Grammar is a phrase-structure grammar with annotated functional schemata in the style of Kaplan & Bresnan (1982). It deviates in several respects from the current theory and practice of LFG, however. There are no semantic forms and no attribute PRED. Instead of PRED an attribute LEX is used. The value of LEX is a lexeme set. An important difference between LEX and PRED is that LEX is not obligatory. Consequently properties such as coherence and completeness of functional structures are not determined by functional information, but are induced from semantic constraints associated with object type definitions.
structure (s-structure). The c-structure is a phrase-structure tree whereas the other two structures are descriptor structures encoding information in terms of attributes and values. The f-structure encodes grammatical information, in particular information about grammatical relations and morphosyntactic features. The s-structure encodes
Figure 2a: A constituent structure. Figure 2b: A functional structure. Figure 2c: A semantic structure.
To be well-formed the three structures must be in a relation of proper correspondence.
correspondences, or Syn/Sem-correspondences for short. The domain knowledge of the system is encoded in a semantic network with data structures representing object types, object instances and attributes. The object types represent concepts such as "circle", "line" and "instruction" and carry information about supertypes and subtypes, part-whole relationships and "prototypes". A prototype expresses constraints on the values of attributes that are allowed for instances of the type. As said above they also carry linguistic information specific to the type. For instance, the object type for "circle" will contain the information that it is included in the lexeme set !Cirkel. The object type for "instruction" will contain the information that an instruction can be constituted by means of an imperative utterance. Similarly, attributes representing semantic roles contain information about how they are expressed linguistically, whether by lexemes or grammatical relations. An object instance has a unique internal name and a description. An illustration is given in (3).
The discourse domain basically consists of all the objects that exist, i.e. are part of the network at any given stage in the discourse. However, without imposing some kind of stratification on the discourse domain it will not be possible to handle anaphoric or implicit reference. There have been various suggestions how this should be done (e.g. Grosz, 1977; Alshawi, 1987). The first method that will be explored in this system is to introduce an object representing the system's view of "a dialogue state" at any given moment. The description of this object, which will comprise context factors such as speaker, addressee, current topics, current visible objects etc, will then be updated for each new utterance. The processor consists of a chart parser communicating with modules that classify descriptions and determine their referents, if any. The chart parser presently works in a bottom-up mode building c-structure and f-structure in parallel. Thus, the consistency of functional information is checked whenever a task is executed. It has certain deterministic traits, which I will not describe here, but it will always find an analysis if there is one. The role of the classifying component is to determine an appropriate object type for an s-structure constituent. Sometimes a TYPE-descriptor can be determined easily from the lexical information, but there are several complications, such as disambiguation and the handling of headless phrases. A general requirement is that, if a lexeme set has been indicated, the value of TYPE must be an element of that set. Other descriptors of the semantic structure are required to be compatible with the TYPE-descriptor according to its prototype. The task of the referent identification component is to determine referents of the description found in an s-structure constituent. Not all s-structure constituents will refer to an already existing individual, of course. For these there is still a need to determine a mode of application of the description, i.e. the conditions under which a referent will exist.
will be disambiguated when semantic constraints are taken into account. For instance, an active edge spanning the words flytta cirkeln of (5) and looking for a locative adverbial can combine syntactically with an inactive edge spanning the words i hörnet, but the proposed edge will be rejected on semantic grounds, since the location expressed by the latter words won't be of the appropriate type for a movement action.
(4) Rita cirkeln i hörnet. (Draw the circle in the corner.)
(5) Flytta cirkeln i hörnet. (Move the circle in the corner.)
defines the correspondences in terms of translation rules which associate functional structures with semantic structures. The semantic structures have quite a restricted form, however (equivalent to formulas of illocutionary logic), and employ only a limited number of attributes. Halvorsen (1987), on the other hand, states the correspondences already at c-structure level. The correspondences between functional and semantic structures are captured by means of a projection operator, σ. The projection operator takes functional structures as arguments and returns the corresponding semantic structure. A schema associating the subject constituent with the first argument of a verb is written as in (6).
(6) ((σ↑) ARG1) = (σ(↑ SUBJ))
Schemas of this kind are attached both to lexical entries and to rules in the grammar. A schema such as (6) would be attached to every verbal stem in the language that allows this correspondence, i.e. the great majority of verbs. The lexical entry for the verbal stem kick is specified as follows (ibid. p. 9):
(7) KICK V S-ED
((σ↑) REL) = KICK
(↑ PRED) = 'KICK'
((σ↑) ARG1) = (σ(↑ SUBJ))
((σ↑) ARG2) = (σ(↑ OBJ))
There are some disadvantages with this method, however. First, correspondences of the type in (6) are not stated as rules, in particular not as rules about subjects and first arguments, but as specific information about individual words, and, since there are many alternative correspondences, lexical entries tend to be overloaded with information. This is actually a general problem with lexical-functional grammars where entries are fully specified. Second, the role of the functional predicate 'KICK' is unclear. If information about predicate-argument structure is moved from functional structure to semantic structure, as Halvorsen suggests it should, it seems to be of very little significance. In FALIN correspondences of the type (6), although in a slightly different form, are associated directly with the attributes SUBJ and ARG1 as elements of the network. Through inheritance they become available to any relation that accepts ARG1 (or one of its subattributes) as an attribute. Semantic attributes such as ARG1 and ARG2 can be regarded as abstract semantic roles (cf. Wachtel 1987). Roles such as being the agent of an act of drawing or the speaker of an utterance are differentiations of ARG1, whereas the result of a drawing, i.e. the picture, and the message of an utterance are differentiations of ARG2. Although these attributes are not in themselves representing grammatical functions, they allow the formulation of simple rules for the interpretation of grammatical relations. Rules that induce a different mapping between grammatical relations and semantic arguments, such as rules for passive constructions, will also have their results stated on the descriptions of the attributes involved instead of on the descriptions of individual verbs. Individual verbs need only be specified for the kinds of mapping they permit.
Thus, if we include both the active and the passive cases in the same rule, we get something of the form of (8). The arrows have their usual interpretations as metavariables for corresponding structures. To distinguish functional and semantic structures the latter are indexed by a lowered 's' and the former by an 'f'. Schemas without arrows state conditions on the structure in which the attribute itself occurs. By distributing the functional schemas in the semantic network we reduce much of the lexical overloading in ordinary lexical-functional grammars. Every different sense of a morpheme is given its own entry. Moreover, when a stem is part of an idiom or other polymorphemic item, information about this is not only attached to the stem, but also to the relevant node in the network. For instance, the morpheme ta (take) is associated with a LEX-value, !Take, that has a fairly large number of different senses. In this set we would also find the action &Take-away, expressed in Swedish as ta bort. This item is distinguished from all the others in the same set by a special condition on functional structures expressing it, i.e. that it contains the two descriptors in (10) at top level. Here, PRT is an attribute representing
A functional structure may correspond to a content structure in two different modes. I distinguish a constitutive (or illocutionary) mode from a strict (or locutionary) mode. The utterance of an expression constitutes an illocutionary act, i.e. an object instance of a particular illocutionary type. The description of this object is said to correspond to the functional structure of the expression in the constitutive mode. The descriptions of the objects referred to in the utterance, on the other hand, are said to correspond strictly with the f-structures of their referring expressions. Constitutive correspondence will be indicated by double arrows to distinguish it from strict correspondence. Of the linguistic elements that participate in constitutive schemata I will here only consider mood descriptors. A rule for the imperative mood may be formulated as in (11).
DS is a reference to the description of the discourse state. When an s-structure is constructed by means of (11) the current values for the indicated attributes of the discourse state will be retrieved. The fourth schema relates the two different corresponding s-structures to each other, thus integrating the locutionary meaning into the description of the illocutionary act. To be properly corresponding an f-structure and an s-structure must meet certain general requirements. The functional attributes and descriptors can be divided into two classes, semantically relevant and semantically irrelevant.
The latter descriptors play no role in the correspondence relation, whereas every semantically relevant functional descriptor must correspond to a structure of semantic descriptors according to one of the syn/sem-correspondences defined for it. Both f-structures and s-structures must be consistent and determined. Moreover, the s-structure constituents must be typed, compatible with a prototype and specified as to how they apply as descriptions of objects in the discourse domain. Not all information in s-structures has a counterpart in functional descriptors, however. It may instead be retrieved from the discourse state. All this means that there is no requirement on strict isomorphy, whether derivational or structural, between f-structures and s-structures. Still, the use of schemata and the postulation of only two classes of correspondences make the framework both principled and restricted.
Ahrenberg, L. 1987. Parsing into Discourse Object Descriptions. In Proceedings, Third Conference of the ACL European Chapter, Copenhagen 1-3 April, 1987, pp. 140-147.
Alshawi, H. 1987. Memory and context for language interpretation. Cambridge University Press: Cambridge.
Appelo, L., Fellinger, C. and Landsbergen, J. 1987. Subgrammars, Rule Classes and Control in the Rosetta Translation System. In Proceedings, Third Conference of the ACL European Chapter, Copenhagen 1-3 April, 1987, pp. 118-133.
Bobrow, R. J., Webber, B. L. 1980. Knowledge Representation for Syntactic/Semantic Processing. In Proceedings, First Annual National Conference on Artificial Intelligence, Stanford, August 1980, pp. 316-323.
Grosz, B. J. 1977. The Representation and Use of Focus in Dialogue Understanding. (PhD thesis) SRI Technical Note No. 151, SRI International: Menlo Park.
Halvorsen, P-K. 1983. Semantics for Lexical-Functional Grammar. Linguistic Inquiry, 14:4.
Halvorsen, P-K. 1987. Situation Semantics and Semantic Interpretation in Constraint-based Grammars. Technical Report CSLI-TR-87-101, Centre for the Study of Language and Information: Stanford.
Kaplan, R. & Bresnan, J. 1982. Lexical-Functional Grammar: A Formal System for Grammatical Representation. In Bresnan, J. (ed.) 1982: The Mental Representation of Grammatical Relations, The MIT Press: Cambridge, Mass., pp. 173-281.
Karlsson, F. 1986. A paradigm-based morphological analyzer. In Karlsson, F. (ed.) 1986: Papers from the Fifth Scandinavian Conference of Computational Linguistics, Helsinki, December 11-12 1985. University of Helsinki: Helsinki, pp. 95-112.
Lytinen, S. L. 1987. Integrating syntax and semantics. In Nirenburg, S. (ed.) 1987: Machine translation. Cambridge University Press: Cambridge, pp. 302-316.
Danieli, M., Ferrara, F., Gemello, R. and Rullent, C. 1987. Integrating Semantics and Flexible Syntax by Exploiting Isomorphism between Grammatical and Semantical Relations. In Proceedings, Third European Chapter ACL Conference, Copenhagen, April 1-3, 1987, pp. 278-283.
Sondheimer, N. K., Weischedel, R. M. and Bobrow, R. J. 1984. Semantic Interpretation Using KL-ONE. In Proceedings of Coling '84, Stanford University, Cal., 2-6 July 1984, pp. 101-107.
Wachtel, T. 1987. Discourse structure in LOQUI. In Recent Developments and Applications of Natural Language Understanding. UNICOM Seminars Ltd: London.
Acknowledgements
This paper reports work in progress of the project "Analysis and Generation of Natural Language Texts" financed by the National Swedish Board for Technical Development. I am indebted to Nils Dahlbäck, Arne Jönsson, Magnus Merkel and Mats Wirén for valuable discussion and to Ulf Dahlén for much of the programming.
Ahrenberg, L., Dahlbäck, N., Jönsson, A., Merkel, M. och Wirén, M. 1986. Mot ett dialogsystem för svenska. NLPLAB Memo 86-01. Department of Computer and Information Science, Linköping University: Linköping.
|
258,463,952 | The Relationship between L2 Grit and Intrinsic Reading Motivation of Filipino Pre-service Teachers in Central Mindanao | This quantitative study analyzed the levels of and the relationship between the second language (L2) grit and intrinsic reading motivation of pre-service teachers majoring in English language education (N=128) and elementary education (N=108) from two universities in Central Mindanao, Philippines. Using a quantitative correlational research design and a cross-sectional survey method, the randomly selected respondents answered the L2 Grit scale and Intrinsic Reading Motivation Scale which both had good internal consistency. The results from the descriptive statistics showed that both groups had high levels of L2 grit and intrinsic reading motivation. Moreover, these variables had a significant positive correlation based on the Pearson product-moment correlation analyses. This means that when the level of students' grit in learning the second language increases, their motivation to read English texts also increases, and vice-versa. Such also indicates that strengthening the intrinsic reading motivation of the learners will most likely encourage the development of their L2 grit. As a non-cognitive concept, grit assists students in accomplishing the long-term goals they have. Hence, pedagogical implications and recommendations for future study are presented. | [] | The Relationship between L2 Grit and Intrinsic Reading Motivation of Filipino Pre-service Teachers in Central Mindanao
Jannie Faye S. Nacario
King Arman A. Calingasan kingarmancalingasan@gmail.com
Omar K Pendi omar.k.pendi@gmail.com
Bai Sittie P. Maguing
Department of Professional Education
Life Academy International CCF Worship and Training Center
Department of Professional Education
Notre Dame University
Notre Dame Avenue, Rosary Heights 2, Cotabato City, Philippines; Pasig City, Metro Manila, Philippines
Department of Professional Education
Notre Dame University
Notre Dame Avenue, Rosary Heights 2, Cotabato City, Philippines
Notre Dame University
Notre Dame Avenue, Rosary Heights 2, Cotabato City, Philippines
The Relationship between L2 Grit and Intrinsic Reading Motivation of Filipino Pre-service Teachers in Central Mindanao
This quantitative study analyzed the levels of and the relationship between the second language (L2) grit and intrinsic reading motivation of pre-service teachers majoring in English language education (N=128) and elementary education (N=108) from two universities in Central Mindanao, Philippines. Using a quantitative correlational research design and a cross-sectional survey method, the randomly selected respondents answered the L2 Grit scale and Intrinsic Reading Motivation Scale which both had good internal consistency. The results from the descriptive statistics showed that both groups had high levels of L2 grit and intrinsic reading motivation. Moreover, these variables had a significant positive correlation based on the Pearson product-moment correlation analyses. This means that when the level of students' grit in learning the second language increases, their motivation to read English texts also increases, and vice-versa. Such also indicates that strengthening the intrinsic reading motivation of the learners will most likely encourage the development of their L2 grit. As a non-cognitive concept, grit assists students in accomplishing the long-term goals they have. Hence, pedagogical implications and recommendations for future study are presented.
Introduction
One of the learners' personalities that are extensively studied by educational psychologists is grit. Duckworth et al. (2007) introduced grit to delineate a person's passion and tenacity to pursue a long-term goal amid challenges and adversities. It is a combination of enthusiasm and persistence in the pursuit of a goal that takes a long process before its fulfillment. Duckworth and Gross (2014) argued that grit is one of the valuable determinants of success. Educators believe that studies focusing on grit direct creative processes toward the production of successful students (Keegan, 2017). In the context of the second language (L2) learning and teaching, the second languagedomain-specific grit (henceforth L2 grit) has been recently considered an important personality trait of successful L2 learners because its role is relatively new in this realm (Teimouri et al., 2020).
Concerning language learning success, intrinsic motivation may be regarded as correlative to L2 grit. As defined by Ryan and Deci (2000) based on the self-determination theory, intrinsic motivation is an inner zeal to do an activity that elicits internal gratification. This means that intrinsically motivated learners possess an interest and desire to learn something that gives them personal satisfaction, and they are likely to be successful in the academic endeavor (Teimouri et al., 2020). Moreover, Anjomshoa and Sadighi (2015) asserted that successful English language learning requires a learner to be intrinsically motivated. Learners who have high intrinsic motivation seem to have similar characteristics to gritty students. Both manifest an attitude of resilience, perseverance, and sustained interest in learning without expecting external rewards.
To measure students' grit in learning a second language, Teimouri et al. (2020) constructed and validated a second languagedomain-specific grit scale. They also examined the relationship between L2 grit and language achievement in a sample of 191 Persian students who studied an English Translation course. They found that gritty students are more passionate about L2 learning and cognitively engaged in class discussion than less gritty students. Moreover, their analyses indicated a positive relationship between L2 grit and language achievement. Similarly, Alamer (2021) validated his newly developed L2 grit scale which he tested among 213 Saudi students who were studying English as an L2. After analyzing the psychometric properties of his selfconstructed survey, Alamer (2021) affirmed that the "L2-grit scale is reliable, valid, and suitable for use in L2 research" (p. 1). Freiermuth et al. (2021), on the other hand, interviewed eight gritty English language students from Japan, Malaysia, Taiwan, and Thailand to detail the characteristics of a gritty L2 learner. They discovered that L2 learners who are gritty have the endurance it takes to learn the English language and delight in the learning process. They are never bored by it and are confident in their ability to communicate in it even if they are not yet fluent.
Because L2 grit has been lately explored in L2 learning and teaching, only a few studies on this research topic have provided empirical data. To date, most L2 grit studies have focused on the development and validation of the L2 grit scale (e.g., Alamer, 2021;Teimouri et al., 2020). Consequently, the present study adopts and modifies the available L2 grit scale from the previous research to explore the relationship of this personality trait with a different yet related construct in L2 learning, i.e., intrinsic reading motivation. Another research gap was found in the study of Freiermuth et al. (2021). Although they identified enjoyment or intrinsic motivation as one of the factors that influence L2 grit, they failed to specify what type of intrinsic motivation was referred to by the participants. In addition, Freiermuth et al. (2021) determined motivation as a factor through qualitative analysis only. This psychological construct is chosen because to the best of our knowledge, no efforts have been made to examine the relationship between L2 grit and intrinsic reading motivation. Furthermore, the current research addresses the issue of homogeneity of the sample as one of the limitations pointed out by Alamer (2021). According to him, future researchers must consider using samples from different language learning environments. Therefore, this study is conducted in the Philippines, a multicultural and multilingual learning context where English is considered a second de jure official language. Unlike past studies, this research analyzes the constructs mentioned above using the survey responses from two different groups of language learners (i.e., students majoring in English and students studying elementary education). Diversity of the language learning contexts and language learners may provide an in-depth understanding of grit as an L2 learning construct and the utilization of the L2 grit scale (Alamer, 2021).
Theoretical Framework
In language learning and academic success, personality and motivation are two of the most important determinants (Kelsen & Liang, 2019). Termed a trait, personality was defined by Roccas et al. (2002, p. 790) as "what people are like", and it may be positive or negative. Specifically, it deals with a collection of underlying traits that determine the actions, thoughts, and feelings of an individual (Medford & McGeown, 2012). Recently, Brandt et al. (2021) have broadly described one's personality as how one behaves towards something. Concerning the present study, grit and motivation may be commonly mistaken as constructs with an entirely similar identity. Although previous studies have shown a relationship between the said variables, their distinct roles in the completion of goals must be emphasized. First, grit, according to Duckworth et al. (2007), has been considered a personality trait that entails the capacity to persevere and maintain dedication toward a long-term objective. Second, they suggested the possibility of grit assuming a narrow component of the five-factor model of personality.
The Big Five framework proposes five primary factors of personality that account for the individual differences among people (Medford & McGeown, 2012). In the academic setting, it influences not only the learners' accomplishments but also their language learning as the same framework draws the individual disparity by determining one's attitude, cognition, motivation, temperament, and learning styles (Kelsen & Liang, 2019). The five dimensions of personality include (1) agreeableness, (2) extraversion, (3) neuroticism, (4) openness to experiences, and (5) conscientiousness. Through various contexts, earlier researchers have utilized the same model to predict individual differences (Roccas et al., 2002). Moreover, among the identified personality factors, conscientiousness is most closely associated with grit. Credé et al. (2017) found substantial evidence for the link between grit and conscientiousness in their study. This corroborated the argument of Roberts et al. (2012) that despite being separately developed constructs, the two variables have clear relationships.
According to Roberts et al. (2009), individual differences in terms of one's inclination to be industrious, obedient, organized, responsible, and self-controlled are best described in a spectrum of constructs called conscientiousness. They also postulated that in its hierarchical structure, the upper level can be separated into two aspects: proactive and inhibitive. On the lower level of proactive conscientiousness, however, resides industriousness. Roberts et al. (2012) described industrious people as those who work hard, strive for excellence, and persevere despite obstacles. This description is similar to that of grit, particularly its subcomponent, that is, perseverance of effort. In the study by MacCann et al. (2009), perseverance of effort was found to have a positive relationship with the industrious facet of conscientiousness. In simpler words, grit is among the essential features of human personality and has significant behavioral implications (Costa & McCrae, 1992, as cited in Komarraju & Karau, 2005).
As mentioned earlier, motivation plays a crucial role in learning a target language. Brandt et al. (2021) described motivation as the cause for people's actions towards something. Medford and McGeown (2012) asserted that there exist various theories of motivation, but intrinsic and extrinsic are most often employed in reading studies, which is also the focus of this endeavor. This explains that an intrinsically motivated reader finds such activity to be fundamentally engaging or delightful. On the other hand, an extrinsically motivated reader takes part in a reading activity because of separable factors, for example, receiving a reward or avoiding a penalty. Furthermore, Schiefele et al. (2012) found that high domainspecific intrinsic motivation is equivalent to high success expectations and subsequent desire to demonstrate effort. In addition, intrinsically motivated students are depicted as those who maintain interest as they pursue a personal objective (Ryan & Deci, 2000). This corresponds to a grit subcomponent that is concerned with the consistency of interest (CI).
The relationship between personality and motivation has been evidenced by several studies. In a study by Komarraju and Karau (2005), they argued that high openness to experiences is tantamount to greater academic motivation. Moreover, a similar study yielded results showing conscientiousness and openness to experiences accounting for 17% of the variance in the learners' intrinsic academic motivation (Komarraju et al., 2009). According to other research, an individual's personality may be associated with distinct sub-facets of motivation in various ways. Clark and Schroth (2010) reported that intrinsically motivated students, in terms of acquiring knowledge and completing tasks, were both conscientious and agreeable. On the one hand, learners who were intrinsically motivated toward simulation experiences were likely under the personality factor of openness to experience. Although these past studies have found a link between personality and motivation, their different roles in terms of accomplishing goals must be considered. Motivation explains why one behaves, whereas grit tells how one behaves (Brandt et al., 2021). This, then, resonates with the assumption of the present study that when students are intrinsically motivated in reading, they also have a high level of L2 grit and vice versa.
Statement of the Problem
In this study, the respondents are preservice teachers who specialize in English language education and elementary education. Part of the objectives of the study is to determine levels of their L2 grit and intrinsic reading motivation. Most importantly, this present research aims to examine the relationship between intrinsic reading motivation and L2 Grit of the pre-service teachers from two universities in Central Mindanao, Philippines. It specifically attempts to answer the following research questions:
• What is the level of L2 grit and intrinsic reading motivation of students majoring in English and students studying elementary education?
• Is there a significant relationship between L2 grit and intrinsic reading motivation of students majoring in English and students studying elementary education?
2 Methodology
Research Design
The present study employed a quantitative survey research method to derive quantitative descriptions in measuring the relationships between variables of the selected sample population (Creswell & Creswell, 2018). Further, the relationship between L2 grit and intrinsic reading motivation was assessed using survey tools, resulting in numerical data that can be evaluated using statistical processes. In line with this, the study followed the cross-sectional survey to draw information on particular phenomena at one period of time (Kelley et al., 2003).
Research Setting
The current research endeavor was conducted at two universities in Cotabato City, Philippines. Both academic institutions offer education courses such as Bachelor of Elementary Education (BEEd) and Bachelor of Secondary Education (BSEd) -major in English which are essential for this study. It is appropriate to conduct this study in this setting because the target respondents were available in both schools.
Research Respondents
The respondents for the present study were 236 Filipino college students (N = 191 female; N = 38 male; N = 7 preferred not to reveal their biological sex) from two higher educational institutions in Central Mindanao: a private and a state university. Out of 384 preservice teachers majoring in English and 149 in elementary education learning English as a second language (L2), 128 (33%) and 108 (72%) from each group responded respectively. In this study, participating students who were specializing in English were heavily exposed to the said language because most of their course works were related to English pedagogy, literature, and linguistics. Meanwhile, participating students who were studying elementary education took English as a minor subject; as a result, they had lesser exposure to English as compared to the former group of respondents. English as a Second Language (ESL) is a term that is used to refer to specialized methods of teaching the English language to individuals whose first language is not English.
The respondents' age ranged from 18 to 26 years old with 19 years old being the modal age, and they were mostly freshmen (38.6%). Moreover, the study followed a multistage clustering by beginning with the identification of clusters or groups, followed by the collection of names of individuals belonging to those clusters, and finally extraction of samples from them (Creswell & Creswell, 2017). Ultimately, the study drew from a random sampling technique to provide everyone an equal chance of being selected (Creswell & Creswell, 2017) and prevent biases.
Research Instruments
L2 Grit Scale.
In measuring the L2 grit of college students, this study adopted the validated survey questionnaire of Teimouri et al. (2020). It consists of two components: perseverance of effort (PE) and consistency of interest (CI) in learning a language. The consistency of interest calculates the interest in studying L2, while perseverance of effort assesses the learners' persistence in achieving their goals in L2 learning. This five-point Likert scale from 1 to 5 (not at all like me to very much like me) was acceptable in the present study as it had an internal consistency of 0.794.
Intrinsic Reading Motivation Scale. The present study modified the Reading Motivation Survey created by Guthrie et al. (2009) to measure students' intrinsic reading motivation. There were originally four variables, i.e., intrinsic motivation, avoidance, self-efficacy, and perceived difficulty. However, only seven items under intrinsic reading motivation in their research with the same Likert response format (from 1= never to 4= always) were adopted. After pre-testing this instrument, the intrinsic reading motivation scale was also found acceptable considering its 0.724 internal consistency.
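As a rough sketch of how internal-consistency figures like the 0.794 and 0.724 reported above can be checked, the snippet below computes Cronbach's alpha from an item-response matrix. The data shown are hypothetical and this is not the study's analysis script.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents x n_items) matrix of Likert responses
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: five respondents answering a four-item 5-point scale.
responses = np.array([[4, 5, 4, 4],
                      [3, 3, 2, 3],
                      [5, 5, 5, 4],
                      [2, 3, 2, 2],
                      [4, 4, 3, 4]])
print(round(cronbach_alpha(responses), 3))
```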
Data Collection and Analysis Procedures
Considering the ethical concerns and requirements for conducting research with our target participants, we submitted official letters of request to the College of Education deans of one private and one state university in Central Mindanao. An informed consent letter was included in the correspondence, which detailed the aim, procedures for participation in the study, risks, and benefits, as well as the consent form for the participants. A full printed copy of our survey questionnaire was also included in our submission. After the approval, they sent us the official lists of students enrolled in English and Elementary Education courses.
We then conducted a pre-testing of the instruments twice among 23 third-year English major students from one of the targeted schools. The first attempt yielded unreliable results, specifically for the avoidance scale under reading motivation. Thus, the same respondents were requested to answer the same questionnaire again and were instructed to accomplish the survey with careful consideration and truthfulness. However, the same problem occurred on the second administration, which led to the decision to remove the avoidance scale.
After the pre-testing of the instruments, we sent the web-based survey to the respondents via email. In analyzing the relationship between L2 grit and intrinsic reading motivation, the Pearson product-moment correlation was computed using SPSS. Before that, however, reverse coding was applied so that the scores could be analyzed accurately. The objective of reverse scoring is to recode responses so that a high score corresponds to a low score on the scale. On a 5-point scale, for instance, a four becomes a two, and vice versa.
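A minimal sketch of the reverse coding and the Pearson product-moment correlation described above; the scores are invented for illustration and do not come from the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

def reverse_code(responses, scale_max=5, scale_min=1):
    """On a 5-point scale, 4 becomes 2, 5 becomes 1, and so on."""
    return scale_max + scale_min - np.asarray(responses)

# Made-up composite scores for illustration only.
l2_grit = np.array([3.4, 4.1, 2.8, 3.9, 3.1, 4.4, 2.5, 3.7])
intrinsic_reading_motivation = np.array([3.0, 3.6, 2.4, 3.2, 2.9, 3.8, 2.2, 3.1])

print(reverse_code([1, 2, 4, 5]))  # [5 4 2 1]
r, p = pearsonr(l2_grit, intrinsic_reading_motivation)
print(f"r = {r:.3f}, p = {p:.3f}")
```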
Results and Discussion
Table 1 summarizes the findings from the descriptive analysis of ESL students' and English majors' L2 grit and their intrinsic reading motivation. It reveals that both groups have high levels of intrinsic reading motivation and L2 grit, which implies that whether learners are specializing in English or simply studying English, they are most likely to be gritty and intrinsically motivated to read English texts.
Levels of L2 Grit and Intrinsic Reading Motivation
Results concerning L2 grit confirm that students majoring in English have remarkable grit in L2 learning (M = 3.58, SD = 1.027). Considering that they receive substantial inputs in the English language as they learn its fundamentals, such a result is barely surprising. It is expected that English majors will most likely be gritty in learning the English language because it is their specialization. They have been exposed to academic coursework related to English pedagogy, linguistics, and literature since their first year at the university. The same expectation was confirmed in the study of Teimouri et al. (2020) in which English-major students, whose future line of work was heavily dependent on their communicative skills in the English language, had high levels of L2 grit. Additionally, it is important to note that these English majors were under the teacher education program that prepares students for an English language teaching career. Hence, if these students aim to obtain an English teaching job after graduation, they will surely set a long-term goal and persevere to achieve it. As Duckworth et al. (2007) explained, pursuing English programs is similar to joining a marathon that requires mastery of the target language despite the frustrations its journey necessarily entails. Thus, in order to develop the necessary skills related to English teaching or learning, one must possess grit. For instance, the pre-service teachers from the study of Zawodniak et al. (2021) confessed awareness of their weaknesses in English language learning, but they were firm in addressing and possibly eliminating their identified weaknesses.
Given that literature on L2 grit is still scarce, the presented assumption can also be drawn from the Commission on Higher Education (CHED) Memorandum Order No. 75, s. 2017 under Article IV, Section 6.3.1 which requires pre-service English teachers to use English, when teaching language and literature, as a glocal language in a multilingual context. This implies that they are provided multiple opportunities to immerse themselves in practicing the target language, whereas these opportunities may encompass a variety of language learning experiences, depending on where the task at hand sits on the spectrum of its difficulty. Hence, as grittier learners of the English language, English majors are expected to keep thriving until they display their desired level of proficiency in the target language.
Similarly, learners majoring in elementary education have high levels of L2 grit (M = 3.39, SD = 1.061). This connotes that the majority of them may also be as passionate and persevering in learning the second language as the English majors are. A possible reason for this is the goal they have in learning it. Duckworth et al. (2007) emphasized that having not only passion and perseverance but also a long-term goal is the quality of a gritty individual. To achieve that goal, a person must demonstrate a strong desire and resilience despite setbacks or difficulties in the learning process. It can be inferred that these pre-service elementary teachers who have taken English subjects since grade school may have long-term objectives of learning the second language. Although they do not specialize in English like the English majors, these learners showed grit in L2 learning.
The results of this study can be supported by the English language attitudes of Filipino adults across professions as surveyed by Mahboob and Cruz (2013). They found that the majority of their respondents, across ESL communities, preferred English to be taught as a subject in school and be used as a language of instruction, which led them to claim that "English is the language that is perceived to be worthy of investment" (Mahboob & Cruz, 2013, p. 10). Most Filipinos devote their time to learning this language because English still holds a hegemonic position in the Philippines (Mahboob & Cruz, 2013). English continues to be the language of various societal domains in the country, such as education and business, and is a key to local and international job opportunities. Additionally, this positive attitude towards English can have an impact on one's behavior. For instance, Hein et al. (2020) claimed that a positive attitude serves as a stimulus for one to take action and manifest perseverance, while others withdraw in the face of change and setbacks. They also argued that attitudes toward lifelong learning, as well as general learning strategies, were found to predict one component of grit, the persistence of effort (PE). Thus, when students invest in this language, it may mean that they have the goal to acquire it and develop the necessary linguistic skills no matter how tedious the process is. As a result, the pre-service English and elementary teachers in this study may have developed high L2 grit through the years of studying to improve the English language skills that will help them achieve their career goals.
When it comes to reading, intrinsically motivated learners experience genuine pleasure and maintain interest while doing the said activity. They are further described as those who spend their time and effort, especially when developing a thorough knowledge of the texts they read, while also employing appropriate reading strategies (Hebbecker et al., 2019). In this study, the data above indicate that both English majors (M = 3.20, SD = 0.774) and ESL learners (M = 3.23, SD = 0.792) are highly intrinsically motivated readers of English texts. This corroborates the earlier assumption that English majors would show intrinsic reading motivation commensurate with their L2 grit, because their future careers rely heavily on English as a second language.
As mentioned earlier in this paper, interest can also complement one's reading motivation (Alhamdu, 2015). Such a claim suggests that learners' motivation to read increases when the text piques their interest; in other words, they are most likely to have higher reading motivation when both are taken into consideration. Based on the definition of intrinsic reading motivation provided by Ryan and Deci (2000), it can be surmised that the respondents of this study have the inner zeal to read English materials that give them personal satisfaction.

Relationship between L2 Grit and Intrinsic Reading Motivation

Table 2 presents the correlation data between the L2 grit and intrinsic reading motivation of ESL students, with the Pearson correlation coefficient as the statistical treatment. The analysis reveals that the L2 grit and intrinsic reading motivation of this group have a significant, low positive relationship (r = .357, p < .001). This suggests that when ESL students have a high level of grit in L2 learning, they will most likely have a high level of intrinsic reading motivation as well. Similarly, Table 3 shows the correlation data for the same variables among students specializing in English. The same analysis found a significant, low positive relationship (r = .371, p < .001) between L2 grit and intrinsic reading motivation, indicating that English majors who tend to have high levels of L2 grit may also become highly intrinsically motivated readers.

At the beginning of a long-term journey to learn a second language, students must reflect on their interest in and intended effort toward doing so (Cavilla, 2017). The results presented above are consistent with the research conducted by Changlek and Palanukulwong (2015). In their study, one major statistical finding revealed a significant and positive correlation between intrinsic and extrinsic motivation and grit among high achievers learning English as a foreign language. Between intrinsic motivation and perseverance of effort in particular, the same study found a significant and positive but weak relationship. This positive relationship implies that gritty people are more focused on their goals, which is demonstrated when they get obsessed, as Lehrer (2011) would describe it, with particular activities such as reading. We can expect, therefore, that intrinsically motivated language learners have the strength to endure when confronted with a difficult task, i.e., reading extremely challenging texts. Gritty students can overcome their fear of failure as they welcome challenges as part of the learning process. They recognize that mastering the target language requires a lot of reading, spanning from simple to complex materials, along with other activities that develop their skills. This reinforces Keegan's (2017) argument that one's personality and motivation are important determinants of language acquisition and educational accomplishments. Existing studies have also found that grit is associated with a variety of beneficial outcomes, including academic motivation (Eskreis-Winkler et al., 2014), persistence in accomplishing difficult tasks (Lucas & Nordgren, 2015), and even outstanding performances such as nationwide spelling competitions (Duckworth et al., 2010).
However, the low correlation between the variables examined in the present study can possibly be explained by the context in which L2 grit is measured. This indicates that despite both variables heading in the same direction, they do not necessarily have a strong linear correlation. It should be emphasized that most of the studies conducted on grit come from Western culture, i.e., the United States, a country whose society is thought to be individualistic (Hofstede, 2001). In order to investigate the potential variances of grit, cultural theories must be considered. The self-construal theory of Markus and Kitayama (1991) has been used extensively to explain this phenomenon. This theory describes Western individualistic societies as composed of people who view themselves as having an autonomous identity, free of their social context (independent self-construal), and capable of pursuing their own goals (Markus & Kitayama, 1991). By contrast, collective societies from the East, and other cultures, see themselves as inherently interdependent components of society (interdependent self-construal); in other words, they share a fundamental connection with one another. Past studies among Asian communities have revealed that learners usually invest a lot of their time in studying to broaden their academic achievements and maintain 'face.' This approach to academic success is greatly encouraged in the East (King, 2015) but not among Western students, who regard it as a maladaptive approach and do not recommend it as an effective way of achieving learning success (Elliot & Murayama, 2008). Datu et al. (2016) further added that studies focusing on other cultures, such as Asian contexts, remain marginal, considering how little research has been done. They also pointed out that the situation within such communities may be different, considering their collective values, social conventions, and traditions. Such major differences necessitate investigating the applicability of the concept of grit in a collectivist society. Hence, there is a high possibility that the Western individualistic concept of grit might not be appropriate for collective societies. This, then, calls for a modified model of grit that is more culturally applicable in a collectivist society like the Philippines.
Furthermore, grit by definition entails long periods of time. Given that the present study used a cross-sectional survey, this could also have been a factor in the results. Thus, there is a great possibility that measuring grit at a single point in time, by merely answering a questionnaire, and within a collective society may not be entirely appropriate.
Conclusion
Over the past few years, the growing interest in grit has shown no sign of slowing down. Various studies have already highlighted its importance across different domains, including the realm of language learning. Previously conducted research has shown that grit, as a non-cognitive construct, can help learners succeed in achieving their identified long-term goals (Duckworth et al., 2007; Duckworth & Gross, 2014). In fact, motivation has been among the various factors associated with grit, particularly in language learning (Changlek & Palanukulwong, 2015; Chen et al., 2020; Feng & Papi, 2020). In line with this, the present study found that learners of English as an L2 and student teachers specializing in the said language are highly gritty individuals who are also intrinsically motivated learners. Contrary to the presumptions made at the outset of this study, the results demonstrated that differences in their focus of study do not necessarily determine their level of grit in L2 learning. However, as Datu et al. (2016) suggest, L2 grit scales need to be culturally sensitive, too. Nevertheless, this does not negate the link found between the variables; it can, instead, mean that stimulating students' intrinsic reading motivation can also improve their L2 grit. Additionally, the present study relied solely on quantitative methods; different research designs, such as a qualitative or mixed-methods design, might have yielded different results.
Recommendations
The relationship between grit and motivation has been established in previous studies.
However, linguistic research particularly focusing on language-specific grit remains scant. A mixed-methods study offering a qualitative perspective on the subject and on the participants' complex viewpoints may be conducted as a follow-up. Moreover, several studies across different fields have stressed the importance of grit, as it is consistently associated with success among those who possess it. In the landscape of academics, there are empirical data that prove its crucial role in positive academic outcomes (Datu et al., 2016).
The importance of developing L2 grit is highlighted in this study, as it positively correlates with intrinsic reading motivation. Thus, the present research recommends identifying and testing teaching strategies that specifically increase students' L2 grit (Alamer, 2021). Conducting grit intervention studies would help teachers plan teaching strategies or practices that promote L2 grit, as well as intrinsic reading motivation, among learners. This will further shed light on how teachers can provide the help students need in class.
While the literature provides a few strategies that foster L2 grit in students, Duckworth (2013), in her appearance at the TED Conference, suggested that fostering a growth mindset is a good way to build grit. This means that teachers can use teaching strategies that promote a growth mindset in learning, as doing so also develops students' grit (Zhao et al., 2018). For example, English language teachers should praise the reading effort of students who positively view effort-ability relationships (Calingasan & Plata, 2022), provide process-focused criticism (Dweck, 2008), help students set and achieve a learning goal instead of a performance goal (Dweck & Yeager, 2019), and give them challenging learning tasks (Grant & Dweck, 2003). Duckworth (2013) believed that fostering a growth mindset among individuals is one of the ways to build grit; in her studies, gritty individuals were often more successful than those with higher IQs. According to the studies by Schwinger et al. (2009) and Wolters (1998), college students have a set of motivational regulation strategies associated with increased effort, academic performance, and persistence. From this, we can infer that a growth mindset would greatly help boost the motivational regulation strategies of college students in pursuing their long-term goals, while also increasing their grit.
Moreover, in order to contribute to the growing body of literature on L2 grit, it is first suggested that an L2 grit scale acknowledging the significant differences among collective societies be developed. Grit may be a trait that anyone can have, but not everyone views it in the same way; cultural factors may play a crucial part in how one evaluates grit. Hence, for the purpose of gathering data that are as accurate as possible, a culturally sensitive L2 grit scale must be proposed, examined, and validated for further use.
Lastly, future language researchers may opt to conduct studies that involve ESL in-service teachers. Teimouri et al. (2020) asserted that investigating language teachers' grit and their motivation in teaching is of equal importance. Given that these teachers have first-hand experience in facilitating an ESL classroom, it would be interesting to see just how gritty they are. This trait can be reflected in their teaching practices or in their professional development in general. Because teachers constantly deal with the element of spontaneity common to the teaching profession, it is also of interest to know how they confront difficult situations that arise within the language classroom. Hence, it would be much better to also conduct interviews that would allow a more in-depth understanding of the variable being examined.
Table 1. L2 Grit and Intrinsic Reading Motivation of Filipino Pre-service Teachers

Pre-service English Teachers
  L2 Grit: Mean = 3.58, SD = 1.027, Interpretation = High
  Intrinsic Reading Motivation (IRM): Mean = 3.20, SD = 0.774, Interpretation = High
Pre-service Elementary Teachers
  L2 Grit: Mean = 3.39, SD = 1.061, Interpretation = High
  Intrinsic Reading Motivation (IRM): Mean = 3.23, SD = 0.792, Interpretation = High

Note: 1.00-2.49 = Low IRM; 2.50-4.00 = High IRM; 1.00-2.99 = Low L2 Grit; 3.00-5.00 = High L2 Grit
Table 2. Relationship Between L2 Grit and Intrinsic Reading Motivation of Pre-service Elementary Teachers

Variables: L2 Grit x Intrinsic Reading Motivation
  r = 0.357, p-value = 0.000

Note: Significant at the .01 level (2-tailed); N = 108
Table 3. Relationship Between L2 Grit and Intrinsic Reading Motivation of Pre-service English Teachers

Variables: L2 Grit x Intrinsic Reading Motivation
  r = 0.371, p-value = 0.000

Note: Significant at the .01 level (2-tailed); N = 128
References

Abdullah Alamer. 2021. Grit and language learning: construct validation of L2-Grit scale and its relation to later vocabulary knowledge. Educational Psychology, 41(5), 544-562. https://doi.org/10.1080/01443410.2020.1867076
Ahmar Mahboob and Priscilla Cruz. 2013. English and mother-tongue-based multilingual education: Language attitudes in the Philippines. Asian Journal of English Language Studies, 1, 2-19.
Alhamdu Alhamdu. 2015. Interest and reading motivation. Psikis: Jurnal Psikologi Islami, 1(1), 1-10. https://doi.org/10.19109/psikis.v1i1.552
Andrew J. Elliot and Kou Murayama. 2008. On the measurement of achievement goals: Critique, illustration, and application. Journal of Educational Psychology, 100, 613-628.
Angela Lee Duckworth. 2013, May. Grit: The power of passion and perseverance [Video]. TED Conferences. https://www.ted.com/talks/angela_lee_duckworth_grit_the_power_of_passion_and_perseveranc
Angela Lee Duckworth, Christopher Peterson, Michael D. Matthews, and Dennis R. Kelly. 2007. Grit: perseverance and passion for long-term goals. Journal of Personality and Social Psychology, 92(6), 1087.
Angela Lee Duckworth and James J. Gross. 2014. Self-control and grit. Current Directions in Psychological Science, 23(5), 319-325. https://doi.org/10.1177/0963721414541462
Angela Lee Duckworth, Teri A. Kirby, Eli Tsukayama, Heather Berstein, and K. Anders Ericsson. 2010. Deliberate practice spells success: Why grittier competitors triumph at the National Spelling Bee. Social Psychological and Personality Science, 2(2), 174-181. https://doi.org/10.1177/1948550610385872
Ansari Changlek and Thanyapa Palanukulwong. 2015. Motivation and grit: Predictors of language learning achievement. Veridian E-Journal, Silpakorn University (Humanities, Social Sciences and Arts), 8(4), 23-36.
Brent A. Kelsen and Hsin-Yi Liang. 2019. Role of the Big Five personality traits and motivation in predicting performance in collaborative presentations. Psychological Reports, 122(5), 1907-1924.
Brent W. Roberts, Joshua J. Jackson, Jennifer V. Fayard, Grant Edmonds, and Jenna Meints. 2009. Conscientiousness.
Brent W. Roberts, M. Brent Donnellan, and Patrick L. Hill. 2012. Personality trait development in adulthood: Findings and implications. In H. Tennen & J. Suls (Eds.), Handbook of Psychology (pp. 183-196). New York: Wiley Publishing. https://doi.org/10.1002/9781118133880.hop205009
Brian J. Lucas and Loran F. Nordgren. 2015. People underestimate the value of persistence for creative performance. Journal of Personality and Social Psychology, 109(2), 232.
Carol S. Dweck. 2008. Mindsets: How praise is harming youth and what can be done about it. School Library Media Activities, 24, 55-58.
Carol S. Dweck and David S. Yeager. 2019. Mindsets: A view from two eras. Perspectives on Psychological Science, 14(3), 1-16. https://doi.org/10.1177/1745691618804166
Richard D. Roberts. 2009. Empirical identification of the major facets of conscientiousness. Learning and Individual Differences, 19(4), 451-458.
Christopher A. Wolters. 1998. Self-regulated learning and college students' regulation of motivation. Journal of Educational Psychology, 90(2), 224.
Commission on Higher Education. 2017. CMO No. 75 s. 2017. https://ched.gov.ph/wp-content/uploads/2017/11/CMO-No.-75-s.-2017.pdf
Derek Cavilla. 2017. The effects of student reflection on academic performance and motivation. Sage Open, 7(3), 2158244017733790.
Emma Medford and Sarah P. McGeown. 2012. The influence of personality characteristics on children's intrinsic reading motivation. Learning and Individual Differences, 22(6), 786-791.
Geert Hofstede. 2001. Culture's consequences: Comparing values, behaviors, institutions and organizations across nations. Sage Publications.
Hazel R. Markus and Shinobu Kitayama. 1991. Culture and the self: Implications for cognition, emotion, and motivation. Psychological Review, 98(2), 224.
Heidi Grant and Carol S. Dweck. 2003. Clarifying achievement goals and their impact. Journal of Personality and Social Psychology, 85(3), 541-553. https://doi.org/10.1037/0022-3514.85.3.541
Jesus Alfonso Daep Datu, Jana Patricia Millonado Valdez, and Ronnel Bornasal King. 2016. The successful life of gritty students: Grit leads to optimal educational and well-being outcomes in a collectivist context. In The Psychology of Asian Learners (pp. 503-516). Springer, Singapore.
Joanna Zawodniak, Miroslaw Pawlak, and Mariusz Kruk. 2021. The role of grit among Polish EFL majors: A comparative study of 1st-, 2nd-, and 3rd-year university students. Journal for the Psychology of Language Learning, 3(2), 118-132.
John T. Guthrie, Cassandra S. Coddington, and Allan Wigfield. 2009. Profiles of reading motivation among African American and Caucasian students. Journal of Literacy Research, 41(3), 317-353.
John W. Creswell and John David Creswell. 2017. Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
John W. Creswell and John David Creswell. 2018. Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.
Jonah Lehrer. 2011. Which traits predict success? The importance of grit. Wired.
Karin Hebbecker, Natalie Förster, and Elmar Souvignier. 2019. Reciprocal effects between reading achievement and intrinsic and extrinsic reading motivation. Scientific Studies of Reading, 23(5), 419-436.
Kate Kelley, Belinda Clark, Vivienne Brown, and John Sitzia. 2003. Good practice in the conduct and reporting of survey research. International Journal for Quality in Health Care, 15(3), 261-266.
Kelly Keegan. 2017. Identifying and building grit in language learners. English Teaching Forum, 55(3), 2-9.
King Arman Calingasan and Sterling Plata. 2022. Effects of effort praise on struggling Filipino ESL readers' mindset and motivation. Indonesian Journal of Applied Linguistics, 11(3), 601-611.
Lauren Eskreis-Winkler, Angela Lee Duckworth, Elizabeth P. Shulman, and Scott A. Beal. 2014. The grit effect: Predicting retention in the military, the workplace, school and marriage. Frontiers in Psychology, 5, 36.
Leila Anjomshoa and Firooz Sadighi. 2015. The importance of motivation in second language acquisition. International Journal on Studies in English Language and Literature (IJSEL), 3(2), 126-137.
Liying Feng and Mostafa Papi. 2020. Persistence in language learning: The role of grit and future self-guides. Learning and Individual Differences, 81, 101904. https://doi.org/10.1016/j.lindif.2020.101904
Malte Schwinger, Ricarda Steinmayr, and Birgit Spinath. 2009. How do motivational regulation strategies affect achievement: Mediated by effort management and moderated by intelligence. Learning and Individual Differences, 19(4), 621-627.
Marcus Credé, Michael C. Tynan, and Peter D. Harms. 2017. Much ado about grit: A meta-analytic synthesis of the grit literature. Journal of Personality and Social Psychology, 113(3), 492.
Mari H. Clark and Christopher A. Schroth. 2010. Examining relationships between academic motivation and personality among college students. Learning and Individual Differences, 20(1), 19-24.
Mark Freiermuth, Chomraj Patanasorn, Latha Ravindran, and Hsin-chou Huang. 2021. Getting to the nitty-gritty of grit: A descriptive characterization of gritty L2 learners from Thailand, Malaysia, Taiwan, and Japan. Journal for the Psychology of Language Learning, 3(2), 133-155. https://doi.org/10.52598/jpll/3/2/9
Meera Komarraju and Steven J. Karau. 2005. The relationship between the big five personality traits and academic motivation. Personality and Individual Differences, 39(3), 557-567.
Meera Komarraju, Steven J. Karau, and Ronald R. Schmeck. 2009. Role of the Big Five personality traits in predicting college students' academic motivation and achievement. Learning and Individual Differences, 19(1), 47-52.
Naemi D. Brandt, Anne Israel, Michael Becker, and Jenny Wagner. 2021. The joint power of personality and motivation dynamics for occupational success: Bridging two largely separated fields. European Journal of Personality, 35(4), 480-509.
Richard M. Ryan and Edward L. Deci. 2000. Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25(1), 54-67. https://doi.org/10.1006/ceps.1999.1020
Ronnel B. King. 2015. Examining the dimensional structure and nomological network of achievement goals in the Philippines. Journal of Adolescence, 44, 214-218.
Sonia Roccas, Lilac Sagiv, Shalom H. Schwartz, and Ariel Knafo. 2002. The big five personality factors and personal values. Personality and Social Psychology Bulletin, 28(6), 789-801.
Ulrich Schiefele, Ellen Schaffner, Jens Möller, and Allan Wigfield. 2012. Dimensions of reading motivation and their relation to reading behavior and competence. Reading Research Quarterly, 47(4), 427-463.
Vello Hein, Andre Koka, Hanna Kalajas-Tilga, Henri Tilga, and Lennart Raudsepp. 2020. The effect of grit on leisure time physical activity: An application of the theory of planned behaviour. Balt J Health Phys Act, 12(1), 78-85. https://doi.org/10.29359/BJHPA.12.1.08
Xinjie Chen, Julie Lake, and Amado M. Padilla. 2020. Grit and motivation for learning English among Japanese university students. System, 96, 102411.
Yasser Teimouri, Luke Plonsky, and Farhad Tabandeh. 2020. L2 grit: Passion and perseverance for second-language learning. Language Teaching Research. https://doi.org/10.1177/1362168820921895
Yukun Zhao, Gengfeng Niu, Hanchao Hou, Guang Zeng, Liying Xu, Kaiping Peng, and Feng Yu. 2018. From growth mindset to grit in Chinese schools: The mediating roles of learning motivations. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2018.02007 |
12,348,021 | An Answer Bank for Temporal Inference | Answering questions that ask about temporal information involves several forms of inference. In order to develop question answering capabilities that benefit from temporal inference, we believe that a large corpus of questions and answers that are discovered based on temporal information should be available. This paper describes our methodology for creating AnswerTime-Bank, a large corpus of questions and answers on which Question Answering systems can operate using complex temporal inference. | [
7425249,
137155
] | An Answer Bank for Temporal Inference
Sanda Harabagiu
Human Language Technology Research Institute
The University of Texas at Dallas, Richardson, TX 75083-0688, USA
Cosmin Adrian Bejan
Human Language Technology Research Institute
The University of Texas at Dallas, Richardson, TX 75083-0688, USA
An Answer Bank for Temporal Inference
Answering questions that ask about temporal information involves several forms of inference. In order to develop question answering capabilities that benefit from temporal inference, we believe that a large corpus of questions and answers that are discovered based on temporal information should be available. This paper describes our methodology for creating AnswerTime-Bank, a large corpus of questions and answers on which Question Answering systems can operate using complex temporal inference.
Introduction
TimeML (Hobbs and Pustejovsky, 2003) is a corpus annotated with: (a) time expressions; (b) events; and (c) links between them. These annotations enable several forms of temporal inference (Boguraev and Ando, 2005), (Moldovan et al., 2005), (Harabagiu and Bejan, 2005). However, additional forms of temporal inference are involved when answering questions. For example, in TimeML, the passage illustrated in Figure 1 has annotations that relate (a) the temporal expression "May 22, 1995" to the verb phrase "made a brigadier general" and (b) the temporal expression "the following year" to the verb phrase "appointed military attache". This passage answers the question "Q1: How long did it take Frakas to become military attache at the Hungarian embassy in Washington after his promotion to brigadier general?"

Figure 1: On May 22, 1995, Frakas was made a brigadier general, and the following year he was appointed military attache at the Hungarian embassy in Washington.

Automatic Question Answering (Q/A) involves (1) question processing; (2) passage retrieval; and (3) answer extraction. When processing question Q1, three goals must be achieved:

GOAL 1: As reported in (Harabagiu et al., 2001), the expected answer type (EAT) of the question must be determined. In the case of Q1, the EAT is a TIME DURATION. This EAT is typically associated with question stems of the form "How long" and with idiomatic expressions like "it takes".

GOAL 2: Second, question processing involves the discovery of dependencies between the EAT and the other concepts from the question. When we apply shallow semantic parsing on Q1, we discover the dependencies illustrated in Figure 2. The semantic information is produced by a semantic parser trained on the PropBank annotations (www.cis.upenn.edu/~ace), which was reported in (Moschitti and Bejan, 2004). The semantic parser is able to recognize predicate-argument structures in which the predicates are lexicalized by (a) verbs or (b) nominalizations. For the case when predicates are nominalizations, the semantic parser relies on its classifiers trained on the NomBank annotations (http://nlp.cs.nyu.edu/meyers/NomBank.html). For example, in Figure 2, the predicate-argument structure E1 is generated due to the data available from PropBank, whereas the recognition of E2 is enabled by data available from NomBank. Furthermore, the two predicate-argument structures are connected by a temporal relation made explicit by the signal "after". This dependency needs to be interpreted as: (i) the beginning of the time duration sought by the EAT is simultaneous with the event illustrated as E1 in Figure 2 and (ii) the end of the time duration sought by the EAT is simultaneous with the event illustrated as E2 in Figure 2.

GOAL 3: Keywords from the question need to be selected. The semantic dependencies resulting from the fulfillment of GOAL 2 help select the best keywords. The keywords are grouped in two classes, each corresponding to a different predicate-argument structure that needs to be retrieved. The first class of keywords KC1 includes

The passage retrieval component of our Q/A system returns a ranked list of passages for each of the queries. The answer extraction module needs to select the partial answers and to infer the correct answer. If it selects the passage illustrated in Figure 1, the answer is "around one year". In the passage, the time duration is not explicit, but a temporal expression is linked to each of the events.
However, two more problems hinder the answer inference process:
(1) the events from the question do not match the events from the passage, thus the confidence that they are paraphrases needs to be assessed; and (2) there is no temporal signal like "after" connecting the two events in the passage, thus another form of temporal inference needs to be used. The first problem is addressed by acquiring paraphrases of events, whereas the second problem is solved by having access to temporal normalizations. For example, the normalization of the temporal expression TE1="the following year" from the passage illustrated in Figure 1 is 1996DDMM (where DD represents the day and MM the month), because the reference to the implicit current year is resolved to 1995, which was derived from TE2="May 22, 1995". The two temporal expressions have the roles TE1=END(EAT(Q1)) and TE2=START(EAT(Q1)). When computing the TIME DURATION from the normalizations of expressions TE1 and TE2, the answer extractor cannot generate an exact answer, but only the approximation "around one year". This is because of the unknown month and day in the normalization of TE1. If the MM digits are between 01 and 05, the TIME DURATION is less than a year, whereas if they are larger than 05, it becomes more than a year.
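The approximation described here can be made concrete with simple date arithmetic. The sketch below is our own illustration, not the paper's answer extractor: it assumes the normalized values can be mapped to a fully specified start date and a year-only end date, and it bounds the possible TIME DURATION.

```python
from datetime import date

def duration_bounds(start: date, end_year: int, end_month=None, end_day=None):
    """Bound a TIME DURATION when the end point is only partially specified.

    Missing month/day default to the earliest possible values for the lower
    bound; day 28 keeps the constructed upper-bound date valid for any month."""
    earliest_end = date(end_year, end_month or 1, end_day or 1)
    latest_end = date(end_year, end_month or 12, end_day or 28)
    return (earliest_end - start).days, (latest_end - start).days

# TE2 = "May 22, 1995" (fully specified); TE1 = "the following year" -> 1996
# with unknown month and day, so only an approximate duration can be reported.
lo, hi = duration_bounds(date(1995, 5, 22), 1996)
print(lo, hi)  # 224 to 586 days, i.e. "around one year"
```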
1. Two events in PKB: e1 = the predicting event and e2 = the increasing event, in which e2 is temporally constrained to happen DURING 1999;
2. The event e2 = the increasing event in QKB, which this time is constrained to happen BEFORE 1999;
3. The factive relation between events e1 and e2 in PKB; and most importantly
4. The interpretation of the modifier "further" for event e2, which indicates that there is a CONTINUATION of e2 from a previous time.
Based on this information, the answer AKB may be inferred. Figure 4(a) illustrates the events, modifiers and temporal expressions from Figure 3. We represent events as circles, their modifiers as diamonds and temporal expressions as squares. The EAT is represented as well. For example, if the modifier m indicates that the event e is in a CONTINUATION process, and the event e takes place DURING the time period t, we infer that the event e was also happening before the time period t. Inference rules, like the one illustrated in Figure 4(b), are based on possible relations that exist between (a) events; (b) time expressions and (c) modifiers of events. Examples of such temporal relations were introduced in (Allen, 1991).

Temporal relations, when discovered, may lead to other questions than QKB, which was illustrated in Figure 3. Two additional example questions, Q1KB and Q2KB, are also answered by PKB. All these questions and their answers are useful for Q/A system developers. Question QKB tests the ability to use temporal inference, whereas questions Q1KB and Q2KB test the ability to locate information that is constrained temporally.

The remainder of the paper is organized as follows. Section 2 describes the methodology employed for selecting questions and answers in our TimeAnswer-Bank. Section 3 details the bootstrapping of new data. Section 4 reports on the usage of semantic and pragmatic knowledge required by temporal inference in Q/A. Section 5 summarizes the conclusions.
Question and Answer Selection Based on TimeML Annotations
Time expressions anchor events and states in narratives. They do the same anchoring in questions. We have used human-generated questions and annotated them in the same way as narratives are annotated in TimeML. There are three types of objects that are annotated:
• Time expressions, annotated through TIMEX3 tags;
• Event expressions, corresponding to EVENT tags;
• LINK tags that encode various relations that hold between temporal elements.
There are three types of TIMEX3 expressions: (a) fully specified temporal expressions, e.g. "August 14, 1990"; (b) underspecified temporal expressions, e.g. "Monday", "next month", "last year", "two days ago"; and (c) durations, e.g. "two months", "a week". In addition, a TIMEX3 expression can provide a temporal anchor for other temporal expressions in the document. In TimeML, seven types of events are considered:
1. occurrence, e.g. "die", "crash".
2. state, e.g. "on board", "kidnapped", "loved".
3. reporting, e.g. "say", "report", "announce".
4. immediate-action, e.g. "attempt", "try".
5. immediate-state, e.g. "believe", "intend".
6. aspectual, e.g. "begin", "finish", "stop".
7. perception, e.g. "see", "hear", "feel".
In TimeML texts, there are annotations of two types of relations:
• binary relations, that are established between (i) pairs of events or (ii) events and temporal expressions; and
• signaled relations, which link events and/or temporal expressions through temporal signals.
Temporal signals are: (a) temporal prepositions, e.g. "during", "on"; (b) temporal connectors, e.g. "when", "while"; and (c) temporal subordinates, e.g. "if", "then". To capture all temporal relations in text and to provide means for disambiguating them, TimeML uses a set of three LINK tags:
1 TLink or Temporal Link 1 , representing temporal relations holding between events or between an event and a time; 2 SLink or Subordination Link 2 , used for contexts introducing relations between two events; and 3 ALink or Aspectual Link 3 representing the relationship between an aspectual event and its argument event.
Additionally, we have marked up modifiers that entail temporal information, similarly to the adjective "further" in Figure 3. We have used a new LINK tag, that we called MLink, for Modifier Link. The relations made explicit by MLink overlap with relations made explicit by TLink, SLink and ALink.
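To keep the annotation inventory above concrete, the TIMEX3, EVENT and LINK objects (including the MLink extension) can be sketched as simple records; the field names and example values below are illustrative assumptions rather than the actual TimeML attribute set.

```python
from dataclasses import dataclass
from typing import Optional

# The seven event classes listed above.
EVENT_CLASSES = {"occurrence", "state", "reporting", "immediate-action",
                 "immediate-state", "aspectual", "perception"}

@dataclass
class Timex:                         # TIMEX3: fully specified, underspecified, or a duration
    tid: str
    text: str
    value: Optional[str] = None      # normalized value, e.g. "1995-05-22" (format assumed)
    anchor_tid: Optional[str] = None # anchor for underspecified expressions

@dataclass
class Event:                         # EVENT
    eid: str
    text: str
    event_class: str                 # one of EVENT_CLASSES

@dataclass
class Link:                          # TLink / SLink / ALink, plus the MLink extension
    link_type: str                   # "TLINK", "SLINK", "ALINK" or "MLINK"
    relation: str                    # e.g. "BEFORE", "IS_INCLUDED", "FACTIVE", "CONTINUATION"
    source_id: str                   # an eid or a tid
    target_id: str
    signal: Optional[str] = None     # e.g. "after", "during", "when"

t2 = Timex("t2", "May 22, 1995", value="1995-05-22")
e1 = Event("e1", "made a brigadier general", "occurrence")
assert e1.event_class in EVENT_CLASSES
print(Link("TLINK", "IS_INCLUDED", "e1", "t2"))
```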
1 The TLink makes explicit the following relations: (1) BE-
The annotations available from TimeML can be used for selecting answers for which we can generate multiple questions. In Section 1 we have exemplified an answer originating in TimeML (Figure 1) and we have discussed how it can answer question Q1. Our search for answers available from TimeML starts with the discovery of two temporal expressions T1 and T2. The Answer Selection Procedure is:
Step 1: Discover T1 and T2, temporal expressions in the same sentence or in adjacent sentences.
Step 2: Find events E1 and E2 linked to T1 and T2 respectively.
Step 3: Find link chains CE1 between E1 and other events.
Step 4: Find link chains CE2 between E2 and other events.
Step 5: Use implicit temporal inference on CE1 and CE2.
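The five steps can be sketched in code over annotation objects like the ones above; the chain-following helper and the single rule below are simplifications under our own naming, not the actual implementation of the implicit temporal inference rules.

```python
def follow_event_chain(start_event, tlinks, max_length=4):
    """Steps 3-4: collect a chain of events reachable from start_event via TLinks.
    tlinks is a list of (source_event, relation, target_event) triples."""
    chain, frontier = [start_event], [start_event]
    while frontier and len(chain) < max_length:
        current = frontier.pop()
        for source, _, target in tlinks:
            if source == current and target not in chain:
                chain.append(target)
                frontier.append(target)
    return chain

def select_answer(timexes, timex_event_links, tlinks, inference_rules):
    """The five steps of the Answer Selection Procedure, simplified."""
    if len(timexes) < 2:                       # Step 1: find T1 and T2
        return None
    t1, t2 = timexes[:2]
    e1 = timex_event_links.get(t1)             # Step 2: events linked to T1, T2
    e2 = timex_event_links.get(t2)
    if e1 is None or e2 is None:
        return None
    ce1 = follow_event_chain(e1, tlinks)       # Step 3: chain CE1 from E1
    ce2 = follow_event_chain(e2, tlinks)       # Step 4: chain CE2 from E2
    for rule in inference_rules:               # Step 5: implicit temporal inference
        conclusion = rule(t1, t2, ce1, ce2)
        if conclusion:
            return conclusion
    return None

# One illustrative rule: if t2 is anchored to t1, all chained events are simultaneous.
def anchored_rule(t1, t2, ce1, ce2):
    return [("SIMULTANEOUS", a, b) for a in ce1 for b in ce2]

links = [("e1", "SIMULTANEOUS", "e2"), ("e2", "SIMULTANEOUS", "e3")]
print(select_answer(["t1", "t2"], {"t1": "e1", "t2": "e4"}, links, [anchored_rule]))
```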
When applying the Answer Selection Procedure to the example illustrated in Figure 5 we discover: (1) temporal expressions t1 and t2 (Step 1); (2) events e1 and e4 linked to t1 and t2 with TLink:IS_INCLUDED (Step 2); and (3) the event chain {e1, e2, e3} (Step 3). Because t1 and t2 are linked (by an ANCHORTIME(t2) = t1), we conclude that {e1, e2, e3} and e4 are simultaneous (Step 5).

In Figure 6(b), the event chain created from e1, {e1, e3}, has a TLink:BEFORE, indicating that e3 happened before e1. If the anchor of t2 is t1, the temporal inference rule has two conclusions: (1) that between e3 and e2 there is a TLink:BEFORE and (2) that between e1 and e2 there is a TLink:SIMULTANEOUS. For the example illustrated in Figure 5, the temporal inference rule that applies (Step 5 of the Answer Selection Procedure) is illustrated in Figure 6(c). There are three conclusions of the temporal inference rule illustrated in Figure 6(c) because there were three events in the event chain connected to t1 and only one event connected to t2. Figure 6 illustrates the format of our implicit temporal inference rules. The left-hand side of a rule represents the possible relations between events (chains) and temporal expressions, whereas the right-hand side represents one or more conclusions, which are expressed by pairs of events connected by new TLink expressions.

When the answer is selected and the implicit inference has been discovered, we can generate questions that require temporal inference. For the answer illustrated in Figure 5, since all events are simultaneous (as indicated by the implicit inference rule from Figure 6(c)), we can refer to all events with a generic expression (e.g. "actions") on a specific time (e.g. "Sunday March 8, 1998"). Furthermore, the predicate-argument structures derived from the two sentences illustrated in Figure 5 indicate that all events have ethnic Albanians as actors. Thus, we may associate this paragraph with the generic question QG1. In order to create the AnswerTime-Bank, we also need a Question Suggestion Procedure which employs (a) the answers selected as well as (b) the forms of temporal inference that are available on them. This procedure also uses 40 different possible EATs to produce question suggestions. The Question Suggestion Procedure is:
Step 1: Find EATs compatible with the selected answers and place them in [All-EATs].
Step 2: (Begin loop) For every EAT from [All-EATs]:
Step 3: Use the semantic dependencies from the answer to suggest the question dependencies.
Step 4: Map the dependencies on a set of question patterns.
Step 5: Ask the linguist researcher to suggest a question by using a paraphrase having the same semantic dependencies.
Step 6: Validate the question. (End loop)
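Because Step 5 depends on a human linguist, the procedure is essentially a loop skeleton around a human-in-the-loop step. The sketch below reflects that reading; all argument names and toy resources are our own assumptions rather than the actual tooling.

```python
def suggest_questions(answer, all_eats, compatible, patterns_for, ask_linguist, validate):
    """Skeleton of the Question Suggestion Procedure; every argument except
    `answer` stands in for a resource or a human-in-the-loop step."""
    suggestions = []
    eats = [eat for eat in all_eats if compatible(eat, answer)]   # Step 1
    for eat in eats:                                              # Step 2: loop over EATs
        dependencies = answer["semantic_dependencies"]            # Step 3
        patterns = patterns_for(eat, dependencies)                # Step 4
        question = ask_linguist(eat, dependencies, patterns)      # Step 5 (human step)
        if validate(question, answer):                            # Step 6
            suggestions.append((eat, question))
    return suggestions

demo = suggest_questions(
    {"semantic_dependencies": ["protest(agent=ethnic Albanians)", "time(March 8, 1998)"]},
    all_eats=["LIST_OF_ACTIONS", "DATE"],
    compatible=lambda eat, ans: True,
    patterns_for=lambda eat, deps: ["What <EAT> ... ?"],
    ask_linguist=lambda eat, deps, pats: f"({eat}) " + pats[0],
    validate=lambda question, ans: bool(question),
)
print(demo)
```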
For example, the EAT of QG1 is a list of actions carried out by the same agent ("ethnic Albanians"), in the same location ("Turkey"), on the same date ("March 8, 1998"). Also, the factive relationship between e2:"burning" and e3:"protest" indicates that the actions referred to in QG1 can be specialized as "forms of protest", which enables the generation of QG2. The other questions that were generated had either the time as the expected answer (QG3) or some of the entities involved in the events constrained by time (for example QG4, QG5). When we find an SLink:FACTIVE relation between two events, since such relations introduce a presupposition or entailment between the events, we can generate a question that requests causal information (QG6). The questions QG1, QG2, QG3, QG4, QG5 and QG6, illustrated in Figure 7, were created by humans such that Q/A system developers can test their ability to answer them when employing (i) textual inference and (ii) relations between events and temporal expressions.

Not all questions that humans generated were factual and related to a single date. For example, for the passage illustrated in Figure 1, we generated the question Q1, introduced in Section 1, which asks about a time interval that is not explicit in the passage. To be able to create an AnswerTime-Bank that encodes a large variety of questions requiring temporal inference, we needed to automatically recognize the temporal expressions, events and their interconnecting links, such that we could find many examples that use the same form of inference. With the annotations from TimeML, we were able to detect 4125 answers, to which we applied 120 implicit temporal inference rules similar to those illustrated in Figure 6. Because we found that event chains can have lengths from 1 to 6, we believed that it would be useful to have all the possible combinations of such links available, such that we can generate questions that exploit the implicit temporal inference. In the first phase of our work we used event chains with a maximum length of 4.
Bootstrapping the AnswerTime-Bank
The Answer Selection Procedure, together with the Question Suggestion Procedure, enabled us to assemble 3472 questions that require temporal inference, together with answers for them as well as annotations that inform the temporal inference. However, in this form, the AnswerTime-Bank has several limitations. First, we could not assemble examples for all the forms of questions requiring temporal inference that were listed in (Harabagiu and Bejan, 2005). Second, for each type of question, we did not have a very large number of examples. Third, due to the limitations of the Answer Selection Procedure, we did not have any instance of answers that originated in different documents. In order to address these issues, we have started to bootstrap the AnswerTime-Bank by selecting answers from the AQUAINT corpus. In the bootstrapping procedure, we have modified the Answer Selection Procedure such that the pair of time expressions does not necessarily belong to the same or adjacent sentences.

The bootstrapping procedure requires the discovery of (1) time expressions; (2) events; (3) temporal signals and (4) links between them. To discover time expressions, we relied on the TIMEX3 annotations produced for us by the TASER time recognition and normalization system (Aarseth et al., 2005). We considered as events only the verbs that are part of predicate-argument structures recognized by our semantic parser (Moschitti and Bejan, 2004), filtering out all the forms of the verb "be" as well as several forms of generics, as described in (Sauri et al., 2006). To classify events in text we implemented methods similar to the ones described in (Sauri et al., 2005). Temporal signals were recognized based on lexicons. We also needed to discover the three types of links. For this reason, we have developed and implemented four link detection methods that are illustrated in Figures 8, 9, 10 and 11.

Since TLink relations need to be identified in the AQUAINT corpus, we have implemented a method for automatically recognizing such relations by extending the method reported in (Lapata and Lascarides, 2004), which aimed at discovering temporal constraints between two clauses from the same sentence. Predicate-argument structures discovered by semantic parsers enable us to detect relations between events expressed as verbs and temporal expressions, but such predicate-argument structures do not indicate what type of TLink exists. Thus, we first generated a classifier, whose features are illustrated in Figure 8, that enabled us to detect TLinks between such events and temporal expressions. For discovering temporal relations between events in free text, we used an event graph-based representation. Specifically, the nodes in the graph are represented by events and the edges between the nodes are either TLink, SLink or ALink relations. We have extended the model proposed in (Lapata and Lascarides, 2004) for classifying the TLink relations between events in two consecutive sentences and we also have enhanced the model with additional features. Concretely, for each pair of events from the same sentence or from consecutive sentences we used an SVM classifier that predicts and classifies a possible TLink relation. The features used for training the classifier are illustrated in Figure 9.
For example, transitional words expressing addition like "in addition", "additionally", "moreover" introduce SIMULTANEOUS TLink relations; result transitional words like "as a result of", "in consequence" introduce AFTER and BEFORE relations; while time transitional words like "meanwhile", "immediately", "in the meantime", "in the past", "in the future", "finally", "then", "next", "afterward" may introduce all the types of TLink relations. All these TLink relations represent the edges in the event graph built over the entire text for which the method is applied. However, we cannot rely entirely on the method presented above and therefore we have to check the consistency of the event graph.
TLink Detection Method 2
Input: a pair of events in the same sentence or in two consecutive sentences. Output: TLink (Y/N) and TLink class. We have trained an SVM classifier that considers the following features: all the features described in (Lapata and Lascarides, 2004), since they perform well in discovering temporal relations between events in the same sentence; the temporal signals between the two verbs; the temporal signals that are in the clauses containing the verbs; the distance in words between the two verbs; transitional words introducing the sentences or clauses of the verbs. Figure 9: Method 2 for discovering TLink relations.
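To make the classification step concrete, the sketch below shows how a feature-based SVM classifier of the kind described in Figures 8 and 9 could be assembled with scikit-learn. The feature extractor is a simplification, and its inputs (signals, transitional_word, event positions) are hypothetical names, not part of the original system.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def extract_features(event1, event2, signals, transitional_word):
    """Hypothetical feature extractor mirroring Figures 8-9: temporal signals
    around the two verbs, their distance in words, the transitional word
    introducing the sentence or clause, and whether the events co-occur
    in the same sentence."""
    return {
        "signal_between": signals.get("between", "NONE"),
        "signal_clause_1": signals.get("clause_1", "NONE"),
        "signal_clause_2": signals.get("clause_2", "NONE"),
        "word_distance": abs(event1["position"] - event2["position"]),
        "transitional_word": transitional_word or "NONE",
        "same_sentence": event1["sentence_id"] == event2["sentence_id"],
    }

def train_tlink_classifier(training_pairs):
    """training_pairs: list of (feature_dict, label) items, where the label
    is one of the TLink sorts or "NONE"."""
    features, labels = zip(*training_pairs)
    classifier = make_pipeline(DictVectorizer(sparse=True), SVC(kernel="linear"))
    classifier.fit(list(features), list(labels))
    return classifier
```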
To ensure the consistency of the event graph, we remove all contradictory relations between two events in the graph. For this, we inferred all the possible temporal relations between two events in the graph by following all possible paths that connect these two events. If we find contradictions in the inferred temporal relations, we discard all the temporal relations that connect these two events. An example of a contradiction in an event graph is: if we have event E1 TLink:AFTER event E2 and E2 TLink:SIMULTANEOUS E3, then we cannot have in the event graph E1 TLink:BEFORE E3. We also discard all the TLink relations in the event graph that can be replaced by an ALink or SLink relation discovered by the next two methods. ALink relations represent the temporal relations introduced by aspectual events. We observed in the TimeML corpus that different aspectual events trigger different types of aspectual relations. For example, the most frequent aspectual events for each type of ALink relation in TimeML are:
• initiation: "open", "begin", "become", "start", "trigger".
• termination: "end", "suspend", "stop", "abandon".
• continuation: "extend", "reinstate", "remain", "continue".
• reinitiation: "resume", "restore", "return".
• culmination: "finish", "complete", "reach".
Starting from this observation, we derived the method illustrated in Figure 10, which identifies aspectual relations.
ALink Detection Method
Input: a pair of events in the same sentence. Output: ALink (Y/N) and ALink class.
The method for identifying aspectual relations is described in the following steps: 1. Build aspectual event clusters from TimeBank with the most frequent events that introduce ALink relations.
2. Bootstrap the clusters with aspectual events that require semantic processing. To accomplish this task, we used WordNet relations to determine if an event is in relation with events from the aspectual event clusters constructed at Step 1. For example, we classify "graduate" as an event that introduces a culmination relation, because it has "culminates" in its WordNet gloss.
3. Identify an ALink relation between two events inside a sentence if: (a) the first event is an aspectual event that belongs to one of the five aspectual event clusters and (b) the second event is situated in the same verb phrase structure as the first event. Label the relation with the cluster label of the aspectual event. In general, the SLink relations are introduced by particular classes of events. Some of these classes are presented below:
• events expressing presuppositions and beliefs: "think", "believe", "try", "predict", "want", "able to", "hope".
• perception events: "see", "look", "hear", "perceive".
• reporting events: "say", "tell", "report", "quote".
• events expressing negative polarity: "deny", "reject".
We build these semantic classes of events from TimeML and, in a similar way as in the ALink method, we used WordNet to enrich the semantic classes with additional events. Not only does this classification of events help in identifying the SLink relations, but the classes are also used as features in a multiclass classifier for identifying the SLink relation types, as illustrated in Figure 11. For example, reporting and perception events introduce EVIDENTIAL SLink relations and events expressing negative polarity introduce NEGATIVE EVIDENTIAL SLink relations. Other features we used for classifying the SLink relations are illustrated in Figure 11.
SLink Detection Method
Input: a pair of events in the same sentence. Output: SLink (Y/N) and SLink class.
We have trained an SVM classifier that considers the following features:
− the temporal signals that are present in the event clauses
− the verb lemma
− the verb tense
− lexical features and the parts of speech of the words surrounding the two events
− the semantic class of the events (if applicable)
− the type of the events
− obligation: presence of modals like "must", "ought", "should"
− possibility: presence of modals like "can", "could"
− ability: presence of modals like "might", "may"
− future: presence of modals like "will", "shall", "would"
− negation: presence of words like "not", "n't"
Figure 11: Method for discovering SLink relations.
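Both step 2 of the ALink method and the construction of the SLink event classes bootstrap seed clusters with WordNet. The sketch below illustrates one way such a gloss-based check could look, using NLTK's WordNet interface; the seed sets contain only the examples quoted above, and matching seed lemmas inside glosses is a simplification of the WordNet relations actually used (the seed "culminate" is added here only so that the "graduate" example can be recovered).

```python
from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

# Seed clusters built from TimeBank (step 1 of the ALink method); only the
# examples quoted in the text are listed here.
ASPECTUAL_CLUSTERS = {
    "initiation": {"open", "begin", "become", "start", "trigger"},
    "termination": {"end", "suspend", "stop", "abandon"},
    "continuation": {"extend", "reinstate", "remain", "continue"},
    "reinitiation": {"resume", "restore", "return"},
    "culmination": {"finish", "complete", "reach", "culminate"},
}

def aspectual_class(event_lemma):
    """Return the ALink cluster of an event verb, bootstrapping unseen events
    by looking for cluster seed verbs inside their WordNet glosses (step 2)."""
    for label, seeds in ASPECTUAL_CLUSTERS.items():
        if event_lemma in seeds:
            return label
    for synset in wn.synsets(event_lemma, pos=wn.VERB):
        gloss_lemmas = {lemmatizer.lemmatize(w, pos="v")
                        for w in synset.definition().lower().split()}
        for label, seeds in ASPECTUAL_CLUSTERS.items():
            if gloss_lemmas & seeds:
                return label
    return None
```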
Many complex questions do not have the entire answer in the same document; they require answer fusion. In view of this condition, we have included new documents, annotated them in the same way as TimeML and then decided on the partial answers before creating the complex question. Our resource characterizes both question decomposition and answer fusion in terms of types of links between events, or between events and time expressions. One key aspect of the bootstrapping process is the identification of answer types for questions created by humans. They enable us to propose new questions and answers. For example, given an answer type FORMS-OF-PROTEST that is constrained by a given date (for Q G 2 ), we acquired a set of patterns that represent forms of protest with the method reported in (Thelen and Riloff, 2002) and determined which events occurred at the same time and location. Then we replaced the date to generate questions like Q G 7 : "Who were the protesters during the Scotland Summit in 2004?" This is an example of a complex question, where we employed the temporal connector "during" to express the temporal constraints.
Inference with Semantic and Pragmatic Knowledge
One important property of the AnswerTime-Bank is the semantic and pragmatic variation between questions and answers. We have carefully used (1) paraphrases of the answer and (2) generalizations, such that we could allow for semantic and pragmatic inference while processing temporal questions. Consequently, we have also annotated the forms of semantic knowledge that are required and suggested possible sources of such knowledge. For example, domain knowledge was often required. For instance, for the sentence "In fiscal 1989, Elco earned $7.8 million, or $1.65 a share." and the question "What was Elco's revenue in 1989?", we relied on the semantic glossing of the concept "revenue" as the money earned by a company during a given year. Such a gloss is available from WordNet, and we have encoded the mappings between the question and answer concepts using the WordNet glosses. An important factor is the bootstrapping of such lexico-semantic resources that account for the inference of temporal answers. The resource is also important for studying paraphrases under temporal constraints.
Conclusions
We have described the methodology we employed to date for generating a corpus of questions and answers that require temporal inference. AnswerTime-Bank was built using the annotations from TimeBank. We have also described our method of bootstrapping the resource by automatically discovering TimeBank-like expressions and links. We believe that AnswerTime-Bank will be a valuable resource for researchers interested in Question Answering.
Figure 1: Example of passage from TimeML.
Figure 2: Temporal and semantic dependencies in a question.
Figure 3: Answering temporal questions with entailment.
Figure 4: Inference rule that enables the entailment from Figure 3.
(Fragment of question Q 1 : "… did the increase of the concentration of the toxic agents in marine burials of chemical weapons happen?")
The TLinks are one of the following sorts: (1) BEFORE; (2) AFTER; (3) INCLUDES; (4) IS INCLUDED; (5) DURING; (6) SIMULTANEOUS; (7) IMMEDIATELY AFTER; (8) IMMEDIATELY BEFORE; (9) IDENTITY; (10) BEGINS; (11) ENDS; (12) BEGUN BY and (13) ENDED BY. The SLinks are one of the following sorts: (1) MODAL; (2) NEGATIVE; (3) EVIDENTIAL; (4) NEGATIVE EVIDENTIAL; (5) FACTIVE; (6) COUNTER-FACTIVE and (7) CONDITIONAL. The ALink relations are (1) INITIATES; (2) CULMINATES; (3) TERMINATES; (4) CONTINUES and (5) REINITIATES.
Figure 5: Example of TimeML annotation (fragment: "… flags to protest the killings of the ethnic Albanians by Serb police in southern Serb Kosovo province. Meanwhile in the capital, Ankara, a few hundred ethnic Albanians laid a wreath at the gate of the Yugoslavian embassy.")
Figure 6 illustrates three forms of temporal inference that are dictated by the types of links in event chains. In Figure 6(a), the fact that the anchor of t2 is t1 indicates that events e1 and e2 must be simultaneous. Therefore, the conclusion of the temporal inference rule illustrated in Figure 6(a) creates a new TLink:SIMULTANEOUS relation between the two events.
Figure 6: Inference rules based on TimeML links.
Figure 7: Example of questions generated for the text illustrated in Figure 5.
Figure 8: Method 1 for discovering TLink relations. Features: the temporal signal that begins the ARGM-TMP (if it exists); the temporal signal that ends the ARGM-TMP (if it exists); the temporal signals that are in the clause containing the verb; the distance in words between the ARGM-TMP and the verb; the position of the ARGM-TMP with respect to the verb; the presence in the ARGM-TMP of words like: later, past, future, recently, late, previously, over, ago.
Figure 10: Method for discovering ALink relations.
Example: the sentence "In fiscal 1989, Elco earned $7.8 million, or $1.65 a share." is used to produce the question "What was Elco's revenue in 1989?"
Acknowledgments
This material is based upon work funded in whole or in part by the U.S. Government and any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Government.
Paul Aarseth, Murat Deligonul, John Lehmann, Luke Nezda, and Andrew Hickl. 2005. ACE 2005 TERN System Description: TASER. In Proceedings of the ACE 2005 Workshop.
James F. Allen. 1991. Time and time again: The many ways to represent time. International Journal of Intelligent Systems, 6(4):341-356.
Branimir Boguraev and Rie Kubota Ando. 2005. TimeML-Compliant Text Analysis for Temporal Reasoning. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-2005).
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognizing textual entailment challenge. In Proceedings of the PASCAL Challenges Workshop on Recognizing Textual Entailment.
Sanda Harabagiu and Cosmin Adrian Bejan. 2005. Question Answering Based on Temporal Inference. In Proceedings of the AAAI-2005 Workshop on Inference for Textual Question Answering.
Sanda M. Harabagiu, Dan I. Moldovan, Marius Pasca, Rada Mihalcea, Mihai Surdeanu, Razvan C. Bunescu, Roxana Girju, Vasile Rus, and Paul Morarescu. 2001. The Role of Lexico-Semantic Feedback in Open-Domain Textual Question-Answering. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL-2001), pages 274-281.
Jerry Hobbs and James Pustejovsky. 2003. Annotating and Reasoning about Time and Events. In Proceedings of the AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning.
Mirella Lapata and Alex Lascarides. 2004. Inferring Sentence-internal Temporal Relations. In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 153-160, Boston.
Dan Moldovan, Christine Clark, and Sanda Harabagiu. 2005. Temporal Context Representation and Reasoning. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-2005).
Alessandro Moschitti and Cosmin Adrian Bejan. 2004. A Semantic Kernel for Predicate Argument Classification. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004), Boston, MA, USA.
E. Saquete, P. Martínez-Barco, R. Munoz, and J. L. Vicedo. 2004. Splitting Complex Temporal Questions for Question Answering systems. In Proceedings of the 42nd Annual Conference of the Association for Computational Linguistics (ACL-04), pages 567-574.
Roser Sauri, Robert Knippen, Marc Verhagen, and James Pustejovsky. 2005. Evita: A Robust Event Recognizer for QA Systems. In Proceedings of HLT/EMNLP.
Roser Sauri, Jessica Littman, Bob Knippen, Robert Gaizauskas, Andrea Setzer, and James Pustejovsky. 2006. TimeML Annotation Guidelines. http://www.timeml.org.
Michael Thelen and Ellen Riloff. 2002. A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. |
256,461,015 | REUSE: REference-free UnSupervised quality Estimation Metric | This paper describes our submission to the WMT2022 shared metrics task. Our unsupervised metric estimates the translation quality at chunk-level and sentence-level. Source and target sentence chunks are retrieved by using a multi-lingual chunker. Chunk-level similarity is computed by leveraging BERT contextual word embeddings and sentence similarity scores are calculated by leveraging sentence embeddings of Language-Agnostic BERT models. The final quality estimation score is obtained by mean pooling the chunk-level and sentence-level similarity scores. This paper outlines our experiments and also reports the correlation with human judgements for en-de, en-ru and zh-en language pairs of WMT17, WMT18 and WMT19 testsets. Our submission will be made available at https://github.com/ AnanyaCoder/WMT22Submission_REUSE | [
773282,
203316451,
245855874,
245855695,
245855668
] | REUSE: REference-free UnSupervised quality Estimation Metric
December 7-8, 2022
Ananya Mukherjee ananya.mukherjee@research.iiit.ac.inm.shrivastava@iiit.ac.in
Language Technologies Research Centre International Institute of Information Technology -Hyderabad
Manish Shrivastava
Language Technologies Research Centre International Institute of Information Technology -Hyderabad
REUSE: REference-free UnSupervised quality Estimation Metric
Proceedings of the Seventh Conference on Machine Translation (WMT)
the Seventh Conference on Machine Translation (WMT)December 7-8, 2022Machine Translation -Natural Language Processing Lab
This paper describes our submission to the WMT2022 shared metrics task. Our unsupervised metric estimates the translation quality at chunk-level and sentence-level. Source and target sentence chunks are retrieved by using a multi-lingual chunker. Chunk-level similarity is computed by leveraging BERT contextual word embeddings and sentence similarity scores are calculated by leveraging sentence embeddings of Language-Agnostic BERT models. The final quality estimation score is obtained by mean pooling the chunk-level and sentence-level similarity scores. This paper outlines our experiments and also reports the correlation with human judgements for en-de, en-ru and zh-en language pairs of WMT17, WMT18 and WMT19 testsets. Our submission will be made available at https://github.com/ AnanyaCoder/WMT22Submission_REUSE
Introduction
Quality Estimation (QE) is an essential component of the machine translation workflow, as it assesses the quality of the translated output without consulting reference translations (Specia et al., 2009; Blatz et al., 2004). High-quality reference translations are often hard to find; QE helps to evaluate the translation quality based on the source sentences alone. Recently, QE has emerged as an alternative evaluation approach for NMT systems (Specia et al., 2018). Many researchers have been working on QE; as part of the Quality Estimation Shared Task, several QE systems (Zerva et al., 2021; Lim et al., 2021; Chowdhury et al., 2021; Geigle et al., 2021) were evaluated at the WMT conference (Barrault et al., 2021). However, most quality estimation systems are supervised, i.e., the model regresses on human judgements. Human assessments are often unavailable, and it is very difficult to procure high-quality human judgements. This motivated us to develop an unsupervised quality estimation system. QE is usually performed at different granularities (e.g., word, sentence, document) (Kepler et al., 2019); in this work, we focus on chunk-level and sentence-level similarity. The final QE score of the target sentence is obtained by mean-pooling the chunk similarity scores and sentence similarity scores. Overall, our main contribution is as follows:
• We propose a concept of chunk level similarity i.e., matching the source and target chunks by leveraging multilingual BERT embeddings.
• We release a multilingual chunking model which returns meaningful word group boundaries.
• We present our unsupervised reference free QE metric (REUSE) that estimates the quality of translation by doing a chunk-level and sentence-level comparison with the source.
Motivation to use chunks
Usually, the words in a translated output do not necessarily follow the word sequence of the source text. However, it is observed that certain word groups often occur together irrespective of their order in the source. Figure 1 illustrates two example pairs: an English-German (en-de) pair and an English-Hindi (en-hi) pair. In the first example pair, the word sequence is not highly altered, as English and German belong to the same language family (West Germanic), whereas in the en-hi pair we can see a drastic change in the word order, as Hindi belongs to a different language family (Indo-Aryan). Nonetheless, we can observe that certain word groups (which we refer to as chunks) always occur together in both source and target. This phenomenon has motivated our research in the direction of chunk-level assessment.
REUSE
We propose REUSE, a REference-free UnSupervised quality Estimation metric that evaluates a machine-translated output based on the corresponding source sentence, without requiring a reference. Figure 2 depicts the high-level architecture of our model. The chunks of the source and hypothesis are acquired from the multilingual chunking model. The chunk-wise subword contextual BERT embeddings are then mean-pooled to obtain the chunk-level embeddings. Meanwhile, the LaBSE model (Feng et al., 2020) is used for the sentence-level embeddings. Using these embeddings, we compute chunk-level similarity and sentence-level similarity, and finally combine them by averaging the chunk- and sentence-level similarity scores (the resulting REUSE score ranges between 0 and 1). We discuss the working details of our system in the following sections.
Chunk-level Similarity
We measure the number of matches between source chunks and hypothesis chunks. These matches are obtained by computing the cosine similarity (Foreman, 2014) of the individual chunk embeddings (see Section 2.1.2) of the source and translation sentences. An all-pair comparison is done to determine the best chunk match. Based on these matches, we compute precision and recall, i.e., precision is the count of matches divided by the length of the hypothesis, and recall is the count of matches divided by the length of the source. Ultimately, the chunk-level similarity score is calculated as the parameterized harmonic mean (Sasaki, 2007) of precision and recall, assigning more weight to recall (β = 3).
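A minimal sketch of this matching step is given below. The greedy one-to-one matching and the 0.5 similarity threshold are assumptions made for illustration; only the all-pair cosine comparison, the precision/recall definitions and the recall-weighted harmonic mean (β = 3) come from the description above.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def chunk_similarity(src_chunk_embs, hyp_chunk_embs, beta=3.0, threshold=0.5):
    """Count chunk matches via all-pair cosine similarity and combine
    precision and recall with a recall-weighted harmonic mean (beta = 3)."""
    matches, used = 0, set()
    for h in hyp_chunk_embs:
        scores = [(cosine(h, s), i) for i, s in enumerate(src_chunk_embs) if i not in used]
        if not scores:
            break
        best_score, best_i = max(scores)
        if best_score >= threshold:
            matches += 1
            used.add(best_i)
    precision = matches / len(hyp_chunk_embs) if len(hyp_chunk_embs) else 0.0
    recall = matches / len(src_chunk_embs) if len(src_chunk_embs) else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```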
Multilingual Chunker
The fundamental innovation in recent neural models lies in learning contextualized representations by pre-training on a language modeling task. Multilingual BERT is one such transformer-based masked language model, pre-trained on the monolingual Wikipedia corpora of 104 languages with a shared word-piece vocabulary. Fine-tuning the pre-trained mBERT model on a supervised downstream task has led to strong performance across a broad spectrum of NLP tasks (Devlin et al., 2018). We leverage this fine-tuning capability of BERT to create a multilingual chunker model that takes a sentence as input and returns a set of chunks (word groups).
We use BertForTokenClassification, which has BERT (Bidirectional Encoder Representations from Transformers) as its base architecture, with a token classification head on top, allowing it to make predictions at the token level rather than the sequence level. We load this BertForTokenClassification model with the pretrained weights of "bert-base-multilingual-cased". We train the token classification head, together with the pretrained weights, using our labelled data set (chunk-annotated data). We employ cross-entropy as the loss function and the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-05.
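The sketch below illustrates such a fine-tuning setup with the HuggingFace transformers library. The BIO-style chunk tag set and the label alignment are assumptions; only the base checkpoint, the cross-entropy loss (computed internally when labels are passed) and the Adam optimizer with learning rate 1e-05 follow the description above.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Hypothetical BIO-style chunk boundary labels; the actual tag set used to
# annotate the chunking data is not specified here.
LABELS = ["B-CHUNK", "I-CHUNK", "O"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def training_step(sentences, token_labels):
    """One gradient step. `sentences` is a batch of word lists; `token_labels`
    is a tensor of label ids assumed to be already aligned to the sub-word
    tokens (with -100 on special/padding positions)."""
    batch = tokenizer(sentences, is_split_into_words=True,
                      padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch, labels=token_labels)  # cross-entropy loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```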
Chunk Embeddings
Currently, there are word embedding models and sentence embedding models, but no specific chunk-level embedding model. Therefore, we embed the chunks by leveraging BERT embeddings, loading the weights of "distiluse-base-multilingual-cased". For a given sentence, this model returns embeddings at the subword level. To obtain the desired chunk embeddings, we perform a chunk-to-subword mapping and mean-pool the subword embeddings belonging to each chunk.
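A sketch of the pooling step is shown below, assuming the chunker provides character-offset spans for each chunk and that subword offsets are available (e.g. from a fast tokenizer's offset mapping); this offset-based alignment is one possible realisation of the chunk-to-subword mapping, not necessarily the one used in the original system.

```python
import numpy as np

def chunk_embeddings(subword_embeddings, subword_spans, chunk_spans):
    """Mean-pool subword embeddings into one vector per chunk.

    subword_embeddings: array of shape (num_subwords, dim)
    subword_spans: list of (start, end) character offsets, one per subword
    chunk_spans: list of (start, end) character offsets, one per chunk
    """
    pooled = []
    for c_start, c_end in chunk_spans:
        rows = [i for i, (s, e) in enumerate(subword_spans)
                if s >= c_start and e <= c_end and e > s]
        if rows:
            pooled.append(subword_embeddings[rows].mean(axis=0))
        else:
            # No subword falls inside this chunk span: fall back to zeros.
            pooled.append(np.zeros(subword_embeddings.shape[1]))
    return np.stack(pooled)
```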
Sentence Similarity
To compute similarity at the sentence level, we compute the cosine similarity (Foreman, 2014) between the source sentence embedding and the translation sentence embedding. We use the LaBSE (Language-Agnostic BERT Sentence Embedding) model to obtain the sentence embeddings. The LaBSE model (Feng et al., 2020) is built on the BERT architecture and trained on filtered and processed monolingual (for dictionaries) and bilingual training data. The resulting sentence embeddings achieve excellent performance on measures of sentence embedding quality, such as the semantic textual similarity (STS) benchmark and sentence embedding-based transfer learning (Feng et al., 2020).
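The snippet below sketches the sentence-level score and the final combination with the chunk-level score, using the community LaBSE checkpoint on HuggingFace via the sentence-transformers library; chunk_similarity refers to the sketch given earlier, and the checkpoint name is an assumption about which LaBSE release is used.

```python
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")

def sentence_similarity(source, hypothesis):
    src_emb, hyp_emb = labse.encode([source, hypothesis], convert_to_tensor=True)
    return float(util.cos_sim(src_emb, hyp_emb))

def reuse_score(chunk_level_score, sentence_level_score):
    # Final REUSE score: mean of chunk-level and sentence-level similarities,
    # which keeps the score in the 0-1 range.
    return 0.5 * (chunk_level_score + sentence_level_score)
```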
Experiments and Results
Results on WMT17-19 testset
Each year, the WMT translation shared task organisers collect human judgements in the form of Direct Assessments. Those assessments are then used in the Metrics task to measure the correlation between metrics and human judgements, and therefore to decide which metric works best. We estimated the translation quality of about 9K translations from the test sets of WMT17 (Bojar et al., 2017), WMT18 (Bojar et al., 2018) and WMT19 (Bojar et al., 2019a,b,c) for the en-ru, en-de and zh-en language pairs, and computed the Pearson correlation (Benesty et al., 2009) of the human judgements with the chunk-level similarity scores, the sentence-level similarity scores and their combination (REUSE). The segment-level correlation scores are given in Table 2. It is clearly evident from the correlations that the ensemble of the chunk similarity model and the sentence similarity model outperforms the individual models.
3.2 WMT22 QE-as-a-metric task submission
Table 1 shows the WMT22 QE-as-a-metric task test-set details for the language pairs we have experimented on.
Language Pair | #Sentences | #Systems
en-ru | 36723 | 88
en-de | 82356 | 91
zh-en | 41127 | 103
Table 1: Data statistics of the WMT22 QE-as-a-metric task test set for the en-ru, en-de and zh-en pairs.
Segment Level Evaluation
For the segment-level task, we submitted the segment-level scores obtained by our reference-free quality estimation metric (REUSE) for the en-ru, en-de and zh-en language pairs.
System Level Evaluation
We compute the system-level score for each system by averaging the segment-level scores obtained. A similar method is also used to compute systemlevel scores based on segment-level human annotations such as DA's and MQM, implying that a metric with a high segment-level correlation should also demonstrate high system-level correlation.
Conclusion
In this paper, we describe our submission to the WMT22 Metrics Shared Task (QE-as-a-metric). Our submission includes segment-level and system-level quality estimation scores for sentences of three language pairs: Chinese-English (zh-en), English-Russian (en-ru) and English-German (en-de).
Figure 1: Illustration of chunk similarity for two example sentences (en-de & en-hi).
Figure 2: High-level architecture of the REUSE model.
We evaluate this year's test set using our unsupervised, reference-free metric REUSE, which provides a quality estimation score by evaluating a hypothesis against the source sentence. REUSE estimates the translation quality by combining a chunk-level similarity score and a sentence-level similarity score, leveraging multilingual BERT embeddings. We performed our experiments on the test sets of WMT17, WMT18 and WMT19, and it has been empirically observed that the combination of chunk- and sentence-level similarity scores performed better in terms of agreement with human assessments. Potential research directions include improving the multilingual chunking model. As part of future work, we aim to further develop this lightweight, efficient unsupervised approach to estimating translation quality and to achieve higher agreement with humans.

WMT test-set | Language Pair | Chunk Similarity (using chunker) | Sentence Similarity (using LaBSE) | REUSE (chunk + sentence)
wmt17 | zh-en | 0.269 | 0.242 | 0.316
wmt17 | en-ru | 0.308 | 0.223 | 0.337
wmt17 | en-de | 0.280 | 0.167 | 0.278
wmt18 | zh-en | 0.135 | 0.2 | 0.210
wmt18 | en-ru | 0.145 | 0.2 | 0.213
wmt18 | en-de | 0.306 | 0.107 | 0.273
wmt19 | zh-en | 0.225 | 0.279 | 0.3
wmt19 | en-ru | -0.112 | 0.144 | -0.003
wmt19 | en-de | 0.254 | 0.131 | 0.251
Table 2: Correlation with Human Judgements on the WMT17, WMT18 and WMT19 test sets.
Model checkpoints: https://huggingface.co/bert-base-multilingual-uncased and https://huggingface.co/distiluse-base-multilingual-cased
Loic Barrault, Ondrej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, Andre Martins, Makoto Morishita, and Christof Monz, editors. 2021. Proceedings of the Sixth Conference on Machine Translation. Association for Computational Linguistics, Online.
Jacob Benesty, Jingdong Chen, Yiteng Huang, and Israel Cohen. 2009. Pearson Correlation Coefficient, pages 1-4. Springer Berlin Heidelberg, Berlin, Heidelberg.
John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321, Geneva, Switzerland. COLING.
Ondřej Bojar, Christian Buck, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, and Julia Kreutzer, editors. 2017. Proceedings of the Second Conference on Machine Translation, Volume 1: Research Papers. Association for Computational Linguistics, Copenhagen, Denmark.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, and Karin Verspoor, editors. 2019a. Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers). Association for Computational Linguistics, Florence, Italy.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, and Karin Verspoor, editors. 2019b. Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1). Association for Computational Linguistics, Florence, Italy.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, and Karin Verspoor, editors. 2019c. Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2). Association for Computational Linguistics, Florence, Italy.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Lucia Specia, Marco Turchi, and Karin Verspoor, editors. 2018. Proceedings of the Third Conference on Machine Translation. Association for Computational Linguistics, Belgium, Brussels.
Shaika Chowdhury, Naouel Baili, and Brian Vannah. 2021. Ensemble fine-tuned mBERT for translation quality estimation. In Proceedings of the Sixth Conference on Machine Translation, pages 897-903, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT sentence embedding. CoRR, abs/2007.01852.
John Foreman. 2014. COSINE DISTANCE, COSINE SIMILARITY, ANGULAR COSINE DISTANCE, ANGULAR COSINE SIMILARITY.
Gregor Geigle, Jonas Stadtmüller, Wei Zhao, Jonas Pfeiffer, and Steffen Eger. 2021. TUDa at WMT21: Sentence-level direct assessment with adapters. In Proceedings of the Sixth Conference on Machine Translation, pages 911-919, Online. Association for Computational Linguistics.
Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, M. Amin Farajian, António V. Lopes, and André F. T. Martins. 2019. Unbabel's participation in the WMT19 translation quality estimation shared task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 78-84, Florence, Italy. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization.
Seunghyun Lim, Hantae Kim, and Hyunjoong Kim. 2021. Papago's submission for the WMT21 quality estimation shared task. In Proceedings of the Sixth Conference on Machine Translation, pages 935-940, Online. Association for Computational Linguistics.
Yutaka Sasaki. 2007. The truth of the F-measure. Teach Tutor Mater.
Lucia Specia, Carolina Scarton, and Gustavo Paetzold. 2018. Quality estimation for machine translation. Synthesis Lectures on Human Language Technologies, 11:1-162.
Lucia Specia, Marco Turchi, Nicola Cancedda, Nello Cristianini, and Marc Dymetman. 2009. Estimating the sentence-level quality of machine translation systems. In Proceedings of the 13th Annual Conference of the European Association for Machine Translation, Barcelona, Spain. European Association for Machine Translation.
Chrysoula Zerva, Daan van Stigt, Ricardo Rei, Ana C Farinha, Pedro Ramos, José G. C. de Souza, Taisiya Glushkova, Miguel Vera, Fabio Kepler, and André F. T. Martins. 2021. IST-Unbabel 2021 submission for the quality estimation shared task. In Proceedings of the Sixth Conference on Machine Translation, pages 961-972, Online. Association for Computational Linguistics. |
256,461,250 | The SPECTRANS System Description for the WMT22 Biomedical Task | This paper describes the SPECTRANS submission for the WMT 2022 biomedical shared task. We present the results of our experiments using the training corpora and the JoeyNMT (Kreutzer et al., 2019) and SYSTRAN Pure Neural Server/ Advanced Model Studio toolkits for the language directions English to French and French to English. We compare the predictions of the different toolkits. We also use JoeyNMT to fine-tune the model with a selection of texts from WMT, Khresmoi and UFAL data sets. We report our results and assess the respective merits of the different translated texts. | [
208117506,
244487653,
198967823,
236477365,
13753208
] | The SPECTRANS System Description for the WMT22 Biomedical Task
December 7-8, 2022
Nicolas Ballier nicolas.ballier@u-paris.fr
CLILLAC-ARP
Jean-Baptiste Yunès jean-baptiste.yunes@u-paris.fr
IRIF
Guillaume Wisniewski guillaume.wisniewski@u-paris.fr
LLF Université Paris Cité
F-75013ParisFrance
Lichao Zhu lichao.zhu@u-paris.fr
LLF Université Paris Cité
F-75013ParisFrance
Maria Zimina-Poirot maria.zimina-poirot@u-paris.fr
CLILLAC-ARP
The SPECTRANS System Description for the WMT22 Biomedical Task
Proceedings of the Seventh Conference on Machine Translation (WMT)
the Seventh Conference on Machine Translation (WMT)December 7-8, 2022
This paper describes the SPECTRANS submission for the WMT 2022 biomedical shared task. We present the results of our experiments using the training corpora and the JoeyNMT (Kreutzer et al., 2019) and SYSTRAN Pure Neural Server/ Advanced Model Studio toolkits for the language directions English to French and French to English. We compare the predictions of the different toolkits. We also use JoeyNMT to fine-tune the model with a selection of texts from WMT, Khresmoi and UFAL data sets. We report our results and assess the respective merits of the different translated texts.
Introduction
For this WMT22 Biomedical workshop, we focused on the selection of texts used for fine-tuning. We selected what we believe to be the two best models we produced for the EN-FR track with two different neural toolkits but we mostly took the opportunity to discuss the translated texts. The rest of the paper is organised as follows: Section 2 summarises our approaches to the task, Section 3 details the training data of our experiments, Section 4 presents the results. Section 5 discusses them.
Our Approaches to the Task
This section presents our various strategies for this task and our four submissions. We compared the predictions of two toolkits, but our comparison is very partial as the training data differs. We trained several systems with JoeyNMT (Kreutzer et al., 2019), training and fine-tuning with UFAL, WMT and Khresmoi data. We used the SYSTRAN Pure Neural® Server generic system and tried to fine-tune it with specialised terminology. We used SYSTRAN Advanced Model Studio® to fine-tune a generic model with in-house data based on 2,700 aligned segments collected during the translation of texts for the French Federation for Diabetes. 1 Table 1 summarises our submissions.
With JoeyNMT, we selected the training data, comparing the performance with and without the added data, and applied fine-tuning to the model based on the UFAL medical corpora. The following section details the model selection and fine-tuning.
Data and Tools Used
In this section, we present the different approaches that we adopted to train baseline models and proceed to fine-tuning. We have built two baseline models: one trained on a generic data set and fine-tuned with in-domain data, and the other trained directly on in-domain data, in order to compare their performances and to better understand the functioning of in-domain NMT training.
Data for baseline models training
We used two baseline models: the first is built on our model submitted to WMT 2021, which took the Europarl 7 parallel corpus as its data set and was trained on 341,554 sentences in both directions (EN⇔FR) (Ballier et al., 2021); the second has been built using the bilingual (EN-FR) in-domain parallel data set UFAL provided by WMT 2022 (2,693,509 sentences; see https://github.com/biomedical-translation-corpora/corpora and http://hdl.handle.net/11234/1-2122). The corpora have been normalized and sentences longer than 50 words have been removed, which left 2,159,307 sentences. These sentences are split in a 6-2-2 ratio: 60% for training, 20% for development and the remaining 20% for evaluation. Two tokenizations are applied to all the data sets: standard tokenization (spaCy) segments the data into words, and BPE tokenization segments it into sub-words with SentencePiece (Kudo, 2018).
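A sketch of this preprocessing pipeline is given below. The file names, the use of random shuffling for the 6-2-2 split and the SentencePiece vocabulary size are assumptions, not details reported here.

```python
import random
import sentencepiece as spm

MAX_LEN = 50

def load_parallel(src_path, tgt_path):
    """Load an EN-FR bitext and drop pairs with a side longer than 50 words."""
    with open(src_path, encoding="utf-8") as f_src, open(tgt_path, encoding="utf-8") as f_tgt:
        pairs = list(zip(f_src, f_tgt))
    return [(s.strip(), t.strip()) for s, t in pairs
            if len(s.split()) <= MAX_LEN and len(t.split()) <= MAX_LEN]

def split_6_2_2(pairs, seed=0):
    """60% train / 20% dev / 20% test split of the filtered sentence pairs."""
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    train_end, dev_end = int(0.6 * n), int(0.8 * n)
    return pairs[:train_end], pairs[train_end:dev_end], pairs[dev_end:]

# Sub-word tokenization with SentencePiece BPE (vocabulary size assumed).
spm.SentencePieceTrainer.train(
    input="ufal.en-fr.train.txt", model_prefix="bpe_enfr",
    vocab_size=32000, model_type="bpe"
)
```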
Experiments and Results
In our experiments, we aimed to compare the different JoeyNMT models (baseline and fine-tuned) that we have trained with the SYSTRAN model. JoeyNMT, which is based on the Transformer architecture (Vaswani et al., 2017), requires a lighter implementation than OpenNMT (Klein et al., 2017).
Baseline with JoeyNMT
We have trained a baseline model with the in-domain UFAL data set. For the FR→EN model, the best checkpoint is recorded at step 60,000 with a BLEU score of 61.01 (PPL: 1.53); for the EN→FR model, the best checkpoint is recorded at step 40,000 with a BLEU score of 59.23 (PPL: 1.45, see Figure 1). We noticed that the validation processes were extremely long: every validation after 20,000 steps took about 28 hours.
Fine-tuning with JoeyNMT
The generic baseline model was fine-tuned with the following parameters: vocabulary size: 32,000; maximum sentence length: 50; maximum output length: 100; training initializer: Xavier; number of layers: 6; number of heads: 8; normalization: tokens; encoder embedding dimension: 512; decoder embedding dimension: 512; hidden size: 512. It was fine-tuned with two data sets. The first, with the Medline-Khresmoi data set, obtained the best BLEU score of 54.8 from French to English and 38.4 from English to French (see Figure 2).
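The sketch below expresses these hyperparameters as a JoeyNMT-style configuration (written as a Python dict; JoeyNMT itself reads YAML). Paths, the starting checkpoint and any key not listed above are assumptions, and the exact key names may differ from the toolkit's current schema.

```python
# Approximate JoeyNMT-style configuration mirroring the hyperparameters above.
finetune_config = {
    "data": {
        "src": "en", "trg": "fr",
        "train": "data/medline_khresmoi.train",   # assumed path
        "dev": "data/medline_khresmoi.dev",       # assumed path
        "level": "bpe",
        "max_sent_length": 50,
        "voc_limit": 32000,
    },
    "model": {
        "initializer": "xavier",
        "encoder": {"type": "transformer", "num_layers": 6, "num_heads": 8,
                    "hidden_size": 512, "embeddings": {"embedding_dim": 512}},
        "decoder": {"type": "transformer", "num_layers": 6, "num_heads": 8,
                    "hidden_size": 512, "embeddings": {"embedding_dim": 512}},
    },
    "training": {
        "load_model": "models/baseline/best.ckpt",  # fine-tuning starts from the baseline
        "normalization": "tokens",
        "max_output_length": 100,
    },
}
```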
With the same parameters, the model fine-tuned with the UFAL data set had, surprisingly, relatively low scores: we obtained a BLEU score of 18.60 for the French→English model and 21.13 for the English→French model (see Figure 3).
Training and Fine-tuning with SYSTRAN Model Studio
SYSTRAN Pure Neural® Server is a multilingual translation platform that offers website translation and localisation features (the official product website is available at https://www.systransoft.com/translation-products/systran-pure-neural-server/). The server uses Pure Neural® Machine Translation (PNMT®), a commercial engine based on AI and deep learning, launched in 2016. This technology enables neural engines to learn language rules from a given translated text and to produce a translation achieving the current state of the art. An open source neural machine translation system, OpenNMT, developed by the Harvard NLP group and SYSTRAN, is available online: http://opennmt.net. For our work, we used SYSTRAN Pure Neural® Server installed on PAPTAN (Plateforme pour l'apprentissage profond pour la traduction automatique neuronale, in English: the Deep Learning for Machine Translation platform at Université de Paris-Cité; see the description of the platform on the project website: https://uparis.fr/plateforme-paptan).
We used characteristic elements computation (Lebart et al., 1997), implemented in iTrameur (https://itrameur.clillac-arp.univ-paris-diderot.fr), to compare the results of run2 (generated by SYSTRAN Pure Neural® Server) and run1 (generated by JoeyNMT). In this paper, we discuss the results of FR→EN translation (Table 2). As one can see in Table 2, in the SYSTRAN translation, a sentence always starts with capitalization ("The", "This", "In"). Capital letters are also used for acronyms and abbreviations ("BMI", "VCE"). This can be explained by the default detokenization function of JoeyNMT, which returns the translation in sub-tokenized form.
The modal verb "must" is overused in the SYSTRAN translation (IndSP = +5) and is never used in the JoeyNMT translation, which tends to prefer the modal verb "should" (Figure 4). The absence of "must" in the JoeyNMT output might be due to the large difference in frequency between the two words in the training data: 18,462 occurrences of "should" versus 4,061 occurrences of "must". The preponderance of "should" in the training corpus has seemingly induced the system to produce this word systematically whenever a modal verb is needed before a base verb.
We also note that the JoeyNMT translation underuses "we" (IndSP = -4). This finding is interesting because it sometimes makes it possible to identify substantial differences between the two translations, as in the example of Table 3 ("we" in SYSTRAN and JoeyNMT translations), where SYSTRAN produces "We take stock of knowledge about this addiction and its management." and JoeyNMT produces "knowledge about this dependency and the management thereof is a pending state."
These results show how training data affects translation results. To our knowledge, SYSTRAN NMT relies upon a broad selection of general texts that do not belong to any single text type, subject field, or register (many of them are translated texts from the web available on https://opus.nlpl.eu). The WMT corpus consists of randomly selected sentences from the abstracts and main texts of scientific articles published in medical journals. The articles follow the so-called introduction, methods, results and discussion structure (IMRAD) (Heßler et al., 2020). The selection is not necessarily balanced in terms of the discourse functions represented. Thus, we noticed the overuse of "should be", which definitely constrained our translation output (see Figure 4: "should be given", "should be reached", "should be considered", etc.).
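For readers who want to reproduce this kind of analysis, the sketch below shows one common formulation of the characteristic elements (specificity) computation, based on the hypergeometric distribution; the exact index reported by iTrameur (IndSP) may be computed somewhat differently, so this is only an approximation of the method cited above.

```python
from math import log10
from scipy.stats import hypergeom

def specificity(f_part, t_part, f_total, t_total):
    """Positive specificity score of a word in a sub-corpus (e.g. one translation).

    f_part : frequency of the word in the sub-corpus
    t_part : size of the sub-corpus in tokens
    f_total: frequency of the word in the whole corpus (both translations)
    t_total: size of the whole corpus in tokens
    Returns -log10 P(X >= f_part) under the hypergeometric model; larger
    values mean the word is more characteristic of the sub-corpus.
    """
    p_at_least = hypergeom.sf(f_part - 1, t_total, f_total, t_part)
    return -log10(p_at_least) if p_at_least > 0 else float("inf")
```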
Discussion
Degrees of Specialisation
While biomedical terminology was indeed present in the test set (e.g. "hypertension artérielle pulmonaire", "nutriments", "supplémentation en vitamine D"), some sentences were not particularly specialised. For instance, "Le but de cet article est de les résumer de manière relativement exhaustive." is representative of scientific French for specific purposes but not really of biomedical specialised language. The same holds for the test set from English into French. In view of these observations, it is easy to understand why models trained on more generic data perform so well in this task.
The performance of gigamodels
We have not submitted translations produced with mBART-50 (Tang et al., 2021), but we compared the translations of our best system (PNS, for Pure Neural Server) with those of mBART. The translation based on mBART produces fluent, grammatical sentences but seems to be less specific in its terminology. For instance, "vapotage" (vaping testing) was translated as poultry testing, and instead of vaping frequency the system produced pooping frequency. The terminology is not always consistent or accurate: hyperthyroïdie frustre was translated as rough (SYSTRAN) or fruity (mBART). Oddly enough, with mBART, percentages were literally translated as "per cent" instead of the % symbol.
Figure 5 plots the vocabulary growth curves (VGCs) of the two translated texts. The y axis corresponds to the number of new types and the x axis corresponds to the number of tokens in the translated texts. As can be seen, the two systems have remarkably similar VGC patterns, with SYSTRAN PNS slightly above mBART, in spite of the variants we noticed. For the French translation of "keloids", mBART varies between "céloïdes" and "keloïdes", whereas SYSTRAN PNS only produces "chéloïdes".
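The vocabulary growth curves of Figure 5 can be reproduced with a few lines of code; the file names below are placeholders, and lower-casing tokens before counting types is an assumption.

```python
import matplotlib.pyplot as plt

def vocabulary_growth_curve(tokens):
    """Number of distinct types observed after each token of a translated text."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok.lower())
        curve.append(len(seen))
    return curve

for name, path in [("SYSTRAN PNS", "pns.en.txt"), ("mBART", "mbart.en.txt")]:
    with open(path, encoding="utf-8") as f:
        tokens = f.read().split()
    plt.plot(vocabulary_growth_curve(tokens), label=name)
plt.xlabel("tokens")
plt.ylabel("types")
plt.legend()
plt.show()
```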
Measuring specificity indices (Lebart et al., 1997) allowed us to spot differences in the translations. One of the most striking ones was the choice of the feminine determiner la for la COVID in the PNS translations, as evidenced by the specificity of la COVID in the two translations (Figure 6). A somewhat belated and debated ruling of the Académie française endorsed and imposed "la" for the gender of COVID in French. This benign detail can probably be used as a chronological landmark for the training data collection of the two systems: it seems that PNS was trained with more recent French texts. It may also be the case that SYSTRAN has used rule-based normalisation to regularise the output for la COVID.
Conclusion
This paper presents the SPECTRANS system description for the WMT 2022 Biomedical Shared Task. We participated in the English-to-French and French-to-English tasks. We only used the data provided by the organisers, but we also analysed the translations produced with mBART. We concur with previous research that training data is key. For the MT systems, we applied a variety of strategies, including toolkit comparison and fine-tuning, to compare the outcomes of different NMT systems in biomedical translation.
Our contribution mostly lies in the textometric analysis of the output. This allowed us to raise the issue of the variability observed for the gender of COVID in French or for technical terms like "keloids".
Figure 1: Baseline trained with the UFAL data set (FR⇔EN).
Figure 2: Fine-tuning with the Medline and Khresmoi data set (FR⇔EN).
Figure 3: Fine-tuning with the UFAL data set (FR⇔EN).
Figure 4: Comparison of the occurrences of "must" and "should" in the SYSTRAN and JoeyNMT translations.
Figure 5: Comparison of Vocabulary Growth Curves in the SYSTRAN PNS and mBART translations.
Figure 6: Comparison of Specificity Vocabulary Growth Curves in the SYSTRAN PNS and mBART translations.
Table 2: Characteristic elements of the SYSTRAN translation (run2) and the JoeyNMT translation (run1).
https://www.federationdesdiabetiques.org. Diabetes terminology proved to be not so useful for the actual test set.
We used mBART through the HuggingFace API (Wolf et al., 2020): https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt
Acknowledgements
The SPECTRANS project is funded under the 2020 émergence research project and this publication has emanated from research supported in part by a 2021 research equipment grant from the Scientific Platforms and Equipment Committee (PAPTAN project), both under the ANR grant (ANR-18-IDEX-0001, Financement IdEx Université de Paris). We gratefully acknowledge support from the CNRS/TGIR HUMA-NUM and IN2P3 Computing Center (Lyon, France) for providing some of the computing and data-processing resources needed for this work.
Nicolas Ballier, Dahn Cho, Bilal Faye, Zong-You Ke, Hanna Martikainen, Mojca Pecman, Jean-Baptiste Yunès, Guillaume Wisniewski, Lichao Zhu, and Maria Zimina-Poirot. 2021. The SPECTRANS System Description for the WMT21 Terminology Task. In Proceedings of the Sixth Conference on Machine Translation (WMT21), pages 815-820, Punta Cana, Dominican Republic. ACL.
Nicole Heßler, Miriam Rottmann, and Andreas Ziegler. 2020. Empirical analysis of the text structure of original research articles in medical journals. PLOS ONE, 15(10):1-10.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.
Julia Kreutzer, Jasmijn Bastings, and Stefan Riezler. 2019. Joey NMT: A minimalist NMT toolkit for novices. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 109-114, Hong Kong, China. Association for Computational Linguistics.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75, Melbourne, Australia. Association for Computational Linguistics.
Ludovic Lebart, André Salem, and Lisette Berry. 1997. Exploring textual data, volume 4. Kluwer Academic.
Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2021. Multilingual translation from denoising pre-training. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3450-3466, Online. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45. |
171,835,271 | Prédiction automatique de fonctions pragmatiques dans les reformulations | La reformulation participe à la structuration du discours, notamment dans le cas des dialogues, et contribue également à la dynamique du discours. Reformuler est un acte significatif qui poursuit des objectifs précis. L'objectif de notre travail est de prédire automatiquement la raison pour laquelle un locuteur effectue une reformulation. Nous utilisons une classification de onze fonctions pragmatiques inspirées des travaux existants et des données analysées. Les données de référence sont issues d'annotations manuelles et consensuelles des reformulations spontanées formées autour de trois marqueurs (c'est-à-dire, je veux dire, disons). Les données proviennent d'un corpus oral et d'un corpus de discussions sur les forums de santé. Nous exploitons des algorithmes de catégorisation supervisée et un ensemble de plusieurs descripteurs (syntaxiques, formels, sémantiques et discursifs) pour prédire les catégories de reformulation. La distribution des énoncés et phrases selon les catégories n'est pas homogène. Les expériences sont positionnées à deux niveaux : générique et spécifique. Nos résultats indiquent qu'il est plus facile de prédire les types de fonctions au niveau générique (la moyenne des F-mesures est autour de 0,80), qu'au niveau des catégories individuelles (la moyenne des F-mesures est autour de 0,40). L'influence de différents paramètres est étudiée.ABSTRACTAutomatic prediction of pragmatic functions in reformulations.Reformulations participate in structuring of discourse, especially in dialogues, and also contributes to the dynamics of the discourse. Reformulation is a significant act which has to satisfy precise objectives. The purpose of our work is to automatically predict the reason for which a speaker performs a reformulation. We use a classification with eleven pragmatic functions inspired by the existing work and by the data analyzed. The reference data are built through manual and consensual annotations of spontaneous reformulations introduced by three markers (c'est-à-dire, je veux dire, disons). The data are provided by spoken corpora and a corpus with forum discussions on health issues. We exploit supervised categorization algorithms and a set with several descriptors (syntactic, formal, semantic and discursive) for the prediction of the reformulation categories. The distribution of utterances and sentences is not homogeneous across categories. The experiments are positioned at two levels : general and specific. Our results indicate that it is easier to predict the types of functions at the general level (the average F-measure is around 0.80), than at the level of individual categories (the average F-measure is around 0.40). We study the influence of various parameters. MOTS-CLÉS : Reformulation, apprentissage automatique, paraphrase, classification, fonction pragmatique. | [
9842595,
10986188,
17652653
] | Prédiction automatique de fonctions pragmatiques dans les reformulations
Natalia Grabar natalia.grabar@univ-lille3.fr
UMR 8163 STL
CNRS
Université Lille 3
59653Villeneuve d'AscqFrance (
Iris Eshkol-Taravella
UMR 7270 LLL
CNRS
Université d'Orléans
45100OrléansFrance
Prédiction automatique de fonctions pragmatiques dans les reformulations
Reformulationmachine learningparaphraseclassificationpragmatic function Actes de la conférence conjointe JEP-TALN-RECITAL 2016volume 2 : TALN 262
Introduction
La reformulation consiste à reprendre ou redire quelque chose qui a déjà été dit. Elle est effectuée à la demande de l'interlocuteur ou par la volonté du locuteur. Cette notion est centrale dans notre travail. Les cadres applicatifs potentiels, qui impliquent la reformulation, concernent par exemple la recherche et l'extraction d'information, où il est nécessaire de détecter les segments équivalents afin d'augmenter le rappel, ou la traduction automatique, où il est nécessaire d'éviter des répétitions. L'objectif de notre travail est d'analyser les segments reformulés et surtout de prédire automatiquement la fonction pragmatique associée à chaque reformulation : étudier les raisons qui poussent les locuteurs à effectuer ces reformulations (donner une précision, définir, expliquer...). Une des utilités de ce travail consiste à travailler sur le phénomène de la reformulation et de la fonction pragmatique, qui sont des notions assez complexes à cerner et à décrire (section 1.2). Notre hypothèse est que le contenu des segments reformulés fournit les indices, qu'ils soient non linguistiques (e.g. taille des segments) ou linguistiques (e.g. lexicaux, syntaxiques, sémantiques, etc.), pour la prédiction des fonctions pragmatiques. Le coeur de notre travail et les expriences sont positionnés à deux niveaux : -à un niveau général, selon le type de transformations linguistiques associées aux reformulations : ajout, suppression ou encore volume constant d'information ; -à un niveau spécifique, exploitant les fonctions pragmatiques précises. Il s'agit de catégories comme la définition, l'explication, le résultat ou la précision, décrites dans la section 1.2. Le travail présenté concerne la reformulation spontanée dans le discours oral et dans les discussions sur le web. La reformulation est introduite par trois marqueurs formés sur le verbe dire (c'est-à-dire, je veux dire, disons) dans la structure S1 marqueur S2. 1.1 Travaux de l'état de l'art Travaux en linguistique de textes écrits. Dans la langue écrite, la reformulation est liée à plusieurs notions, plus ou moins proches : -Paraphrase. La reformulation peut être vue comme la variante paraphrastique d'un segment linguistique dans laquelle des modifications formelles sont opérées (Neveu, 2004). Dans ce cas, la paraphrase apparaît comme le résultat d'une reformulation. La paraphrase est étudiée de différents points de vue : dans sa situation d'énonciation (Culioli, 1976;Flottum, 1995;Fuchs, 1994;Martin, 1976;Vezin, 1976) ; à travers les transformations linguistiques que subissent les segments paraphrasés à différents niveaux (Melčuk, 1988;Vila et al., 2011;Bhagat & Hovy, 2013) ; en fonction de la taille d'entités paraphrasées (Flottum, 1995;Fujita, 2010;Bouamor, 2012). -Glose. Ce terme, issu de la tradition philologique, désigne un commentaire sur un mot. Il impose au premier segment d'être une unité lexicale, alors que le deuxième segment correspond à la glose, souvent écrite en langage formel ou semi-formel (Authier-Revuz, 1995;Steuckardt, 2005). -Reprise. Un terme plus générique, reprise, correspond à des pures et simples répétitions d'un segment textuel aux différents degrés de ses reformulations (Vion, 2006). La proximité sémantique entre les segments refomulés apparaît être un critère caractéristique de la reformulation.
-Description. Dans des études littéraires, la reformulation est liée avec la notion de description (Magri-Mourgues, 2013). Trois types sont alors distingués : -reformulation par addition : lorsque le même référent a plusieurs dénominations. Dans les exemples cités, l'objet est décrit de manière assez extensive en expliquant sa fonction, alors que les reprises anaphoriques assurent la cohésion et la cohérence au texte ; -reformulation par substitution (ou reformulation corrective) : la seconde occurrence tend à effacer la première, dans un mouvement de correction. Dans les exemples présentés, les deux dénominations successives ({charmilles; dentelles}, {la neige; la glace}) sont liées par une relation d'analogie, qui, activant certains sèmes communs, rendent la juxtaposition cohérente ; -reformulation par superposition : la reformulation se limite au cadre phrastique sans établir de hiérarchie entre les unités successives. Il s'agit de la traduction inter-linguale : {djahels; ignorants}, {cuevas; habitations troglodytes}, {chanteuses; oualems}, {danseuses; ghavasies}. -Élaboration. Le projet Annodis, consacré à la constitution du corpus de référence annoté en structures discursives, distingue une relation rhétorique appelée élaboration qui semble se rapprocher du phénomène de reformulation (Péry-Woodley et al., 2009). Travaux en linguistique du discours parlé. La langue orale diffère cependant de l'écrit car on assiste à son élaboration à la différence du produit final que l'on peut trouver dans l'écrit (Blanche-Benveniste et al., 1991). En effet, l'écrit concrétise une version définitive du discours quand l'oral le présente dans son processus avec ses hésitations, ses faux départs, ses ratures et ses reformulations.
De nombreux travaux s'intéressent à la reformulation dans l'oral car elle en constitue une des caractéristiques fondamentales. Plusieurs points de vue sont possibles, mais la reformulation y est fortement associée aux disfluences et auto-réparations : -Le terme de reformulation est utilisé dans le cadre des analyses d'interactions verbales (Gülich & Kotschi, 1987;Roulet, 1987). Deux types de reformulations sont distingués : les reformulations paraphrastiques, qui instaurent une équivalence entre les segments reformulés, et les reformulations non paraphrastiques, qui opèrent un changement de perspective énonciative (Rossari, 1990). Comme tout acte de reformulation dans l'oral n'introduit pas toujours une paraphrase, deux catégories de marqueurs sont distinguées : les marqueurs de reformulation paraphrastique (MRP), comme c'est-à-dire, je veux dire, je m'explique, en d'autres termes etc. qui ont pour tâche principale d'établir une relation paraphrastique, et les marqueurs de reformulation non-paraphrastique, comme en somme, en tout cas, de toute façon, enfin, etc., qui montrent ce rôle dans des contextes précis. En outre, les propriétés sémantiques des MRP permettent d'instaurer une relation de paraphrase même entre des segments qui n'entretiennent aucune équivalence sémantique constatable. -Les études syntaxiques sur le langage oral ont rapproché le phénomène de reformulation avec celui d'énumération ou de répétition (Levelt, 1983;Blanche-Benveniste et al., 1991;Benzitoun, 2004). Dans tous les cas, il s'agit d'un même procédé syntaxique : les éléments répétés, reformulés ou énumérés ont une même place syntaxique dans l'énoncé sur un axe paradigmatique. La distinction est pourtant possible grâce à l'utilisation des indices formels, tels que les marqueurs d'énumération et de reformulation, ou bien des accords. -La reformulation peut aussi être associée au procédé de correction ou de précision, comme c'est le cas dans l'annotation multi-niveau de l'oral dans le corpus Treebank Rhapsodie (Kahane & Pietrandrea, 2012). Les auteurs utilisent la notion de reformulation pour présenter la typologie des entassements à l'oral qui, suite à d'autres travaux (Blanche-Benveniste et al., 1991;Blanche-Benveniste, 1995;Bilger, 1999;Guénot, 2006), peuvent être utilisés pour établir des relations entre dénotations, créer de nouvelles dénotations, reformuler, exemplifier, préciser ou encore intensifier. Les auteurs introduisent également la notion de reformulation dénotative.
Travaux de TAL. Dans les travaux de TAL, la reformulation dans le corpus écrits est très souvent associée à la paraphrase, qui est vue comme le résultat de la reformulation.
Fonctions pragmatiques de la reformulation
Reformuler est un acte qui est toujours significatif et qui poursuit des objectifs précis. C'est ce que nous appelons la fonction pragmatique de la reformulation, à savoir le rôle de la reformulation spontanée que l'on peut observer dans l'oral ou dans les discussions sur le web. La reformulation met en relation deux segments : le segments reformulé S1 et le segment qui contient la refomulation S2. Dans notre étude, la reformulation est établie grâce à des marqueurs formés sur le verbe dire (c'est-à-dire, je veux dire, disons) au sein de la structure S1 marqueur S2.
Nous distinguons plusieurs fonctions pragmatiques entre S1 et S2, qui sont inspirée des typologies proposées dans la littérature (Gülich & Kotschi, 1987;Hölker, 1988;Beeching, 2007;Kanaan, 2011) et motivées par nos données. Des exemples provenant de nos corpus sont également présentés : -Définition : un terme dans S1 est défini dans S2. Il s'agit souvent de termes techniques spécialisés. La définition est neutre et précise, dont l'objectif est de faire comprendre une notion technique : la TVA je n'ai jamais bien compris ce que c'était eh ben c'est-à-dire c'est une taxe c'est une taxe à la valeur qu'on ajoute et la taxe est toujours comptée sur la valeur qui est ajoutée autant de fois comme la marchandise est ah bon , est transaxée oui et c'est une taxe qui est ajoutée à la transaction encore une taxe sur la va-sur la marge bénéficiaire [eslo1-011] des rumeurs euh disons qu'en fait il y a des événements qui se passent et après on s-on s'en fait tu vois euh je te dis un truc tu le répètes à quelqu'un au bout de trois personnes ça a déjà pas forcément même voire pas du tout le même sens [eslo2-12] avec une ETO c'est à dire une echographie tansoesophagienne (une écho ou le palpeur est introduit dans l'estomac) [forum] -Explication : le locuteur explique quelque chose à son interlocuteur (S2 explique S1). Pour vérifier la fonction, on peut remplacer le marqueur par parce que. L'explication est similaire à la définition tout en étant moins formelle. De plus, elle porte sur des situations. Cette relation est proche du lien de cause à effet dans les annotations Annodis (Péry-Woodley et al., 2009).
ce garçon je sais bien qu'il ne peut pas se marier avec euh c'est-à-dire qu'il aurait pu avtrouver une jeune fille euh qui fasse sa licence euh dans un milieu comme le nôtre [eslo1-010]
on a apprécié un peu plus le voyage c'est-à-dire qu'on ouais hm hm hm on a rencontré voilà des des des des vrais gens on va dire qui sont pas intéressés que par notre pognon qui ouais ouais ouais pour le coup eux nous ont invité donc nous ont ouvert leur porte et hm hm hm hm c'est même eux qui nous ont euh donné des choses quoi et on a bien pu discuter [eslo2-10] j'ai entendu parler (sur le net) des bêtabloquants, or il parait que céest des médicaments a vie et pour la vie, c'est-à-dire ils ne sont efficaces que lorsquéils sont pris tous les jours [forum] -Exemplification : Le locuteur donne des exemples dans S2 d'une entité mentionnée dans S1. Ainsi, S2 peut comporter des entités nommées ou des énumérations : des morceaux nobles ce qu'ils appellent quoi c'est à dire les rosbifs les biftecks et tout ça [eslo1-001] y a un peu de règles c'est-à-dire que euh oui en règle générale effectivement on regarde pas la télé le soir quand on a classe le lendemain surtout quand on est en sixième on est enc-on a encore besoin de dormir [eslo2-5] -2 heures plus tard, elle a eu tous les symptomes d'un avc c'est à dire perte de parole, hémiplégie, fièvre...
[forum] -Justification : le locuteur justifie quelque chose (des événements, des actes) à son interlocuteur.
Dans ce cas, S2 propose une justification de S1 : la langue française est plus difficile disons on peut pas dire la plus difficile des langues européennes mais c'est difficile [eslo1-007] ça c'est c'est un peu connu c'est à dire c'est alors j'ai pas à le dire pas vraiment [eslo2-21] -Je voudrais mieux comprendre pour mieux pouvoir aider ....merci pour votre aide, bien évidement vous n'êtes absolument pas obligés de me répondre si vous ne le souhaitez pas, mais disons que votre vécu m'apporterai un plus [forum] -Précision : c'est une fonction assez large. Elle marque la volonté du locuteur d'ajouter une information dans le but d'éclaircir S1. Elle ressemble à la relation élaboration dans les annotations Annodis (Péry-Woodley et al., 2009) et donne plus de détails sur l'événement décrit dans S1 : je lis oui l'Equipe depuis l'âge de dix-sept ou dix-huit ans c'est-à-dire avant c'était l'Auto mais j'ai toujours ah oui toujours toujours toujours toujours lu l'Equipe [eslo1-045] les aînés partent eux aussi de manière moins systématique c'est-à-dire que les aînés partent pas forcément tous les ans mais souvent [eslo2-5] -La trinitrine m'a été prescrite vendredi dernier, c'est à dire depuis une semaine [forum] -Dénomination : il s'agit de l'attribution d'un nom à une entité unique mentionnée dans S1, ce qui la différencie de exemplification où l'existence d'autres entités du même type est présupposée : en particulier c'est l'endroit où en somme ça s'est produit le plus au début c'est-à-dire à Nanterre [eslo1-058] depuis le début c'est à dire que j'ai fait euh [...] depuis soixante-dix-sept [eslo2-16] depuis qu'on m'avait changer de traitement c'est a dire le nebilox [forum] -Résultat : le locuteur résume ou bien indique la conséquence de S1. Le marqueur marque une conclusion ou une conséquence par rapport à S1, qui peut être implicite ou explicite. Le marqueur peut être remplacé par exemple par pour en somme, en bref, en conclusion, en résumé, donc.
Segments en relation de reformulation
Nous disposons de 4 120 énoncés ou phrases comportant les trois marqueurs de reformulation étudiés. Ces énoncés et phrases proviennent des corpus analysés : ESLO1, ESLO2 et forum. Dans le corpus forum, une phrase correspond à une séquence linguistique séparée par la ponctuation forte. Dans les corpus oraux, la segmentation en énoncés est faite en fonction des groupes de souffle, des tours de parole mais prend également en compte les chevauchements où les locuteurs parlent en même temps. Ainsi, en cas de chevauchement, l'énoncé du locuteur, qui continue de parler après ce chevauchement, continue. Ces phrases et énoncés sont annotés par deux annotateurs indépendants et sont soumis ensuite à des séances de consensus. L'accord inter-annotateur est calculé avec le kappa de Cohen (Cohen, 1960). Dans le tableau 1, nous indiquons les accords constatés sur la décision quant à la présence de reformulations, et leur interprétation standard (Landis & Koch, 1977).
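For readers unfamiliar with the Cohen's kappa statistic used above, the short sketch below computes it on two invented annotation sequences; the labels and resulting value are illustrative only, not the actual ESLO or forum annotations.

```python
# Illustrative computation of Cohen's kappa (Cohen, 1960) on invented labels;
# the real study compares two independent annotators on 4 120 utterances.
from collections import Counter

annotator_a = ["reformulation", "none", "reformulation", "none", "none", "reformulation"]
annotator_b = ["reformulation", "none", "none", "none", "none", "reformulation"]

n = len(annotator_a)
# Observed agreement p_o: proportion of items with identical labels.
p_o = sum(a == b for a, b in zip(annotator_a, annotator_b)) / n

# Expected chance agreement p_e, from the marginal label distributions.
counts_a, counts_b = Counter(annotator_a), Counter(annotator_b)
labels = set(annotator_a) | set(annotator_b)
p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))
```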
Ressources linguistiques
Nous exploitons plusieurs types de ressources : (1) une liste de mots vides ;
(2) les marqueurs de disfluence ;
(3) les clusters distributionnels de mots générés à partir de nos corpus ; (4) un lexique d'hyponymes ; (5) les marqueurs lexicaux qui peuvent être associés aux fonctions pragmatiques. (n=69) Marqueurs lexicaux. Nous utilisons un petit ensemble de marqueurs (n=17), qui sont associés aux fonctions pragmatiques. Trois types de marqueurs sont distingués : (1) les marqueurs introductoires (e.g. voilà, c'est, ce sont), qui peuvent marquer les définitions ;
Mots vides. Les mots vides
(2) les marqueurs de cause (e.g. c'est pourquoi, parce que, car), qui peuvent apparaître avec résultat ; (3) les marqueurs d'exemplification (e.g. exemple, comme, entre autre), qui peuvent apparaître avec la fonction exemplification. Catégories. Les catégories correspondent aux fonctions pragmatiques du tableau 2. Comme trois catégories sont très peu peuplées (correction linguistique, correction référentielle, opposition), nous faisons également des expériences avec les huit catégories les plus fréquentes sur l'ensemble de données. De plus, une autre expérience est positionnée à un niveau plus général, selon le volume d'information fournie lors de la reformulation et mesuré par la taille des segments : -suppression d'information dans S2 par rapport à S1 : résultat, dénomination ; -ajout d'information par rapport à S1 : définition, exemplification, explication, justification, précision ; -volume d'information comparable : paraphrase, correction linguistique, correction référentielle, opposition. Cette typologie ressemble à celle proposée dans un travail existant (Magri-Mourgues, 2013), mais nous distinguons en plus la suppression d'information dans S2. Cela nous permet d'effectuer des expériences à deux niveaux : niveau générique avec trois catégories (ajout, suppression et stabilité du volume d'information), et niveau spécifique avec huit catégories.
Descripteurs. Nous utilisons plusieurs descripteurs pour cerner la nature des fonctions pragmatiques. Les valeurs de tous les descripteurs sont transformées en valeurs numériques : -la longueur des segments S1 et S2, en mots et en caractères, -la différence de longueur des segments S1 et S2, en mots et en caractères, -l'équivalence entre les catégories syntaxiques des deux segments, -si la catégorie syntaxique des deux segments est un groupe nominal ou une proposition, -la présence des segments ou de leurs mots dans les mêmes clusters : tous les mots, tous les mots sauf les mots identiques, tous les mots sauf les mots vides, tous les mots sauf les mots vides et identiques. Les nombres et les pourcentages de mots partagés sont calculés. Nous utilisons plusieurs ensembles de clusters : ils sont calculés sur différents corpus (ESLO1, ESLO2, ESLO, forum et tous les corpus pris ensemble (total)) et avec des nombres différents de clusters à générer (nous retenons 300 et 600 clusters dans l'analyse des résultats), -la présence de marqueurs de disfluence dans les segments, -la présence de nombres dans les segments, -la présence de marqueurs lexicaux spécifiques de exemplifications, de cause et de structures introductoires, -la présence des segments ou de leurs mots dans les couples reliés par la relation d'hyperonymie. Comme nous voyons, ces descripteurs se positionnent à différents niveaux : formel, syntaxique, sémantique et discursif. Ces descripteurs sont calculés automatiquement, en exploitant ou non des ressources linguistiques.
Évaluation. L'évaluation est effectuée avec des mesures classiques en TAL : précision, rappel et F-mesure. Nous présentons les résultats de cette évaluation telle que calculés par la plateforme Weka. Par ailleurs, nous effectuons une validation croisée à 10 plis : les données sont partagées en 10 ensembles et, à chaque itération, un ensemble sert à effectuer l'entraînement alors que les autres ensembles servent pour le test. L'évaluation finale correspond à la moyenne des évaluations de chaque itération.
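To make the experimental setup above concrete, the sketch below mimics its shape: one numeric descriptor vector per (S1, S2) pair, a pragmatic-function label, and precision/recall/F-measure averaged over 10 cross-validation folds. It uses scikit-learn's RandomForest as a stand-in for the Weka implementations actually used in the paper, keeps only a few of the length descriptors, and the example pairs are toy data loosely based on examples quoted earlier; none of this reproduces the paper's real feature matrix.

```python
# Toy sketch of the classification setup: numeric descriptors for each (S1, S2)
# pair, a pragmatic-function label, and 10-fold cross-validation.
# scikit-learn stands in here for the Weka algorithms cited in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

def describe(s1: str, s2: str) -> list:
    """A few of the formal descriptors listed above: lengths of S1 and S2 in
    words and characters, and their differences (other descriptors omitted)."""
    w1, w2 = len(s1.split()), len(s2.split())
    c1, c2 = len(s1), len(s2)
    return [w1, w2, c1, c2, w2 - w1, c2 - c1]

# Invented toy pairs; the real study uses 594 annotated reformulations.
pairs = [
    ("la TVA", "une taxe sur la valeur ajoutée", "definition"),
    ("des morceaux nobles", "les rosbifs les biftecks", "exemplification"),
    ("toujours les memes", "tous ceux qu'on connait", "paraphrase"),
    ("rue Lazare Carnot", "au sud de la Source", "precision"),
] * 10  # repeat so that each fold of the 10-fold split contains every class

X = np.array([describe(s1, s2) for s1, s2, _ in pairs])
y = np.array([label for _, _, label in pairs])

scores = cross_validate(
    RandomForestClassifier(random_state=0), X, y, cv=10,
    scoring=["precision_macro", "recall_macro", "f1_macro"],
)
print({k: round(v.mean(), 2) for k, v in scores.items() if k.startswith("test_")})
```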
Résultats
[Table 3 residue: only the header row survives extraction — pragmatic functions (Fonct.) in rows, and précision (P), rappel (R), F-mesure (F) for each of J48, REPTree, RandomForest, SMO and DecisionTable in columns; the cell values are not recoverable here (the caption of Table 3 appears later in the text).]
- la figure 2 indique la moyenne des performances (précision, rappel, F-mesure) obtenues lorsque différents descripteurs sont supprimés de l'ensemble de descripteurs. Globalement, les performances restent proches de celles obtenues avec l'ensemble total des descripteurs (l'expérience avec tous les descripteurs all apparaît en première position sur la courbe) : 0,4 avec huit catégories et 0,8 avec trois catégories. Notons cependant que la suppression de certains descripteurs (équivalence des catégories syntaxiques équiv, informations sur les clusters (clu-all, clu-in, clu-vides, no-clu), caractères majuscules maj) est bénéfique pour la prédiction des trois catégories (figure 3(b)), alors qu'avec huit catégories l'ensemble total des descripteurs est toujours plus efficace que lorsque la suppression de certains d'entre eux est effectuée (figure 3(a)). La suppression des informations sur la longueur de S1 et S2 (long) conduit toujours à une détérioration importante ;
- avec l'ensemble total de descripteurs, le descripteur le plus efficace est celui qui indique la différence de longueur en caractères entre S1 et S2. Avec ce descripteur utilisé seul, la F-mesure globale est 0,28 et 0,73, avec huit et trois classes respectivement, en gardant les paramètres du tableau 3. D'autres descripteurs liés à la longueur de S1 et S2 sont aussi importants. Lorsque ces descripteurs sont supprimés, c'est la présence de marqueurs de disfluence qui est retenue comme le meilleur descripteur ;
- la ressource distributionnelle ne montre pas d'influence entre le corpus total et le corpus forum. En revanche, avec ESLO2, il est préférable d'avoir les ressources distributionnelles générées sur le même corpus ou bien sur les deux corpus ESLO : la nature et le contenu du corpus oral ESLO2 restent sans doute spécifiques ;
- la figure 3 indique la reconnaissance des fonctions pragmatiques dans trois corpus (ESLO, forum et total).
Cette figure reprend en partie les données du tableau 3 pour le corpus total. Avec huit catégories, nous voyons que la fonction résultat est la mieux reconnue dans tous les corpus. Une analyse des matrices de confusion entre les huit catégories indique que certaines fonctions sont très proches et souvent confondues. La fonction précision est confondue souvent avec d'autres fonctions. L'explication provient de la nature de la fonction même. Précision semble être une catégorie assez large qui peut contenir explication, définition, exemplification, dénomination et demande donc des contraintes plus formelles à spécifier. Une autre raison est la fréquence très importante de cette fonction dans le corpus annoté par rapport aux autres fonctions, ce qui peut favoriser sa reconnaissance automatique. Notons aussi que la catégorie dénomination cause de très nombreuses confusions : la plupart de ses instances sont catégorisées ailleurs.
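The confusion-matrix analysis mentioned above can be reproduced with standard tooling. The sketch below uses invented gold and predicted labels purely to show the shape of such an analysis; it does not reflect the system's actual outputs.

```python
# Illustrative only: inspecting a confusion matrix between gold and predicted
# pragmatic functions (labels here are invented, not the reported results).
from sklearn.metrics import confusion_matrix

labels = ["definition", "exemplification", "precision", "resultat"]
gold = ["precision", "definition", "resultat", "precision", "exemplification", "precision"]
pred = ["precision", "precision", "resultat", "precision", "precision", "definition"]

cm = confusion_matrix(gold, pred, labels=labels)
for function, row in zip(labels, cm):
    print(f"{function:>15}: {row}")
```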
Conclusion et Perspectives
Nous proposons d'étudier les reformulations dans les corpus oraux et les corpus contenant les discussions du web. Nous nous concentrons sur les fonctions pragmatiques des reformulations : la raison pour laquelle un locuteur effectue une reformulation. Nous avons constitué une classification avec onze fonctions (e.g. définition, exemplification, résultat, paraphrase, correction linguistique). Notre objectif est d'étudier et de prédire ces fonctions, grâce à l'analyse du contenu des segments S1 et S2 mis en relation par trois marqueurs (c'est-à-dire, je veux dire, disons). L'exploitation des données de référence consensuelles et d'algorithmes d'apprentissage supervisé permet d'effectuer des expériences à deux niveaux : (1) au niveau générique, avec les catégories selon que l'information est ajoutée, supprimée ou constante, nous obtenons des performances autour de 0,80 ;
(2) au niveau spécifique des catégories individuelles, nous obtenons des performances autour de 0,40. Quelques descripteurs (ceux liés à la longueur des segments et aux disfluences) jouent un rôle important.
Prédire et apprendre l'information de nature pragmatique est extrêmement difficile. Ce travail est donc exploratoire et permet de constater les différents points qui doivent être pris en compte dans les travaux qui vont suivre. Du point de vue linguistique, il serait important de reconsidérer certaines fonctions : correction linguistique, opposition, précision. Cette dernière devrait être affinée avec plus de critères formels.
Dans la suite de ce travail, nous décrivons d'abord les travaux de l'état de l'art (section 1.1) et cernons les fonctions pragmatiques (section 1.2). Nous présentons ensuite les données traitées (section 2) et les méthodes proposées (section 3). Finalement, nous présentons et discutons les résultats (section 4), et terminons avec les perspectives à ce travail (section 5).
Fig. 1 - Schéma général de la méthode.
La figure 1 présente le schéma général de la méthode. Les étapes principales sont : (1) le prétraitement et la création des données de référence ; (2) la catégorisation supervisée des segments pour prédire leur fonction pragmatique ; et (3) l'évaluation. Nous effectuons la catégorisation supervisée en exploitant la plateforme Weka (Manning & Schütze, 1999) et plusieurs des algorithmes dans leur configuration standard : J48 et REPTree (Quinlan, 1993), RandomForest (Breiman, 2001), SMO (Platt, 1998), DecisionTable (Kohavi, 1995), OneR (Manning & Schütze, 1999). Nous décrivons les données de référence, les catégories et les descripteurs utilisés, et les modalités de l'évaluation. Données de référence. Les données de référence sont obtenues suite à des annotations manuelles et consensuelles des phrases et énoncés. Le tableau 2 présente ces données, selon les fonctions pragmatiques et les corpus. Deux types de corpus sont traités : les corpus oraux ESLO et le corpus de discussions sur le web forum. Nous pouvons voir qu'entre ces deux types de corpus, les reformulations sont distribuées de manière homogène, tandis que nous y observons une surreprésentation de plusieurs fonctions, comme précision, résultat, définition, explication et exemplification.
Fig. 2 - Élimination de descripteurs différents à chaque expérience.
Fig. 3 - Performance de reconnaissance des fonctions pragmatiques dans chaque corpus.
Cette relation est proche du lien de cause à effet dans le corpus Annodis(Péry-Woodley et al., 2009) : avec l'accent un peu de travers je veux dire l'accent [eslo1-002] quand je rentrais le soir à la maison que mes enfants me demand-me faisaient une petite demande ou une petite crise je me disais non non mais attendait là on va on va recadrer tout ça euh non enfin je veux dire on a un un super décalage [eslo2-2] -A ma sortie, j'ai retrouvé pratiquement l'usage de ma jambe, de mon bras et ma main gauche, disons que je pouvais être autonome [forum] Correction référentielle : S2 apporte une correction de lieu, de temps, etc. Il s'agit également d'une correction faite à l'initiative du locuteur : jusqu'à seize ans oui oui bien bon c'est-à-dire euh dans le primaire privé [eslo1-010] j'habitais rue Lazare Carnot c'est à dire donc au sud de la Source [eslo2-16] -Paraphrase : S1 répète l'information de S2, mais d'une autre manière, et on ne voit pas de différence entre les deux. Cette fonction concerne aussi les répétitions identiques : Comme nous voyons, les emplois de marqueurs de reformulation observés dans les corpus étudiés dépassent largement le phénomène de la paraphrase, qui présuppose une équivalence sémantique entre les expressions paraphrasées, et couvrent un ensemble de situations très large. Données traitées et exploitées Nous travaillons avec plusieurs types de données : (1) deux types de corpus (deux corpus ESLO et les forums de discussions médicales (section 2.1)), (2) les segments en relation de reformulation (section 2.2) obtenus suite à l'annotation manuelle consensuelle des corpus, et(3)plusieurs ressources linguistiques (section 2.3). ESLO. Les corpus ESLO (Enquêtes Sociolinguistiques à Orléans) (Eshkol-Taravella et al., 2012) : ESLO1 et ESLO2 sont des corpus oraux de la langue française. ESLO1, la première enquête sociolinguistique à Orléans, a été réalisée en 1968-1971. Ce corpus comprend 300 heures de parole (4 500 000 mots environ) et inclut une gamme d'enregistrements variés. En prenant en compte l'expérience d'ESLO1 et l'évolution des cadres théoriques et méthodologiques de la constitution et de l'exploitation de grands corpus oraux à visée variationniste, une nouvelle enquête ESLO2 a été entamée en 2008. À terme, ESLO2 comprendra plus de 350 heures d'enregistrements afin de former avec ESLO1 un corpus de plus de 700 heures et d'atteindre les dix millions de mots. Les corpus ESLO1 et ESLO2 sont accessibles en ligne 1 . Forum de discussion. Le corpus forum est collecté sur le forum de discussions Hypertension de Doctissimo 2 . Ce corpus fournit 12 588 fils de discussion contenant 67 652 messages et 6 788 361 occurrences de mots. Les messages de ce corpus sont écrits par des internautes, qui ont besoin de s'exprimer sur leurs maladies. Il s'agit des écrits non normés, qui peuvent contenir des erreurs d'orthographe et de syntaxe, et d'autres éléments linguistiques non conventionnels (abréviations spécifiques, émoticônes...).-Correction linguistique : S2 apporte une correction linguistique (article, nombre...) de S1. En
général, il s'agit d'une correction faite à l'initiative du locuteur :
-des artisans euh hm hm hm hm hm alors c'est-à-dire artisans [eslo2-13]
--quelque chose de potable disons quelque chose euh de correct [eslo1-007]
-toujours les mêmes c'est-à-dire euh tous ceux qu'on connait [eslo2-4]
-Il n'a acune maladie (je veux dire qu'il ne prend aucun médicament [forum]
-Opposition : S1 reprend l'information de S2 sous forme négative :
-elle était incapable de rien faire elle au point de vue vendeuse c'est-à-dire elle elle est pas
mauvaise euh elle est agréable au point de vue clientèle elle a été incapable de passer son
certificat d'études [eslo1-045]
2.1 Corpus
TABLE 1 - Accord inter-annotateur sur la présence de reformulations dans les énoncés et phrases.
Corpus | Accord | Interprétation
ESLO1  | 0,617  | Accord fort
ESLO2  | 0,526  | Accord modéré
Forum  | 0,784  | Accord fort

Il s'agit d'un accord fort et modéré. Lorsque l'accord inter-annotateur est calculé au niveau des fonctions pragmatiques, il est extrêmement faible : 0,127 sur ESLO1 et 0,0211 sur ESLO2.
Parmi les 4 120 emplois de marqueurs, 594 occurrences introduisent les reformulations. Dans le tableau 2, nous indiquons la distribution de ces emplois selon les corpus et les fonctions pragmatiques. Le corpus ESLO est composé des deux corpus oraux ESLO1 et ESLO2 ; le corpus total est composé des corpus ESLO et forum.

TABLE 2 - Distribution de phrases et énoncés selon les fonctions pragmatiques et les corpus. ESLO=ESLO1+ESLO2 ; total=ESLO+forum. Les pourcentages sont indiqués entre les parenthèses.
Fonction | ESLO1   | ESLO2   | ESLO     | forum   | total
cor-ling | -       | 2 (0)   | 2 (0)    | -       | 2 (0)
cor-ref  | 5 (3)   | 1 (0)   | 6 (2)    | -       | 6 (1)
def      | 16 (10) | 14 (8)  | 30 (9)   | 41 (16) | 71 (12)
denom    | 2 (1)   | 3 (1)   | 5 (1)    | 24 (9)  | 29 (5)
exempl   | 29 (18) | 15 (9)  | 44 (13)  | 21 (8)  | 65 (11)
explic   | 26 (16) | 16 (9)  | 42 (13)  | 25 (10) | 67 (11)
justif   | 1 (0)   | 8 (5)   | 9 (3)    | 8 (3)   | 17 (3)
oppo     | 2 (1)   | -       | 2 (0)    | -       | 2 (0)
para     | 14 (9)  | 18 (10) | 32 (10)  | 20 (8)  | 52 (9)
prec     | 47 (29) | 54 (31) | 101 (30) | 88 (34) | 189 (32)
res      | 19 (12) | 43 (25) | 62 (18)  | 32 (12) | 94 (16)
total    | 161     | 174     | 335      | 259     | 594

Les fonctions les plus rares sont correction linguistique et opposition, certainement parce qu'elles ne sont pas souvent marquées par les marqueurs étudiés. Leur pertinence peut être reconsidérée. D'autres fonctions ne sont pas distribuées de la même manière dans les corpus. Définition est très fréquente dans les discussions sur le web, qui traitent les questions médicales : la présence de termes médicaux et de leurs définitions y est importante. Exemplification et explication sont très utilisées dans ESLO1 et ESLO2, sans doute parce que l'intervieweur est d'origine anglaise. Justification a la même distribution dans ESLO2 et forum : les discussions sont plus libres dans ces deux corpus, alors que les conditions d'enregistrement dans ESLO1 sont plus formelles et, comme il a été remarqué ci-dessus, les intervieweurs ont été d'origine anglaise. Paraphrase est employée d'une manière plus ou moins comparable dans les trois corpus. Dénomination est peu présente dans le corpus oral, contrairement au forum. Dénommer un médicament, un traitement sont les cas observés dans le corpus du web. Ces remarques montrent que la nature des corpus et le contexte de leur production guident et influencent l'utilisation du processus de reformulation chez le locuteur.
correspondent surtout aux mots grammaticaux du français. Ils sont utilisés pour alléger les traitements et pour se concentrer sur les mots non grammaticaux.Clusters de mots. Les clusters distributionnels de mots sont générés à partir des corpus de notre travail : ESLO1, ESLO2, ESLO (la fusion de ESLO1 et ESLO2), forum et tous les corpus pris ensemble (total). Les corpus sont segmentés, la casse est réduite vers les minuscules, les mots vides sont éliminés. Les clusters sont générés en exploitant les algorithmes de clusterisation(Brown et al., 1992;Liang, 2005). Il s'agit d'un clustering hiérarchique agglomeratif basé sur l'information distributionnelle des mots. Au sein d'un cluster, les mots sont reliés sémantiquement car ils apparaissent dans des contextes similaires. Nous générons des ressources distributionnelles avec 200 à 600 clusters.Marqueurs de disfluence. Nous utilisons un ensemble de marqueurs de disfluence : allez, allons, alors,
là, enfin, euh, heu, bah, ben, hm, hum, hein, quoi, ah, oh, donc, bon, bè, eh.
Hyponymes. Un lexique d'hyponymes est extrait automatiquement à partir de la ressource Wiktionary 3
en français. La structure des articles de Wiktionary est exploitée pour extraire les libellés de l'entrée
et de ses hyperonymes. Le lexique contient 12 161 couples {hypéronyme; hyponyme}, comme par
exemple {lexique; dictionnaire}, {armée; légion}, {disque; CDROM}, {période; année}. Les mots,
qui se trouvent dans un même couple, ont un lien sémantique fort.
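The cluster and hyponym resources described above feed the semantic descriptors listed earlier (presence of S1/S2 words in the same distributional clusters, and presence of an S1/S2 word pair linked by hyperonymy). The sketch below shows one possible way such descriptors can be computed; the cluster assignments and lexicon entries are toy placeholders, not the resources actually generated from the ESLO and forum corpora or extracted from Wiktionary.

```python
# Hedged sketch of two semantic descriptors: overlap of S1/S2 words in the same
# distributional clusters, and presence of a {hyperonym; hyponym} pair across
# S1 and S2. All resource contents below are invented toy values.
word2cluster = {"taxe": 17, "tva": 17, "valeur": 42, "marchandise": 42}
hyponym_pairs = {("lexique", "dictionnaire"), ("période", "année"), ("disque", "cdrom")}

def semantic_descriptors(s1_words, s2_words):
    clusters1 = {word2cluster[w] for w in s1_words if w in word2cluster}
    clusters2 = {word2cluster[w] for w in s2_words if w in word2cluster}
    shared = len(clusters1 & clusters2)
    union = clusters1 | clusters2
    shared_ratio = shared / len(union) if union else 0.0
    has_hyper_pair = any((w1, w2) in hyponym_pairs or (w2, w1) in hyponym_pairs
                         for w1 in s1_words for w2 in s2_words)
    return {"shared_clusters": shared, "shared_ratio": shared_ratio,
            "hyperonym_pair": has_hyper_pair}

print(semantic_descriptors(["la", "tva"], ["une", "taxe", "sur", "la", "valeur"]))
```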
TABLE 3 - Performances de différents algorithmes dans la prédiction des fonctions pragmatiques : huit et trois catégories, l'ensemble de corpus, clusters générés sur l'ensemble de corpus.
Dans le tableau 3, nous indiquons les performances de différents algorithmes. Il s'agit de l'expérience pour la prédiction de huit (partie haute) et trois (partie basse) catégories, respectivement. Tous les corpus (ESLO et forum) sont traités ensemble, la ressource distributionnelle est également générée sur l'ensemble de corpus, avec 600 clusters, tous les descripteurs sont utilisés. Nous pouvons voir que RandomForest optimise la prédiction des catégories et ne se focalise pas sur certaines d'entre elles. Notons que ceci est aussi le cas avec J48, mais les moyennes obtenues avec J48 sont un peu moins bonnes. Avec RandomForest, nous avons une moyenne de 0,38 pour les huit catégories, et 0,76 pour les trois catégories. Nous continuons la présentation des résultats avec RandomForest et le même paramétrage que dans le tableau 3. Différents paramètres et descripteurs influencent les résultats :
précision et définition montrent une prédiction moins bonne mais également stable selon les corpus. paraphrase est assez bien reconnue dans forum, mais plus faiblement dans les autres corpus, alors que exemplification, explication et justification sont assez bien reconnus dans les corpus ESLO. Avec trois catégories, la catégorie plus est la mieux reconnue. Ces observations doivent être liées avec le volume de données de référence dans chaque corpus et catégorie : résultat, précision et, par conséquent plus, sont les catégories les plus peuplées.
[Figure residue: only axis and panel labels survive extraction — « performance moyenne » (P, R, F) plotted against the removed descriptors, panel (b) « Descripteurs supprimés, 3 catégories » ; the curves themselves are not recoverable.]
1. http://eslo.tge-adonis.fr/ 2. http://forum.doctissimo.fr/sante/hypertension-problemes-cardiaques/liste_sujet-1.htm
3. https://fr.wiktionary.org/wiki/Wiktionnaire
A survey of paraphrasing and textual entailment methods. Androutsopoulos I Malakasiotis P, Journal of Artificial Intelligence Research. 38ANDROUTSOPOULOS I. & MALAKASIOTIS P. (2010). A survey of paraphrasing and textual entailment methods. Journal of Artificial Intelligence Research, 38, 135-187.
Ces mots qui ne vont pas de soi : boucles réflexives et non-coïncidences du dire. Authier-Revuz J, LarousseParisAUTHIER-REVUZ J. (1995). Ces mots qui ne vont pas de soi : boucles réflexives et non-coïncidences du dire. Paris : Larousse.
Paraphrasing with bilingual parallel corpora. Bannard C, Callison-Burch C, ACL. BANNARD C. & CALLISON-BURCH C. (2005). Paraphrasing with bilingual parallel corpora. In ACL, p. 597-604.
Extracting paraphrases from a parallel corpus. Barzilay R, Mckeown L, ACL. BARZILAY R. & MCKEOWN L. (2001). Extracting paraphrases from a parallel corpus. In ACL, p. 50-57.
La co-variation des marqueurs discursifs "bon. Beeching K, 154c'est-à-dire", "enfin", "hein", "quand même", "quoi" et "si vous voulez" : une question d'identité ? Langue françaiseBEECHING K. (2007). La co-variation des marqueurs discursifs "bon", "c'est-à-dire", "enfin", "hein", "quand même", "quoi" et "si vous voulez" : une question d'identité ? Langue française, 154(2), 78-93.
L'annotation syntaxique de corpus oraux constitue-t-elle un problème spécifique ? In RECITAL. C Benzitoun, BENZITOUN C. (2004). L'annotation syntaxique de corpus oraux constitue-t-elle un problème spécifique ? In RECITAL 2004.
What is a paraphrase ?. Bhagat R. & Hovy E, Computational Linguistics. 393BHAGAT R. & HOVY E. (2013). What is a paraphrase ? Computational Linguistics, 39(3), 463-472.
Coordination : analyses syntaxiques et annotations. Recherches sur le français parlé. Bilger M, 15BILGER M. (1999). Coordination : analyses syntaxiques et annotations. Recherches sur le français parlé, 15, 255-272.
Le semblamble et le dissemblable en syntaxe. Recherches sur le français parlé. Blanche-Benveniste C, 13BLANCHE-BENVENISTE C. (1995). Le semblamble et le dissemblable en syntaxe. Recherches sur le français parlé, 13, 7-33.
Blanche-Benveniste C, CNRS ÉditionsM Bilger, CNRS ÉditionsC & Rouget, CNRS ÉditionsVan Den Eynde K, CNRS ÉditionsLe français parlé. Études grammaticales. ParisBLANCHE-BENVENISTE C., BILGER M., ROUGET C. & VAN DEN EYNDE K. (1991). Le français parlé. Études grammaticales. Paris : CNRS Éditions.
Étude de la paraphrase sous-phrastique en traitement automatique des langues. H Bouamor, ParisUniversité Paris SudThèse de doctoratBOUAMOR H. (2012). Étude de la paraphrase sous-phrastique en traitement automatique des langues. Thèse de doctorat, Université Paris Sud, Paris.
Analyse des erreurs de performance et des stratégies correctives dans le dialogue oral spontané : apports à l'étude des pathologies du langage. Revue Parole, 29-30. Bouraoui J.-L Vigouroux N, BOURAOUI J.-L. & VIGOUROUX N. (2004). Analyse des erreurs de performance et des stratégies correctives dans le dialogue oral spontané : apports à l'étude des pathologies du langage. Revue Parole, 29-30, 121-152.
Random forests. Breiman L, Machine Learning. 45BREIMAN L. (2001). Random forests. Machine Learning, 45(1), 5-32.
Class-based n-gram models of natural language. Brown P, P Desouza, Mercer R, Pietra V Della, Lai J, Computational Linguistics. 184BROWN P., DESOUZA P., MERCER R., DELLA PIETRA V. & LAI J. (1992). Class-based n-gram models of natural language. Computational Linguistics, 18(4), 467-479.
A coefficient of agreement for nominal scales. J Cohen, Educational and Psychological Measurement. 201COHEN J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.
Automatic detection of disfluencies in speech transcriptions. Constant M. & Dister A, C. S. PUBLISHING, Ed., Spoken CommunicationCONSTANT M. & DISTER A. (2010). Automatic detection of disfluencies in speech transcriptions, In C. S. PUBLISHING, Ed., Spoken Communication, p. 259-272.
Notes du séminaire de DEA. Culioli A, ParisCULIOLI A. (1976). Notes du séminaire de DEA, 1983-84. Paris.
Recognizing Textual Entailment. Roth D Dagan I, & Sammons M, Zanzotto F, Morgan & Claypool PublishersMilton Keynes, UKDAGAN I., ROTH D., SAMMONS M. & ZANZOTTO F. (2013). Recognizing Textual Entailment. Milton Keynes, UK : Morgan & Claypool Publishers.
A CRF-based approach to automatic disfluency detection in a French call-centre corpus. C Dutrey, C Clavel, S Rosset, Vasilescu I. & Adda-Decker M, International Speech Communication Association Conference. 5INTERSPEECH 2014DUTREY C., CLAVEL C., ROSSET S., VASILESCU I. & ADDA-DECKER M. (2014). A CRF-based approach to automatic disfluency detection in a French call-centre corpus. In International Speech Communication Association Conference (INTERSPEECH 2014), p.5.
Un grand corpus oral disponible : le corpus d'Orléans. Eshkol-Taravella I, O Baude, D Maurel, L Hriba, Dugua C. & Tellier I, Traitement Automatique des Langues. 523ESHKOL-TARAVELLA I., BAUDE O., MAUREL D., HRIBA L., DUGUA C. & TELLIER I. (2012). Un grand corpus oral disponible : le corpus d'Orléans 1968-2012. Traitement Automatique des Langues, 52(3), 17-46.
Détection automatique de reformulations -correspondance de concepts appliquée à la détection de plagiat. J Ferrero, Simac-Lejeune A, EGC 2015, RNTI-E-28. FERRERO J. & SIMAC-LEJEUNE A. (2015). Détection automatique de reformulations -correspondance de concepts appliquée à la détection de plagiat. In EGC 2015, RNTI-E-28, p. 287-298.
Dire et redire. La reformulation introduite par "c'est-à-dire. Flottum K, Hogskolen i Stavanger, StavangerThèse de doctoratFLOTTUM K. (1995). Dire et redire. La reformulation introduite par "c'est-à-dire". Thèse de doctorat, Hogskolen i Stavanger, Stavanger.
Paraphrase et énonciation. Fuchs C, OrphysParisFUCHS C. (1994). Paraphrase et énonciation. Paris : Orphys.
Typology of paraphrases and approaches to compute them. Fujita A, CBA to Paraphrasing & Nominalization. Barcelona, SpainInvited talkFUJITA A. (2010). Typology of paraphrases and approaches to compute them. In CBA to Paraphrasing & Nominalization, Barcelona, Spain. Invited talk.
Les actes de reformulation dans la consultation. La dame de Caluire. Gülich E Kotschi T, GÜLICH E. & KOTSCHI T. (1987). Les actes de reformulation dans la consultation. La dame de Caluire. In P.
analyse des interactions verbales. La dame de Caluire : une consultation. Ed Bange, P LangBerneBANGE, Ed., L'analyse des interactions verbales. La dame de Caluire : une consultation, p. 15-81. Berne : P Lang.
La coordination considérée comme un entassement paradigmatique : description, formalisation et intégration. Guénot M, P. MERTENS, C. FAIRON, A. DISTER & P. WATRINGUÉNOT M. (2006). La coordination considérée comme un entassement paradigmatique : description, formalisation et intégration. In P. MERTENS, C. FAIRON, A. DISTER & P. WATRIN, Eds., TALN 2006, p. 178-187.
Zur Analyse von Markern. Hölker K, Franz SteinerStuttgartHÖLKER K. (1988). Zur Analyse von Markern. Stuttgart : Franz Steiner.
La typologie des entassements en français. Kahane S Pietrandrea P, CMLF 2012. KAHANE S. & PIETRANDREA P. (2012). La typologie des entassements en français. In CMLF 2012, p. 1809-1828.
Reformulations, contacts de langues et compétence de communication : analyse linguistique et interactionnelle dans des discussions entre jeunes Libanais francophones. Kanaan L, OrléansUniversité d'OrléansThèse de doctoratKANAAN L. (2011). Reformulations, contacts de langues et compétence de communication : analyse linguis- tique et interactionnelle dans des discussions entre jeunes Libanais francophones. Thèse de doctorat, Université d'Orléans, Orléans.
The power of decision tables. Kohavi R, Proceedings of the European Conference on Machine Learning. the European Conference on Machine LearningSpringer VerlagKOHAVI R. (1995). The power of decision tables. In Proceedings of the European Conference on Machine Learning, p. 174-189 : Springer Verlag.
The measurement of observer agreement for categorical data. J Landis, Koch G, Biometrics. 33LANDIS J. & KOCH G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
Monitoring and self-repair in speech. Levelt W, Cognition. 14LEVELT W. (1983). Monitoring and self-repair in speech. Cognition, (14), 41-104.
Semi-Supervised Learning for Natural Language. Liang P, Boston, USAMassachusetts Institute of TechnologyLIANG P. (2005). Semi-Supervised Learning for Natural Language. Master, Massachusetts Institute of Technology, Boston, USA.
Dirt -discovery of inference rules from text. Lin D Pantel L, ACM SIGKDD Conference on Knowledge Discovery and Data Mining. LIN D. & PANTEL L. (2001). Dirt -discovery of inference rules from text. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining, p. 323-328.
Generating phrasal and sentential paraphrases : A survey of data-driven methods. J Madnani N. & Dorr B, Computational Linguistics. 36MADNANI N. & DORR B. J. (2010). Generating phrasal and sentential paraphrases : A survey of data-driven methods. Computational Linguistics, 36, 341-387.
Reformulation et dialogisme dans le récit de voyage. Magri-Mourgues V, Echos des voix, échos des textes. O. GANNIERMAGRI-MOURGUES V. (2013). Reformulation et dialogisme dans le récit de voyage, In O. GANNIER, Ed., Echos des voix, échos des textes, Classiques Garnier.
Learning textual entailment using SVMs and string similarity measures. Malakasiotis P. & Androutsopoulos I, ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. MALAKASIOTIS P. & ANDROUTSOPOULOS I. (2007). Learning textual entailment using SVMs and string similarity measures. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, p. 42-47.
Actes de la conférence conjointe JEP-TALN-RECITAL 2016, volume 2 : TALN |
14,903,169 | MultiDPS - A multilingual Discourse Processing System | This paper presents an adaptable online Multilingual Discourse Processing System (MultiDPS), composed of four natural language processing tools: a named entity recognizer, an anaphora resolver, a clause splitter and a discourse parser. This NLP meta-system allows any user to run it on the web or via web services and, if necessary, to build their own processing chain, by incorporating knowledge or resources for each tool for the desired language. The paper gives a brief description of each independent module, and a case study in which the system is adapted to five different languages to create a multilingual summarization system. | [ 10538655, 14687186, 964287 ] | MultiDPS - A multilingual Discourse Processing System
August 23-29 2014
Daniel Alexandru Anechitei
Faculty of Computer Science
"Al. I. Cuza" University of Iasi
16 General Berthelot St
700483 Iasi, Romania
MultiDPS -A multilingual Discourse Processing System
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations
COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations, Dublin, Ireland, August 23-29 2014
Introduction
This paper describes a multilingual discourse processing system (MultiDPS) consisting of four different modules: a Named Entity Recognizer (NER), an Anaphora Resolver (AR), a Clause Splitter (CS) and a Discourse Parser (DP), plus, for the summarization use case, the summarizer proper (SUM). The system can run online via web services, so it can be accessed from any programming environment, and its architecture allows each tool to be trained individually. For each task except discourse parsing, MultiDPS's component tools combine machine learning techniques with heuristics and learn from manually created corpora (a gold corpus of discourse trees is very difficult to obtain due to the complexity of the task). The complexity of the processing tasks (reaching up to discourse analysis) and the multilingual capabilities make MultiDPS an important system in the field of natural language processing.
System Design
The MultiDPS architecture includes two main parts, as shown in Figure 1. The Prerequisite part includes commonly known basic NLP tools and is a preliminary step for obtaining the input for MultiDPS. The system consists of four different modules, which are discussed in detail in the next sections. All modules implement a language-independent design in which the algorithm is separated from linguistic details. The output of each module is the input of a later phase, not necessarily the immediately following one, as depicted in Figure 1 (dotted arrows suggest the different paths that the system supports). Depending on individual needs or on the existence of specific resources (manually annotated corpora for a specific language), different language processing chains can be created. The entire system is designed in such a way that each individual module adds an extra layer of annotation to the text; therefore, when building a processing chain, some modules can be skipped.
Named Entity Recognizer
Named Entity Recognition (NER) is a computational linguistic task that seeks to classify sequences of words into predefined categories. In this approach the categories are organized under four top-level classes (PERSON, LOCATION, ORGANIZATION and MISC) and a total of nine subclasses.
In order to identify the type of entities, a voting system is implemented that decides between different heuristics. The heuristics use automatically calibrated weights for different features, and high scores are given to entities found in gazetteers. Examples of features are: context bi/tri-grams for the different classes; the appearance of a definite article; and partial matches with gazetteers or within the same text.
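As a rough illustration of such a weighted vote over heuristics, a minimal sketch could look as follows; the heuristic functions, feature scores, weights and the toy gazetteers are all invented for the example and are not the actual MultiDPS implementation.

```python
# Illustrative weighted vote over NER heuristics. The heuristic functions,
# weights, gazetteers and n-gram scores are invented for the example and are
# not the actual MultiDPS configuration.

CLASSES = ["PERSON", "LOCATION", "ORGANIZATION", "MISC"]

def gazetteer_heuristic(entity, gazetteers):
    """Give a high score to every class whose gazetteer contains the entity."""
    return {c: 1.0 if entity.lower() in gazetteers.get(c, set()) else 0.0
            for c in CLASSES}

def context_heuristic(left_context, ngram_scores):
    """Score classes by how strongly the left-context n-gram predicts them."""
    return {c: ngram_scores.get((left_context, c), 0.0) for c in CLASSES}

def vote(entity, left_context, gazetteers, ngram_scores, weights):
    """Combine heuristic scores with calibrated weights and pick the best class."""
    totals = {c: 0.0 for c in CLASSES}
    for name, scores in [("gazetteer", gazetteer_heuristic(entity, gazetteers)),
                         ("context", context_heuristic(left_context, ngram_scores))]:
        for c, s in scores.items():
            totals[c] += weights[name] * s
    return max(totals, key=totals.get)

gazetteers = {"LOCATION": {"dublin"}, "PERSON": {"daniel"}}
ngram_scores = {(("mr",), "PERSON"): 0.8}
weights = {"gazetteer": 2.0, "context": 1.0}   # gazetteer hits weighted highest
print(vote("Dublin", ("in",), gazetteers, ngram_scores, weights))  # LOCATION
```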
Anaphora Resolution
The AR module used in MultiDPS is based on the work done in Anechitei et al. (2013) and is improved by adding a classifier that predicts whether there is a relation between each pair of noun phrases, resulting in a hybrid approach. Examples of features used to decide whether two noun phrases belong to the same co-referential chain are: number agreement, gender agreement and morphological description, computed on the head noun; similarity between the two noun phrases, both at the lemma level and at the text level, computed on the head noun and also on the entire noun phrase; and whether the two noun phrases belong to the same phrase or not.
If the matching score given by the two methods is greater than an automatically computed threshold, then the current noun phrase is added to the already existing chain of referential expressions attached to the matching noun phrase, and all the features are copied onto the list of features of the new referential expression. If there is no previous noun phrase for which the matching score is greater than the threshold, then a new co-referential chain is created containing only the current noun phrase along with its features.
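A minimal sketch of this greedy chaining step is given below; the similarity function, its feature weights and the threshold are placeholders for the calibrated classifier and heuristics described above, not the system's actual code.

```python
# Illustrative greedy chaining of noun phrases into co-referential chains. The
# similarity function, its weights and the threshold stand in for the calibrated
# classifier and heuristics described in the text.

def np_similarity(np_a, np_b):
    """Toy matching score based on agreement features and head-lemma identity."""
    score = 0.0
    if np_a["number"] == np_b["number"]:
        score += 0.4
    if np_a["gender"] == np_b["gender"]:
        score += 0.3
    if np_a["head_lemma"] == np_b["head_lemma"]:
        score += 0.3
    return score

def build_chains(noun_phrases, threshold=0.6):
    """Attach each NP to the best-matching earlier chain, or start a new one."""
    chains = []  # each chain is a list of noun phrases (dicts of features)
    for np in noun_phrases:
        best_chain, best_score = None, threshold
        for chain in chains:
            score = np_similarity(chain[-1], np)  # compare with the latest mention
            if score > best_score:
                best_chain, best_score = chain, score
        if best_chain is not None:
            best_chain.append(np)
        else:
            chains.append([np])
    return chains

nps = [
    {"head_lemma": "parser", "number": "sg", "gender": "n"},
    {"head_lemma": "it", "number": "sg", "gender": "n"},
    {"head_lemma": "summary", "number": "pl", "gender": "n"},
]
print(len(build_chains(nps)))  # 2 chains: {parser, it} and {summary}
```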
Clause Splitter
A clause is a grammatical unit comprising a predicate and an explicit or implied subject, and expresses a proposition. For the present work, the delimitation of clauses follows the work done in Anechitei et al (2013) and starts from the identification of verbs and verb compounds. Verb compounds are sequences of more than one verb in which one is the main verb and the others are auxiliaries ("is writing", "like to read"). Examples of features used to build the model of compound verbs are: distance between the verbs; the existence of punctuation or markers between them; the lemma and the morphological description of the verbs, etc.
The semantics of compound verbs makes it necessary to treat the whole construction as a single unit, without placing a boundary in its interior, so that the clause does not lose its meaning. Clause boundaries are looked for between the verbs and compound verbs, which are considered the pivots of clauses. The exact location of a boundary is, in many cases, best indicated by discourse markers. A discourse marker is a word, or a group of words, that also has the function of indicating a rhetorical relation between two clauses. The features used to build the marker model are: the lemma and the context of the marker, expressed as configurable-length sequences of POS tags, and the distance from the verb in front of it.
When markers are missing, boundaries can still be indicated by statistical methods trained on explicit annotations. The weights of the features are tuned as in the previous modules, by running the calibration system on the manually annotated corpora and creating the models using the MaxEnt 1 library.
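To make the boundary placement concrete, here is a deliberately simplified sketch that places a boundary at a discourse marker between two consecutive verb pivots, and otherwise directly before the next pivot; the token format, POS tags and marker list are invented, and compound-verb handling is omitted.

```python
# Deliberately simplified clause splitter: verbs are the pivots; a boundary is
# placed at a known discourse marker between two consecutive pivots, otherwise
# directly before the next pivot. Token format, POS tags and the marker list
# are invented, and compound verbs are not handled here.

MARKERS = {"because", "although", "and", "but", "which"}

def split_clauses(tokens, pos_tags):
    """tokens and pos_tags are parallel lists; returns the clauses as strings."""
    verbs = [i for i, p in enumerate(pos_tags) if p.startswith("V")]
    boundaries = [0]
    for prev, nxt in zip(verbs, verbs[1:]):
        markers = [i for i in range(prev + 1, nxt) if tokens[i].lower() in MARKERS]
        boundaries.append(markers[0] if markers else nxt)
    boundaries.append(len(tokens))
    return [" ".join(tokens[b:e]) for b, e in zip(boundaries, boundaries[1:])]

tokens = ["The", "parser", "builds", "trees", "because", "users", "need", "summaries"]
pos = ["DT", "NN", "VBZ", "NNS", "IN", "NNS", "VBP", "NNS"]
print(split_clauses(tokens, pos))
# ['The parser builds trees', 'because users need summaries']
```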
Discourse Parser
The approach to discourse parsing implemented in MultiDPS follows the one described in Anechitei et al. (2013) and is a symbolic approach rooted in (Marcu, 1999). The generated discourse trees make evident only the nuclearity of the nodes, while the names of the relations are ignored. The discourse parser adopts an incremental policy in developing the trees and is constrained by two general principles, well known in discourse parsing: sequentiality of the terminal nodes (Marcu, 2000) and attachment restricted to the right frontier (Cristea, 2005). The algorithm involves a generate-rank-evaluate method, generating a forest of developing trees at each step, followed by heuristics for ranking and evaluating the trees. The heuristics are suggested by both Veins Theory (Cristea et al., 1998) and Centering Theory (Grosz et al., 1995). The aim of these heuristics is to assign scores to the developing trees and also to master the exponential explosion of the developing structure.
The Summarizer
For the summarization purpose, the discourse structure gives more information than is strictly needed. The summary is obtained by trimming unimportant clauses/sentences on the basis of their relative saliency, cohesion and coherence properties. A score reflecting these properties is attached to each discourse unit, and each component of MultiDPS contributes to the calculation of this score.
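The extraction step can be pictured with the following small sketch, in which precomputed scores stand in for the saliency, cohesion and coherence scores contributed by the MultiDPS modules; the function and the toy clauses are illustrative only.

```python
# Illustrative extraction step: keep the highest-scoring clauses within a word
# budget, then restore document order. The scores stand in for the saliency,
# cohesion and coherence scores contributed by the MultiDPS modules.

def summarize(clauses, scores, max_words=250):
    """clauses: list of strings; scores: parallel list of floats."""
    ranked = sorted(range(len(clauses)), key=lambda i: scores[i], reverse=True)
    kept, words = [], 0
    for i in ranked:
        length = len(clauses[i].split())
        if words + length > max_words:
            continue
        kept.append(i)
        words += length
    return " ".join(clauses[i] for i in sorted(kept))

clauses = ["The system parses discourse.", "It was demoed at COLING.", "Results vary by language."]
scores = [0.9, 0.2, 0.7]
print(summarize(clauses, scores, max_words=10))
# 'The system parses discourse. Results vary by language.'
```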
Implementation of the modules
The main idea behind the system architecture is that, if a module is fuelled with appropriate language resources, it can be put to work on any language. For the Romanian language, the input for MultiDPS is obtained using a deep noun phrase chunker (Simionescu, 2011) and for the English language using the Stanford Parser (Socher et al, 2013). All the resources (manually annotated corpora for English and Romanian) are available for download.
The clear benefit of this web-service architecture is that if an improvement is made in a certain module, the results are propagated through the others without the need for human intervention. Figure 2 illustrates the web interface for the discourse parser, where the XML annotations are mapped into a visual mode. In addition to the web applications and the core of the system (each module can be used as a library), a wide range of free additional tools is made available, such as online annotation services and calibration systems for each individual module. MultiDPS was easily adapted to other languages where input for the system entry and a training corpus for each module were provided.
Experiments and results
In this paper I present the results obtained after combining all the modules to create a multilingual summarization system. The results were obtained at an international workshop on summarization (Kubina et al., 2013), where the objective of each participant was to compute a summary of at most 250 words for each document for at least two of the dataset languages (30 documents per language). The submitted summaries were evaluated using the ROUGE metric (Lin, 2004) and are presented in Table 1, where the oracle represents the "perfect summary". Nevertheless, the results are encouraging for this complex system (s1 is the id of the system presented in this paper).
Conclusions
MultiDPS's strength is manifested through its online availability and the existence of the online services for creating corpora for each module. Moreover, considering that the results obtained by putting together all the modules are similar for different languages, the system can be regarded as having language-wide validity.
Figure 1: The MultiDPS component modules and supported workflows.
Figure 2: View of the Discourse Parser web application that illustrates all annotations.
Table 1: ROUGE-1 average for all five languages (Bulgarian, German, Greek, English and Romanian); the oracle column represents the "perfect summary".

Language  baseline  s1      s2       s3      s4      s5      s6      oracle
bg        0.2854    0.3190  0.2955   0.2969  0.2974  -       -       0.3966
de        0.2529    0.3414  0.3198   0.3341  0.3203  -       -       0.3675
el        0.2899    0.3229  0.2777   0.2747  0.2698  -       -       0.3775
en        0.4113    0.3273  0.2781   0.2799  0.2765  0.3638  0.3411  0.5554
ro        0.3125    0.3337  0.29048  0.3006  0.2985  -       -       0.4361
The Maximum Entropy Framework: http://maxent.sourceforge.net/about.html
Barbara J. Grosz, Aravind K. Joshi and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2), pages 203-226.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the ACL Workshop on Text Summarization Branches Out, Barcelona, Spain.
Dan Cristea, Nancy Ide and Laurent Romary. 1998. Veins theory: A model of global discourse cohesion and coherence. In Proceedings of the 17th International Conference on Computational Linguistics, pages 281-285, Montreal.
Dan Cristea. 2005. The Right Frontier Constraint Holds Unconditionally. In Proceedings of the Multidisciplinary Approaches to Discourse workshop (MAD'05), Chorin/Berlin, Germany.
Daniel A. Anechitei, Dan Cristea, Ioannidis Dimosthenis, Eugen Ignat, Diman Karagiozov, Svetla Koeva, Mateusz Kopeć and Cristina Vertan. 2013. Summarizing Short Texts Through a Discourse-Centered Approach in a Multilingual Context. In Neustein, A., Markowitz, J.A. (eds.), Where Humans Meet Machines: Innovative Solutions to Knotty Natural Language Problems. Springer Verlag, Heidelberg/New York.
Daniel Marcu. 1999. Discourse trees are good indicators of importance in text. In I. Mani and M. Maybury (eds.), Advances in Automatic Text Summarization, pages 123-136, The MIT Press.
Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press, Cambridge, Massachusetts.
Jeff Kubina, John M. Conroy and Judith D. Schleisinger. 2013. ACL 2013 MultiLing Pilot Overview. In Proceedings of the MultiLing 2013 Workshop on Multilingual Multi-document Summarization, pages 29-38, Sofia, Bulgaria, workshop in conjunction with the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013).
Radu Simionescu. 2011. Romanian Deep Noun Phrase Chunking Using Graphical Grammar Studio. In Proceedings of the International Conference on Resources and Tools for Romanian Language.
Richard Socher, John Bauer, Christopher D. Manning and Andrew Y. Ng. 2013. Parsing with Compositional Vector Grammars. In Proceedings of ACL.
218,974,534 | [] | Email Classification Incorporating Social Networks and Thread Structure
May 2020
Sakhar Alkhereyf
Owen Rambow owen.rambow@gmail.com
Elemental Cognition, New York, NY, USA
Columbia University
Email Classification Incorporating Social Networks and Thread Structure
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
the 12th Conference on Language Resources and Evaluation (LREC 2020), Marseille, May 2020, 1336. Keywords: text classification, email classification, social networks, graph algorithms
Existing methods for different document classification tasks in the context of social networks typically only capture the semantics of texts, while ignoring the users who exchange the text and the network they form. However, some work has shown that incorporating the social network information in addition to information from language is effective for various NLP applications including sentiment analysis, inferring user attributes, and predicting inter-personal relations. In this paper, we present an empirical study of email classification into "Business" and "Personal" categories. We represent the email communication using various graph structures. As features, we use both the textual information from the email content and social network information from the communication graphs. We also model the thread structure for emails. We focus on detecting personal emails, and we evaluate our methods on two corpora, only one of which we train on. The experimental results reveal that incorporating social network information improves over the performance of an approach based on textual information only. The results also show that considering the thread structure of emails improves the performance further. Furthermore, our approach improves over a state-of-the-art baseline which uses node embeddings based on both lexical and social network information.
Introduction
There has been much work on using social networks to predict user characteristics. This work exploits homophily (for example, young people are more likely to communicate with other young people). In contrast, there has been far less work that uses the communication network (the network induced by conversations) to improve document classification of the communications themselves. This is a harder problem, since homophily is not relevant when characterizing the communications themselves: in various document classification tasks, the document category might not be directly inferred from the relationship of the participants when the same participants exchange different types of documents. For instance, the same people might exchange both personal and business emails, or urgent and nonessential emails. In this paper, we study document classification in the context of written conversations. As our task, we choose classification of email into personal or business emails. There are several reasons for this choice.
1. We are interested in how personal relationships affect communication, taking into account that the same pair of people may have multiple types of relationships.
2. The task we choose is relevant. Email remains a crucial communication medium for both individuals and organizations for both personal and business communications. Kiritchenko and Matwin (2011) show that a typical user daily receives 40-50 emails. And despite the massive growth of other social media over the past decade, company email is still used for personal purposes as the recent Avocado corpus shows (section 3.).
3. Two large data sets are available, the Enron corpus and a data set of emails from an anonymous defunct information technology company referred to as Avocado.
4. Unlike other text classification tasks, particularly for emails (e.g. spam filtering), email classification into business and personal has not received much attention and it remains a challenging (as shown in the human inter-annotator agreement reported in (Alkhereyf and Rambow, 2017;Jabbari et al., 2006)) and unsolved task.
5. We are interested in how people communicate in conversations, and email has real conversations. This distinguishes email from blogs and Twitter, which are readily available, but typically used for broadcasting to a large group of followers.
As for any document classification task, the language used (reflecting both content and language style) is highly predictive of the class. For instance, when a student speaks with her friends, she will probably use relatively less formal language than when she speaks with her professor, and she will talk about different topics. As we will see, using word embeddings provides a strong baseline for our task. In this paper, our task is to use the textual content of documents and the underlying social network of email exchange for email classification into two categories, "Business" and "Personal". We use two annotated e-mail datasets, Enron and Avocado. We model the task of finding the rarer class (personal emails) in a set of all emails. We are interested in developing models that can be applied to unseen datasets, so that we can detect personal emails in new datasets with no retraining. The specific contributions of this paper are as follows:
• It is not obvious how to model the email communication as a social network for the classification task in this paper. We extract features of emails from various graph structures representing the email exchange network and then use these features with machine learning models.
• We show that a combination of social network information and email content leads to classification improvements over the performance of an approach based on textual information only.
• We show that by adding sequential modeling of threads (conversations), we get an important improvement in performance, significantly outperforming the individual email modeling approach. We have thus established that modeling the social network helps in document classification, but modeling the thread structure is also important.
• We show that our approach outperforms a state-of-the-art method proposed in the literature based on node embeddings, namely GraphSAGE.
Because we are interested in modeling thread structure, we use datasets which maintain the integrity of the thread (i.e. all emails belong to threads and all threads have labeled emails) and which we introduced in our previous work (Alkhereyf and Rambow, 2017). The Enron dataset is based on Columbia's Enron release (Agarwal et al., 2012). This paper adds the following research to our previous publication:
• We use neural network models.
• We use a strong baseline based on graph embeddings, namely, GraphSAGE (sections 5. and 6.3.).
• We explicitly model email threads (subsection 6.4.).
• We use word embeddings trained on our data (section 4.).
Also, as part of the submission we release the annotated Enron corpus in addition to other annotations including power relations as a language resource (Agarwal et al., 2020). For Avocado, we release the annotation labels with their corresponding email ids without the email content (because of licensing restrictions on the corpus itself) (Alkhereyf and Rambow, 2020). The paper is organized as follows: we first review related literature in section 2., and then describe our datasets in section 3.. We discuss lexical features in section 4.. We present our baseline, a state-of-the-art node embedding model, in section 5.. Then we show how we model emails as a social network in section 6.. We present the experimental study to evaluate our models in section 7., and conclude in section 8..
Related Work
Incorporating Network and Language Information
Many previous studies on various natural language processing tasks in the context of social networks mainly focus on textual information and ignore other information that can be extracted from the underlying social network. However, there are some studies that incorporate the social network structure to improve the performance for different tasks including: inferring user attributes (Filippova, 2012;Al Zamal et al., 2012;Perozzi and Skiena, 2015;Aletras and Chamberlain, 2018) predicting user stance (Tan et al., 2011;West et al., 2014;Gryc and Moilanen, 2014;Gui et al., 2017;Wang et al., 2018;Volkova et al., 2014), and extracting inter-personal relations (Elangovan and Eisenstein, 2015;West et al., 2014;Abu-Jbara et al., 2013;Hassan et al., 2012). Most of these studies exploiting social network information are guided by an assumption of homophily, i.e., the tendency of individuals to associate and bond with similar others (McPherson et al., 2001). Our work differs from these studies in that we focus on classifying a given document (i.e. email) exchanged between users, not on predicting user information, nor interpersonal relations. Note that different emails exchanged between the same set of users can belong to different classes, where in these studies, the attributes remains the same for a given set of users.
Graphs are an important data representation which occur naturally in various real-world applications, and graph analytics has been used in various tasks, including: node classification (Wang et al., 2017;Sen et al., 2008;Jian et al., 2018), link prediction (Wei et al., 2017;Pachev and Webb, 2017), and community detection (Fortunato, 2010;Cavallari et al., 2017). Node embedding (a.k.a. graph or network embedding) aims to learn low-dimensional representations for nodes in graphs. Recently, network embedding methods have gained attention from the research community. Many recent node embedding models are inspired by neural language embedding models (Mikolov et al., 2013). These models include: DeepWalk (Perozzi et al., 2014), and node2vec (Grover and Leskovec, 2016). In these graph embedding models, a graph is represented as a set of sampled random walk paths. The embeddings for nodes then are learned in an unsupervised approach by applying the word2vec model (Mikolov et al., 2013) on the sampled paths. Hamilton et al. (2017b) categorize these models under shallow learning as they are inherently transductive and do not naturally generalize to unseen nodes. In our work, we are interested in applying models for email classification to new datasets. GraphSAGE (Hamilton et al., 2017a;Hamilton, 2018) is an inductive graph embedding model. Unlike transductive models, it generalizes to unseen nodes and new graphs without requiring re-training. To do so, it learns a function that maps a node to low-dimensional representation by aggregating neighboring nodes' attribute information. We use GraphSAGE as a strong baseline for our email classification task. We discuss our usage of GraphSAGE in section 5..
Email Classification
Since the Enron corpus was made public, many researchers have used it for different tasks. Jabbari et al. (2006) released "the Sheffield dataset", in which they categorize a subset containing more than 12,000 Enron emails into two main categories "Business" and "Personal". Unlike our work, they do not utilize email thread structure, and many emails in the Sheffield dataset are not part of a thread and some threads are partially labeled (i.e. some emails in the thread are unlabeled). They also present a preliminary experiment for automatic classification of personal and business. We don't use this dataset for training as we are in-terested in modeling threads. However, we show the performance of some of our models on this dataset in subsection 7.2.. The Sheffield dataset has been used in other studies. In particular, Peterson et al. (2011) show that the formality level in emails is affected by the interpersonal nature of email (personal or business). They use email gold labels in the Sheffield dataset to determine the email type. Mitra and Gilbert (2012) use the Sheffield dataset to study the proportion of gossip in business and personal emails. In our work, we focus on automatic classification of emails into business and personal. There has been some previous work on incorporating email communication network information for different tasks. Yoo et al. (2009) propose a semi-supervised method for personalized email prioritization. They find that including social features along with message content based features leads to a significant reduction in the prediction error when learning to identify the emails that a given user will consider important. Another task is to predict the recipient of an email. Graus et al. (2014) propose a generative model to predict the recipient of an email. They report that the optimal performance is achieved by combining features from both the communication graph and email content. Similar to our work, they use both Enron and Avocado. Our work is similar to (Wang et al., 2012) who propose a model for email classification into "Business" and "Personal". However, unlike our work, they don't use the email content. Their approach requires that the users (i.e. sender and recipients) have been seen in the labeled training data. Therefore, their approach cannot generalize to unseen users, let alone a new corpus (i.e. another email exchange). In contrast, our models do not require users to be seen before and can generalize to unseen nodes and new networks.
Corpus
We use the two datasets from our previous work (Alkhereyf and Rambow, 2017) that maintain the thread structure of emails (i.e. all emails belong to threads and all threads have labeled emails). The emails are taken from the well-known Enron email corpus, and the more recent Avocado corpus (Oard et al., 2015). We split Enron into train, development and test sets with 50%, 25% and 25% of the emails respectively. We do not split threads. Avocado is divided equally into development and test sets (since we will not train on Avocado). Threads are chronologically ordered according to the time of the first email such that the training set contains the earliest threads and the test set contains the latest threads. We use subscripts tr, dev, and ts to refer to the train, development and test sets respectively. We use Enron dev for optimization. Our Enron dataset contains 10,528 emails and Avocado contains 5,277 emails. Table 1 shows the distribution of "Business" and "Personal" emails in the datasets. In our experiments we optimize the personal F-1 score because our goal is to find the personal emails (the minority class).
Lexical Features
We use FastText (Bojanowski et al., 2017) to obtain word embeddings from the emails, which we use as lexical features. We use task-specific embeddings trained on the whole Enron email collection (not just our labeled subset). Both the body and subject are included in the training data. We use the CBOW mode with the default argument values. Arguments include the size of word vectors (100), the size of the context window (5), and the minimum and maximum length of n-grams (3 and 6, respectively).
To represent an email, we average the corresponding vectors for all character n-grams of every word in the email, then we compute the average vector for all words in the email (both the body and subject).
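A small sketch of this procedure, assuming the `fasttext` Python bindings, is shown below; the training file name and the helper function are ours, and the hyperparameter values simply repeat those listed above.

```python
# Sketch of the email representation, assuming the `fasttext` Python bindings
# (pip install fasttext). The training file name and the helper function are
# ours; the hyperparameters repeat the values listed above.
import numpy as np
import fasttext

# CBOW subword embeddings trained on the whole (unlabeled) email collection,
# one preprocessed email (subject + body) per line.
model = fasttext.train_unsupervised(
    "enron_all_emails.txt", model="cbow", dim=100, ws=5, minn=3, maxn=6
)

def email_vector(subject, body):
    """Average, over all tokens, the word vectors composed from character n-grams."""
    tokens = (subject + " " + body).split()
    if not tokens:
        return np.zeros(model.get_dimension())
    return np.mean([model.get_word_vector(t) for t in tokens], axis=0)

vec = email_vector("lunch tomorrow?", "are you free around noon")
print(vec.shape)  # (100,)
```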
We have also tried various pre-trained GloVe (Pennington et al., 2014) vector sets that are available online, each trained using different corpora and embedded into various dimension sizes. We found that embeddings obtained using FastText from our data performed better than all pre-trained GloVe vector sets on all scores.
Baseline: Email Classification using GraphSAGE
GraphSAGE (Hamilton et al., 2017a) is a recent state-of-the-art inductive model for learning node embeddings for different tasks including node classification. It learns an embedding for a given node by aggregating information from its neighboring nodes and from attributes of the node. It is designed for homogeneous graphs where nodes belong to one type. Thus, we construct a graph which has only emails as nodes (We do not construct a graph with people as nodes since we also need access to the lexical content for GraphSAGE.) In this graph, nodes represent emails and edges link emails if they share a certain percentage of participants. We do not distinguish between senders and recipients as participants. Then, we feed the GraphSAGE supervised model with this graph of emails with their corresponding labels, and furthermore, we use the lexical features described in section 4. as node attributes. We use the Jaccard similarity to measure the similarity between the participant sets of two emails and then link two emails with an edge if their similarity score is above a certain threshold. We define Jaccard similarity J between two emails as:
$J(e_i, e_j) = \frac{|\tau(e_i) \cap \tau(e_j)|}{|\tau(e_i) \cup \tau(e_j)|}$
where τ(e_i) denotes the set of participants in email e_i (both the sender and recipients). We experiment on Enron with different threshold values for J(e_i, e_j) and report the one that optimizes the performance on the development set. Note that GraphSAGE implicitly models the thread structure, as emails in the same threads share the same participants, and thus are linked together in the graph.
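The graph construction can be sketched as follows with networkx; the email ids, participant sets and the threshold value are illustrative, not the tuned value reported on the development set.

```python
# Sketch of the email-email graph fed to GraphSAGE: nodes are emails, and two
# emails are linked when the Jaccard similarity of their participant sets is
# above a threshold. The email ids, participants and threshold are illustrative.
from itertools import combinations
import networkx as nx

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def build_email_graph(participants, threshold=0.5):
    """participants: dict mapping email id -> set of participant addresses."""
    g = nx.Graph()
    g.add_nodes_from(participants)
    for e1, e2 in combinations(participants, 2):
        if jaccard(participants[e1], participants[e2]) >= threshold:
            g.add_edge(e1, e2)
    return g

participants = {
    "m1": {"alice", "bob"},
    "m2": {"alice", "bob", "carol"},
    "m3": {"dave", "erin"},
}
print(list(build_email_graph(participants).edges()))  # [('m1', 'm2')]
```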
Our Approach to Exploiting the Social Network
In this section, we present our approach to using the email network structure in our classification task. We start out by presenting two different ways of representing the social network induced by emails (subsection 6.1.). We then show how we derive features from these two types of graphs (subsection 6.2.). In subsection 6.3., we discuss an extension to GraphSAGE based on the bipartite graph we propose in subsection 6.1. and the features we extract in subsection 6.2.. Finally, in subsection 6.4., we propose a model that incorporates information from the thread structure of email into the prediction.
Graph Structures to Represent the Social Network
A very natural representation of the social network induced by email exchange is a bipartite graph with two disjoint sets of nodes: documents (i.e. emails) and users (i.e. people), such that there is an edge between an email and a user if and only if the user's email address appears as either the sender or a recipient (either in the "to" or "cc" list) in that email; we refer to this structure as the bipartite email-user network. Another option is a graph (not bipartite) whose nodes represent people (i.e. email addresses) and whose edges represent email communication such that an edge exists if there is at least one email that has been exchanged between the two end nodes; we refer to this structure as the user network. This graph is simply a one-mode projection of the bipartite graph. Figure 1 illustrates these two types of graphs. In both graphs we normalize multiple email addresses belonging to the same person into one user node. For each corpus (i.e. Enron and Avocado), we construct directed and undirected graphs from these two networks (i.e. the bipartite email-user network and the user networks).
We use the whole exchange network, including all labeled and unlabeled emails to build these graphs.
In the directed bipartite network, each edge shows explicitly the directionality of the email (i.e. sender and recipients), while in the undirected bipartite graph, the directionality of communication is not reflected. The weights are always 1 in the bipartite graph. For the directed user network, edge directions indicate that the source user has sent emails to the target user, and the edge weight reflects the number of emails that have been sent from the source to the target, while in the undirected email network, edges indicate that the two connected nodes (i.e., users) have exchanged emails regardless of who sent the email.
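A minimal networkx sketch of the two structures, using toy email records and our own attribute names, might look like this:

```python
# Minimal networkx sketch of the bipartite email-user graph and the user graph.
# The toy email records, node attributes and variable names are ours.
import networkx as nx

emails = [
    {"id": "m1", "sender": "alice", "recipients": ["bob", "carol"]},
    {"id": "m2", "sender": "bob", "recipients": ["alice"]},
]

# Directed bipartite email-user graph: sender -> email -> each recipient,
# so an email node has in-degree 1 (its sender) and out-degree = #recipients.
bipartite = nx.DiGraph()
for e in emails:
    bipartite.add_node(e["id"], kind="email")
    bipartite.add_edge(e["sender"], e["id"])
    for r in e["recipients"]:
        bipartite.add_edge(e["id"], r)

# Directed user graph: sender -> recipient, weighted by the number of emails sent.
users = nx.DiGraph()
for e in emails:
    for r in e["recipients"]:
        if users.has_edge(e["sender"], r):
            users[e["sender"]][r]["weight"] += 1
        else:
            users.add_edge(e["sender"], r, weight=1)

# Undirected variants simply drop the directionality.
undirected_users = users.to_undirected()
print(users["alice"]["bob"]["weight"], undirected_users.has_edge("bob", "alice"))  # 1 True
```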
Features Extracted from the Social Network
We extract different features from nodes in the corresponding directed and undirected graphs of both the bipartite email-user graph and the user graph. Some features are defined for only certain types of graphs (i.e. user vs. bipartite email-user; directed vs. undirected graphs), while other features are defined for all types of graphs. Then, we use these features with standard machine learning classifiers. Table 2 shows all the social network features we use in our experiments. We have chosen the feature names to be as self-explanatory as possible. We divide them into three sets, as indicated by double horizontal lines in Table 2. First, node features that can be computed from its edges only. Second, features extracted by considering the node and its neighbors (i.e. adjacent nodes). Finally, for the third set, the values on a node feature depend on the node position in the whole graph. These three sets of features allow us to extract local and global properties of individual nodes.
First feature set: The in-degree and out-degree scores for a node indicate how many edges are directed to/from this node. For directed graphs, the total degree is the sum of these two numbers, and number of edges connected to the node in undirected graphs. For users, we extract this score from both the user graph and the bipartite graph. In the user graph, in-degree for a user is the number of other users who sent at least one email to this user, out-degree is the number of other users who received at least one email from this user, and the total degree indicates the number of people who have exchanged emails (sent or received) with this user. In the bipartite graph, in-degree score for a user node indicates how many emails have been received by this user and the out-degree indicates how many emails have been sent by this user. The total degree is the amount of all emails in which the user is participant in. For emails, indegree is always equal to 1 (as any email always has only a single sender) so we ignore it. While out-degree indicates the number of recipients.
Second feature set: The second set of features measure dyadic relations and we extract them from the correspond- ing sender and recipient nodes of a given email in the user graph only. We extract these features for each pair of sender-recipient in case that an email has multiple recipients. Number of common neighbors counts the common nodes shared between the sender and recipient(s). The number of common neighbors alone might not be a good indicator of how close a pair of users are in case that one of them is part of too many triangles. To overcome this issue, we calculate the number of triangles involving the sender. Then we use it as normalization factor for the number of common neighbors between the sender and recipient(s). The intuition is that if the sender has only a few triangles, then a high number of common neighbors indicates that the two users are well connected through common people. In contrast, a high number of triangles for the sender indicates that the sender is directly linked to many people who are linked to each other. We also compute Jaccard's coefficient score between the sender and recipient(s) which is simply the normalized number of common neighbors by the total neighbors (the union). The last feature in this set is the local clustering coefficient, which measures how close are neighbors for a given node to form a clique. We calculate local clustering coefficient for the sender and each recipient.
Third feature set: The last set of features measure the global importance of nodes in graphs. The degree centralities are the normalized degree scores (in, out and total) by the maximum possible degree. Degree centralities measure importance of a node by looking at its direct neighbors. This might be useful for users but not emails as there are important emails sent to a small number of users and less important emails sent to many users (e.g. announcements). Thus, we compute them only for users in the user graph. Other centrality measures (betweenness, eigenvector, and closeness centralities, hub/auth) take into account nodes other than direct neighbors. Each centrality score computes the importance of a node differently. Particularly, closeness centrality indicates how close a node is to all other nodes in the network. It is the reciprocal of the sum of the length of the shortest paths between the node and all other nodes in the graph. While betweenness centrality measures the number of times a node lies as a bridge on the shortest path between two other nodes. All of these scores do not take into account the importance of the other nodes. For instance, a node might be connected (or acts as a bridge) to a few but important nodes but has a lower score than another node which is connected to a lot of less important nodes. To overcome this issue, we use eigenvector centrality. It measures the importance of a node by taking into account the importance of other nodes. A high eigenvector score means that a node is connected to many nodes who themselves have high scores. Hub/Auth is a generalization of eigenvector centrality. For each node, we compute two scores: hub score and authority score. A high hub score for a nodes means that it points to nodes with high authority scores. While a high authority score means the node is being connected by nodes with high hub scores. We compute these scores for both user (sender and recipients) and email nodes in both the bipartite and user graphs.
Final network feature vector: As we are interested in classifying emails, we extract features corresponding to emails and their participants. For each email, we extract features described above from the corresponding email node in the bipartite email-user graph as well as features from both the sender and the recipients (either in the "to" or "cc" list) from both the user graph and the bipartite email-user graph. In case the email has multiple recipients, we compute the max, min and average of the value corresponding to each feature. We then feed these features to machine learning models.
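The feature extraction and the min/max/average aggregation over recipients can be sketched as below; only a handful of the features in Table 2 are shown, computed on a toy undirected user graph, and the feature names are illustrative.

```python
# Sketch of a few Table 2 features on a toy undirected user graph, with the
# min/max/average aggregation over recipients described above. Feature names
# and the toy graph are illustrative; only a small subset of features is shown.
import networkx as nx
from statistics import mean

def user_features(g, user):
    """Per-user features from the undirected user graph g."""
    return {
        "degree": g.degree(user),
        "degree_centrality": nx.degree_centrality(g)[user],
        "clustering": nx.clustering(g, user),
        "closeness": nx.closeness_centrality(g, user),
    }

def pair_features(g, sender, recipient):
    """Dyadic sender-recipient features (common neighbors and Jaccard's coefficient)."""
    common = len(list(nx.common_neighbors(g, sender, recipient)))
    union = len(set(g[sender]) | set(g[recipient]))
    return {"common_neighbors": common,
            "jaccard": common / union if union else 0.0}

def email_feature_vector(g, sender, recipients):
    """Sender features plus min/max/avg of the dyadic features over all recipients."""
    feats = {f"sender_{k}": v for k, v in user_features(g, sender).items()}
    per_recipient = [pair_features(g, sender, r) for r in recipients]
    for key in per_recipient[0]:
        values = [d[key] for d in per_recipient]
        feats[f"{key}_min"] = min(values)
        feats[f"{key}_max"] = max(values)
        feats[f"{key}_avg"] = mean(values)
    return feats

g = nx.Graph([("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("carol", "dave")])
print(email_feature_vector(g, "alice", ["bob", "dave"]))
```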
GraphSAGE with Bipartite Graph
Our baseline, GraphSAGE (section 5.), is not designed to deal with the heterogeneous network induced by email exchange that includes emails and participants. Therefore, we extend GraphSAGE as follows. We construct a bipartite graph of users and emails as discussed in subsection 6.1.. Then, we feed this graph to a version of GraphSAGE which we modified such that we have different encoders and aggregators for users and emails. For emails, we use lexical features to represent them. For users, we use the network features extracted from the corresponding node in the user graphs as discussed in subsection 6.2.. We refer to this method as GraphSAGE-BiP. Because of the extension to bipartite graphs and the use of our own features, GraphSAGE-BiP represents a contribution of this paper.
Sequential Modeling of Threads
In the previous subsections, we have presented models on individual emails without looking to other emails in the same thread. However, we can predict the class of an email from other emails in the same thread; in fact, we observe that only 2.8% of threads in our Enron data set contain both "Personal" and "Business" emails. In this subsection, we discuss how we incorporate information from other emails in the same thread in order to improve the classification. We try two methods: first, using sequential models on threads, namely, LSTMs; and second, we add a simple approach that re-predicts email labels based on the majority of the predicted email labels in the same thread.
Modeling threads using LSTMs We apply Long Short-Term Memory (LSTM) networks to model thread structure. We concatenate two Bidirectional LSTMs (BiLSTMs), one for lexical features and the other for the social network features. Figure 2 illustrates the model architecture.
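A sketch of this architecture in PyTorch is given below; the hidden size, input dimensions and layer names are illustrative and not the tuned configuration.

```python
# Sketch in PyTorch of the two concatenated BiLSTMs over a thread: one branch
# reads the lexical vector of each email, the other its network-feature vector,
# and the concatenated hidden states feed a per-email classifier. Dimensions,
# hidden size and names are illustrative, not the tuned configuration.
import torch
import torch.nn as nn

class ThreadClassifier(nn.Module):
    def __init__(self, lex_dim=100, net_dim=60, hidden=64, n_classes=2):
        super().__init__()
        self.lex_lstm = nn.LSTM(lex_dim, hidden, batch_first=True, bidirectional=True)
        self.net_lstm = nn.LSTM(net_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(4 * hidden, n_classes)  # 2 directions x 2 branches

    def forward(self, lex_seq, net_seq):
        # lex_seq: (batch, thread_len, lex_dim); net_seq: (batch, thread_len, net_dim)
        lex_h, _ = self.lex_lstm(lex_seq)
        net_h, _ = self.net_lstm(net_seq)
        h = torch.cat([lex_h, net_h], dim=-1)  # per-email concatenation of both branches
        return self.out(h)                     # one Business/Personal logit pair per email

model = ThreadClassifier()
logits = model(torch.randn(1, 5, 100), torch.randn(1, 5, 60))
print(logits.shape)  # torch.Size([1, 5, 2])
```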
Majority of the thread We first predict emails using LSTMs. Then, we compute the majority vote of all emails in the same thread and assign the majority label to each email in the thread. In case there is no majority (i.e. the numbers of predicted business and personal labels are the same), we consider "Personal" to be the majority label.
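The relabeling step can be sketched as follows (the label encoding and variable names are ours):

```python
# Sketch of the thread-level majority vote, with ties resolved to "Personal"
# as described above. Label encoding (0 = Business, 1 = Personal) is ours.
from collections import Counter

def thread_majority(predicted_labels):
    counts = Counter(predicted_labels)
    return 1 if counts[1] >= counts[0] else 0   # tie goes to Personal

thread_preds = [0, 1, 1, 0]                      # per-email LSTM predictions
relabeled = [thread_majority(thread_preds)] * len(thread_preds)
print(relabeled)  # [1, 1, 1, 1]
```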
Experiments
In this section, we present experimental results of the email classification task into "Business" and "Personal" by conducting different experiments in different settings. In these experiments, we optimize the F-1 score on Personal emails since we are trying to identify personal emails, which are rare. We also report accuracy and Business F-1, along with recall and precision, since all measures together give a more complete understanding of the performance of our classifiers. In our results, we report the model with the optimal hyper-parameters that maximize the Personal F-1 score. In the following subsections, we first define weak baselines in subsection 7.1.. Then we evaluate some models on the Sheffield data set (Jabbari et al., 2006) in subsection 7.2.. In subsection 7.3., we evaluate different models and feature sets on individual emails without looking to other emails in the same thread. In subsection 7.4., we discuss the results of models for sequential modeling of threads. Finally, we discuss performance on the test set (subsection 7.5.). Table 5 summarizes the results.
Weak Baselines
In addition to our strong baseline, GraphSAGE (section 5.), we define two weak baselines: a random classifier and the all-business classifier. The former predicts the classes by respecting the class distribution in the Enron training dataset, while the latter predicts the majority class (i.e. "business"). Table 4 shows the results of these two baselines on our datasets.
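For concreteness, the two weak baselines can be sketched as below; the class prior used by the random classifier is an illustrative value standing in for the Enron training distribution.

```python
# Sketch of the two weak baselines: a random classifier that respects the
# training label distribution and the all-business classifier. The prior of
# 0.18 is an illustrative stand-in for the Enron training distribution.
import random

def random_baseline(n, p_personal=0.18, seed=0):
    rng = random.Random(seed)
    return ["Personal" if rng.random() < p_personal else "Business" for _ in range(n)]

def all_business_baseline(n):
    return ["Business"] * n

print(random_baseline(5))          # labels drawn with the Personal prior
print(all_business_baseline(3))    # ['Business', 'Business', 'Business']
```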
While the random baseline can be compared against the performance of our models on the minority class ("Personal"), for the all-business baseline, the personal F-1 score could be trivially beaten (zero score). However, it is harder to beat the business F-1 score of the all-business baseline, since the datasets are highly unbalanced (all datasets have more than 80% business emails). We consider a model robust if it has a personal F-1 score higher than random and a business F-1 score higher than all-business.

Table 3: Results of our models on the Sheffield dataset. We show numbers reported in (Jabbari et al., 2006) as (shf); their results are not directly comparable and are only shown for rough benchmarking.
Evaluation on Sheffield Data
In this subsection, we evaluate SVM classifiers on the Sheffield dataset (subsection 2.2.). The information about the experiments described in Jabbari et al. (2006) is not detailed and does not mention the train and test ratios. We divide the Sheffield set into 75% and 25% for train and test respectively. Table 3 shows results of three SVM classifiers: with network features only, with lexical features only, and with combination of both features (see subsection 7.3. for details). In addition, we report the results of the preliminary experiment reported in Jabbari et al. (2006) for convenience. However, the results are not directly comparable as we do not know what their training data was. The results show that our models outperform the results in Jabbari et al. (2006). Moreover, it shows that incorporating social network features with the lexical features outperforms modeling emails with lexical features only.
Classifying Emails Individually
In this subsection, we evaluate different models using individual emails without looking to other emails in the same thread.
We experiment with three classifiers: Deep Neural Networks (DNNs), Support Vector Machines (SVMs) and GraphSAGE-BiP (see subsection 6.3.). For DNNs, we use feed-forward neural networks and we try different hyperparameters (i.e. number of hidden units, and number of layers). We try linear and RBF kernels for SVMs. We tune the hyperparameters on Enron dev. For the SVM and NN classifiers we use three feature sets: net, using social network features only (section 6.); lexical, using word embeddings only (section 4.); all, the combination of the two feature sets. In the all feature setting, for neural networks, we concatenate the two networks (branches) of the lexical and the network features. For SVMs, we take the average of the two kernels (a kernel for each feature set).

Table 5: Results for all models on Enron and Avocado using different classifiers with different feature sets. All models are trained only on Enron tr. GS is the GraphSAGE baseline. The SVM, NN (neural network), and GS-BiP models (GraphSAGE with our extension to bipartite graphs) model emails individually (without thread structure). For SVM and NN results, we give results with different feature sets, namely net (social network features only), lex (lexical features only), and all (all features). The LSTMs model the thread structure explicitly. LSTM+: LSTM with majority vote.

Table 5 shows the results of models for email classification on both corpora: Enron and Avocado. In the first line, we report the results for our baseline, GraphSAGE (GS). We then present the results for our experiments using SVM and NN, each with the three possible feature sets. Finally, we present the results for our version of GraphSAGE using bipartite graphs, GS-BiP. To determine whether the performance improvement of different classifiers over others is statistically significant, we use the non-parametric Wilcoxon Signed-Rank Test (Sidney, 1957) on pairs of the Personal F-1 scores of different classifiers using 10-fold cross-validation runs on Enron. We perform the test on some crucial results, and we report the results of all significance tests we have performed, whether successful or not.

We observe that all models beat the random baseline on both Bus F1 and Pers F1 scores. However, classifiers with the network features alone perform worse than the all-business classifier on the business F1 score on both corpora. Other classifiers (i.e. lex and all) outperform the all-business classifier on Enron, while only a few individual email modeling classifiers have higher Bus F1 scores than the all-business classifier on Avocado. In general, lexical features alone outperform the network features alone. However, for all models on both corpora, incorporating social network information with lexical features improves the performance over the lexical features alone. For SVMs on Enron, this increase is significant at p < 0.01. Note that the neural model also profits from the addition of "feature engineered" network features.

For GraphSAGE on Enron, we performed an additional experiment (we do not give full results) in which we remove the network information by simply creating a graph without any edges between the nodes that represent emails. This amounts to just using lexical information in creating the node embeddings. Using lexical information only in this manner does not significantly decrease the results over using the network structure in conjunction with lexical information.

We conclude that GraphSAGE does not succeed in exploiting the information in the network induced by emails, while our feature-based approach to the network structure does. The SVM-all and NN-all models both beat GraphSAGE and GraphSAGE-BiP on Enron by a small margin (the difference is statistically significant for Personal F-1). Furthermore, as expected, the NN models outperform the SVM models (recall that both models use exactly the same features). We observe that the extension of GraphSAGE to bipartite graphs (GS-BiP) outperforms GraphSAGE using homogeneous graphs.

The results also show that the performance in the inter-corpora setting is lower than the performance in the intra-corpus setting for both social network and lexical features, for all models. We observe that GraphSAGE performs much worse in the inter-corpora setting compared to the intra-corpus setting. In addition, in the intra-corpus setting, the network features add more improvement. This is expected since Enron and Avocado have different email graphs and different professional languages (Enron was an energy company and Avocado was an IT company operating a decade later). These observations and results suggest that incorporating social network information with the lexical features indeed improves the performance in our approach. Also, the models can generalize to a new corpus without the need for retraining, and the network features play an important role in the performance on a new corpus.
Classifying Emails in Threads
The last two lines for each corpus in Table 5 show the results of LSTMs only and LSTMs with majority vote (LSTM+). The results show that LSTMs models perform better than models trained on individual emails on both the personal and business F-1 scores. Also, they beat the All-business baseline on the Bus F1 score, which makes them robust classifiers. The improvements of LSTMs over best non-sequential models on both corpora (NN-all and GraphSAGE-BiP) are statistically significant (p < 0.01). We observe that applying majority vote to LSTM models increases the personal F-1 score but in some cases decreases the business F-1. We also observe that using LSTMs increases the performance across the board, but the increase is particularly marked for the testing on Avocado. This reflects that the LSTM can exploit similarities among emails of a thread. We also note that the LSTM model with majority vote outperforms the GraphSAGE model by a substantial margin, providing our best results.
Performance on the Test Set
The results on the blind test set mirror, by and large, the results on the dev set. We observe a drop in the performance for both test sets in comparison to the corresponding development set. For Enron, we expect a slight decrease in the results since we optimize our models on the development set. However, for Avocado, we have not optimized any of our models on the Avocado development set. This suggests that Avocado ts is just harder than Avocado dev. Note that the sizes of Avocado dev and Avocado ts are almost the same and their ratios of personal emails are very similar: 8.6% and 9.1%, respectively.
Conclusion
In this paper, we propose a new way of incorporating social network information from the underlying email exchange network for email classification into "Business" and "Personal". In addition, we use a state-of-the-art graph embedding model namely, GraphSAGE, as a strong baseline. Our main finding is that adding social network information to lexical features improves the classification performance over the performance of an approach based on textual information only. Our models beat the strong baseline. We also find that modeling the thread structure improves the classification performance further, giving a substantial boost over GraphSAGE. The results also show that our network features can generalize to unseen nodes and graphs as we train on the email of one company (Enron) and test on the emails of another company (Avocado) that has different email graphs. We suggest that generic graph embedding models such as GraphSAGE are powerful tools for exploiting the social network, but they don't always have the best performance on some tasks. More importantly, the results of the extension of GraphSAGE to bipartite graph (i.e. GS-BiP) suggest that the choice of graph representation of the communication network is crucial for the classification performance, and requires changes to the GraphSAGE algorithm. For future work, we intend to experiment with combining our approach with the GraphSAGE embeddings. Our methodology of incorporating social network information is not limited to email classification, and we intend to investigate other interpersonal document classification tasks.
Figure 1: Email exchange graphs.
Figure 2: Two concatenated BiLSTMs for thread sequential modeling; one for lexical features and the other for social network features.
Table 2: Social Network Features. Check marks indicate that a feature is extracted only from the corresponding graph(s).
Table 4: Results of different baselines trained on Enron tr and tested on the indicated set. Here, we report the expected values for the random classifier.
Acknowledgments
We would like to thank the anonymous reviewers for their constructive feedback. The first author is sponsored by the KACST graduate scholarship program.
Language Resource References
Gender, Power, Business/Personal Type Annotations for the Enron Email Corpus. Columbia University, ISLRN 903-314-357-253-8.
Sakhar Alkhereyf and Owen Rambow. (2020). Business/Personal Type Annotations for the Avocado Email Corpus. Columbia University, ISLRN 528-821-149-515-9.
Douglas Oard, William Webber, David Kirsch, Sergey Golitsynskiy, and Douglas Reynolds. (2015). Avocado Research Email Collection. Linguistic Data Consortium, ISLRN 102-408-869-995-0. |
||
220,058,855 | υBLEU: Uncertainty-Aware Automatic Evaluation Method for Open-Domain Dialogue Systems | Because open-domain dialogues allow diverse responses, basic reference-based metrics such as BLEU do not work well unless we prepare a massive reference set of high-quality responses for input utterances. To reduce this burden, a human-aided, uncertainty-aware metric, ∆BLEU, has been proposed; it embeds human judgment on the quality of reference outputs into the computation of multiplereference BLEU. In this study, we instead propose a fully automatic, uncertainty-aware evaluation method for open-domain dialogue systems, υBLEU. This method first collects diverse reference responses from massive dialogue data and then annotates their quality judgments by using a neural network trained on automatically collected training data. Experimental results on massive Twitter data confirmed that υBLEU is comparable to ∆BLEU in terms of its correlation with human judgment and that the state of the art automatic evaluation method, RUBER, is improved by integrating υBLEU. | [
129945216,
198229379,
1880070,
52967399,
6628106,
6078795,
9197196,
102351981,
29050992,
11267601,
11336213,
1957433,
964287
] | υBLEU: Uncertainty-Aware Automatic Evaluation Method for Open-Domain Dialogue Systems
υBLEU: Uncertainty-Aware Automatic Evaluation Method for Open-Domain Dialogue Systems
Yuma Tsuta (tsuta@tkl.iis.u-tokyo.ac.jp), Naoki Yoshinaga, Masashi Toyoda (toyoda@tkl.iis.u-tokyo.ac.jp)
Institute of Industrial Science, The University of Tokyo
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, July 5 - July 10, 2020. Association for Computational Linguistics.
Because open-domain dialogues allow diverse responses, basic reference-based metrics such as BLEU do not work well unless we prepare a massive reference set of high-quality responses for input utterances. To reduce this burden, a human-aided, uncertainty-aware metric, ∆BLEU, has been proposed; it embeds human judgment on the quality of reference outputs into the computation of multiplereference BLEU. In this study, we instead propose a fully automatic, uncertainty-aware evaluation method for open-domain dialogue systems, υBLEU. This method first collects diverse reference responses from massive dialogue data and then annotates their quality judgments by using a neural network trained on automatically collected training data. Experimental results on massive Twitter data confirmed that υBLEU is comparable to ∆BLEU in terms of its correlation with human judgment and that the state of the art automatic evaluation method, RUBER, is improved by integrating υBLEU.
Introduction
There has been increasing interest in intelligent dialogue agents such as Apple Siri, Amazon Alexa, and Google Assistant. The key to achieving higher user engagement with those dialogue agents is to support open-domain non-task-oriented dialogues to return a meaningful response for any user input.
The major challenge in developing open-domain dialogue systems is that existing evaluation metrics for text generation tasks, such as BLEU (Papineni et al., 2002), correlate poorly with human judgment on evaluating responses generated by dialogue systems (Liu et al., 2016). In open-domain dialogues, even though responses with various contents and styles are acceptable (Sato et al., 2017), only a few responses, or often only one, are available as reference responses in evaluation datasets made from actual conversations. It is, therefore, hard for these reference-based metrics to consider uncertain responses without writing additional reference responses by hand ( § 2).
To remedy this problem, Galley et al. (2015) proposed ∆BLEU ( § 3), a human-aided evaluation method for text generation tasks with uncertain outputs. The key idea behind ∆BLEU is to consider human judgments on reference responses of diverse quality in the BLEU computation. Although ∆BLEU correlates more strongly with human judgment than BLEU does, it still requires human intervention. It therefore cannot effectively evaluate open-domain dialogue systems in a wide range of domains.
To remove the human intervention in ∆BLEU, we propose an automatic, uncertainty-aware evaluation metric, υBLEU. This metric exploits reference responses that are retrieved from massive dialogue logs and rated by a neural network trained with automatically collected training data ( § 4). We first retrieve diverse response candidates according to the similarity of utterances to which the responses were directed. We then train a neural network that judges the quality of the responses by using training data automatically generated from utterances with multiple responses. We also propose integrating υBLEU into the state of the art evaluation method, RUBER (Tao et al., 2018) ( § 2) to advance the state of the art by replacing its reference-based scorer.
Using our method, we experimentally evaluated responses generated by dialogue systems such as a retrieval-based method (Liu et al., 2016) and a generation-based method using Twitter dialogues ( § 5). Our method is comparable to ∆BLEU in terms of its correlation with human judgment, and when it is integrated into RUBER (Tao et al., 2018), it substantially improves that correlation ( § 6).
Our contributions are the following:
• We developed an uncertainty-aware automatic evaluation method for dialogue systems. Our method automates the human ratings required in ∆BLEU while maintaining its performance.
• We showed that integrating υBLEU into RUBER greatly improves RUBER's performance by providing the robustness needed to evaluate responses with uncertainty.
Related work
This section introduces recent studies on evaluating open-domain dialogue systems. We focus here on model-agnostic methods that can evaluate the quality of a response for a given utterance. 1 For evaluation of dialogue systems, researchers have adopted existing evaluation metrics for other text generation tasks such as machine translation and summarization. Unfortunately, reference-based metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) correlate poorly with human judgment on evaluating dialogue systems (Liu et al., 2016). This is because only a few responses, or often only one, can be used as reference responses when actual conversations are used as datasets, even though responses in open-domain dialogues can be diverse (Sato et al., 2017).
To consider uncertain responses in open-domain dialogues, Sordoni et al. (2015) attempted to collect multiple reference responses from dialogue logs for each test utterance-response pair. Galley et al. (2015) improved that method by manually rating the augmented reference responses and used the ratings to perform discriminative BLEU evaluation, as detailed later in § 3.2. Gupta et al. (2019) created multiple reference responses by hand for the Daily Dialogue dataset (Li et al., 2017). Although the last two studies empirically showed that the use of human-rated or -created reference responses in evaluation improves the correlation with human judgment, it is costly to create such evaluation datasets for various domains.
As for evaluation methods, ADEM (Lowe et al., 2017) learns an evaluation model that predicts human scores for given responses by using large-scale human-rated responses that are originally generated by humans or dialogue systems. The drawback of that method is the cost of annotation to train the evaluation model. Moreover, the evaluation model has been reported to overfit the dialogue systems used for generating the training data. RUBER (Tao et al., 2018) is an automatic evaluation method that combines two approaches: its referenced scorer evaluates the similarity between a reference and a generated response by using the cosine similarity of their vector representations, while its unreferenced scorer, trained by negative sampling, evaluates the relevance between an input utterance and a generated response. Ghazarian et al. (2019) showed that the use of BERT embeddings (Devlin et al., 2019) as pretrained vectors improves the unreferenced scorer but not the referenced scorer in RUBER. The referenced scorer is similar to ∆BLEU in that both are reference-based evaluation metrics. We later confirm that the referenced scorer in RUBER underperforms our method, and we thus propose replacing it with our method ( § 5.5).
1 Perplexity is sometimes used to evaluate dialogue systems (Hashimoto et al., 2019). It is only applicable, however, to generation-based dialogue systems, so we do not discuss it here, following Liu et al. (2016).
Preliminaries
This section reviews ∆BLEU (Galley et al., 2015), a human-aided evaluation method for text generation tasks with uncertain outputs, after explaining the underlying metric, BLEU (Papineni et al., 2002).
BLEU
BLEU (Papineni et al., 2002) calculates an evaluation score based on the number of occurrences of n-gram tokens that appear in both the reference and the generated response. Specifically, the score is calculated from a modified n-gram precision p_n and a brevity penalty (BP):
\[
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\left(\sum_{n=1}^{N} \frac{1}{N} \log p_n\right), \tag{1}
\]
\[
\mathrm{BP} =
\begin{cases}
1 & \text{if } \eta > \rho \\
e^{(1-\rho/\eta)} & \text{otherwise},
\end{cases} \tag{2}
\]
\[
p_n = \frac{\sum_i \sum_{g \in \text{n-grams}(h_i)} \max_j \{\#_g(h_i, r_{i,j})\}}{\sum_i \sum_{g \in \text{n-grams}(h_i)} \#_g(h_i)}. \tag{3}
\]
Here, ρ and η are the average lengths of the reference and generated responses, respectively; n and N are the n-gram length and its maximum; h_i and r_{i,j} are the generated response and the jth reference response for the ith utterance, respectively; #_g(u) is the number of occurrences of n-gram token g in sentence u; and #_g(u, v) is defined as min{#_g(u), #_g(v)}.
Figure 1: An overview of υBLEU: retrieving diverse reference responses from dialogue logs ( § 4.1) to augment the reference response in each test example, followed by a neural network (NN)-rater that judges their quality ( § 4.2).
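To make Eqs. 1-3 concrete, the following Python sketch computes the corpus-level modified n-gram precision and the resulting BLEU score. It is a simplified illustration of standard multi-reference BLEU rather than the authors' implementation; it assumes whitespace-tokenized responses and, as a simplification, uses the first reference length for the brevity penalty.

from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]

def modified_precision(hyps, refs, n):
    # hyps[i]: tokenized generated response h_i; refs[i]: list of tokenized references r_ij
    num, den = 0, 0
    for h, rs in zip(hyps, refs):
        ref_counts = [Counter(ngrams(r, n)) for r in rs]
        for g, c in Counter(ngrams(h, n)).items():
            num += min(c, max(rc[g] for rc in ref_counts))  # clipped count (numerator of Eq. 3)
            den += c                                        # plain count (denominator of Eq. 3)
    return num / den if den else 0.0

def bleu(hyps, refs, N=2):
    # Eqs. 1 and 2: eta is the average generated length, rho the average reference length
    eta = sum(len(h) for h in hyps) / len(hyps)
    rho = sum(len(rs[0]) for rs in refs) / len(refs)   # first reference only (a simplification)
    bp = 1.0 if eta > rho else exp(1 - rho / eta)
    precisions = [modified_precision(hyps, refs, n) for n in range(1, N + 1)]
    if min(precisions) == 0:
        return 0.0
    return bp * exp(sum(log(p) for p in precisions) / N)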
∆BLEU: Discriminative BLEU
∆BLEU is a human-aided evaluation method for text generation tasks with uncertain outputs, such as response generation in open-domain dialogues. To augment the reference responses for each test example (an utterance-response pair), following the work by Sordoni et al. (2015), ∆BLEU first retrieves, from Twitter, utterance-response pairs similar to the given pair. The similarities between utterances and between responses are next calculated by using BM25 (Robertson et al., 1994), and they are multiplied to obtain the similarity between the utterance-response pairs. Then, the responses for the top-15 similar utterance-response pairs and the utterance (as a parrot return) are combined with the original response to form an extended set of reference responses. Each of the extended references is then rated by humans in terms of its appropriateness as a response to the given utterance. Finally, ∆BLEU calculates p_n (Eq. 3) with the extended references r_{i,j} and their manual quality judgments w_{i,j} for the input utterance i:
\[
p_n = \frac{\sum_i \sum_{g \in \text{n-grams}(h_i)} \max_{j:\, g \in r_{i,j}} \{w_{i,j} \cdot \#_g(h_i, r_{i,j})\}}{\sum_i \sum_{g \in \text{n-grams}(h_i)} \max_j \{w_{i,j} \cdot \#_g(h_i)\}}.
\]
In this way, ∆BLEU weights the number of occurrences of n-gram g in Eq. 3 with the manual quality judgment w_{i,j}. The problem with ∆BLEU is the cost of manual judgment. Although we want to evaluate open-domain dialogue systems in various domains, the annotation cost prevents effective evaluation.
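The weighted numerator and denominator of ∆BLEU can be sketched in the same style as the BLEU sketch above. This is an illustrative reading of the formula, not the authors' code; weights[i][j] stands for the human rating w_{i,j} rescaled to [-1, 1].

from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)]

def delta_precision(hyps, refs, weights, n):
    # hyps[i]: tokenized generated response; refs[i][j]: tokenized reference j;
    # weights[i][j]: quality judgment w_ij in [-1, 1]
    num, den = 0.0, 0.0
    for h, rs, ws in zip(hyps, refs, weights):
        ref_counts = [Counter(ngrams(r, n)) for r in rs]
        for g, c in Counter(ngrams(h, n)).items():
            # numerator: max over references containing g of w_ij times the clipped count
            matches = [w * min(c, rc[g]) for rc, w in zip(ref_counts, ws) if g in rc]
            if matches:
                num += max(matches)
            # denominator: max over all references of w_ij times the count of g in the hypothesis
            den += max(w * c for w in ws)
    return num / den if den else 0.0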
Proposed method: υBLEU
This section describes our approach to the problems of ∆BLEU described in § 3.2. To remove the cost of human judgments of extended references, we propose using a neural network trained on automatically collected training data to rate each of the retrieved responses (Figure 1, § 4.2). In addition, to diversify the extended reference responses in terms of content and style, we propose a relaxed response retrieval approach using continuous vector representations of utterances only ( § 4.1).
Retrieving diverse reference responses
Given an utterance-response pair (test example), ∆BLEU expands the original reference response by retrieving utterance-response pairs, in which both the utterance and response are similar to the test example, from massive dialogue logs (here, Twitter). Because using the similarity between responses prevents us from retrieving diverse responses in terms of content, we propose considering only the similarity between the utterances. In addition, we use an embedding-based similarity instead of BM25 to flexibly retrieve semantically-similar responses with synonymous expressions (style variants).
We compute the similarity of utterances by using the cosine similarity between utterance vectors obtained from the average of pretrained embeddings of the words in the utterances. In addition to the retrieved responses, we add the utterance (as a parrot return) to the reference responses as in ∆BLEU.
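A minimal sketch of this retrieval step follows, assuming pretrained word embeddings are available as a dictionary word_vecs mapping tokens to NumPy vectors; the function and variable names are ours, not those of the released code.

import numpy as np

def utterance_vector(tokens, word_vecs, dim=300):
    vecs = [word_vecs[w] for w in tokens if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def retrieve_references(test_utterance, corpus, word_vecs, k=15):
    # corpus: list of (utterance_tokens, response_tokens) pairs from the dialogue logs
    q = utterance_vector(test_utterance, word_vecs)
    scored = sorted(corpus,
                    key=lambda pair: cosine(q, utterance_vector(pair[0], word_vecs)),
                    reverse=True)
    # responses of the k most similar utterances, plus the utterance itself as a parrot return
    return [resp for _, resp in scored[:k]] + [test_utterance]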
Rating extended reference responses
∆BLEU manually judges the appropriateness of the extended reference responses for the utterance. To remove this human intervention, we propose rating each reference response by using a neural network that outputs a probability for that response as a response to the given utterance.
Specifically, our neural network (NN)-rater takes two utterance-response pairs as inputs: a given pair of utterance U1 and reference response R1 (the test example), and a retrieved pair of utterance U2 and response R2. The NN-rater is trained to output the probability that the retrieved response R2 for U2 can be a response to the given utterance U1 with response R1. This probability is then used as a quality judgment after normalization to the interval [−1, 1], as in ∆BLEU.
The key issue here is how to prepare the training data for the NN-rater. We use utterances with multiple responses in dialogue data (here, Twitter) as positive examples; for negative examples, we randomly sample two utterance-response pairs.
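The construction of training examples can be sketched as follows; by_utterance and all_pairs are hypothetical containers built from the dialogue logs, and the exact sampling procedure here is our assumption rather than the paper's released code.

import random

def build_training_pairs(by_utterance, all_pairs, n_negative):
    # by_utterance: dict mapping an utterance to the list of responses observed for it
    # all_pairs: list of (utterance, response) pairs used for negative sampling
    positives, negatives = [], []
    for u, responses in by_utterance.items():
        if len(responses) >= 2:
            r1, r2 = random.sample(responses, 2)
            positives.append(((u, r1), (u, r2), 1))   # two responses to the same utterance
    for _ in range(n_negative):
        (u1, r1), (u2, r2) = random.sample(all_pairs, 2)
        negatives.append(((u1, r1), (u2, r2), 0))     # two unrelated utterance-response pairs
    return positives + negatives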
We then train the NN-rater in Figure 1 from the collected training data. Because the utterances in the two utterance-response pairs in a positive example are identical, while those in a negative example are independent, we do not feed both utterances to the NN-rater. This input design prevents overfitting.
Specifically, given a test example of utterance U1 and response R1 and a retrieved utterance-response pair of U2 and R2, we give two triplets, ⟨U1, R1, R2⟩ and ⟨U2, R2, R1⟩, as inputs to the NN-rater. Next, we make two vectors by concatenating the triplet vectors returned from a bi-directional gated recurrent unit (Bi-GRU) (Cho et al., 2014) as the last hidden state for the utterance and the two responses. We concatenate the forward and backward hidden states (h_f, h_b) of the Bi-GRU to represent an utterance/response vector as v = [h_f, h_b]. We then feed each triplet vector to a feed-forward neural network (FFNN) with a softmax function to obtain a pair of probabilities that R2 can or cannot be a response to U1 (similarly, another pair of probabilities that R1 can or cannot be a response to U2). The maximum of these two probabilities is used as the quality judgment of the response R2 (or R1) and is multiplied by −1 if the pair is classified as negative, to normalize it into [−1, 1]. This formulation is inspired by Tao et al. (2018) and Ghazarian et al. (2019).
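A compact PyTorch sketch of the NN-rater, following our reading of the description above, is given below; the class and variable names are ours, and the dimensions follow the settings reported in § 5.3. The full scoring procedure would run the model on both ⟨U1, R1, R2⟩ and ⟨U2, R2, R1⟩, take the maximum probability, and flip its sign when the negative class wins.

import torch
import torch.nn as nn

class NNRater(nn.Module):
    def __init__(self, vocab_size, emb_dim=512, hidden=512, ffnn_hidden=1024, ffnn_layers=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        layers, in_dim = [], 3 * 2 * hidden            # a triplet of Bi-GRU vectors [h_f, h_b]
        for _ in range(ffnn_layers - 1):
            layers += [nn.Linear(in_dim, ffnn_hidden), nn.ReLU()]
            in_dim = ffnn_hidden
        layers.append(nn.Linear(in_dim, 2))            # softmax over {is a response, is not}
        self.ffnn = nn.Sequential(*layers)

    def encode(self, token_ids):
        _, h = self.encoder(self.emb(token_ids))       # h: (2, batch, hidden) for one Bi-GRU layer
        return torch.cat([h[0], h[1]], dim=-1)         # v = [h_f, h_b]

    def forward(self, utterance, response_a, response_b):
        # scores one triplet <U, R_a, R_b>; token inputs are padded LongTensors (batch, seq)
        triplet = torch.cat([self.encode(utterance),
                             self.encode(response_a),
                             self.encode(response_b)], dim=-1)
        return torch.softmax(self.ffnn(triplet), dim=-1)  # probabilities (positive, negative)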
Experimental Settings
This section describes how to evaluate our method for evaluating open-domain dialogue systems. Using utterances from Twitter ( § 5.1), responses written by humans, and responses obtained by dialogue systems ( § 5.2), we evaluated our method in terms of its correlation with human judgment ( § 5.3-5.5).
Twitter dialogue datasets
We built a large-scale Japanese dialogue dataset from Twitter posts of 2.5 million users that have been collected through the user timeline API since March 2011 (Nishi et al., 2016). Posts that are neither retweets nor mentions of other posts were regarded as utterances, and posts mentioning these posts were used as responses.
We use this dataset for training and testing the dialogue systems and for training the NN-rater that judges the quality of retrieved responses. In these experiments, to simulate evaluating dialogue systems trained on dialogue data that are unseen by the evaluation methods, we used dialogue data posted during 2017 for training and running the NN-rater, dialogue data posted during 2018 for training the dialogue systems, and dialogue data posted during 2019 for testing them, as summarized in Table 1.
Target responses for evaluation
Following Liu et al. (2016) and Galley et al. (2015), we adopted three methods to obtain responses for each utterance in the test set: a retrieval-based method, C-TFIDF (Liu et al., 2016), with BM25 as the similarity function (C-BM25); a generation-based method, VHRED (Serban et al., 2017); and HUMAN responses, which are the actual responses except for the reference response.
Following Ritter et al. (2010) and Higashinaka et al. (2011), to use a series of dialogues as training data for the above methods, we recursively followed replies from each non-reply post to obtain a dialogue between two users that consists of at least three posts. We then randomly selected pairs of first utterances and their replies in the obtained dialogues as our dialogue data: 2.4M pairs for training VHRED and for retrieving responses in C-BM25, 10K pairs as validation data for VHRED, and 100 pairs as test data. 2 These dialogues were tokenized with SentencePiece (Kudo and Richardson, 2018) for VHRED and with MeCab 0.996 (ipadic 2.7.0) 3 for C-BM25 to retrieve responses based on words, which are less ambiguous than subwords. Finally, six Japanese native speakers in our research group evaluated the 300 target responses for the 100 test examples in terms of their appropriateness as a response to a given utterance. We used a 5-point Likert-type scale, with 1 meaning inappropriate or unrecognizable and 5 meaning very appropriate or seeming to be an actual response.
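A sketch of how such first-utterance/reply pairs might be extracted from reply chains is shown below; replies_to is a hypothetical mapping from a post ID to the IDs of posts replying to it, the names are ours, and the two-user alternation check is omitted for brevity.

def dialogue_chains(root_id, replies_to, chain=None):
    # Recursively follow replies from a non-reply post, yielding maximal chains of post IDs.
    chain = (chain or []) + [root_id]
    children = replies_to.get(root_id, [])
    if not children:
        yield chain
    for child in children:
        yield from dialogue_chains(child, replies_to, chain)

def first_pairs(roots, replies_to, min_len=3):
    # Keep dialogues with at least min_len posts and take (first utterance, first reply).
    pairs = []
    for root in roots:
        for chain in dialogue_chains(root, replies_to):
            if len(chain) >= min_len:
                pairs.append((chain[0], chain[1]))
    return pairs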
NN-rater to evaluate reference responses
To train the NN-rater for evaluating the extended references ( § 4.2), we randomly extracted 5.6M and 10K utterance-response pairs for training and validation data, respectively. The number of positive and negative examples was set equal in both datasets.
Before these examples were fed to the NN-rater, they were tokenized with SentencePiece.
For the NN-rater, we used a 512-dimensional embedding layer, one Bi-GRU layer with 512-dimensional hidden units, five layers for the FFNN with 1024-dimensional hidden units, and ReLU as the activation function. We used the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 0.001 and calculated the loss with cross entropy. We trained the NN-rater with a batch size of 1000 for up to 15 epochs. The model with the parameters that achieved the minimum loss on the validation data was used for evaluating the test data.
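The training procedure described above corresponds roughly to the following sketch, where NNRater is the model sketched in § 4.2 and the data loaders are assumed to yield padded token-ID batches of 1000 examples with labels; this is a schematic outline, not the released code.

import torch
import torch.nn as nn

def train_nn_rater(model, train_loader, valid_loader, epochs=15, lr=0.001):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.NLLLoss()                                   # cross entropy on the softmax outputs
    best_loss, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for utt, resp_a, resp_b, label in train_loader:
            optimizer.zero_grad()
            log_probs = torch.log(model(utt, resp_a, resp_b) + 1e-12)
            loss = criterion(log_probs, label)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(torch.log(model(u, a, b) + 1e-12), y).item()
                           for u, a, b, y in valid_loader) / max(len(valid_loader), 1)
        if val_loss < best_loss:                               # keep parameters with minimum validation loss
            best_loss = val_loss
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    return best_state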
Response retrieval and scoring
Following the ∆BLEU procedure ( § 3.2), for each test example, the 15 most similar utterance-response pairs were retrieved to augment the reference response, in addition to the utterance (as a parrot return), to apply ∆BLEU and υBLEU. We retrieved utterance-response pairs from approximately 16M utterance-response pairs of our dialogue data (Table 1). These dialogue data were tokenized with MeCab for response retrieval; we then trained GloVe embeddings (Pennington et al., 2014) on this data to compute utterance or response vectors ( § 4.1).
We then judged the quality of each retrieved reference response, by humans for ∆BLEU and by the NN-rater for υBLEU, in terms of its appropriateness as a response to the given utterance. We asked four of the six Japanese native speakers to judge the quality of each retrieved reference response.
Compared response evaluation methods
We have so far proposed two modifications to improve and automate ∆BLEU: more diverse reference retrieval ( § 4.1) and automatic reference quality judgment ( § 4.2). To see the impact of each modification, we first compare BLEU with various reference retrieval methods. We then compare BLEU with only one reference, ∆BLEU, and υBLEU. Finally, we compare υBLEU with the state of the art evaluation method, RUBER, and examine the performance of RUBER when its referenced scorer is replaced with υBLEU.
Specifically, we applied each evaluation method to the 300 responses ( § 5.2). ∆BLEU and υBLEU used the extended references in evaluation. BLEU used the original (single) references or the extended references. The reference scorer in RUBER used the original (single) references.
Following previous studies (Liu et al., 2016; Tao et al., 2018), we evaluated the performance of the evaluation methods in terms of their correlation with human judgments on the 300 responses. To calculate the correlation, we used Spearman's ρ and Pearson's r. To understand the stability of the evaluation, we computed the maximum and minimum correlation with the human judgments given by each annotator. All evaluation methods using the modified n-gram precision were calculated with n ≤ 2 (BLEU-2), following Galley et al. (2015).
Results
Table 2 lists the correlations between human judgment and BLEU for each reference retrieval method. In terms of Spearman's ρ, all methods using the extended references exhibited higher maximum and minimum correlation with human judgment than BLEU did with only one reference. For Pearson's r, only the proposed retrieval method, which uses an embedding-based similarity for utterances, showed higher minimum correlation than BLEU did with only one reference. This means that the proposed retrieval method was the most appropriate way to extend the reference responses. We therefore used reference responses extended by the proposed method for υBLEU in the following evaluation.
Next, Table 3 compares υBLEU with ∆BLEU and the state of the art evaluation method, RUBER. The comparison between υBLEU and BLEU in Table 2 revealed that the use of our NN-rater improved the minimum correlation with human judgment. Here, υBLEU was comparable to ∆BLEU, which implies that our method can successfully automate ∆BLEU, a human-aided, uncertainty-aware evaluation method. υBLEU performed better than RUBER (unreferenced scorer + referenced scorer) did for all correlations other than the maximum Spearman's ρ. We attribute the poor performance of RUBER to the poor performance of its referenced scorer, which was even worse than BLEU with only one reference in Table 2. This shows that merely adopting embedding-based similarity does not address the uncertainty of outputs. By replacing the referenced scorer in RUBER with our υBLEU, however, we obtained the best overall correlations, which advances the state of the art.
Examples
Table 4 shows examples of responses retrieved and evaluated by our method, along with evaluation scores for responses generated by C-BM25. The BLEU score with a single-reference response was almost zero. The υBLEU scores were the closest to human judgment, multi-reference BLEU (BLEU_multi) was the second closest, and single-reference BLEU was the farthest.
Table 4: Examples of responses retrieved and evaluated by our method for a given test example, along with evaluation scores for responses generated by C-BM25. BLEU refers to the BLEU score with the original response, while BLEU_multi refers to the BLEU score with the extended references. For comparison, we normalized all evaluation scores to the interval for BLEU, i.e., [0, 1].
Conclusions
We have proposed a method to remove the need for costly human judgment in ∆BLEU and obtain an automatic uncertainty-aware metric for dialogue systems. Our proposed υBLEU rates diverse reference responses retrieved from massive dialogue logs by using a neural network trained with automatically-collected training data, and it uses the responses and the scores to run ∆BLEU. Experimental results on massive Twitter dialogue data revealed that υBLEU is comparable to human-aided ∆BLEU, and that, by integrating it into RUBER, the state of the art method for evaluating open-domain dialogue systems, we can improve the correlation with human judgment.
We will release all code and datasets (tweet IDs) to promote the reproducibility of our experiments. 4 Readers can refer to our code to evaluate their own dialogue systems in their native languages.
(Table 4 excerpt) Generated response (C-BM25): むしろ辞めたほうが良いのでは (You'd better stop) (human: 0.33, BLEU: 0.01, BLEU_multi: 0.07, υBLEU: 0.25)
Table 1: Statistics of the dialogue data used to run each task. The numbers in the parentheses mean year.
Table 2: Correlation between human judgment and BLEU with reference responses retrieved by various methods.
Table 3: Correlation between each method and human judgment; human refers to the inter-rater correlations.
2 To obtain HUMAN responses for evaluation, we only used dialogues whose first utterances had more than one response.
3 https://taku910.github.io/mecab/
4 http://www.tkl.iis.u-tokyo.ac.jp/ tsuta/acl-srw-2020/
Acknowledgments
The research was supported by NII CRIS collaborative research program operated by NII CRIS and LINE Corporation.
References
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A Discriminative Metric for Generation Tasks with Intrinsically Diverse Targets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 445-450, Beijing, China. Association for Computational Linguistics.
Sarik Ghazarian, Johnny Wei, Aram Galstyan, and Nanyun Peng. 2019. Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 82-89, Minneapolis, Minnesota. Association for Computational Linguistics.
Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, and Jeffrey Bigham. 2019. Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 379-391, Stockholm, Sweden. Association for Computational Linguistics.
Tatsunori Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying Human and Statistical Evaluation for Natural Language Generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1689-1701, Minneapolis, Minnesota. Association for Computational Linguistics.
Ryuichiro Higashinaka, Noriaki Kawamae, Kugatsu Sadamitsu, Yasuhiro Minami, Toyomi Meguro, Kohji Dohsaka, and Hirohito Inagaki. 2011. Building a conversational model from two-tweets. In 2011 IEEE Workshop on Automatic Speech Recognition Understanding, pages 330-335.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In International Conference for Learning Representations.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.
Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116-1126, Vancouver, Canada. Association for Computational Linguistics.
Ryosuke Nishi, Taro Takaguchi, Keigo Oka, Takanori Maehara, Masashi Toyoda, Ken-ichi Kawarabayashi, and Naoki Masuda. 2016. Reply trees in Twitter: Data analysis and branching process models. Social Network Analysis and Mining, 6(1):26.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised Modeling of Twitter Conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172-180, Los Angeles, California. Association for Computational Linguistics.
S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1994. Okapi at TREC-3. In Proceedings of the 3rd Text REtrieval Conference, pages 109-126. National Institute of Standards and Technology (NIST).
Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, and Masaru Kitsuregawa. 2017. Modeling situations in neural chat bots. In Proceedings of ACL 2017, Student Research Workshop, pages 120-127, Vancouver, Canada. Association for Computational Linguistics.
Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. In Association for the Advancement of Artificial Intelligence, pages 3295-3301.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196-205, Denver, Colorado. Association for Computational Linguistics.
Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems. In AAAI Conference on Artificial Intelligence, pages 722-729.
4,940,347 | Definition and Analysis of Intermediate Entailment Levels | In this paper we define two intermediate models of textual entailment, which correspond to lexical and lexical-syntactic levels of representation. We manually annotated a sample from the RTE dataset according to each model, compared the outcome for the two models, and explored how well they approximate the notion of entailment. We show that the lexicalsyntactic model outperforms the lexical model, mainly due to a much lower rate of false-positives, but both models fail to achieve high recall. Our analysis also shows that paraphrases stand out as a dominant contributor to the entailment task. We suggest that our models and annotation methods can serve as an evaluation scheme for entailment at these levels. | [
15455102,
6387310
] | Definition and Analysis of Intermediate Entailment Levels
June 2005
Roy Bar-Haim barhair@cs.biu.ac.il
Computer Science Department
Bar Ilan University Ramat-Gan 52900
Israel
Idan Szpektor szpekti@cs.biu.ac.il
Computer Science Department
Bar Ilan University Ramat-Gan 52900
Israel
Oren Glickman glikmao@cs.biu.ac.il
Computer Science Department
Bar Ilan University Ramat-Gan 52900
Israel
Definition and Analysis of Intermediate Entailment Levels
Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment
the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment, Ann Arbor, June 2005
In this paper we define two intermediate models of textual entailment, which correspond to lexical and lexical-syntactic levels of representation. We manually annotated a sample from the RTE dataset according to each model, compared the outcome for the two models, and explored how well they approximate the notion of entailment. We show that the lexical-syntactic model outperforms the lexical model, mainly due to a much lower rate of false-positives, but both models fail to achieve high recall. Our analysis also shows that paraphrases stand out as a dominant contributor to the entailment task. We suggest that our models and annotation methods can serve as an evaluation scheme for entailment at these levels.
Introduction
Textual entailment has been proposed recently as a generic framework for modeling semantic variability in many Natural Language Processing applications, such as Question Answering, Information Extraction, Information Retrieval and Document Summarization. The textual entailment relationship holds between two text fragments, termed text and hypothesis, if the truth of the hypothesis can be inferred from the text.
Identifying entailment is a complex task that incorporates many levels of linguistic knowledge and inference. The complexity of modeling entailment was demonstrated in the first PASCAL Challenge Workshop on Recognizing Textual Entailment (RTE) (Dagan et al., 2005). Systems that participated in the challenge used various combinations of NLP components in order to perform entailment inferences. These components can largely be classified as operating at the lexical, syntactic and semantic levels (see Table 1 in (Dagan et al., 2005)). However, little research has been done to analyze the contribution of each inference level, or the contribution of individual inference mechanisms within each level. This paper suggests that decomposing the complex task of entailment into subtasks, and analyzing the contribution of individual NLP components for these subtasks, would be a step towards a better understanding of the problem and towards better entailment engines. We set three goals in this paper. First, we consider two modeling levels that employ only part of the inference mechanisms, but perform perfectly at each level. We explore how well these models approximate the notion of entailment, and analyze the differences between the outcome of the different levels. Second, for each of the presented levels, we evaluate the distribution (and contribution) of each of the inference mechanisms typically associated with that level. Finally, we suggest that the definitions of entailment at different levels of inference, as proposed in this paper, can serve as guidelines for manual annotation of a "gold standard" for evaluating systems that operate at a particular level. Altogether, we set forth a possible methodology for annotation and analysis of entailment datasets.
We introduce two levels of entailment: Lexical and Lexical-Syntactic. We propose these levels as intermediate stages towards a complete entailment model. We define an entailment model for each level and manually evaluate its performance over a sample from the RTE test-set. We focus on these two levels as they correspond to well-studied NLP tasks, for which robust tools and resources exist, e.g. parsers, part of speech taggers and lexicons. At each level we included inference types that represent common practice in the field. More advanced processing levels which involve logical/semantic inference are less mature and were left beyond the scope of this paper.
We found that the main difference between the lexical and lexical-syntactic levels is that the lexical-syntactic level corrects many false-positive inferences done at the lexical level, while introducing only a few false-positives of its own. As for identifying positive cases (recall), both systems exhibit similar performance, and were found to be complementary. Neither of the levels was able to identify more than half of the positive cases, which emphasizes the need for deeper levels of analysis. Among the different inference components, paraphrases stand out as a dominant contributor to the entailment task, while synonyms and derivational transformations were found to be the most frequent at the lexical level.
Using our definitions of entailment models as guidelines for manual annotation resulted in a high level of agreement between two annotators, suggesting that the proposed models are well-defined.
Our study follows on previous work (Vanderwende et al., 2005), which analyzed the RTE Challenge test-set to find the percentage of cases in which syntactic analysis alone (with optional use of a thesaurus for the lexical level) suffices to decide whether or not entailment holds. Our study extends this work by considering a broader range of inference levels and inference mechanisms and providing a more detailed view. A fundamental difference between the two works is that while Vanderwende et al. did not make judgements on cases where additional knowledge was required beyond syntax, our entailment models were evaluated over all of the cases, including those that require higher levels of inference. This allows us to view the entailment model at each level as an idealized system approximating full entailment, and to evaluate its overall success.
The rest of the paper is organized as follows: section 2 provides definitions for the two entailment levels; section 3 describes the annotation experiment we performed, its results and analysis; section 4 concludes and presents planned future work.
Definition of Entailment Levels
In this section we present definitions for two entailment models that correspond to the Lexical and Lexical-Syntactic levels. For each level we describe the available inference mechanisms. Table 1 presents several examples from the RTE test-set together with annotation of entailment at the different levels.
The Lexical entailment level
At the lexical level we assume that the text T and hypothesis H are represented by a bag of (possibly multi-word) terms, ignoring function words. At this level we define that entailment holds between T and H if every term h in H can be matched by a corresponding entailing term t in T . t is considered as entailing h if either h and t share the same lemma and part of speech, or t can be matched with h through a sequence of lexical transformations of the types described below.
Morphological derivations This inference mechanism considers two terms as equivalent if one can be obtained from the other by some morphological derivation. Examples include nominalizations (e.g. 'acquisition ⇔ acquire'), pertainyms (e.g. 'Afghanistan ⇔ Afghan'), or nominal derivations like 'terrorist ⇔ terror'.
Ontological relations This inference mechanism refers to ontological relations between terms. A term is inferred from another term if a chain of valid ontological relations between the two terms exists (Andreevskaia et al., 2005). In our experiment we regarded the following three ontological relations as providing entailment inferences: (1) 'synonyms' (e.g. 'free ⇔ release' in example 1361, Table 1);
(2) 'hypernym' (e.g. 'produce ⇒ make') and (3) 'meronym-holonym' (e.g. 'executive ⇒ company'). Lexical World knowledge This inference mechanism refers to world knowledge reflected at the lexical level, by which the meaning of one term can be inferred from the other. It includes both knowledge about named entities, such as 'Taliban ⇒ organization' and 'Roscommon ⇔ Co. Roscommon' (example 1584 in Table 1), and other lexical relations between words, such as WordNet's relations 'cause' (e.g. 'kill ⇒ die') and 'entail' (e.g. 'snore ⇒ sleep').
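To make the lexical matching rule concrete, the following minimal Python sketch checks whether every hypothesis term is matched by an entailing text term. The lookup table is a tiny hand-written placeholder standing in for the resources listed above (WordNet relations, derivation lexicons, world knowledge); the annotation in this paper was performed manually, so this is only an illustration of how the rule could be operationalised.

```python
# Maps a hypothesis term (lemma, POS) to text terms that are allowed to entail it.
ENTAILED_BY = {
    ("free", "VERB"): {("release", "VERB")},          # synonym
    ("make", "VERB"): {("produce", "VERB")},          # hypernym direction: produce => make
    ("acquire", "VERB"): {("acquisition", "NOUN")},   # morphological derivation
}

def entailing_terms(h_term):
    """All (lemma, POS) pairs allowed to entail the hypothesis term."""
    return {h_term} | ENTAILED_BY.get(h_term, set())

def lexical_entailment(text_terms, hyp_terms):
    """True if every hypothesis term is matched by an entailing text term."""
    text_terms = set(text_terms)
    return all(entailing_terms(h) & text_terms for h in hyp_terms)

# Example 1361: "A Filipino hostage in Iraq was released." lexically entails
# "A Filipino hostage was freed in Iraq."
text = [("filipino", "ADJ"), ("hostage", "NOUN"), ("iraq", "NOUN"), ("release", "VERB")]
hyp = [("filipino", "ADJ"), ("hostage", "NOUN"), ("free", "VERB"), ("iraq", "NOUN")]
print(lexical_entailment(text, hyp))  # True
```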
The Lexical-syntactic entailment level
At the lexical-syntactic level we assume that the text and the hypothesis are represented by the set of syntactic dependency relations of their dependency parse. At this level we ignore determiners and auxiliary verbs, but do include relations involving other function words. We define that entailment holds between T and H if the relations within H can be "covered" by the relations in T. In the trivial case, lexical-syntactic entailment holds if all the relations composing H appear verbatim in T (while additional relations within T are allowed). Otherwise, such coverage can be obtained by a sequence of transformations applied to the relations in T, which should yield all the relations in H.
One type of such transformations is the lexical transformations, which replace corresponding lexical items, as described in sub-section 2.1. When applying morphological derivations it is assumed that the syntactic structure is appropriately adjusted. For example, "Mexico produces oil" can be mapped to "oil production by Mexico" (the NOMLEX resource (Macleod et al., 1998) provides a good example for systematic specification of such transformations).
Additional types of transformations at this level are specified below.
Syntactic transformations This inference mechanism refers to transformations between syntactic structures that involve the same lexical elements and preserve the meaning of the relationships between them (as analyzed in (Vanderwende et al., 2005)). Typical transformations include passive-active and apposition (e.g. 'An Wang, a native of Shanghai ⇔ An Wang is a native of Shanghai').
Entailment paraphrases
This inference mechanism refers to transformations that modify the syntactic structure of a text fragment as well as some of its lexical elements, while holding an entailment relationship between the original text and the transformed one. Such transformations are typically denoted as 'paraphrases' in the literature, where a wealth of methods for their automatic acquisition were proposed (Lin and Pantel, 2001;Shinyama et al., 2002;Barzilay and Lee, 2003;Szpektor et al., 2004). Following the same spirit, we focus here on transformations that are local in nature, which, according to the literature, may be amenable for large scale acquisition. Examples include: 'X is Y man by birth → X was born in Y' (example 1584 in Table 1), 'X take in Y ⇔ Y join X' 1 and 'X is holy book of Y ⇒ Y follow X' 2 .
Co-reference Co-references provide equivalence relations between different terms in the text and thus induce transformations that replace one term in a text with any of its co-referenced terms. For example, the sentence "Italy and Germany have each played twice, and they haven't beaten anybody yet." 3 entails "Neither Italy nor Germany have won yet", involving the co-reference transformation 'they ⇒ Italy and Germany'.
Example 1584 in Table 1 demonstrates the need to combine different inference mechanisms to achieve lexical-syntactic entailment, requiring world-knowledge, paraphrases and syntactic transformations.
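As an illustration of the coverage criterion just defined, the sketch below checks only the trivial case of the lexical-syntactic model: every dependency relation of the hypothesis must be matched by a text relation, possibly after substituting lexically entailed items. Syntactic transformations, entailment paraphrases and co-reference would be applied on top of this in the same spirit. The dependency triples and the substitution table are illustrative assumptions, not resources used in our annotation.

```python
# Trivial-case lexical-syntactic coverage: every hypothesis relation must be
# matched by a text relation after lexical substitution of head/dependent.

def relation_entailed(h_rel, text_rels, entailed_by):
    head, label, dep = h_rel
    heads = {head} | entailed_by.get(head, set())
    deps = {dep} | entailed_by.get(dep, set())
    return any((h, label, d) in text_rels for h in heads for d in deps)

def lexical_syntactic_entailment(text_rels, hyp_rels, entailed_by=None):
    entailed_by = entailed_by or {}
    text_rels = set(text_rels)
    return all(relation_entailed(r, text_rels, entailed_by) for r in hyp_rels)

# "Mexico produces oil" covers "Mexico makes oil" once the hypernym
# inference 'produce => make' is applied to the lexical items.
text_rels = {("produce", "nsubj", "mexico"), ("produce", "dobj", "oil")}
hyp_rels = {("make", "nsubj", "mexico"), ("make", "dobj", "oil")}
print(lexical_syntactic_entailment(text_rels, hyp_rels, {"make": {"produce"}}))  # True
```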
Empirical Analysis
In this section we present the experiment that we conducted in order to analyze the two entailment levels, which are presented in section 2, in terms of relative performance and correlation with the notion of textual entailment.
Data and annotation procedure
The RTE test-set 4 contains 800 Text-Hypothesis pairs (usually single sentences), which are typical to various NLP applications. Each pair is annotated with a boolean value, indicating whether the hypothesis is entailed by the text or not, and the test-set is balanced in terms of positive and negative cases. We shall henceforth refer to this annotation as the gold standard. We constructed a sample of 240 pairs from four different tasks in the test-set, which correspond to the main applications that may benefit from entailment: information extraction (IE), information retrieval (IR), question answering (QA), and comparable documents (CD). We randomly picked 60 pairs from each task, and in total 118 of the cases were positive and 122 were negative.
In our experiment, two of the authors annotated, for each of the two levels, whether or not entailment can be established in each of the 240 pairs. The annotators agreed on 89.6% of the cases at the lexical level, and 88.8% of the cases at the lexical-syntactic level, with Kappa statistics of 0.78 and 0.73, respectively, corresponding to 'substantial agreement' (Landis and Koch, 1977). This relatively high level of agreement suggests that the notion of lexical and lexical-syntactic entailment we propose are indeed well-defined.
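For reference, the agreement statistic reported above can be computed as in the following sketch of Cohen's kappa; the two label lists are synthetic and only chosen to give counts of the same order as our annotation, not the actual annotation data.

```python
# Cohen's kappa for two annotators' boolean entailment decisions.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# 240 synthetic decisions with roughly 90% raw agreement.
a = [True] * 100 + [False] * 140
b = [True] * 88 + [False] * 12 + [False] * 127 + [True] * 13
print(round(cohens_kappa(a, b), 2))  # approximately 0.79
```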
Finally, in order to establish statistics from the annotations, the annotators discussed all the examples they disagreed on and produced a final joint decision.
Evaluating the different levels of entailment
Table 2 summarizes the results obtained from our annotated dataset for both lexical (L) and lexical-syntactic (LS) levels. Taking a "system"-oriented perspective, the annotations at each level can be viewed as the classifications made by an idealized system that includes a perfect implementation of the inference mechanisms in that level. The first two rows show for each level how the cases, which were recognized as positive by this level (i.e. the entailment holds), are distributed between "true positive" (i.e. positive according to the gold standard) and "false positive" (negative according to the gold standard). The total number of positive and negative pairs in the dataset is reported in parentheses. The rest of the table details recall, precision, F1 and accuracy.
The distribution of the examples in the RTE test-set cannot be considered representative of a real-world distribution (especially because of the controlled balance between positive and negative examples). Thus, our statistics are not appropriate for accurate prediction of application performance. Instead, we analyze how well these simplified models of entailment succeed in approximating "real" entailment, and how they compare with each other.
The proportion between true and false positive cases at the lexical level indicates that the correlation between lexical match and entailment is quite low, reflected in the low precision achieved by this level (only 59%). This result can be partly attributed to the idiosyncrasies of the RTE test-set: as reported in (Dagan et al., 2005), samples with high lexical match were found to be biased towards the negative side. Interestingly, our measured accuracy correlates well with the performance of systems at the PASCAL RTE Workshop, where the highest reported accuracy of a lexical system is 0.586 (Dagan et al., 2005).
As one can expect, adding syntax considerably reduces the number of false positives, from 36 to only 10. Surprisingly, at the same time the number of true positive cases grows from 52 to 59, and correspondingly, precision rises to 86%. Interestingly, neither the lexical nor the lexical-syntactic level is able to cover more than half of the positive cases (e.g. example 1911 in Table 1).
In order to better understand the differences between the two levels, we next analyze the overlap between them, presented in Table 3. Looking at Table 3(a), which contains only the positive cases, we see that many examples were recognized only by one of the levels. This interesting phenomenon can be explained on the one hand by lexical matches that could not be validated at the syntactic level, and on the other hand by the use of paraphrases, which are introduced only in the lexical-syntactic level (e.g. example 322 in Table 1). This relatively symmetric situation changes as we move to the negative cases, as shown in Table 3(b). By adding syntactic constraints, the lexical-syntactic level was able to fix 29 false positive errors, misclassified at the lexical level (as demonstrated in example 2127, Table 1), while introducing only 3 new false-positive errors. This exemplifies the importance of syntactic matching for precision.
The contribution of various inference mechanisms
In order to get a sense of the contribution of the various components at each level, statistics on the inference mechanisms that contributed to the coverage of the hypothesis by the text (either full or partial) were recorded by one annotator. Only the positive cases in the gold standard were considered.
For each inference mechanism we measured its frequency, its contribution to the recall of the related level and the percentage of cases in which it is required for establishing entailment. The latter also takes into account cases where only partial coverage could be achieved, and thus indicates the significance of each inference mechanism for any entailment system, regardless of the models presented in this paper. The results are summarized in Table 4.
From Table 4 it is evident that paraphrases are the most notable contributors to recall. This result indicates the importance of paraphrases to the entailment task and the need for large-scale paraphrase collections. Syntactic transformations are also shown to contribute considerably, indicating the need for collections of syntactic transformations as well. In that perspective, we propose our annotation framework as a means for evaluating collections of paraphrases or syntactic transformations in terms of recall.
Finally, we note that the moderate contribution of co-reference can be partly attributed to the idiosyncrasies of the RTE test-set: the annotators were guided to replace anaphors with the appropriate reference, as reported in (Dagan et al., 2005).
Conclusions
In this paper we presented the definition of two entailment models, Lexical and Lexical-Syntactic, and analyzed their performance manually. Our experiment shows that the lexical-syntactic level outperforms the lexical level in all measured aspects. Furthermore, paraphrases and syntactic transformations emerged as the main contributors to recall. These results suggest that a lexical-syntactic framework is a promising step towards a complete entailment model.
Beyond these empirical findings we suggest that the presented methodology can be used generically to annotate and analyze entailment datasets.
In future work, it would be interesting to analyze higher levels of entailment, such as logical inference and deep semantic understanding of the text.
Table 1: Examples of text-hypothesis pairs, taken from the PASCAL RTE test-set. Each line includes the example number at the RTE test-set, the text and hypothesis, the task within the test-set, whether entailment holds between the text and hypothesis (Ent.), whether Lexical entailment holds (Lex. Ent.) and whether Lexical-Syntactic entailment holds (Syn. Ent.).
No. | Text | Hypothesis | Task | Ent. | Lex. Ent. | Syn. Ent.
322 | Turnout for the historic vote for the first time since the EU took in 10 new members in May has hit a record low of 45.3%. | New members joined the EU. | IR | true | false | true
1361 | A Filipino hostage in Iraq was released. | A Filipino hostage was freed in Iraq. | CD | true | true | true
1584 | Although a Roscommon man by birth, born in Rooskey in 1932, Albert "The Slasher" Reynolds will forever be a Longford man by association. | Albert Reynolds was born in Co. Roscommon. | QA | true | true | true
1911 | The SPD got just 21.5% of the vote in the European Parliament elections, while the conservative opposition parties polled 44.5%. | The SPD is defeated by the opposition parties. | IE | true | false | false
2127 | Coyote shot after biting girl in Vanier Park. | Girl shot in park. | IR | false | true | false
Table 2: Results per level of entailment.
Table 3: Correlation between the entailment levels. (a) includes only the positive examples from the RTE dataset sample, and (b) includes only the negative examples.
Table 4: The frequency (f), contribution to recall (R) and percentage (%), within the gold standard positive examples, of the various inference mechanisms at each level, ordered by their significance.
1 Example no 322 in the PASCAL RTE test-set. 2 Example no 1575 in the PASCAL RTE test-set. 3 Example no 298 in the PASCAL RTE test-set. 4 The complete RTE dataset can be obtained at http://www.pascal-network.org/Challenges/RTE/Datasets/
Acknowledgements
We would like to thank Ido Dagan for helpful discussions and for his scientific supervision. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.
Alina Andreevskaia, Zhuoyan Li and Sabine Bergler. 2005. Can Shallow Predicate Argument Structures Determine Entailment?. In Proceedings of the Pascal Challenge Workshop on Recognizing Textual Entailment, 2005.
Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of HLT-NAACL 2003, pages 16-23, Edmonton, Canada.
Ido Dagan, Bernardo Magnini and Oren Glickman. 2005. The PASCAL Recognising Textual Entailment Challenge. In Proceedings of the Pascal Challenge Workshop on Recognizing Textual Entailment, 2005.
J. R. Landis and G. G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33:159-174.
Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for Question Answering. Natural Language Engineering, 7(4):343-360.
C. Macleod, R. Grishman, A. Meyers, L. Barrett and R. Reeves. 1998. Nomlex: A lexicon of nominalizations. In Proceedings of the 8th International Congress of the European Association for Lexicography, 1998. Liège, Belgium: EURALEX.
Yusuke Shinyama, Satoshi Sekine, Kiyoshi Sudo and Ralph Grishman. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of the Human Language Technology Conference (HLT 2002). San Diego, USA.
Idan Szpektor, Hristo Tanev, Ido Dagan and Bonaventura Coppola. 2004. Scaling Web-based Acquisition of Entailment Relations. In Proceedings of EMNLP 2004.
Lucy Vanderwende, Deborah Coughlin and Bill Dolan. 2005. What Syntax Contributes in the Entailment Task. In Proceedings of the Pascal Challenge Workshop on Recognizing Textual Entailment, 2005. |
236,477,710 | [] | A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples
August 1-6, 2021
Yuxuan Wang yxwang@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
Wanxiang Che
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
Ivan Titov ititov@inf.ed.ac.uk
School of Informatics
ILCC
University of Edinburgh
ILLC
University of Amsterdam
Shay B Cohen scohen@inf.ed.ac.uk
School of Informatics
ILCC
University of Edinburgh
Zhilin Lei zllei@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
Ting Liu tliu@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
A Closer Look into the Robustness of Neural Dependency Parsers Using Better Adversarial Examples
Association for Computational Linguistics: ACL-IJCNLP 2021
August 1-6, 2021
Previous work on adversarial attacks on dependency parsers has mostly focused on attack methods, as opposed to the quality of adversarial examples, which in previous work has been relatively low. To address this gap, we propose a method to generate high-quality adversarial examples with a higher number of candidate generators and stricter filters, and then verify their quality using automatic and human evaluations. We perform analysis with different parsing models and observe that: (i) injecting words not used in the training stage is an effective attack strategy; (ii) adversarial examples generated against a parser strongly depend on the parser model, the token embeddings, and even the specific instantiation of the model (i.e., a random seed). We use these insights to improve the robustness of English parsing models, relying on adversarial training and model ensembling. 1
Introduction
Neural network-based models have achieved great successes in a wide range of NLP tasks. However, recent work has shown that their performance can be easily undermined with adversarial examples that would pose no confusion for humans. As an increasing number of successful adversarial attackers have been developed for NLP tasks, the quality of the adversarial examples they generate has been questioned (Morris et al., 2020).
The definition of a valid successful adversarial example differs across target tasks. In semantic tasks such as sentiment analysis (Zhang et al., 2019) and textual entailment (Jin et al., 2020), a valid successful adversarial example needs to be able to alter the prediction of the target model while preserving the semantic content and fluency of the original text. In contrast, in the less explored field of attacking syntactic tasks, the syntactic structure, rather than the semantic content, must be preserved while also maintaining the fluency. Preserving the syntactic structure enables us to use the gold syntactic structure of the original sentence in the evaluation process, while preserving the fluency ensures that ungrammatical adversarial examples, which not only fool the target model but also confuse humans, will not be considered valid. Therefore, in this paper, we evaluate the quality of an adversarial example in two aspects, namely the fluency and syntactic structure preservation.
* Work partially done while at the University of Edinburgh. 1 Our code is available at: https://github.com/WangYuxuan93/DepAttacker.git
Recently, Zheng et al. (2020) proposed the first dependency parser attacking algorithm based on word substitution, which depended entirely on BERT (Devlin et al., 2019) to generate candidate substitutes. The rationale was that the use of the pre-trained language model would ensure fluency of the adversarial examples. However, we find that using BERT alone is far from enough to preserve fluency. Therefore, in this paper, we propose a method to generate better adversarial examples for dependency parsing with four types of candidate generators and filters. Specifically, our method consists of three steps: (i) determining the substitution order, (ii) generating and filtering candidate substitutes for each word, (iii) searching for the best possible combination of substitutions, based on pre-computed candidates and the substitution order. We verify the superiority of the proposed method in terms of syntactic structure preservation and fluency using both automatic and human evaluations, and further show the limitation of the previous BERT-based method. Table 1 shows adversarial examples generated by our method and by the method of Zheng et al. (2020), demonstrating that examples generated by our method are more fluent, producing complicated substitutes like emancipated, devising and eyewitnesses. Additionally, since it is nontrivial to decode valid multi-subword tokens from BERT, the BERT-based method of Zheng et al. (2020) only generates single subwords as substitutes.
With the proposed attacking method, we evaluate the robustness of different parsing models and analyse the properties of adversarial attacks. We find that (i) the introduction of out-of-vocabulary (OOV, words not in the embedding's vocabulary) and out-of-training (OOT, words not in the training set of the parser) words in adversarial examples are two main factors that harm models' performance; (ii) adversarial examples generated against a parser strongly depend on the type of the parser, the token embeddings and even the random seed.
Adversarial training (Goodfellow et al., 2015), where adversarial examples are added in the training stage, has been commonly used in previous work (Zheng et al., 2020;Han et al., 2020) to improve a parser's robustness. Only a limited number of adversarial examples have been used in such cases, and Zheng et al. (2020) argued that overuse of them may lead to a performance drop on the clean data. However, we show that with improvement in the quality of adversarial examples produced in our method, more adversarial examples can be used in the training stage to further improve the parsing models' robustness without producing any apparent harm in their performance on the clean data. Inspired by our second finding, we propose to improve the parsers' robustness by combining models trained with different random seeds and embeddings. Such methods, which are not targeting specific types of attacks, should improve the capacity to defend against new attacks as compared to standard adversarial training.
Method
In this section, we first give a formal definition of a dependency parsing attack. Then we describe the proposed attacking method for dependency parsing, shown in Algorithm 1. It consists of three steps, namely ranking word importance (lines 1-4), generating candidates for substitution (line 7) and searching for the best substitute combination (lines 8-21).
Problem Definition
Given an input text space X containing all possible input sentences x and an output space Y containing all possible dependency trees of x, a parser F : X → Y learns to map the sentence x to its corresponding tree y, denoted by F (x) = y. The i-th word of x is denoted by x i . For sentence x, a valid adversarial example x * is crafted by adding a perturbation to x so that
F(x*) ≠ y,  σ(x*, x) ≤ ε,
where σ is a constraint function and ensures that i) the perturbation is imperceptible, ii) the true dependency tree of x * should be the same as that of x. In this paper, these two constraints are ensured through the use of various filters (see Section 2.3) and are used to evaluate the quality of adversarial examples (see details on fluency and syntactic structure preservation in Section 3.3).
Word Importance Ranking
Word importance ranking in our model is based on the observation that some words have a stronger influence on model prediction than others. Such word importance is typically computed by setting each word to unknown and examining the changes in the model's predictions (Ren et al., 2019). This helps to determine the word substituting order in the proposed method.
Algorithm 1 Dependency Parsing Attack
Input: Sentence example x^(0) = {x_1, x_2, ..., x_N}, maximum percentage of words allowed to be modified γ
Output: Adversarial example x^(i)
1:  for i = 1 to N do
2:      Compute word importance I(x^(0), x_i) via Eq. 1
3:  end for
4:  Create a set W of all words x_i ∈ x^(0) sorted by the descending order of their importance I(x^(0), x_i).
5:  t = 0
6:  for each word x_j in W do
7:      Build candidate set C_j for x_j following the Candidate Substitute Generating step
8:      Initialise valid candidate set VC ← {}
9:      for each candidate c_k in C_j do
10:         Compute the accuracy change S(x^(t), c_k, j) via Eq. 3
11:         if S(x^(t), c_k, j) ≤ 0 then continue end if
12:         Add c_k to the set VC
13:     end for
14:     if VC is not empty then
15:         c* = argmax_{c ∈ VC} S(x^(t), c, j)
16:         t = t + 1
17:         x^(t) ← Replace x_j in x^(t−1) with c*
18:         if t ≥ γ · N then return x^(t) end if
19:     end if
20: end for
21: if t > 0 then return x^(t) else return None end if
In this work, we use a combination of the changes found in the unlabelled attachment score (UAS) and in the labelled attachment score (LAS) to measure word importance. Specifically, the importance of a word x i in sentence x is computed as
I(x, x_i) = λ_arc · Δ_UAS(x, x̂_i) + (1 − λ_arc) · Δ_LAS(x, x̂_i),    (1)
where x = x_1 x_2 ... x_i ... x_N is the original sentence and x̂_i = x_1 x_2 ... UNK ... x_N replaces x_i with an 'unknown' token. Here Δ_UAS(x, x̂_i) = UAS_F(x) − UAS_F(x̂_i) and Δ_LAS(x, x̂_i) = LAS_F(x) − LAS_F(x̂_i) are the changes in UAS and LAS respectively. λ_arc is a coefficient that controls the relative importance of dependency arcs and their labels.
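A minimal Python sketch of Eq. 1 is given below; it assumes a parser object with a hypothetical predict(tokens) interface returning predicted heads and labels, and gold heads and labels for the sentence. It is only an illustration, not the implementation in our released code.

```python
# Word importance (Eq. 1): accuracy drop caused by masking position i with UNK.

def uas_las(pred_heads, pred_labels, gold_heads, gold_labels):
    n = len(gold_heads)
    uas = sum(p == g for p, g in zip(pred_heads, gold_heads)) / n
    las = sum(ph == gh and pl == gl
              for ph, gh, pl, gl in zip(pred_heads, gold_heads,
                                        pred_labels, gold_labels)) / n
    return uas, las

def word_importance(parser, tokens, gold_heads, gold_labels, i, lambda_arc=0.5):
    base_uas, base_las = uas_las(*parser.predict(tokens), gold_heads, gold_labels)
    masked = tokens[:i] + ["<unk>"] + tokens[i + 1:]
    unk_uas, unk_las = uas_las(*parser.predict(masked), gold_heads, gold_labels)
    return lambda_arc * (base_uas - unk_uas) + (1 - lambda_arc) * (base_las - unk_las)
```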
Generation of Substitute Candidates
Generating substitute candidates is a critical step, as it significantly influences the attack success rate and the quality of generated adversarial examples. Zheng et al. (2020) relied entirely on BERT to generate candidates, but this limits the quality of the adversarial examples. To alleviate this problem, we first collect candidate substitutes from four generation methods, then apply filters to discard inappropriate substitutes, ensuring both diversity and quality of the generated candidates.
Generating Process
We collect substitutes from the following methods:
BERT-Based Method: We use BERT to generate candidates for each target word from its context. This method generates only single subwords.
Embedding-Based Method: Following Alzantot et al. (2018), we use word embeddings of Mrkšić et al. (2016) 2 to compute the N nearest neighbours of each target word according to their cosine similarity and use them as candidates.
Sememe-Based Method: The sememes of a word represent its core meaning (Dong and Dong, 2006). Following Zang et al. (2020), we collect the substitutes x* of the target word x based on the rule that one of the senses of a substitute x* must have the same sememe annotations as one of the senses of x.
Synonym-Based Method: We use WordNet 3 to extract synonyms of each target word as candidates.
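The sketch below shows how candidates from the four generators could be pooled; only the embedding-based generator (cosine nearest neighbours) is spelled out, while the BERT-, sememe- and WordNet-based generators are assumed helper functions with the interfaces indicated in the comments.

```python
# Pooling substitute candidates from the four generators (a sketch).
import numpy as np

def embedding_candidates(word, vocab, vectors, n=10):
    """vocab: list of words; vectors: L2-normalised array aligned with vocab."""
    if word not in vocab:
        return []
    idx = vocab.index(word)
    sims = vectors @ vectors[idx]          # cosine similarity (vectors pre-normalised)
    order = np.argsort(-sims)
    return [vocab[j] for j in order[1:n + 1]]   # skip the word itself

def generate_candidates(word, context, vocab, vectors,
                        bert_candidates, sememe_candidates, wordnet_synonyms):
    cands = set(embedding_candidates(word, vocab, vectors))
    cands |= set(bert_candidates(word, context))   # masked-LM predictions in context
    cands |= set(sememe_candidates(word))          # words sharing a sense's sememes
    cands |= set(wordnet_synonyms(word))           # WordNet synonyms
    cands.discard(word)
    return cands
```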
Filtering Process
We apply the following four types of filters to discard candidates which are likely inappropriate, either in terms of syntactic preservation or fluency.
POS Filter: We first filter out substitutes with different part-of-speech (POS) tags from the original word. 4 This filter is essential for preserving the syntactic structure of the sentence.
Word Embedding Similarity Filter: We use the word embeddings of Mrkšić et al. (2016) to compute the cosine similarity between the original word and each of the substitutes in C and filter out those whose similarities are less than a threshold ε_w. 5 Grammar Checker Filter: We employ an off-the-shelf grammar checker 6 to filter out candidates that may introduce grammar errors. This filter helps to further ensure that the syntactic structure and fluency are preserved.
Perplexity Filter: We employ GPT-2 (Radford et al., 2019) to calculate the perplexity difference between x and x^c_i for each candidate c:
Δppl(x, c, i) = ppl(x^c_i) − ppl(x),    (2)
where x^c_i is x with its i-th word replaced by c, and filter out c whose Δppl(x, c, i) > ε_p.
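As an illustration of Eq. 2, the following sketch implements the perplexity filter with the GPT-2 model from the Hugging Face transformers library; the whitespace tokenisation and the way sentences are rejoined from tokens are simplifying assumptions.

```python
# Perplexity filter (Eq. 2): discard candidates that raise perplexity by more than eps_p.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    loss = model(**enc, labels=enc["input_ids"]).loss   # mean token negative log-likelihood
    return torch.exp(loss).item()

def perplexity_filter(tokens, i, candidates, eps_p=20.0):
    base = perplexity(" ".join(tokens))
    kept = []
    for c in candidates:
        modified = tokens[:i] + [c] + tokens[i + 1:]
        if perplexity(" ".join(modified)) - base <= eps_p:
            kept.append(c)
    return kept
```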
Best Substitute Searching
In this step, we greedily search for the best possible combination of substitutions, relying both on the previously created candidate lists and word substitution order. To preserve the syntactic structure of sentences, we forbid replacement of pronouns, articles, conjunctions, numerals, interjections, interrogative determiners and punctuation. Additionally, we set the maximum percentage of words allowed to be modified γ in the experiments to control the modification number.
Specifically, given a sentence x, we substitute the words following the order computed in the word importance ranking step. For each target word x_i, we build an adversarial example x^c_i = x_1 x_2 ... c ... x_N for each of its substitutes c. Then we compute the accuracy change score from x to x^c_i as input to the parser:
S(x, c, i) = λ_arc · Δ_UAS(x, x^c_i) + (1 − λ_arc) · Δ_LAS(x, x^c_i),    (3)
where Δ_UAS(x, x^c_i) = UAS_F(x) − UAS_F(x^c_i) and Δ_LAS(x, x^c_i) = LAS_F(x) − LAS_F(x^c_i)
are the changes in UAS and LAS, respectively. If the percentage of modified words in the sentence exceeds a threshold γ, we stop the process. Otherwise, we search for a substitute for the next target word.
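A compact sketch of the resulting greedy search (lines 6-20 of Algorithm 1) is given below; accuracy_change stands for the score S(x, c, i) of Eq. 3 and is assumed to wrap the target parser as in the earlier importance sketch.

```python
# Greedy best-substitute search over pre-computed candidates.

def greedy_attack(tokens, order, candidates, accuracy_change, gamma=0.15):
    """order: positions sorted by descending word importance;
    candidates: dict mapping position -> list of substitute strings."""
    adv = list(tokens)
    budget = int(gamma * len(tokens))
    n_modified = 0
    for i in order:
        scored = [(accuracy_change(adv, c, i), c) for c in candidates.get(i, [])]
        scored = [(s, c) for s, c in scored if s > 0]   # keep only harmful substitutes
        if not scored:
            continue
        _, best = max(scored)                           # largest accuracy drop
        adv[i] = best
        n_modified += 1
        if n_modified >= budget:                        # modification budget spent
            break
    return adv if n_modified > 0 else None
```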
Experimental Setup
Target Parsers and Token Embeddings
We choose the following two strong and commonly used English parsers, one graph-based, the other transition-based, as target models, both of which achieve performance close to the state-of-the-art.
Deep Biaffine Parser (Dozat and Manning, 2017) is a graph-based parser that scores each candidate arc independently and relies on a decoding algorithm to search for the highest-scoring tree.
Stack-Pointer Parser (Ma et al., 2018) is a transition-based parser that incrementally builds the dependency tree with pre-defined operations.
We used the following four types of token embeddings to study their influence on each parsers' robustness. To focus on the influence of the embeddings, we use only the embeddings as input to the parsers:
GloVe (Pennington et al., 2014) is a frequently used static word embedding.
RoBERTa (Liu et al., 2019) is a pre-trained language model based on a masked language modelling object, which learns to predict a randomly masked token based on its context. It produces contextualised word piece embeddings.
ELECTRA (Clark et al., 2020) is a pre-trained language model based on a replaced token detection object, which learns to predict whether each token in the corrupted input has been replaced. It produces contextualised word piece embeddings.
ELMo (Peters et al., 2018) is a pre-trained language representation model based on character embeddings and bidirectional language modelling.
Datasets and Experimental Settings
We train the target parsers and evaluate the proposed method on the English Penn Treebank (PTB) dataset, 7 converted into Stanford dependencies using version 3.3.0 of the Stanford dependency converter (de Marneffe et al., 2006) (PTB-SD-3.3.0). We follow the standard PTB split, using section 2-21 for training, section 22 as a development set and 23 as a test set.
It is important to note that when converting PTB into Stanford dependencies, Zheng et al. (2020) maintained the copula (linking verbs) as a head when its complement was an adjective or noun. 8 However, since the design objective of Stanford dependency is to maximize dependencies between content words (de Marneffe et al., 2006), a more typical setting is to regard copulas as auxiliary modifiers. Therefore, we first compare with the previous method by performing this step under their settings and further conduct experiments with the typical PTB-SD-3.3.0 dataset for the convenience of follow-up research.
While training the target parsers, we adopt the hyper-parameters from their respective papers. Note that to compare with the biaffine parser, which uses first-order features, we also adopt the basic setting for the stack-pointer parser. 9 When using RoBERTa, ELECTRA or ELMo embeddings as input, we set the learning rate of these pre-trained models to 2e-5 and that of other parameters to 2e-2.
For the hyper-parameters of each attacking method, we set the word embedding similarity threshold ε_w = 0.7, the candidate perplexity difference threshold ε_p = 20.0, the arc importance coefficient λ_arc = 0.5 and the maximum percentage of words allowed to be modified γ = 15%.
7 https://catalog.ldc.upenn.edu/LDC99T42 8 Referred to as PTB-SD-3.3.0-COP in the rest of the paper. 9 According to our preliminary experiments, neither second-order features nor beam search has an obvious influence on the parser robustness under our attack.
Evaluation Metrics
As introduced in Section 2.1, two constraints should be satisfied for an adversarial example to be valid: i) the perturbation is imperceptible, ii) the true dependency tree of x * should be the same as that of x. For the first, we use fluency to measure the imperceptibility of the perturbations, and assume that in a fluent adversarial example the perturbation is imperceptible. For the second, syntactic structure preservation is used to measure whether an adversarial example's true dependency tree is identical to that of the original text. Both automatic and human evaluations are used for analysis.
In the automatic evaluation, GPT-2 (Radford et al., 2019) is used to compute the average perplexity of the adversarially modified PTB test set to measure the overall fluency. In the human evaluation, we ask three annotators to evaluate the quality of adversarial examples in two aspects, namely syntactic structure preservation and fluency. 10 To evaluate the preservation of the syntactic structure, we randomly collect 100 sentences along with their adversarial examples and ask the annotators to decide whether the syntactic structure is preserved in each case. For the fluency evaluation, we randomly collect 100 sentences along with the adversarial examples generated by our method and those produced by the black-box method of Zheng et al. (2020). 11 For each sentence, the annotators are asked to distinguish which example is better with regard to fluency. For both evaluations, we adopt the majority vote for the final results.
To evaluate how successful the attack is, we report the parsing results of the target models on the original and the adversarially modified (after-attack) PTB test set. The results are reported in terms of unlabelled attachment score (UAS) and labelled attachment score (LAS). We also report the attack success rate, namely the percentage of successfully attacked sentences. If the prediction accuracy of the modified sentence is lower than the original one, it is regarded as a successful attack. 12
10 The three human annotators are postgraduate students with a few years of research experience in syntactic parsing. 11 We thank Zheng et al. (2020) for kindly providing us with the adversarial examples they generated.
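The attack success rate defined above can be computed as in the following sketch, where the per-sentence accuracies (LAS, or UAS when comparing with Zheng et al. (2020)) are assumed to have been obtained beforehand.

```python
# Attack success rate: percentage of sentences whose accuracy drops after the attack.

def attack_success_rate(original_scores, after_scores):
    """Both arguments: per-sentence accuracy lists aligned by sentence."""
    assert len(original_scores) == len(after_scores)
    successes = sum(a < o for o, a in zip(original_scores, after_scores))
    return 100.0 * successes / len(original_scores)

print(attack_success_rate([0.95, 0.90, 1.00], [0.95, 0.85, 0.96]))  # 66.67
```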
Results
Comparison with Previous Work
We first evaluate our attacking method on PTB-SD-3.3.0-COP and compare it with previous work (Zheng et al., 2020). Since we focus on the black-box attack in this paper, we compare with their sentence-level black-box attack against the deep biaffine parser with only word-based embeddings as input. In both their and our settings, 15% of words are allowed to be modified. Table 2 shows that adversarial examples generated by our method substantially outperform the previous method with regard to fluency and syntactic structure preservation. In the automatic evaluation, the average perplexity of examples generated by our method is 139.99, as compared to 267.96 of those generated by the previous work. For comparison, the average perplexity of the original PTB test set is 127.67, which is very close to ours.
In the human evaluation, results show that for 80% of the sentences, our adversarial examples have better fluency, which further confirms the effectiveness of our method. In addition, 85% of the examples we generated preserve the original syntactic structure, as compared to 75% reported by Zheng et al. (2020), showing that our method also improves the syntactic-structure preservation rate.
Table 3 shows the attack results of the two methods. 13 It is clear that with higher quality, the adversarial examples generated by our method cause fewer incorrect predictions. This suggests that some of the previous attacks that were counted as successful may have used invalid adversarial examples which are either ungrammatical or which have actually changed the original syntactic structure.
12 Note that Zheng et al. (2020) only considered unlabelled scores, so when comparing with these, we use the difference in UAS as the measurement of successful attacks. Conversely, in experiments on PTB-SD-3.3.0, we use the difference in LAS. 13 We only compare UAS here since they did not report LAS in their paper.
To further demonstrate the limitation of the BERT-based method, which the previous work used as the only candidate generator, we count the average number of candidates produced by the different generators before and after filtering. Results in Table 4 show that although the BERT-based method generates the most candidates before filtering, only 1.89% of them are left after the filters are applied, whereas the left candidate percentage varies from 5% to 10% for the other three generators. The results further verify that the quality of candidates generated by the BERT-based method is worse than that from the embedding-based, sememe-based and synonym-based methods.
To evaluate the ability of the filters, we conduct an ablation study with different combinations of these filters. Results in Table 5 show that the perplexity as well as the attack success rate decreases when more filters are applied. As expected, the greatest perplexity drop is brought by the perplexity filter.
Robustness Evaluation of Different Models
We evaluate the robustness of the different parsing models introduced in Section 3.1 on PTB-SD-3.3.0 and report the results in Table 6. First of all, when applied to unperturbed sentences, the graph-based deep biaffine parser performs consistently better than the transition-based stack-pointer parser (using the same embeddings). Among the four kinds of embeddings, the word piece-level embeddings (i.e., ELECTRA and RoBERTa) achieve the highest results, while GloVe yields the lowest results.
As for the adversarially modified sentences, we find that the drop in performance is close between the two families of parsers (using the same embeddings), while the attack success rate against the Stack-Pointer parser is slightly higher. In terms of the embeddings, RoBERTa turns out to be the most robust one, which has the lowest attack success rate and achieves the highest performance on the generated adversarial examples. ELMo is also a comparatively robust embedding. We are surprised to find that although ELECTRA achieves similar performance to RoBERTa on clean input data, it performs poorly on the adversarial examples. We hypothesise that this is due to ELECTRA's training objective, i.e. learning to predict whether a token in a corrupted sentence is genuine or not. With this objective, some of our substitutes can be predicted as incorrect tokens, yielding token representations in the space not encountered by the parser in training, and hence damaging its performance. Lastly, GloVe is the most vulnerable embedding. 14
Out-of-Vocabulary and Out-of-Training Words
In this section, we investigate the roles out-of-vocabulary (OOV, words not in the embedding's vocabulary) and out-of-training (OOT, words not in the training set of the parser) words play in dependency parsing attacks. We perform attacks on the Biaffine GloVe models trained with (i) 50k vocabulary (50k), (ii) 400k vocabulary (400k) and (iii) the same 400k vocabulary but where all candidates not in the training set are filtered out (400k (T.)).
The results are shown in Table 7, where we report the attack results along with the number of OOV and OOT words in the adversarially modified words before and after the attack. Firstly, by comparing the OOV and OOT numbers before and after the attack in the 50k model, we find that words chosen to be replaced are often non-OOV and non-OOT, while their substitutes are often OOV and OOT. Secondly, the comparison between the 50k and 400k results shows that when the number of OOV words decreases, the robustness of the model increases. Therefore, it is reasonable to assume that OOV words in adversarial examples cause incorrect predictions. Thirdly, according to the 400k and 400k (T.) results, when the number of OOT words in adversarial examples are reduced to 0 by filtering out all the OOT candidates, the attack success rate drops substantially. Therefore, we have reason to believe that unfamiliar OOT words are another factor degrading a parser's performance.
The OOV problem mostly appears in models using word-level embeddings such as GloVe and can be alleviated by simply increasing the vocabulary size. For the OOT problem, one potential solution is adversarial training, where a new parser is trained with a mixture of clean training data and adversarial examples.
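The OOV and OOT statistics underlying this analysis amount to simple bookkeeping over the modified positions, as in the sketch below; the vocabularies are assumed to be plain Python sets.

```python
# Count OOV/OOT words among the replaced words and among their replacements.

def count_oov_oot(orig_tokens, adv_tokens, embed_vocab, train_vocab):
    stats = {"orig_oov": 0, "orig_oot": 0, "adv_oov": 0, "adv_oot": 0}
    for o, a in zip(orig_tokens, adv_tokens):
        if o == a:
            continue                      # only modified positions are counted
        stats["orig_oov"] += o not in embed_vocab
        stats["orig_oot"] += o not in train_vocab
        stats["adv_oov"] += a not in embed_vocab
        stats["adv_oot"] += a not in train_vocab
    return stats
```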
Adversarial Training
Previous work (Zheng et al., 2020; Han et al., 2020) used a limited number (from 2,000 sentences to half of the training data) of adversarial examples in adversarial training, as Zheng et al. (2020) argued that overuse of them may lead to a performance drop on the clean data. In this section, we investigate adversarial training strategies on all the parsing models introduced in Section 3.1. Specifically, we generate adversarial examples for the whole PTB training set and retrain parsers on different amounts of adversarial examples along with the original training set. Figure 1 shows that as the number of adversarial examples used in adversarial training increases, the robustness of the models increases accordingly. For most of the models, the increase of robustness stops between 50% and 70% of adversarial examples used.
Results in Table 9 and 10 show that the attack success rate always drops when adversarial examples are tested on other models, indicating that the adversarial examples strongly depend on the parser model, the token embeddings and even the specific instantiation of the model (i.e., the random seed).
Based on the observations from Section 4.5, we propose to improve the robustness of parsing models using a cross-seed ensemble and a cross-embedding ensemble. To ensemble multiple parsers, we simply compute the average of the probability distributions across them and use that result as the new distribution in the ensembled model. Figure 2 shows the effect of the cross-seed ensemble, where almost all the attack success rates drop with such an ensemble. In addition, it is most effective with ELMo while least effective with ELECTRA and RoBERTa. Table 11 shows the effect of using the cross-embedding ensemble, where robustness increases when more models with different token embeddings are ensembled. Moreover, contrary to adversarial training, the ensemble method is not tuned to specific types of attacks and appears robust to 'unseen' attacks, showing that it is more likely to defend against new attacks.
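The ensembling step described above can be sketched as follows; each parser is assumed to expose a hypothetical head_distributions(tokens) method returning, for every token, a probability distribution over candidate heads (label distributions would be averaged analogously), and greedy decoding is used for simplicity where MST decoding could equally be applied.

```python
# Parser ensemble by averaging per-token head distributions (a sketch).
import numpy as np

def ensemble_head_distributions(parsers, tokens):
    # Each distribution has shape (n_tokens, n_tokens + 1), including the root.
    dists = [p.head_distributions(tokens) for p in parsers]
    return np.mean(dists, axis=0)

def ensemble_predict_heads(parsers, tokens):
    avg = ensemble_head_distributions(parsers, tokens)
    return avg.argmax(axis=-1)            # greedy decoding over the averaged distribution
```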
Related Work
Existing textual adversarial attacks have mostly focused on semantic tasks such as sentiment analysis (Zhang et al., 2019) and textual entailment (Jin et al., 2020). Although most of this work has applied various techniques to maintain the fluency of adversarial examples, a recent study by Morris et al. (2020) reported that quite a number of these techniques introduce grammatical errors. In syntactic tasks, Zheng et al. (2020) recently proposed the first dependency parser attacking method which depends entirely on BERT to generate candidates. However, we show that the quality of adversarial examples generated by their method is relatively low due to the limitation of the BERT-based generator, and we propose to generate better examples by using more generators and stricter filters. Han et al. (2020) proposed an approach to attack structured prediction models with a seq2seq model (Wang et al., 2016) and evaluated this model on dependency parsing. They used two reference parsers in addition to the victim parser to supervise the training of the adversarial example generator, and found that the three parsers need to have different inductive biases embedded to make the attack successful. This finding is quite close in spirit to our conclusion in Section 4.5. Hu et al. (2020) also put forth efforts to modify the text in syntactic tasks while preserving the original syntactic structure. However, their goal is to preserve privacy via the modification of words that could disclose sensitive information.
Conclusion
In this paper, we propose a method for generating high-quality adversarial examples for dependency parsing and show its effectiveness based on automatic and human evaluation. We investigate the robustness of different types of neural dependency parsers. We show that OOV and OOT words are two critical characteristics that cause a performance drop and propose to solve the OOT problem with adversarial training. We further examine three kinds of transferabilities of adversarial examples and propose to improve the robustness of parsing models by ensembling across random seeds and token embeddings.
Figure 1: After-attack results with different amounts of adversarial examples used for adversarial training (best viewed in colour).
Figure 2: Attack success rates of the examples with and without cross-seed ensemble.
Table 1: Adversarial examples generated by our and Zheng et al. (2020)'s methods. The original words are highlighted in bold blue font while the substitute words are highlighted in bold red ones.
Example | Ours | Zheng et al.
example-1 | Most of those freed emancipated had spent at least 25 years in prison. | Most of those freed had were spent at least 25 years in prison.
example-2 | Boeing received a $ 46 million Air Force contract for developing devising cable systems for the Minuteman Missile. | Boeing received a $ 46 million Air Force America contract for developing securing cable systems for the Minuteman Missile.
example-3 | He used better than 5,000 words heaping scorn on the witnesses eyewitnesses for exercising the Fifth. | He used better than 5,000 words times heaping scorn on the witnesses dollars for exercising the Fifth grand.
Table 2: Automatic and human evaluation results on the PTB-SD-3.3.0-COP test set. PPL denotes the average perplexity. Syntax% denotes the preserved syntactic-structure rate and Fluency% the higher fluency rate.
Table 3: Results on the PTB-SD-3.3.0-COP test set. Orig-/After-UAS denotes the original and after-attack UAS respectively. Succ% denotes the success rate.
Generator | BERT | Emb. | Sem. | Syn.
before | 28.64 | 20.54 | 8.57 | 1.74
after | 0.54 | 2.06 | 0.46 | 0.17
left% | 1.89 | 10.04 | 5.32 | 10.03
Table 4: The average number of candidates before and after filtering generated by the BERT-based (BERT), embedding-based (Emb.), sememe-based (Sem.) and synonym-based (Syn.) methods respectively, and the percentages of the left candidates.
Table 5: Ablation study of filters. pos, emb, gra and ppl stand for the POS (part of speech), word embedding similarity, grammar checker and perplexity filters respectively.
Table 6: Robustness evaluation results. Succ% denotes the success rate (computed based on LAS). G., M., E. and R. stand for GloVe, ELMo, ELECTRA and RoBERTa respectively.
Table 7: OOV and OOT test results. Vocab. stands for vocabulary size, T. means filtering out all the candidates that have not appeared in the training set.
Table 8 shows the results of Biaffine parsers retrained with 100% of the adversarial examples generated for the original training set. We find that in most cases, the parsing results on the clean data are not obviously influenced although all of the adversarial examples are used. In addition, the robustness of all of the retrained models is substantially improved.
Input | Original UAS | Original LAS | After-Attack UAS | After-Attack LAS | Succ%
Biaffine
G. | 95.36 | 93.49 | 88.69 | 85.09 | 55.3
G.* | 95.32 | 93.45 | 91.90 | 89.34 | 38.5
M. | 96.29 | 94.51 | 90.70 | 87.67 | 47.5
M.* | 96.17 | 94.37 | 93.49 | 91.03 | 33.2
E. | 97.12 | 95.38 | 91.05 | 87.79 | 50.6
E.* | 96.96 | 95.23 | 95.03 | 92.58 | 33.4
R. | 97.09 | 95.41 | 92.14 | 89.42 | 46.1
R.* | 97.03 | 95.30 | 95.30 | 93.02 | 29.5
Stack-Pointer
G. | 94.93 | 93.05 | 88.26 | 84.64 | 52.6
G.* | 94.92 | 93.04 | 91.58 | 88.82 | 36.2
M. | 95.69 | 93.77 | 89.57 | 86.49 | 46.8
M.* | 95.75 | 93.81 | 92.64 | 90.02 | 34.1
E. | 96.94 | 95.19 | 90.69 | 87.47 | 50.3
E.* | 96.83 | 95.04 | 94.53 | 91.96 | 34.2
R. | 96.93 | 95.20 | 91.58 | 88.84 | 45.1
R.* | 96.80 | 95.01 | 95.10 | 92.83 | 29.1
Table 8: Adversarial training results. * denotes models with adversarial training.

4.5 Transferability

We refer to adversarial examples as transferable if, generated against one model, they succeed in fooling another one. Previously, Jin et al. (2020) found that in text classification and entailment tasks, adversarial examples are moderately transferable between models with different embeddings. In this section, we examine the following three kinds of transferabilities of adversarial examples in dependency parsing attacks: (i) Cross Seed: adversarial examples generated against one model are tested on another model trained with a different random seed; (ii) Cross Parser: adversarial examples generated against one model are tested on another from a different family of parsers; and (iii) Cross Embedding: adversarial examples generated against one model are tested on another trained with a different type of embedding.
Table 9: Attack success rates (%) in the transferability test with Biaffine parser as the source parser. Src represents the source model.

Table 10: Attack success rates (%) in the transferability test with Stack-Pointer parser as the source parser. Src. represents the source model.

specific instantiation of the model (i.e., the random seed). Among the three kinds of transferabilities, the cross seed transfer is the strongest while the cross embedding transfer is the weakest.

4.6 Cross-Seed and Cross-Embedding Ensemble
[Plot residue: attack success rate (%) by embedding (GloVe, ELMo, ELECTRA, RoBERTa) for the Biaffine and Stack-Pointer parsers, single model vs. ensemble; y-axis ticks 44-56.]
Input          Original         After-Attack      Succ%
               UAS     LAS      UAS     LAS
Biaffine
  R.           97.09   95.41    92.14   89.42     46.1
  R.G.         97.16   95.55    92.39   89.77     43.8
  R.G.M.       97.20   95.58    92.53   89.97     42.5
  R.G.M.E.     97.25   95.63    92.73   90.12     41.5
Stack-Pointer
  R.           96.93   95.20    91.58   88.84     45.1
  R.G.         97.01   95.32    92.15   89.49     43.4
  R.G.M.       96.98   95.26    92.28   89.62     42.3
  R.G.M.E.     97.14   95.45    92.27   89.64     41.5

Table 11: Cross-embedding ensemble results
These embeddings are post-processed to ensure that the nearest neighbours are synonyms.
3 https://wordnet.princeton.edu
4 We use the off-the-shelf Stanford tagger (https://nlp.stanford.edu/software/tagger.html).
5 Non-synonym substitutes often reduce fluency.
6 https://pypi.org/project/language_tool
To evaluate the stability of the attack, for each parsing
Acknowledgments

This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grants 61976072 and 61772153.
Generating natural language adversarial examples. Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, Kai-Wei Chang, 10.18653/v1/D18-1316Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsMoustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896, Brussels, Belgium. Association for Computational Linguistics.
ELECTRA: pretraining text encoders as discriminators rather than generators. Kevin Clark, Minh-Thang Luong, Quoc V Le, Christopher D Manning, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020OpenReview.netKevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre- training text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Hownet And the Computation of Meaning. Zhendong Dong, Qiang Dong, World Scientific Publishing Co., IncUSAZhendong Dong and Qiang Dong. 2006. Hownet And the Computation of Meaning. World Scientific Pub- lishing Co., Inc., USA.
Deep biaffine attention for neural dependency parsing. Timothy Dozat, D Christopher, Manning, 5th International Conference on Learning Representations. Toulon, FranceConference Track Proceedings. Open-Review. netTimothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open- Review.net.
Explaining and harnessing adversarial examples. Ian J Goodfellow, Jonathon Shlens, Christian Szegedy, 3rd International Conference on Learning Representations. San Diego, CA, USAConference Track ProceedingsIan J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceed- ings.
Adversarial attack and defense of structured prediction models. Wenjuan Han, Liwen Zhang, Yong Jiang, Kewei Tu, 10.18653/v1/2020.emnlp-main.182Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsOnlineWenjuan Han, Liwen Zhang, Yong Jiang, and Kewei Tu. 2020. Adversarial attack and defense of struc- tured prediction models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 2327-2338, On- line. Association for Computational Linguistics.
Obfuscation for privacy-preserving syntactic parsing. Zhifeng Hu, Serhii Havrylov, Ivan Titov, Shay B Cohen, 10.18653/v1/2020.iwpt-1.7Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies. the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal DependenciesAssociation for Computational LinguisticsOnlineZhifeng Hu, Serhii Havrylov, Ivan Titov, and Shay B. Cohen. 2020. Obfuscation for privacy-preserving syntactic parsing. In Proceedings of the 16th Inter- national Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into En- hanced Universal Dependencies, pages 62-72, On- line. Association for Computational Linguistics.
Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits, The Thirty-Second Innovative Applications of Artificial Intelligence Conference. New York, NY, USAAAAI Press2020The Tenth AAAI Symposium on Educational Advances in Artificial IntelligenceDi Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text clas- sification and entailment. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8018-8025. AAAI Press.
Visualizing and understanding neural models in NLP. Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky, 10.18653/v1/N16-1082Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsJiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 681-691, San Diego, California. As- sociation for Computational Linguistics.
Roberta: A robustly optimized BERT pretraining approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, abs/1907.11692CoRRYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.
Stackpointer networks for dependency parsing. Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, Eduard Hovy, 10.18653/v1/P18-1130Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics1Long Papers)Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack- pointer networks for dependency parsing. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1403-1414, Melbourne, Australia. Association for Computational Linguistics.
Generating typed dependency parses from phrase structure parses. Marie-Catherine De Marneffe, Bill Maccartney, Christopher D Manning, Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06). the Fifth International Conference on Language Resources and Evaluation (LREC'06)Genoa, ItalyEuropean Language Resources Association (ELRAMarie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Resources Associ- ation (ELRA).
Reevaluating adversarial examples in natural language. John Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi, 10.18653/v1/2020.findings-emnlp.341Findings of the Association for Computational Linguistics: EMNLP 2020. Online. Association for Computational LinguisticsJohn Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, and Yanjun Qi. 2020. Reevaluating adversarial examples in natural language. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3829-3839, Online. Association for Computational Linguistics.
Counter-fitting word vectors to linguistic constraints. Nikola Mrkšić, Diarmuidó Séaghdha, Blaise Thomson, Milica Gašić, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young, 10.18653/v1/N16-1018Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsNikola Mrkšić, DiarmuidÓ Séaghdha, Blaise Thom- son, Milica Gašić, Lina M. Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-148, San Diego, California. Association for Computational Linguis- tics.
GloVe: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, 10.3115/v1/D14-1162Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsJeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
Deep contextualized word representations. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, 10.18653/v1/N18-1202Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Language models are unsupervised multitask learners. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Generating natural language adversarial examples through probability weighted word saliency. Yihe Shuhuai Ren, Kun Deng, Wanxiang He, Che, 10.18653/v1/P19-1103Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsShuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial ex- amples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Compu- tational Linguistics.
Attention-based LSTM for aspectlevel sentiment classification. Yequan Wang, Minlie Huang, Xiaoyan Zhu, Li Zhao, 10.18653/v1/D16-1058Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsYequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect- level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 606-615, Austin, Texas. Association for Computational Linguistics.
Word-level textual adversarial attacking as combinatorial optimization. Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun, 10.18653/v1/2020.acl-main.540Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsYuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combina- torial optimization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.
Generating fluent adversarial examples for natural languages. Huangzhao Zhang, Hao Zhou, Ning Miao, Lei Li, 10.18653/v1/P19-1559Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsHuangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019. Generating fluent adversarial examples for natural languages. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5564-5569, Florence, Italy. Asso- ciation for Computational Linguistics.
Adversarial attacks on deep-learning models in natural language processing: A survey. Wei Emma Zhang, Z Quan, Ahoud Sheng, Chenliang Alhazmi, Li, 10.1145/3374217ACM Trans. Intell. Syst. Technol. 113Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, and Chenliang Li. 2020. Adversarial attacks on deep-learning models in natural language process- ing: A survey. ACM Trans. Intell. Syst. Technol., 11(3).
Evaluating and enhancing the robustness of neural network-based dependency parsing models with adversarial examples. Xiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, Xuanjing Huang, 10.18653/v1/2020.acl-main.590Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsXiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, and Xuanjing Huang. 2020. Evaluating and enhancing the robustness of neural network-based dependency parsing models with ad- versarial examples. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 6600-6610, Online. Association for Computational Linguistics. |
||
191,744,541 | [] | Montréal, 19-23 juillet 2010
Jean-Philippe Goldman jean-philippe.goldman@unige.ch
Département de Linguistique
Faculté des Lettres
LATL -Laboratoire d'Analyse et de Technologie du Langage
Université de Genève
Kamel Nebhi kamel.nebhi@unige.ch
Département de Linguistique
Faculté des Lettres
LATL -Laboratoire d'Analyse et de Technologie du Langage
Université de Genève
Christopher Laenzlinger christopher.laenzlinger@unige.ch
Département de Linguistique
Faculté des Lettres
LATL -Laboratoire d'Analyse et de Technologie du Langage
Université de Genève
TALN 2010
Montréal, 19-23 July 2010
FipsColor: an interactive colour grammar for learning French
Keywords: syntactic parsing / chart parser, generative grammar, web services, TEI
Abstract: The multilingual FiPS parser analyzes a sentence into a syntactic structure reflecting lexical, grammatical and thematic information. The present paper describes the adaptation of the constituent structures produced by FiPS to a grammatical annotation, as well as its coloured representation. This online interactive application (available at http://latl.unige.ch/fipscolor) can be freely used by teachers and pupils of primary education.
Motivation
In this article, we present a web application aimed directly at the teaching community and based on the FiPS syntactic parser. FipsColor takes up the pedagogical principles of colour grammar (curriculum and « Maîtrise du français », Corome, French-speaking Switzerland), according to which every grammatical category and every constituent function is represented by a colour. The application lets the learner distinguish, on the one hand, the grammatical categories of words (represented by coloured words) and, on the other hand, the syntactic functions (represented by underlined word groups). The colour system is an effective mnemonic device that also makes it possible to highlight certain problematic aspects of the language, such as lexical ambiguities. The following sentence illustrates this case: « Paul ferme la porte et déclare haut et ferme qu'il va gérer la ferme. » ('Paul closes the door and declares loud and clear that he is going to run the farm.') In this example, « ferme » is successively a verb, an adverb and a common noun. The different colourings allow the pupil to clearly distinguish the grammatical categories of the words. Beyond lexical ambiguities, the underlining of syntactic functions aims to better present agreement rules and to help grasp the most complex structures (relative clauses, conjunctive clauses, etc.). In this sense, FipsColor is a valuable teaching aid in the school environment.
The FiPS syntactic parser
The FiPS syntactic parser (Laenzlinger and Wehrli 1991, Wehrli 1997), developed over several years at LATL, is a linguistic tool capable of associating with each sentence of a text a syntactic structure enriched with lexical, grammatical and semantic ("thematic") information 1. The parser has many applications: machine translation (or translation aids) (Wehrli 2003), speech synthesis and recognition (Gaudinat et al. 1999 and Goldman et al. 2001), indexing and 'intelligent' information retrieval, terminology extraction (Seretan 2008) and language learning (L'Haire & Vandeventer-Faltin 2003). FiPS was developed on the basis of the Principles & Parameters theory of Generative Grammar (Chomsky 1995: chap. 1, Haegeman 1994, Laenzlinger 2003). The constituent structure assigned to sentences rests on an X-bar schema reduced to two levels: [XP L X R]. XP is a maximal projection of the head X, whereas L (Specifiers) and R (Complements) are (possibly empty) lists of maximal projections corresponding respectively to the left and right sub-constituents of the head X. X is a variable ranging over the categories Adv (adverb), A (adjective), N (noun), D (determiner), V (verb), P (preposition), C (conjunction) and T (tense). A complete sentence thus has the following structure: 2

[TP [AdvP Hier] [DP le [NP garçon]] a [VP recueilli [DP un [NP [AP petit] chat [AP noir] [AP affamé]]]]] ('Yesterday the boy took in a hungry little black cat.')

The parsing strategy is left-to-right with parallel treatment of alternatives, combining an incremental, essentially bottom-up approach with a top-down filter. Under this so-called 'right-corner' strategy, the algorithm is data-driven: the parser tries to attach a new element at the right corner of a constituent in the already existing left context, which specifies a set of active nodes to which the new element may attach. Three fundamental mechanisms are used by the parser: projection, combination and movement.
The object-oriented implementation of FiPS consists in designing the linguistic objects, such as lexical structures and syntactic projections, as abstract structures whose implementation may vary from one language to another. These variations are handled by type extension as far as the data structures are concerned and by method redefinition for the processes that operate on these data. The most abstract level in the object hierarchy describes the fundamental properties that hold in all languages. This is in a sense akin to the Chomskyan concept of 'universal grammar'. This formal syntactic approach and the advantages of the object-oriented implementation allow fast processing (about one million words per hour) as well as flexibility and ease of development.
Adapting the phrasal labels and choosing an output format
Of the two levels of representation considered, the lexical level concerns the grammatical category of words (or part-of-speech): Noun, Verb, Adjective, Preposition, Determiner, Adverb, Conjunction, Pronoun. The correspondence between the FiPS tags and those of the FipsColor output is one-to-one. Note that the richness of the analysis makes it possible to associate additional information with each word, for example gender and number for nouns and adjectives, or tense, person and number for verbs. The phrasal level concerns groups or phrases. The category of their head generally determines their type: prepositional, nominal, adjectival, adverbial or verbal group. It is, however, the function of the constituents that FipsColor highlights. Only the major constituents and functions of the sentence are represented: Subject, Direct and Indirect Object, Subject Attribute, Adjunct (complément circonstanciel) and Predicate. The figure below summarizes the possible correspondences between constituents and functions and their equivalents in the FiPS analysis.

As output of the parser, the syntactic tree structures are represented in XML following the TEI ("Text Encoding Initiative") recommendation. The TEI makes it possible to build a customized schema from different modules whose combination forms a particular DTD. Our formalism uses version P5 of the TEI and, more specifically, the module devoted to linguistic analysis ("Simple Analytic Mechanisms").
3 Some structures allow the direct object (COD) to be displaced. Sentences (i.e. TPs) are, for the time being, excluded from the COD function.
Conclusion and perspectives
This article presents an interactive tool supporting the learning of French that allows learners to become more aware of basic grammatical concepts in a playful setting. FipsColor is an answer to the creation of web services based on an NLP application that can be directly exploited by the teaching community.
[TP [AdvP Hier] [DP le [NP garçon]] a [VP recueilli [DP un [NP [AP petit] chat [AP noir] [AP affamé]]]]]
Creating the web service and setting up the web application

Our XML output is produced by a dedicated module acting as a web service. The application consumes this service and transforms the XML result into HTML, combined with various web technologies, to create the necessary interactions. The figure below is a view of FipsColor: the analysis shows that the relative pronoun « que » is correctly interpreted as the direct object (Cod) of the verb « aimer ». In the second example, the clitic pronouns are also analysed correctly: « le » is indeed a Cod and « lui » a Coi. FipsColor renders embedded clauses by using multiple underlining. The tree structure inherent to XML models this embedding very well.

Table 1: FipsColor/FiPS correspondences
Sujet (subject): DP specifier of TP
Prédicat (predicate): head of VP | head of TP
Cod (direct object): DP complement of VP 3
Coi (indirect object): PP complement of VP
Attribut (attribute): DP | PP | AP complement of VP
Cc (adjunct): AdvP specifier of TP | VP | CompVP; PP specifier of TP | CompVP; DP specifier of TP | CompVP
Figure 1: Correspondences between constituents and functions. [The figure relates the functions SUJ, COD, COI, Attr, CC and Préd to the phrase types G Nom, G Adj, G Adv, G Prép and G Verb.]
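The correspondences in Table 1 can be expressed as a small lookup structure; the sketch below is our own illustration, and the colour names are placeholders, since the paper does not state which colour FipsColor assigns to each function.

```python
# FipsColor functions with their FiPS structural positions (from Table 1)
# and a display colour. The colours are hypothetical placeholders.
FUNCTIONS = {
    "Sujet":    {"fips_positions": ["DP specifier of TP"],                         "colour": "red"},
    "Prédicat": {"fips_positions": ["head of VP", "head of TP"],                   "colour": "blue"},
    "Cod":      {"fips_positions": ["DP complement of VP"],                        "colour": "green"},
    "Coi":      {"fips_positions": ["PP complement of VP"],                        "colour": "orange"},
    "Attribut": {"fips_positions": ["DP, PP or AP complement of VP"],              "colour": "purple"},
    "Cc":       {"fips_positions": ["AdvP, PP or DP specifier of TP, VP or CompVP"], "colour": "brown"},
}

def describe(function):
    """Human-readable summary of how one function is derived and displayed."""
    info = FUNCTIONS[function]
    return f"{function}: {' | '.join(info['fips_positions'])} -> underlined in {info['colour']}"

for f in FUNCTIONS:
    print(describe(f))
```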
1 By contrast, 'shallow' parsers do not attempt to build a global representation, let alone a logical form, but remain at a morpho-syntactic level of representation, with a grouping of minimal constituents (noun groups, prepositional groups, etc.).
2 The noun group is analysed as a determiner phrase (DP) containing a noun phrase (NP).
The Minimalist Program. N Chomsky, MIT PressCambridge, MassCHOMSKY, N. 1995. The Minimalist Program, Cambridge, Mass., MIT Press.
Syntax-Based Speech Recognition: How a Syntactic Parser Can Help a Recognition System. A Gaudinat, Goldman J-P. & Wehrli E, EuroSpeech Conference. Budapest, HungaryvolAGAUDINAT, A, GOLDMAN J-P. & WEHRLI E. 1999. "Syntax-Based Speech Recognition: How a Syntactic Parser Can Help a Recognition System". EuroSpeech Conference, Budapest, Hungary, 1999, volA, p.1587-1590
FipsVox : a French TTS based on a syntactic parser. Goldman J.-P Gaudinat A, Nerima L, Wehrli E, 4th Speech Synthesis Workshop. EdinburghGOLDMAN J.-P, GAUDINAT A,. NERIMA L., WEHRLI E. 2001. "FipsVox : a French TTS based on a syntactic parser". 4th Speech Synthesis Workshop. Edinburgh, 2001
Introduction to Government and Binding Theory. L Haegeman, Oxford, BlackwellHAEGEMAN, L. 1994. Introduction to Government and Binding Theory, Oxford, Blackwell.
Initiation à la Grammaire Formelle du Français: Le Modèle Principes & Paramètres de la Grammaire Générative Transformationnelle. C Laenzlinger, Peter LangBerne/BerlinLAENZLINGER, C. 2003. Initiation à la Grammaire Formelle du Français: Le Modèle Principes & Paramètres de la Grammaire Générative Transformationnelle. Peter Lang, Berne/Berlin.
FIPS : Un Analyseur interactif pour le français. C Laenzlinger, Et E, Wehrli, TA Informations. 322LAENZLINGER, C. ET E. WEHRLI, 1991. "FIPS : Un Analyseur interactif pour le français". TA Informations, 32:2, 35-49.
L'HAIRE, S. & VANDEVENTER-FALTIN, A. 2003. "Error diagnosis in the FreeText project". CALICO 20(3), T. Heift & M. Schulze (eds.), Special Issue: Error Analysis and Error Correction in Computer-Assisted Language Learning.
SERETAN, V. 2008. Collocation extraction based on syntactic parsing. Ph.D. thesis, Univ. of Geneva.
WEHRLI, E. 1997. L'analyse syntaxique des langues naturelles: Problèmes et méthodes, Paris, Masson.
WEHRLI, E. 2003. "Translation of Words in Context". IXth MT Summit, New Orleans.
Figure 2: FipsColor, result window and menus
||
218,974,405 | [] | Using Automatic Speech Recognition in Spoken Corpus Curation
May 2020
Jan Gorisch gorisch@ids-mannheim.de
Leibniz-Institute for the German Language (IDS)
Germany
Michael Gref michael.gref@iais.fraunhofer.de
Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS)
Germany
Thomas Schmidt thomas.schmidt@ids-mannheim.de
Leibniz-Institute for the German Language (IDS)
Germany
Using Automatic Speech Recognition in Spoken Corpus Curation
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
Marseille, May 2020. Keywords: oral corpora, automatic transcription, ASR, corpus curation, pluricentric, spoken German, Ripuarian
The newest generation of speech technology has caused a huge increase in audio-visual data that is nowadays enhanced with orthographic transcripts, such as in automatic subtitling on online platforms. Research data centers and archives contain a range of new and historical data, which are currently only partially transcribed and therefore only partially accessible for systematic querying. Automatic Speech Recognition (ASR) is one option for making that data accessible. This paper tests the usability of a state-of-the-art ASR system on a historical (from the 1960s) but regionally balanced corpus of spoken German, and on a relatively new corpus (from 2012) recorded in a narrow area. We observed a regional bias of the ASR system, with higher recognition scores for the north of Germany vs. lower scores for the south. A detailed analysis of the narrow-region data revealed, despite relatively high ASR confidence, some specific word errors due to a lack of regional adaptation. These findings need to be considered in decisions on further data processing and the curation of corpora, e.g. correcting transcripts or transcribing from scratch. Such geography-dependent analyses also have the potential to help ASR development make targeted data selections for training/adaptation and to increase the sensitivity towards varieties of pluricentric languages.
Introduction
The Archive for Spoken German (AGD, Stift and Schmidt (2014), http://agd.ids-mannheim.de) is part of the CLARIN-D Centre IDS and specializes in data of spoken German, mostly corpora of natural interaction and data on varieties of German. The AGD develops and curates such corpora and makes them available to the scientific community. Besides detailed metadata on recordings and speakers, the key to accessibility are transcripts of the recordings allowing systematic queries and the application of other automated methods. So far, the high cost of manual transcription results in large parts of the corpora remaining untranscribed and therefore accessible only to a limited degree. Automatic Speech Recognition (ASR) is often claimed to be a way through that transcription bottleneck. The present paper describes some first steps in exploring whether or not that claim can be fulfilled and, if so, for which type of data ASR is suitable and what factors influence the quality of ASR results.

The latest developments in the field of ASR, namely artificial neural networks, deep learning, and the introduction of LF-MMI models (Povey et al., 2016), have lifted speech technology from a level that was merely useful for limited-vocabulary and clean-audio tasks to a level where it can be applied to various recording conditions and large-vocabulary challenges as found in Oral History interviews (Leh et al., 2018). Most of the developments have been pushed by the Kaldi ASR toolkit (Povey et al., 2011). Although the Word Error Rate (WER) is still too high for a reader of an automatically derived transcript to trust that every word is correct, most of the extracted content words can already serve as an initial starting point for historical or language researchers to access the content of interview data. The specific challenge is therefore to create transcripts with a relatively low investment of time, a factor that becomes almost irrelevant with automatic processing. Therefore we want to explore how far we can get with a state-of-the-art ASR system.

With the ultimate aim of providing correct transcripts to the users of the data center, the aim of this paper is to evaluate the performance of a state-of-the-art ASR system with respect to the usability of the resulting transcripts, in the sense of (i) which ASR transcripts are good enough for certain research communities, and (ii) which ASR transcripts are worth sending to manual correction, i.e. can we save time by correcting transcripts rather than by transcribing the audio from scratch? As ASR systems tend to perform better with data that are similar to the data they were trained on, an additional aim (iii) of this paper is to reveal the weak points of an ASR system, i.e. those points where the system lacks training data, in order to improve systems in a targeted manner. These gaps can be age-related, gender-related, or regional. The latter is especially relevant for pluricentric languages such as German, i.e. the data in the AGD. We assume that the quality of ASR results depends on several factors: proximity to the standard (or similarity to training data), recording quality (background noise etc.), and degree of interactivity (overlap, speaker change). This would require composing a systematic test set and rigorous testing. The data in the AGD would allow for such an enterprise of creating a test set with a wide range of properties. With the pilot study presented in this paper we intend to demonstrate this potential.
Material and Method
Corpora
The AGD 1 contains data of various origins and various types. It contains conversational corpora, variation corpora from within the continuous language area of German, and variation corpora from outside that area (extraterritorial varieties, also called speech islands). The user community is as varied as the corpora, ranging from phoneticians and dialectologists to conversation analysts, ethnomethodologists and historical linguists. For this study we took data from three corpora: the Pfeffer-Corpus (PF), FOLK and the BETV-Corpus 2, which we describe in more detail below. For the experiment on regionality, we chose PF. For analysing typical ASR error types qualitatively, we chose data from BETV. The Pfeffer-Corpus (AGD-PF, 1961) is suitable for testing ASR regionality as it was recorded with high-quality technical equipment, and the speech is colloquial and comes from city-like regions (including major cities in Germany, Switzerland and Austria, such as Hamburg, Berlin, Frankfurt, Bremen, Bern, Zürich, Innsbruck, Vienna, etc., cf. Figure 2). The speakers were interviewed; however, the interviewer was instructed to mainly let the target person speak. The varietal span is not as extreme as, e.g., in the Zwirner-Corpus (AGD-ZW, 1956), where speakers were specifically recruited from small towns, had not moved about much and had a relatively low socio-economic status. From the Pfeffer-Corpus, we randomly chose one female and one male target speaker from each recording place, making up a collection of altogether 112 recordings with durations between 7 and 16 minutes, summing up to 21 hours and 22 minutes. From the metadata we extracted the geocodes (latitude and longitude coordinates).
Automatic Speech Recognition
We had access through a REST-API to the Audio-Mining System from the Fraunhofer IAIS (Schmidt et al., 2016) 3. The model that was employed at the stage of processing is described in Gref et al. (2019): The acoustic model is trained on 1000h of broadcast data (Stadtschnitzer et al., 2014) that is 3-fold noise & reverberation augmented and 3-fold speed perturbed (corresponding altogether to 9000h). The speed perturbation is based on work by Ko et al. (2015). The language model was trained on 1.6 billion sentences/tokens, from which a pronunciation dictionary containing 2 million entries was automatically generated using grapheme-to-phoneme conversion following Bisani and Ney (2008) and using the German pronunciation dictionary Phonolex 4 from the Bavarian Archive for Speech Signals (BAS).
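As an illustration of what speed perturbation does to the training audio, a waveform can simply be resampled along the time axis; the perturbation factors 0.9 and 1.1 follow Ko et al. (2015). This is a minimal sketch of the idea, not the Fraunhofer training pipeline.

```python
import numpy as np

def speed_perturb(samples, factor):
    """Resample a mono waveform so it plays `factor` times faster.

    factor > 1.0 shortens the signal (and raises pitch), factor < 1.0
    lengthens it; applying 0.9, 1.0 and 1.1 to every recording triples
    the amount of training audio.
    """
    n_out = int(round(len(samples) / factor))
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(new_idx, old_idx, samples).astype(samples.dtype)

# Toy example: a 1-second 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t).astype(np.float32)
augmented = [speed_perturb(tone, f) for f in (0.9, 1.0, 1.1)]
print([round(len(a) / sr, 2) for a in augmented])   # [1.11, 1.0, 0.91]
```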
The ASR system provides the metric "asrQuality", which corresponds to a self-estimated confidence of how good the resulting transcription might be, based on the average recognition confidences across all word tokens (the arithmetic mean). The confidence per word is based on the recalculation of the probabilities in the search lattice, a graph that is unfolded by the acoustic model and the language model, i.e. the path through the lattice with the highest probability. The values of asrQuality may vary between 0 and 100. Additionally, we took the orthographic transcripts as references to the ASR hypotheses and calculated the Word Error Rate (WER). We tested the correspondence between the two metrics 'asrQuality' and 'WER' on the PF data in order to check whether we can assume such a correspondence in the future, for ASR outputs for which we do not have any reference transcripts available. asrQuality and WER were negatively correlated, r(108) = −.89, p < .001, as shown in Figure 1. We also manually checked the outliers with WER=83.5%, Q=88 (male speaker in BERN, CH), WER=51.45%, Q=90, and WER=58.58%, Q=84 (male and female speakers from Cottbus, DE). It turned out that the alignment of text and audio in the underlying reference transcripts was bad. This had an effect on the calculation of the WER, but not on the asrQuality. It is also quite reasonable to assume that there must be a lower word recognition score if the acoustic + language model produce a lower probability, and vice versa. We cannot, however, assume that there would be a direct correspondence between WER and asrQuality, i.e. that an asrQuality score of e.g. 85 would correspond to a WER of 15%. The tendency is that the WER is even higher than the inverse asrQuality, as the acoustic (+ language model) score might still be relatively high even if the most probable word is not correct (false positive, FP), which is sometimes the case for homonyms or similarly sounding words. This does not seem to be balanced out by false negatives (FN), where a correct word is found even though a relatively low confidence is observed. This can be the case when the acoustics of the current word deviate from the acoustics of the word in the training data. A last constellation is true negatives (TN), where incorrectly recognized words are accompanied by a relatively low confidence, which might be the case for e.g. out-of-vocabulary (OOV) errors, where none of the words in the vocabulary can be confidently matched with the acoustics (+ language model) and the system has no other option than taking the best match, which is necessarily wrong. All these constellations are summarized in Table 1 and show that a simple measure such as WER can only represent a part of the complexity of recognition errors and their causes.
                   correct word                 incorrect word
high confidence    TP (correct + high conf.)    FP (incorrect + high conf.)
low confidence     FN (correct + low conf.)     TN (incorrect + low conf.)

Table 1: Possible constellations of correctly vs. incorrectly recognized words (corresponding to low vs. high WER) and high vs. low confidence (corresponding to high/low "asrQuality").
The results of the recognition task, i.e. the relationship between asrQuality and regionality, are presented in Section 3.1.
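The two measures just discussed can be made concrete with a few lines of code. The sketch below is our own illustration, not the Fraunhofer implementation: the per-word confidences are assumed to come with the ASR output, and the example sentences and confidence values are invented.

```python
def asr_quality(word_confidences):
    """Mean per-word confidence, scaled to 0-100 like the asrQuality metric."""
    return 100.0 * sum(word_confidences) / len(word_confidences)

def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Invented example: 'warten' and 'nur' misrecognized despite decent confidences.
print(round(wer("wir warten nur noch auf euch", "wir wachten noch noch auf euch"), 1))  # 33.3
print(round(asr_quality([0.95, 0.71, 0.64, 0.93, 0.97, 0.96]), 1))                      # 86.0
```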
Benchmarking and Qualitative Analysis
For the aforementioned transcribed test data, we had the BenchmarkViewer available, an additional tool provided by the Fraunhofer IAIS to calculate the WER (among other measures) based on the hypothesis and reference transcripts. We observed the asrQuality-WER relationship (the higher the asrQuality, the lower the WER) shown in Table 2. The events "BUND 01" and "BUND 02" are political debates in the German parliament (Bundestag). The events "PODI 01, 02, 03" are podium discussions. Both types of events are currently being built up for FOLK, the research and teaching corpus of spoken German (FOLK, 2019). The events from the BETV corpus are political talk shows from the regional TV channel in the German-speaking area of Belgium (AGD-BETV, 2019). The events from the PF corpus are interviews from the 1960s (AGD-PF, 1961).
Table 2 also shows that the overall WER of the recognizer, between 14.3% and 24.98% for the contemporary data, is relatively good. For the historical data from the Pfeffer-Corpus it is slightly worse at 31.24%. With the help of the BenchmarkViewer we extracted misrecognized words and analysed the error sources by comparing the phonetic realisations with the phonetics of the word in the reference transcript and the phonetics of the word that was hypothesized by the system, cf. Section 3.2.
Results
Regionality
In order to approximate regionality, we refer to the latitude and longitude of the place where the individual speaker grew up and developed his/her way of speaking. Figure 2 illustrates this regional distribution 5 . The ASR quality for each recording is illustrated on a blue-yellow scale with blue indicating high ASR-Quality and yellow indicating low ASR-Quality. There is no noticeable difference in ASR-Quality for female vs. male speakers as shown in Figure 3. Regarding our hypothesis that there is a regional influence on the ASR-System, the ASR-Quality seems to decline from the north to the south.
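The map-style view described above was produced by the authors with R/shiny and Leaflet (see the footnotes); the sketch below only illustrates the same idea with matplotlib, using invented coordinates and quality values that merely echo the qualitative north-south pattern.

```python
import matplotlib.pyplot as plt

# Hypothetical (longitude, latitude, asrQuality) triples, one per recording.
recordings = [
    (10.0, 53.6, 92),   # Hamburg
    (13.4, 52.5, 90),   # Berlin
    (8.7,  50.1, 88),   # Frankfurt
    (7.4,  46.9, 84),   # Bern
    (11.4, 47.3, 85),   # Innsbruck
    (16.4, 48.2, 86),   # Vienna
]
lon, lat, quality = zip(*recordings)

# Colour each recording place on a blue (high quality) to yellow (low quality)
# scale, mirroring the colour scheme of Figure 2.
sc = plt.scatter(lon, lat, c=quality, cmap="cividis_r", s=80)
plt.colorbar(sc, label="asrQuality")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("ASR quality by recording place (illustrative values)")
plt.show()
```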
Having the geographic coordinates available in the metadata in the form of latitude and longitude, we fed them into a correlation analysis. The North-South dimension is indicated by the latitude. A Spearman correlation 6 shows a positive coefficient (rho = 0.371), indicating that an increase in latitude corresponds with an increase in asrQuality. The correlation is significant (p < 0.0001), indicating that the ASR system performs better with data from the north than with data from the south (cf. the scatter plot in Figure 4). The longitude did not show a relationship with asrQuality (rho = −0.162, p = 0.09).
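This latitude-quality test can be reproduced with a few lines of Python; the study itself used R, and the input lists below are assumed per-recording values, not the actual data.

```python
from scipy.stats import spearmanr

# Assumed inputs: one latitude and one asrQuality value per recording.
latitudes = [53.6, 52.5, 50.1, 49.0, 47.4, 46.9]
asr_quality = [92, 90, 88, 87, 85, 84]

rho, p_value = spearmanr(latitudes, asr_quality)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
# A positive rho means: the further north (higher latitude), the higher asrQuality.
```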
Typical ASR Error Types
As mentioned above, the underlying data for this section stem from the BETV-Corpus (cf. the two recordings in Table 2), which are recordings from the two towns Amel and Burg Reuland in the German-speaking area in east Belgium. The regional variety is at the intersection of the varieties Ripuarian and Mosel-Franconian, cf. the dialectal map by Wiesinger (1983, p.836). The participants of the BETV-Corpus are local politicians and TV moderators who speak the "regional standard" of that area. Apart from errors due to overlapping talk, homonyms, capitalisation or the complex German orthography (compounding), some of the error types we observed are also attributable to the specific regional variety. Two types of recognition errors seem to be systematically frequent. Of course, there are common OOV errors concerning e.g. local names (Büllingen, Bütgenbach, Elsenborn, Kaiserbaracke) or current topics of the discussion (Entschlackung, makabere Unterstellung, Asphaltwerk). A second type of error is caused by consistent linguistic variation that is typical for that specific region as described by Münch (1904). For example, the word "nur" (merely) was recognized as "noch" (yet). This is due to the regional pronunciation [nu:X] vs. the standard [nu:5]. The [X] was therefore attributed to another word that contains a [x] following a vowel in the standard variety of German, in this case "noch", cf. Münch (1904, §40 p.35f.). Another example is [waXt@n] "warten" (wait) being recognized as "wachten" (guard, simple past), or [sa:G@n] "sagen" (say) being recognized as "sahen" (saw, simple past of "see"), cf. Münch (1904, §111 p.92f.). A selection of these and other examples is shown in Table 3. The table reads as in the caption. Table 3 shows that the recognizer has problems with specific phenomena in Ripuarian, where the orthographic "er", "ör", "or", "ur" are mostly diphthongized or vocalized to a-schwa in "standard German", while in Ripuarian these phenomena seem to be produced with a uvular (or at least velar) voiceless fricative. The orthographic "g" as in "sagen" is in standard German a voiced velar plosive, while in Ripuarian it almost disappears or is reduced to an intervocalic approximant.
Discussion
It is common sense that an ASR system can only recognize data (speech) that it was either trained on or adapted to. Therefore, the training data should contain all types of data, covering quiet and noisy environments, young and old speakers, male and female, different interactional settings, etc. Here we looked at one aspect, regionality, by either keeping the other parameters constant or by covering them throughout. The relationship between latitude and asrQuality shows that there is either a bias in the training data, e.g. more training data from the north than from the south of the German-speaking area, or a tendency to speak in a more northern way when on broadcast; however, the latter might also have applied to the interviewed speakers of the Pfeffer-Corpus, who do not seem to show that tendency. The qualitative analysis of one of the German varieties (Ripuarian) shows that, despite the relatively high asrQuality (92), there are still some region-related errors that could be tackled by either employing training data from that specific area, by introducing additional pronunciation variants into the pronunciation dictionary, or by another mechanism of the ASR framework or architecture.
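One way to realise the dictionary-based remedy mentioned above is to add regional pronunciation variants to the lexicon by rule. The sketch below is our own illustration, not part of the Fraunhofer system; the rule is a simplification of the vowel+r pattern observed in Table 3, and the SAMPA-like strings follow the paper's notation.

```python
import re

def ripuarian_variants(canonical):
    """Derive an additional lexicon pronunciation variant for a word.

    Based on the confusions in Table 3: where the standard pronunciation
    has a vocalised /r/ after a vowel (written '5' here), the regional
    variety tends to realise a velar/uvular fricative 'X' instead.
    Returns the canonical form plus any derived variant.
    """
    variants = [canonical]
    # e.g. 'dO5t' -> 'dOXt', 'nu:5' -> 'nu:X', 'wa5t@n' -> 'waXt@n'
    regional = re.sub(r"5(?=[^aeiouyAEIOUY@]|$)", "X", canonical)
    if regional != canonical:
        variants.append(regional)
    return variants

for word, pron in [("dort", "dO5t"), ("nur", "nu:5"), ("warten", "wa5t@n")]:
    print(word, ripuarian_variants(pron))
# dort ['dO5t', 'dOXt']
# nur ['nu:5', 'nu:X']
# warten ['wa5t@n', 'waXt@n']
```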
Conclusions
Testing an Automatic Speech Recognition system with data that is enhanced with metadata containing information on the regional background of the speakers allowed us to reveal geographic gaps, where a system performs significantly less well in one region than in another. From the perspective of a data center interested in creating transcripts for archived audio recordings, it is at least to be expected that automatically created transcripts can be less trusted for some areas and that it would be necessary to invest more time/money in correcting them. A direct outcome of this pilot study is that we made the recognition results, i.e. transcripts, for the entire BETV-Corpus (10 x 1h recordings) available to the research community with the latest release 2.13 of the Database for Spoken German (DGD, Schmidt (2014), http://dgd.ids-mannheim.de). This is now the first corpus of the AGD whose transcripts are created entirely automatically.
Limitations and Future Work
So far, the BenchmarkViewer calculates error rates, precision and recall, etc. on different parameters (words (based on insertions, deletions, substitutions), speaker diarization, punctuation, etc.). An additional feature that we have in mind is the classification of different word error types, e.g. missing hesitation markers, compounds ("Asphalt Werk" vs. "Asphaltwerk"), overlaps, etc. Once they are classified, the remaining errors are most likely due to regional or speaker-related characteristics, which are useful to analyse for both developers of speech technology and linguists. Extending such regionality testing to other available corpora, e.g. the Zwirner-Corpus mentioned above, should make it possible to make finer-grained evaluations. This can also be enhanced with more sophisticated analysis methods such as geographical clustering. We also have not used all the available metadata yet (age, place of birth, language background of parents, etc.) that could be included in such analyses. The data could also be used to train/adapt the ASR system to these regional variants and to evaluate the new system. ASR developers would also need to think about the trade-off: up to what point does it make sense to train a system with all varieties of a language, and at what point does it make sense to split a system and create recognizers for individual varieties of a pluricentric language? From the data center perspective, we also need to consider which way we want to go once correcting automatically derived transcripts becomes faster than transcribing from scratch, as this development needs to find a counterpart in software development, from transcription software to correction software.
Figure 1: Scatterplot of WER and asrQuality with regression line.
Figure 2: Distribution of speakers' variety and ASR-Quality.
Figure 3: Regional distribution of ASR-Quality for female (left) and male speakers (right).
Figure 4: Scatterplot of latitude vs. asrQuality ("F" = female, "M" = male) with the regression line from a linear model.
Corpus   Event(s)     dur     Q    WER    ci-WER
FOLK     BUND 01      1h58    94   14.30  13.33
FOLK     BUND 02      1h32    94   18.71  17.41
FOLK     PODI 01      1h21    93   17.35  16.26
FOLK     PODI 02      0h59    91   24.98  23.88
FOLK     PODI 03      1h03    92   22.81  21.83
BETV     E 00001      0h58    92   19.87  18.70
BETV     E 00002      0h56    92   22.82  21.85
PF       112 events   21h22   89   31.24  30.02

Table 2: Relationship between asrQuality (Q) and Word Error Rate (WER) in [%] or case-insensitive WER (ci-WER) for recordings from the corpora FOLK and BETV (averaged values for the events from corpus PF, including the outliers mentioned regarding Figure 1).
Reference   R-canonical   Hypothesis   H-canonical   observed    R-corr.  H-incorr.  oth. incorr.  total
dort        dO5t          doch         dOx           dOXt        38       18         5             61
nur         nu:5          noch/nun     nOx/nu:n      nu:X        30       4/5        4             43
noch        nOx           nur          nu:5          nO          73       1          10            84
sagen       sa:g@n        sahen        sa:h@n        sa:G@n      58       3          5             66
Dörfer      doe5f@        doch vor     dOx fo:6      doeXfO      2        1          1             4
Dorf        dO5f          doch         dOx           dOXf        4        1          0             5
Vorteile    fo:5taIl@     Bruchteile   bRUxtaIl@     VoXtaIl@    1        1          0             2
gebucht     g@bu:xt       Geburt       g@bu:5t       g@bu:t      0        1          0             1
Geburten    g@bu:5t@n     gebucht      g@bu:Xt       g@bu:Xt@n   0        1          0             1
aber        a:b5          aber auch    a:b5 aUx      a:b@X       101      5          19            125

Table 3: Examples of ASR confusions due to variety-specific pronunciations. 'Reference' is the manually transcribed word, 'Hypothesis' is the automatically recognized word, 'observed' is the phonetic pronunciation, which was then either correctly recognized as the Reference ('R-corr.') or incorrectly as the Hypothesis ('H-incorr.'). Sometimes it was confused with other incorrect words ('oth. incorr.').
1 Most of the corpora are disseminated through the Database of Spoken German (http://dgd.ids-mannheim.de). Corpora that have not been sufficiently curated yet are accessible through the personal service of the AGD (http://agd.ids-mannheim.de).
2 The two recordings of BETV that we analysed here are also part of FOLK.
3 cf. https://www.iais.fraunhofer.de/en/businessareas/content-technologies-and-services.html
4 Phonolex: https://www.phonetik.uni-muenchen.de/Bas/BasPHONOLEXeng.html
5 The map was plotted with shiny (Chang et al., 2019) in R (R Core Team, 2019) and the open-source JavaScript library Leaflet (https://leafletjs.com/).
6 We chose a Spearman correlation as we cannot expect a linear relationship between latitude and asrQuality.
Acknowledgements

We would like to thank the Fraunhofer IAIS for providing API access to the Audio Mining system and the BenchmarkViewer, Sascha Wolfer and Sandra Hansen for advice on the statistical analysis, and Ralf Knöbl for informative exchanges on the Ripuarian variety.
Bisani, M. and Ney, H. (2008). Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434-451.
Chang, W., Cheng, J., Allaire, J., Xie, Y., and McPherson, J. (2019). shiny: Web Application Framework for R. R package version 1.4.0.
Gref, M., Köhler, J., and Leh, A. (2018). Improved transcription and indexing of oral history interviews for digital humanities research. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Gref, M., Schmidt, C., Behnke, S., and Köhler, J. (2019). Two-staged acoustic modeling adaption for robust speech recognition by the example of German oral history interviews. In 2019 IEEE International Conference on Multimedia and Expo (ICME), pages 796-801. IEEE.
Ko, T., Peddinti, V., Povey, D., and Khudanpur, S. (2015). Audio augmentation for speech recognition. In 16th Annual Conference of the International Speech Communication Association (Interspeech), pages 3586-3589.
Leh, A., Köhler, J., Gref, M., and Himmelmann, N. (2018). Speech analytics in research based on qualitative interviews. Experiences from KA3. VIEW Journal of European Television History and Culture, 7(14).
Münch, F. (1904). Grammatik der ripuarisch-fränkischen Mundart. Friedrich Cohen, Bonn, Germany.
Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., Silovsky, J., Stemmer, G., and Vesely, K. (2011). The Kaldi speech recognition toolkit. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE Signal Processing Society, December.
Povey, D., Peddinti, V., Galvez, D., Ghahremani, P., Manohar, V., Na, X., Wang, Y., and Khudanpur, S. (2016). Purely sequence-trained neural networks for ASR based on lattice-free MMI. In 17th Annual Conference of the International Speech Communication Association (Interspeech), pages 2751-2755.
R Core Team (2019). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Schmidt, C., Stadtschnitzer, M., and Köhler, J. (2016). The Fraunhofer IAIS audio mining system: Current state and future directions. In 12. ITG Symposium on Speech Communication, pages 1-5.
Schmidt, T. (2014). The Database for Spoken German - DGD2. In Ninth International Conference on Language Resources and Evaluation (LREC), pages 1451-1457, Reykjavik, Iceland. European Language Resources Association (ELRA).
Stift, U.-M. and Schmidt, T. (2014). Mündliche Korpora am IDS: vom Deutschen Spracharchiv zur Datenbank für Gesprochenes Deutsch. In Melanie Steinle et al., editors, Ansichten und Einsichten. 50 Jahre Institut für Deutsche Sprache, pages 360-375. Institut für Deutsche Sprache, Mannheim, Germany.
Wiesinger, P. (1983). Die Einteilung der deutschen Dialekte. In Werner Besch et al., editors, Dialektologie. Ein Handbuch zur deutschen und allgemeinen Dialektforschung, pages 807-900. de Gruyter, Berlin, New York.
Language Resource References
AGD-BETV. (2019). Belgische TV-Debatten. Archive for Spoken German, distributed via the DGD, Database for Spoken German.
AGD-PF. (1961). Deutsche Umgangssprachen: Pfeffer-Korpus. Archive for Spoken German, distributed via the DGD, Database for Spoken German.
AGD-ZW. (1956). Deutsche Mundarten: Zwirner-Korpus. Archive for Spoken German, distributed via the DGD, Database for Spoken German.
FOLK. (2019). Research and Teaching Corpus of Spoken German. Archive for Spoken German, distributed via the DGD, Database for Spoken German.
Stadtschnitzer, M., Schwenninger, J., Stein, D., and Köhler, J. (2014). Exploiting the large-scale German broadcast corpus to boost the Fraunhofer IAIS speech recognition system. In International Conference on Language Resources and Evaluation (LREC), pages 3887-3890. |
||
40,620,337 | Une approche de recherche d'information structurée fondée sur la correction d'erreurs à l'indexation des documents | Structured Information Retrieval Approach based on Indexing Time Error Correction. In this paper, we focus on errors in the textual content of XML documents and propose an approach to reduce their impact on Information Retrieval (IR) systems. IR systems build indexes associating each document with the terms it contains; errors degrade the quality of these indexes, so that badly indexed documents may, for instance, wrongly be considered irrelevant (resp. relevant) for certain queries. To deal with this problem, we propose to include an error-correction mechanism during the indexing phase of documents. We implemented this approach in a prototype that we evaluated within the INEX evaluation campaign. KEYWORDS: information retrieval, misspellings, error correction, XML. | [
192467293,
6230775
] | Une approche de recherche d'information structurée fondée sur la correction d'erreurs à l'indexation des documents
Arnaud Renard arnaud.renard@insa-lyon.fr
Université de Lyon
CNRS
UMR 5205
INSA-Lyon
LIRIS
F-69621Villeurbanne Cedex
Sylvie Calabretto sylvie.calabretto@insa-lyon.fr
Université de Lyon
CNRS
UMR 5205
INSA-Lyon
LIRIS
F-69621Villeurbanne Cedex
Béatrice Rumpler beatrice.rumpler@insa-lyon.fr
Université de Lyon
CNRS
UMR 5205
INSA-Lyon
LIRIS
F-69621Villeurbanne Cedex
Une approche de recherche d'information structurée fondée sur la correction d'erreurs à l'indexation des documents
Actes de la conférence conjointe JEP-TALN-RECITAL 2012, volume 2: TALN, pages 519-526, Grenoble, 4-8 June 2012. © 2012 ATALA & AFCP. Keywords: information retrieval, misspellings, error correction, XML.
Introduction
Documents produced in a professional setting must meet a minimum level of quality and go through multiple proofreading and correction cycles to reach it. This used to be the main way information was produced, but the practice has changed considerably: at the scale of the Internet, it can now be considered a marginal mode of information production. Indeed, most documents are created by users outside any professional framework. These users are therefore more likely to make errors, using a vocabulary they do not always master and that may be ill-suited to the topic at hand. Moreover, content published on the Internet is not subject to quality control: blogs have popularized mass self-publishing that is both free and immediately available. It is therefore legitimate to have reservations about the quality of documents and other information produced in this setting (Subramaniam et al., 2009). IR systems are the main access points to information on the Internet. They are affected by errors (Kantor and Voorhees, 2000), whose correction is an important avenue of improvement that deserves study (Varnhagen et al., 2009).
In Section 2 we present IR over (semi-)structured XML documents, as well as work attempting to combine IR and error correction. In Section 3 we present our approach, which integrates error correction during the document indexing phase. In Section 4 we analyze the results of evaluating our IR system, with and without error handling, on the INEX evaluation campaign. Finally, we conclude and present perspectives for future work in Section 5.
General context and positioning
2.1 Structured information retrieval
XML documents are one of the most widespread formats for disseminating information on the Internet. We first model these documents, whose explicit structure is more complex than that of simple "flat" text documents. A structured XML document ds can be represented as a tree in which three types of nodes can be distinguished: leaf nodes nf_i representing the textual content, internal nodes ni_i corresponding to the elements, and their attributes na_i. <?xml version="1.0" encoding="UTF-8"?> <article> <name id="1337">Lorem. Textual information is mainly found in the leaf nodes, which are the nodes to be indexed in priority and which constitute the finest granularity level of our IR system. In this respect it differs from classical IR systems, whose granularity is the document. Several approaches in the literature (Kamps et al., 2009) make it possible to take this finer granularity, as well as the structure of the documents, into account. We build on an adaptation of the vector space model of (Salton, 1971) and on the approach used by XFIRM (Sauvagnat and Boughanem, 2005), which introduces a method for propagating relevance through the document structure.
Leaf node weighting (content-oriented)
When a query is evaluated, the relevance score of the leaf nodes is computed directly, while the scores of the internal nodes are propagated dynamically from the leaf nodes through the document tree. This makes it possible to return a ranked list of the nodes (subtrees) most relevant to the query.
The score s_{nf}(r) of a leaf node nf with respect to a textual query r composed of a sequence of n terms (or keywords) t_1, ..., t_n is computed as follows:
s_{nf}(r) = \sum_{i=1}^{n} p^{r}_{t_i} \times p^{nf}_{t_i}   (1)
where p^{r}_{t_i} and p^{nf}_{t_i} are, respectively, the weights of the i-th term t_i in the query r (computed at query time) and in the leaf node nf (computed at indexing time). To adapt Salton's vector space model to structured XML documents, we chose a weighting scheme that reflects the local importance of terms in the leaf nodes (tf) and their global importance in the documents (idf) and in the elements (ief) of the collection.
p^{r}_{t_i} = tf^{r}_{t_i} \qquad p^{nf}_{t_i} = tf^{nf}_{t_i} \times idf_{t_i} \times ief_{t_i}   (2)
where tf^{r}_{t_i} and tf^{nf}_{t_i} are, respectively, the frequency of term t_i in the query r and in the leaf node nf. The frequency is the number of occurrences of t_i in the query r (denoted |t^{r}_{i}|) and in nf (denoted |t^{nf}_{i}|), divided by the number of terms in the query r (denoted |r|) and in the leaf node nf (denoted |nf|), respectively.
tf^{r}_{t_i} = \frac{|t^{r}_{i}|}{|r|} \qquad tf^{nf}_{t_i} = \frac{|t^{nf}_{i}|}{|nf|}   (3)
and idf_{t_i} (resp. ief_{t_i}) is the inverse frequency of term t_i over documents (resp. leaf nodes). |D| (resp. |NF|) is the total number of documents (resp. leaf nodes) in the collection, and |d_{t_i}| (resp. |nf_{t_i}|) is the number of documents (resp. leaf nodes) containing the term t_i.
idf_{t_i} = \log\frac{|D|}{|d_{t_i}|+1} + 1 \qquad ief_{t_i} = \log\frac{|NF|}{|nf_{t_i}|+1} + 1   (4)
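To make the weighting scheme concrete, here is a minimal Python sketch of the leaf-node scoring defined by Formulas (1)-(4); the function and variable names are ours and purely illustrative, not taken from the XFIRM implementation.

import math
from collections import Counter

def leaf_score(query_terms, leaf_terms, doc_freq, elem_freq, n_docs, n_leaves):
    """Score one leaf node against a query with the tf x idf x ief weights of Formulas (1)-(4)."""
    q_counts = Counter(query_terms)
    l_counts = Counter(leaf_terms)
    score = 0.0
    for term, q_count in q_counts.items():
        p_query = q_count / len(query_terms)                         # tf of the term in the query
        tf_leaf = l_counts[term] / len(leaf_terms)                    # tf of the term in the leaf node
        idf = math.log(n_docs / (doc_freq.get(term, 0) + 1)) + 1      # document-level inverse frequency
        ief = math.log(n_leaves / (elem_freq.get(term, 0) + 1)) + 1   # element-level inverse frequency
        score += p_query * (tf_leaf * idf * ief)
    return score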
Internal node weighting (structure-oriented)
When a leaf node is relevant to a query, its ancestor internal nodes are also relevant to some extent, since they contain it. The scores of leaf nodes can thus be propagated step by step to their ancestor nodes (according to an aggregation function) up to the root node, which represents the document as a whole.
s_{ni}(r) = |NF^{ni}_{s_{nf}(r)>0}| \cdot \sum_{nf_k \in NF^{ni}} \alpha^{dist(ni,nf_k)-1} \times s_{nf_k}(r)   (5)
where α, in the interval [0..1], is the attenuation factor of the importance of leaf node nf_k with respect to the internal node ni, and dist(ni, nf_k) is the distance between the internal node ni and the leaf node nf_k in the tree structure of the document. Thus, terms that appear near the root of a subtree are more relevant to the root element than those that appear at a deeper level of the subtree.
And |NF^{ni}_{s_{nf}(r)>0}| is the number of leaf nodes of the internal node that are relevant, since a node containing more relevant nodes can be considered more relevant.
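A short sketch of the propagation step of Formula (5), again with illustrative names; the default α value below is an arbitrary example, not a value reported in the paper.

def internal_score(leaf_scores, distances, alpha=0.6):
    """Propagate leaf scores to an internal node (Formula 5).

    leaf_scores: {leaf_id: score} for the leaf nodes below the internal node
    distances:   {leaf_id: tree distance between the internal node and that leaf}
    alpha:       attenuation factor in [0, 1]
    """
    n_relevant = sum(1 for s in leaf_scores.values() if s > 0)
    propagated = sum(alpha ** (distances[leaf] - 1) * s for leaf, s in leaf_scores.items())
    return n_relevant * propagated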
In the presence of errors, the computation of the leaf-node scores, and in particular the factor p^{nf}_{t_i} of Formula 1, is affected, because tf^{nf}_{t_i} (cf. Formula 2) is reduced or even cancelled in some cases, which lowers the relevance of the node. It is therefore important to consider error correction.
Error correction in IR systems
Most error-correction approaches associated with IR systems only consider the correction of queries, such as the work of (Sitbon et al., 2007) or the "Did you mean..." feature introduced by Google, and are therefore not suited to our problem. Some of the work related to the TREC evaluation campaign considers the correction of documents.
The TREC-5 Confusion Track made available several versions of a collection of more than 55,000 documents, with error rates of 0%, 5%, and 20% respectively. The overview paper of the campaign (Kantor and Voorhees, 2000) presents the error-handling approaches followed by 5 of the participants. They observed a degradation of the performance of all IR systems in the presence of corrupted documents containing errors, essentially due to the mismatch between query terms and the terms under which the documents were indexed. The same phenomenon of increased silence at query time and loss of precision, even at low document corruption rates (3% error rate), was observed by (Ruch, 2002). The TREC-6 Spoken Document Retrieval track considers documents obtained from transcriptions, as does (Gravier et al., 2011).
Within TREC-5, three systems relied on query expansion, adding altered versions of its terms to the query. This has the drawback of introducing additional noise into the IR results as the number of term variations added to the initial query grows. Two other systems followed different approaches and tried to directly correct the errors present in the documents, which seems to bring a larger gain. This is an interesting starting point for studying the robustness of IR systems to errors. Our proposal therefore relies on two subsystems: an XML IR system based on the XFIRM model presented in Section 2.1, and an error-correction system integrated into it.
Indeed, the XFIRM model does not account for errors, which is why the term-weighting functions must be modified to take them into account. In addition, an error-correction system is needed to identify erroneous terms and the terms that should be substituted for them in the index.
Consider the following two sentences p1 and p2, belonging respectively to two XML documents ds1 and ds2 that are simple in that each contains a single element at its root, respectively nf1 and nf2: p1: "The trees are green." p2: "Green paper is made of teer."
When building the index, terms are lemmatized (reduced to a standard form: nouns in the singular, ...) and then filtered against a stop-word list. The index built from these two documents is shown in Table 1. A query with the keywords tree and paper then yields the following scores for the nodes of the two documents: s_ds1(r) = s_nf1(r) = p^r_tree × p^nf1_tree + p^r_paper × p^nf1_paper = 0.25 and s_ds2(r) = s_nf2(r) = p^r_tree × p^nf2_tree + p^r_paper × p^nf2_paper = 0.165.
As can be seen in this example, document ds1 obtains a score of 0.25, higher than the 0.165 obtained by ds2. Yet, reading the two sentences, it is clear that p1 (and thus nf1 and ds1) should answer the query less well than p2, since it does not contain paper, while p2 (and thus nf2 and ds2) does. To remedy this, the error-correction system is used to associate each error with its correction together with a confidence degree δ, such that t_err →(δ) t_cor. This makes it possible to detect that the term teer, noted t_err, is erroneous and should be replaced by the original term tree, noted t_cor.
To take into account the potential occurrences of terms obtained through correction, the formulas yielding the term weights in the document nodes (cf. Formula 2) must be modified, namely tf (cf. Formula 3), idf, and ief (cf. Formula 4).
tf^{nf}_{t_i} = \frac{|t^{nf}_{i}| + \sum_{e=1}^{|t^{nf}_{cor}|} \delta_e}{|nf|}   (7)
where \sum_{e=1}^{|t^{nf}_{cor}|} \delta_e is the number (weighted by the confidence δ_e) of erroneous terms t^{nf}_{err} whose correction t^{nf}_{cor} equals the original term t^{nf}_{i}.
idf_{t_i} = \log\frac{|D|}{|d_{t_i}| + \sum_{e=1}^{|d_{t_{cor}}|} \delta_e + 1} + 1 \qquad ief_{t_i} = \log\frac{|NF|}{|nf_{t_i}| + \sum_{e=1}^{|nf_{t_{cor}}|} \delta_e + 1} + 1   (8)
where \sum_{e=1}^{|d_{t_{cor}}|} \delta_e (resp. \sum_{e=1}^{|nf_{t_{cor}}|} \delta_e) is the number (weighted by the confidence δ_e) of documents d_{t_err} (resp. elements nf_{t_err}) containing erroneous terms whose correction d_{t_cor} (resp. nf_{t_cor}) equals the original term d_{t_i} (resp. nf_{t_i}).
Thus, returning to the previous example with a rather moderate confidence degree δ of 60% in the correction (in practice this degree is determined by the score assigned to t_cor by the error-correction system), we obtain the corrected index, computed according to Formulas 7 and 8 and shown in Table 2. A query with the keywords tree and paper then yields the following scores for the nodes of the two documents:
s^{cor}_{ds1}(r) = s^{cor}_{nf1}(r) = p^r_tree × p^nf1_tree + p^r_paper × p^nf1_paper = 0.19 and s^{cor}_{ds2}(r) = s^{cor}_{nf2}(r) = p^r_tree × p^nf2_tree + p^r_paper × p^nf2_paper = 0.25.
Consequently, document ds2 is now ranked higher than document ds1 (even though the confidence degree chosen for the corrections is relatively low), which is correct given that ds2 is the more relevant of the two documents. The proposed approach served as the basis for the implementation of our prototypes SnAIRS/SAIRS (Spelling (non-)Aware Information Retrieval System), evaluated below.
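The corrected term frequency of Formula (7) can be illustrated with a few lines of Python; the correction mapping (erroneous token -> (corrected term, confidence δ)) is an assumed data structure standing in for the output of the spelling-correction subsystem, not the prototype's actual interface.

def corrected_tf(term, leaf_terms, corrections):
    """Term frequency in a leaf node when corrected occurrences are counted (Formula 7)."""
    exact = sum(1 for tok in leaf_terms if tok == term)
    weighted = 0.0
    for tok in leaf_terms:
        if tok in corrections:
            corrected_term, delta = corrections[tok]
            if corrected_term == term:
                weighted += delta      # each corrected occurrence counts with its confidence delta
    return (exact + weighted) / len(leaf_terms)

# With the running example: "teer" is corrected to "tree" with confidence 0.6.
print(corrected_tf("tree", ["green", "paper", "teer"], {"teer": ("tree", 0.6)}))  # ~0.2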
Evaluation
Our SnAIRS/SAIRS prototypes were evaluated on the document collection of the ad-hoc track of the INEX evaluation campaign. Error correction at indexing time thus yields increased precision at the first recall levels. These results are promising (many parameters can still be improved), although they remain for now relatively far from the INEX Top-10 (Kamps et al., 2009), whose more mature systems integrate mechanisms such as query expansion, allowing them to better handle "poor" queries composed of a single keyword.
Conclusion and perspectives
In this paper we considered a problem that affects, across the board, all applications that have to handle information of variable quality. We considered the case of textual information, which is often assumed to be "clean" by default. In Section 3 we proposed a solution to this problem for structured IR systems, a solution that could be extended to most IR systems since it consists in integrating an error-correction system into the indexing process. We first identified the specific constraints that IR systems impose on error-correction systems, and we evaluated such systems in (Renard et al., 2011). Error correction at indexing time offers advantages (cf. Table 3) and makes it possible to build indexes that are more representative of the documents' actual content, which leads to better results (cf. Table 4) than without error correction. Moreover, the Wikipedia-based document collection contains only few errors, and it would be interesting to deliberately corrupt it in order to better highlight the contribution of our proposal.
Figure 1: XML document (left) and its tree representation (right).
Table 1: Index of documents ds1 and ds2 (the idf and ief factors are equal because each document contains only one element).
Table 2: Modified index of documents ds1 and ds2 (the idf and ief factors are equal because each document contains only one element).
Table 3: Properties of SnAIRS (without correction) and SAIRS (with correction).
Although the INEX document collection contains only a low rate of errors (the documents, taken from Wikipedia, are of relatively good quality), Table 3 shows that the volume occupied by the index is much larger for the same documents when they contain errors, even in small quantities. This behavior can be explained by the fact that the errors constitute so many term variations, which increase the number of distinct entries in the index. One might expect a smaller index to allow faster query execution. Although this is not visible in Table 3 (we observe a degradation rather than a gain), it is indeed the case, but it is offset by the larger number of matches in the index and hence the larger number of results to return, which takes more time.
Index size and response time are not the only factors affected by errors; so is the relevance of the results. The systems were therefore evaluated on the focused task, the most classical one, dedicated to retrieving the most relevant elements (parts of documents) in the first ranks of the query results. This task is evaluated in terms of interpolated precision at 1% recall (iP[.01], the main metric), as well as the mean of the interpolated precisions (MAiP) over the 101 recall points.
Table 4: Results of SnAIRS (without correction) and SAIRS (with correction).
Participant | iP[.00] | iP[.01] | iP[.05] | iP[.10] | MAiP
SnAIRS | 0.3073 | 0.2894 | 0.1788 | 0.1501 | 0.0499
SAIRS | 0.3592 | 0.3141 | 0.1967 | 0.1694 | 0.0598
Table 4 shows that SnAIRS obtains lower precision than SAIRS at the different recall levels considered by the INEX campaign, and notably for the official iP[.01] measure.
Proposal: Building corrected indexes. Our approach consists in correcting errors during the indexing phase of the IR system, more precisely during the analysis of the textual content of the documents.
4. TREC: Text REtrieval Conference
ATKINSON, K. (2011). Correcteur Aspell. http://aspell.net. [accessed 15/01/2012].
GRAVIER, G., GUINAUDEAU, C., LECORVÉ, G., and SÉBILLOT, P. (2011). Exploiting speech for automatic TV delinearization: From streams to cross-media semantic navigation. EURASIP JIVP, 2011(0).
KAMPS, J., GEVA, S., TROTMAN, A., WOODLEY, A., and KOOLEN, M. (2009). Overview of the INEX Ad Hoc Track. In GEVA, S., KAMPS, J., and TROTMAN, A., editors, Advances in Focused Retrieval, volume 5631 of Lecture Notes in Computer Science, pages 1-28. Springer-Verlag.
KANTOR, P. B. and VOORHEES, E. M. (2000). TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text. Information Retrieval, 2(2):165-176.
RENARD, A., CALABRETTO, S., and RUMPLER, B. (2011). An evaluation model for systems and resources employed in the correction of errors in textual documents. In MORVAN, F., TJOA, A. M., and WAGNER, R. R., editors, 8th International Workshop on Text-based Information Retrieval, in conjunction with the 22nd International Conference DEXA 2011, pages 160-164, Toulouse, France. IEEE Computer Society.
RUCH, P. (2002). Using contextual spelling correction to improve retrieval effectiveness in degraded text collections. In 19th International Conference on Computational Linguistics, volume 1, page 7. Association for Computational Linguistics.
SALTON, G. (1971). The SMART Retrieval System - Experiments in Automatic Document Processing. Prentice Hall.
SAUVAGNAT, K. and BOUGHANEM, M. (2005). Using a Relevance Propagation Method for Adhoc and Heterogeneous Tracks at INEX 2004. In FUHR, N., LALMAS, M., MALIK, S., and SZLAVIK, Z., editors, Advances in XML Information Retrieval, volume 3493 of Lecture Notes in Computer Science, pages 499-532. Springer-Verlag.
SITBON, L., BELLOT, P., and BLACHE, P. (2007). Traitements phrastiques phonétiques pour la réécriture de phrases dysorthographiées. In 14ème conférence TALN, Toulouse, France.
SUBRAMANIAM, L. V., ROY, S., FARUQUIE, T. A., and NEGI, S. (2009). A Survey of Types of Text Noise and Techniques to Handle Noisy Text. Language, pages 115-122.
VARNHAGEN, C. K., MCFALL, G. P., FIGUEREDO, L., TAKACH, B. S., DANIELS, J., and CUTHBERTSON, H. (2009). Spelling and the Web. Journal of Applied Developmental Psychology, 30(4):454-462.
VOORHEES, E. M., GAROFOLO, J., and SPARCK JONES, K. (2000). TREC-6 Spoken Document Retrieval Track. Bulletin of the American Society for Information Science and Technology, 26(5):18-19. |
5,144,216 | NICT@WMT09: Model Adaptation and Transliteration for Spanish-English SMT | This paper describes the NICT statistical machine translation (SMT) system used for the WMT 2009 Shared Task (WMT09) evaluation. We participated in the Spanish-English translation task. The focus of this year's participation was to investigate model adaptation and transliteration techniques in order to improve the translation quality of the baseline phrasebased SMT system. | [
5921061,
2330566,
2650085,
26255400,
8884845,
7164502,
38407095,
6286044,
5219389
] | NICT@WMT09: Model Adaptation and Transliteration for Spanish-English SMT
Association for Computational Linguistics. Copyright © 2009 Association for Computational Linguistics. 30-31 March 2009.
Michael Paul michael.paul@nict.go.jp
Language Translation Group
MASTAR Project National Institute of Information and Communications Technology
Andrew Finch
Language Translation Group
MASTAR Project National Institute of Information and Communications Technology
Eiichiro Sumita
Language Translation Group
MASTAR Project National Institute of Information and Communications Technology
NICT@WMT09: Model Adaptation and Transliteration for Spanish-English SMT
Proceedings of the Fourth Workshop on Statistical Machine Translation
Athens, Greece. Association for Computational Linguistics, 30-31 March 2009.
This paper describes the NICT statistical machine translation (SMT) system used for the WMT 2009 Shared Task (WMT09) evaluation. We participated in the Spanish-English translation task. The focus of this year's participation was to investigate model adaptation and transliteration techniques in order to improve the translation quality of the baseline phrasebased SMT system.
Introduction
This paper describes the NICT statistical machine translation (SMT) system used for the shared task of the Fourth Workshop on Statistical Machine Translation. We participated in the Spanish-English translation task under the Constrained Condition. For the training of the SMT engines, we used two parallel Spanish-English corpora provided by the organizers: the Europarl (EP) corpus (Koehn, 2005), which consists of 1.4M parallel sentences extracted from the proceedings of the European Parliament, and the News Commentary (NC) corpus (Callison-Burch et al., 2008), which consists of 74K parallel sentences taken from major news outlets like BBC, Der Spiegel, and Le Monde.
In order to adapt SMT systems to a specific domain, recent research focuses on model adaptation techniques that adjust their parameters based on information about the evaluation domain (Foster and Kuhn, 2007;Finch and Sumita, 2008a). Statistical models can be trained on in-domain and out-of-domain data sets and combined at run-time using probabilistic weighting between domain-specific statistical models. As the official WMT09 evaluation testset consists of documents taken from the news domain, we applied statistical model adaptation techniques to combine translation models (tm), language models (lm) and dis-tortion models (dm) trained on (a) the in-domain NC corpus and (b) the out-of-domain EP corpus (cf. Section 2).
One major problem in the given translation task was the large amount of out-of-vocabulary (OOV) words, i.e., source language words that do not occur in the training corpus. For unknown words, no translation entry is available in the statistical translation model (phrase -table). As a result, these OOV words cannot be translated. Dealing with languages with a rich morphology like Spanish and having a limited amount of bilingual resources make this problem even more severe.
There have been several efforts in dealing with OOV words to improve translation quality. In addition to parallel text corpora, external bilingual dictionaries can be exploited to reduce the OOV problem (Okuma et al., 2007). However, these approaches depend on the coverage of the utilized external dictionaries.
Data sparseness problems due to inflectional variations were previously addressed by applying word transformations using stemming or lemmatization (Popovic and Ney, 2005;Gupta and Federico, 2006). A tight integration of morphosyntactic information into the translation model was proposed by (Koehn and Hoang, 2007) where lemma and morphological information are translated separately, and this information is combined on the output side to generate the translation. However, these approaches still suffer from the data sparseness problem, since lemmata and inflectional forms never seen in the training corpus cannot be translated.
In order to generate translations for unknown words, previous approaches focused on transliteration methods, where a sequence of characters is mapped from one writing system into another. For example, in order to translate names and technical terms, (Knight and Graehl, 1997) introduced a probabilistic model that replaces Japanese katakana 1 words with phonetically equivalent English words. More recently, (Finch and Sumita, 2008b) proposed a transliteration method that is based directly on techniques developed for phrasebased SMT, and transforms a character sequence from one language into another in a subwordlevel, character-based manner. We extend this approach by exploiting the phrase-table of the baseline SMT system to train a phrase-based transliteration model that generates English translations of Spanish OOV words as described in Section 3. The effects of the proposed techniques are investigated in detail in Section 4.
Model Adaptation
Phrase-based statistical machine translation engines use multiple statistical models to generate a translation hypothesis in which (1) the translation model ensures that the source phrases and the selected target phrases are appropriate translations of each other, (2) the language model ensures that the target language is fluent, (3) the distortion model controls the reordering of the input sentence, and (4) the word penalty ensures that the translations do not become too long or too short. During decoding, all model scores are weighted and combined to find the most likely translation hypothesis for a given input sentence .
In order to adapt SMT systems to a specific domain, separate statistical models can be trained on parallel text corpora taken from the respective domain (in-domain) and additional out-ofdomain language resources. The models are then combined using mixture modeling (Hastie et al., 2001), i.e., each model is weighted according to its fit with in-domain development data sets and the linear combination of the respective scores is used to find the best translation hypothesis during the decoding of unseen input sentences.
In this paper, the above model adaptation technique is applied to combine the NC and the EP language resources provided by the organizers for the Spanish-English translation task. As the WMT09 evaluation testset consists of documents taken from the news domain, we used the NC corpus to train the in-domain models and the EP corpus to train the out-of-domain component models. Using mixture modeling, the above mentioned statistical models are combined where each component model is optimized separately. Weight opti-mization is carried out using a simple grid-search method. At each point on the grid of weight parameter values, the translation quality of the combined weighted component models is evaluated for development data sets taken from (a) the NC corpus and (b) from the EP corpus.
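A minimal sketch of this kind of grid search for a single interpolation weight between an in-domain and an out-of-domain model; the function names and the reranking setup are illustrative assumptions, not the actual NICT implementation.

def grid_search_weight(dev_items, score_in, score_out, evaluate, steps=11):
    """Pick the in-domain/out-of-domain interpolation weight that maximizes dev-set quality.

    dev_items: list of (source_sentence, candidate_translations)
    score_in / score_out: functions (source, hypothesis) -> model score
    evaluate: function mapping the chosen 1-best hypotheses to a quality score (e.g. BLEU)
    """
    best_weight, best_quality = None, float("-inf")
    for step in range(steps):
        w = step / (steps - 1)                      # candidate weight on the grid
        one_best = [max(cands, key=lambda h: w * score_in(src, h) + (1 - w) * score_out(src, h))
                    for src, cands in dev_items]
        quality = evaluate(one_best)
        if quality > best_quality:
            best_weight, best_quality = w, quality
    return best_weight, best_quality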
Transliteration
Source language input words that cannot be translated by the standard phrase-based SMT models are either left untranslated or simply removed from the translation output. Common examples are named entities such as personal names or technical terms, but also include content words like common nouns or verbs that are not covered by the training data. Such unknown occurrences could benefit from being transliterated into the MT system's output during translation of orthographically related languages like Spanish and English.
In this paper, we apply a phrase-based transliteration approach similar to the one proposed in (Finch and Sumita, 2008b). The transliteration method is based directly on techniques developed for phrase-based SMT and treats the task of transforming a character sequence from one language into another as a character-level translation process. In contrast to (Finch and Sumita, 2008b) where external dictionaries and inter-language links in Wikipedia 2 are utilized, the transliteration training examples used for the experiments in Section 4 are extracted directly from the phrasetable of the baseline SMT systems trained on the provided data sets. For each phrase-table entry, corresponding word pairs are identified according to a string similarity measure based on the editdistance (Wagner, 1974) that is defined as the sum of the costs of insertion, deletion, and substitution operations required to map one character sequence into the other and can be calculated by a dynamic programming technique (Cormen et al., 1989). In order to reduce noise in the training data, only word pairs whose word length and similarity are above a pre-defined threshold are utilized for the training of the transliteration model. The obtained transliteration model is applied as a post-process filter to the SMT decoding process, i.e.. all source language words that could not be translated using the SMT engine are replaced with the corresponding transliterated word forms in order to obtain the final translation output.
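The edit-distance filtering used to mine transliteration training pairs from the phrase table can be sketched as follows; the length and similarity thresholds are placeholder values, since the paper only states that such thresholds exist.

def edit_distance(a, b):
    """Levenshtein distance computed by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def mine_transliteration_pairs(phrase_table, min_len=4, max_dist_ratio=0.3):
    """Collect (source, target) word pairs whose spelling is similar enough to train on."""
    pairs = []
    for src_phrase, tgt_phrase in phrase_table:          # phrase_table: iterable of (src, tgt) strings
        for s in src_phrase.split():
            for t in tgt_phrase.split():
                if len(s) >= min_len and len(t) >= min_len:
                    if edit_distance(s.lower(), t.lower()) <= max_dist_ratio * max(len(s), len(t)):
                        pairs.append((s, t))
    return pairs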
Experiments
The effects of model adaptation and transliteration techniques were evaluated using the Spanish-English language resources summarized in Table 1. In addition, the characteristics of this year's testset are given in Table 2. The sentence length is given as the average number of words per sentence. The OOV word figures give the percentage of words in the evaluation data set that do not appear in the NC/EP training data. In order to get an idea how difficult the translation task may be, we also calculated the language perplexity of the respective evaluation data sets according to 5-gram target language models trained on the NC/EP data sets.
Concerning the development sets, the news-dev2009 data taken from the same news sources as the evaluation set of the shared task was used for the tuning of the SMT engines, and the de-vtest2006 data taken from the EP corpus was used for system parameter optimization. For the evaluation of the proposed methods, we used the testsets of the Second Workshop on SMT (nc-test2007 for NC and test2007 for EP). All data sets were casesensitive with punctuation marks tokenized. The numbers in Table 1 indicate that the characteristics of this year's testset differ largely from testsets of previous evaluation campaigns. The NC devset (2,438/1,378 OOVs) contains twice as many untranslatable Spanish words as the NC evalset (1,168/73 OOVs) and the EP devset (912/63 OOVs). In addition, the high language perplexity figures for this year's testset show that the translation quality output for both baseline systems is expected to be much lower than those for the EP evaluation data sets. In this paper, translation quality is evaluated according to (1) the BLEU metrics which calculates the geometric mean of ngram precision by the system output with respect to reference translations (Papineni et al., 2002), and (2) the METEOR metrics that calculates unigram overlaps between translations (Banerjee and Lavie, 2005). Scores of both metrics range between 0 (worst) and 1 (best) and are displayed in percent figures.
Baseline
Our baseline system is a fairly typical phrase-based machine translation system (Finch and Sumita, 2008a) built within the framework of a feature-based exponential model containing the following features:
• Source-target phrase translation probability
• Inverse phrase translation probability
• Source-target lexical weighting probability
• Inverse lexical weighting probability
• Phrase penalty
• Language model probability
• Lexical reordering probability
• Simple distance-based distortion model
• Word penalty
For the training of the statistical models, standard word alignment (GIZA++ (Och and Ney, 2003)) and language modeling (SRILM (Stolcke, 2002)) tools were used. We used 5-gram language models trained with modified Kneser-Ney smoothing. The language models were trained on the target side of the provided training corpora. Minimum error rate training (MERT) with respect to the BLEU score was used to tune the decoder's parameters, using the technique proposed in (Och, 2003). For the translation, the in-house multi-stack phrase-based decoder CleopATRa was used.
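As an illustration of how such feature functions are combined, the sketch below scores one hypothesis log-linearly from per-feature values and weights; the numbers are invented, and in practice the weights would come from MERT rather than being set by hand.

def hypothesis_score(feature_values, weights):
    """Log-linear (weighted-sum) combination of the decoder's feature functions for one hypothesis."""
    return sum(weights[name] * value for name, value in feature_values.items())

# Illustrative feature values (log-probabilities and penalties) with placeholder weights.
features = {"phrase_trans": -4.2, "inv_phrase_trans": -3.9, "lex_weight": -5.1,
            "inv_lex_weight": -4.8, "phrase_penalty": -3.0, "lm": -12.4,
            "reordering": -1.7, "distortion": -2.0, "word_penalty": -9.0}
weights = {name: 0.1 for name in features}   # MERT would tune these on a development set
print(hypothesis_score(features, weights))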
The automatic evaluation scores of the baseline systems trained on (a) only the NC corpus and (b) only on the EP corpus are summarized in Table 3.
Effects of Model Adaptation
In order to investigate the effect of model adaptation, each model component was optimized separately using the method described in Section 2.
Effects of Transliteration
In order to investigate the effects of transliteration, we trained three different transliteration models using the phrase-tables of the baseline systems trained on (a) only the NC corpus, (b) only the EP corpus, and (c) the merged corpus (NC+EP). The performance of these phrase-based transliteration models is evaluated on 2000 randomly selected transliteration examples. Table 5 summarizes the character-based automatic evaluation scores for the word error rate (WER) metric, i.e., the edit distance between the system output and the closest reference translation (Niessen et al., 2000), as well as the BLEU and METEOR metrics. The best performance is achieved when training examples from both domains are exploited to transliterate unknown Spanish words into English. Therefore, the NC+EP transliteration model was applied to the translation outputs of all mixture models described in Section 4.2. The effects of the transliteration post-process are summarized in Table 6. Transliteration consistently improves the translation quality of all mixture models, although the gains obtained for the NC task (BLEU: +1.3%, METEOR: +1.3%) are much larger than those for the EP task (BLEU: +0.1%, METEOR: +0.2%), which is due to the larger amount of untranslatable words in the NC evaluation data set.
WMT09 Testset Results
Based on the automatic evaluation results presented in the previous sections, we selected the SMT engine based on the tm+lm+dm weights optimized on the NC devset as the primary run for our testset run submission. All other model weight combinations were submitted as contrastive runs. The BLEU scores of these runs are listed in Table 7 and confirm the results obtained for the above experiments, i.e., the best performing system is the one based on the mixture models using separately optimized weights in combination with the transliteration of untranslatable Spanish words using the phrase-based transliteration model trained on all available language resources.
Conclusion
The work for this year's shared task focused on the task of effectively utilizing out-of-domain language resources and handling OOV words to improve translation quality. Overall our experiments show that the incorporation of mixture models and phrase-based transliteration techniques largely out-performed standard phrase-based SMT engines gaining a total of 2.4% in BLEU and 2.1% in METEOR for the news domain.
Table 1: Language Resources
Corpus | Language | Statistic | Train | Dev | Eval
NC | Spanish | sentences | 74K | 2,001 | 2,007
NC | Spanish | words | 2,048K | 49,116 | 56,081
NC | Spanish | vocab | 61K | 9,047 | 8,638
NC | Spanish | length | 27.6 | 24.5 | 27.9
NC | Spanish | OOV (%) | - | 5.2 / 2.9 | 1.4 / 0.9
NC | English | sentences | 74K | 2,001 | 2,007
NC | English | words | 1,795K | 46,524 | 49,693
NC | English | vocab | 47K | 8,110 | 7,541
NC | English | length | 24.2 | 23.2 | 24.8
NC | English | OOV (%) | - | 5.2 / 2.9 | 1.2 / 0.9
NC | English | perplexity | - | 349 / 381 | 348 / 458
EP | Spanish | sentences | 1,404K | 1,861 | 2,000
EP | Spanish | words | 41,003K | 50,216 | 61,293
EP | Spanish | vocab | 170K | 7,422 | 8,251
EP | Spanish | length | 29.2 | 27.0 | 30.6
EP | Spanish | OOV (%) | - | 2.4 / 0.1 | 2.4 / 0.2
EP | English | sentences | 1,404K | 1,861 | 2,000
EP | English | words | 39,354K | 48,663 | 59,145
EP | English | vocab | 121K | 5,869 | 6,428
EP | English | length | 28.0 | 26.1 | 29.6
EP | English | OOV (%) | - | 1.8 / 0.1 | 1.9 / 0.1
EP | English | perplexity | - | 210 / 72 | 305 / 125
Table 2: Testset 2009
Corpus | Test
NC Spanish sentences | 3,027
NC Spanish words | 80,591
NC Spanish vocab | 12,616
NC Spanish length | 26.6
Table 3: Baseline Performance
System | NC Eval BLEU | NC Eval METEOR | EP Eval BLEU | EP Eval METEOR
baseline | 17.56 | 40.52 | 33.00 | 56.50
Table 4: Effects of Model Adaptation
weight optimization | NC Eval BLEU | NC Eval METEOR | EP Eval BLEU | EP Eval METEOR
- | 17.92 | 40.72 | 34.00 | 58.20
tm | 18.13 | 40.95 | 34.05 | 58.23
tm+lm | 18.25 | 41.23 | 34.12 | 58.22
tm+dm | 18.36 | 41.06 | 34.24 | 58.34
tm+lm+dm | 18.65 | 41.35 | 34.35 | 58.36
Table 5: Transliteration Performance (character-based)
Training Data | WER | BLEU | METEOR
NC | 13.10 | 83.62 | 86.74
EP | 11.76 | 85.93 | 87.89
NC+EP | 11.72 | 86.08 | 87.89
Table 6: Effects of Transliteration
weight optimization | NC Eval BLEU | NC Eval METEOR | EP Eval BLEU | EP Eval METEOR
tm | 19.14 | 42.39 | 34.11 | 58.46
tm+lm | 19.46 | 42.65 | 34.16 | 58.44
tm+dm | 19.77 | 42.35 | 34.38 | 58.57
tm+lm+dm | 19.95 | 42.64 | 34.48 | 58.60
Table 7: Testset 2009 Performance
weight optimization | NC Eval BLEU | EP Eval BLEU
tm | 21.07 | 20.81
tm+lm | 20.95 | 20.59
tm+dm | 21.45 | 21.32
tm+lm+dm | 21.67* | 21.27
A special syllabary alphabet used to write down foreign names or loan words.
http://www.wikipedia.org
METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. S Banerjee, A Lavie, Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT. the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MTAnn Arbor, MichiganS. Banerjee and A. Lavie. 2005. METEOR: An Auto- matic Metric for MT Evaluation with Improved Cor- relation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Eval- uation Measures for MT, pages 65-72, Ann Arbor, Michigan.
Further Meta-Evaluation of Machine Translation. C Callison-Burch, C Fordyce, P Koehn, C Monz, J Schroeder, Proceedings of the 3rd Workshop on SMT. the 3rd Workshop on SMTColumbus, OhioC. Callison-Burch, C. Fordyce, P. Koehn, C. Monz, and J. Schroeder. 2008. Further Meta-Evaluation of Ma- chine Translation. In Proceedings of the 3rd Work- shop on SMT, pages 70-106, Columbus, Ohio.
Introduction to Algorithms. H Cormen, C Leiserson, L Rivest, MIT PressH. Cormen, C. Leiserson, and L. Rivest. 1989. Intro- duction to Algorithms. MIT Press.
Dynamic Model Interpolation for Statistical Machine Translation. A Finch, E Sumita, Proceedings of the 3rd Workshop on SMT. the 3rd Workshop on SMTColumbus, OhioA. Finch and E. Sumita. 2008a. Dynamic Model Inter- polation for Statistical Machine Translation. In Pro- ceedings of the 3rd Workshop on SMT, pages 208- 215, Columbus, Ohio.
Phrase-based Machine Transliteration. A Finch, E Sumita, Proceedings of the IJC-NLP. the IJC-NLPHyderabad, IndiaA. Finch and E. Sumita. 2008b. Phrase-based Ma- chine Transliteration. In Proceedings of the IJC- NLP, pages 13-18, Hyderabad, India.
Mixture-Model Adaptation for SMT. G Foster, R Kuhn, Proceedings of the 2nd Workshop on SMT. the 2nd Workshop on SMTPrague, Czech RepublicG. Foster and R. Kuhn. 2007. Mixture-Model Adapta- tion for SMT. In Proceedings of the 2nd Workshop on SMT, pages 128-135, Prague, Czech Republic.
Exploiting Word Transformation in SMT from Spanish to English. D Gupta, M Federico, Proceedings of the EAMT. the EAMTOslo, NorwayD. Gupta and M. Federico. 2006. Exploiting Word Transformation in SMT from Spanish to English. In Proceedings of the EAMT, pages 75-80, Oslo, Nor- way.
T Hastie, R Tibshirani, J Friedman, The Elements of Statistical Learning. New YorkSpringerT. Hastie, R. Tibshirani, and J. Friedman. 2001. The Elements of Statistical Learning. Springer, New York.
Machine Transliteration. K Knight, J Graehl, Proceedings of the 35th ACL. the 35th ACLMadrid, SpainK. Knight and J. Graehl. 1997. Machine Translitera- tion. In Proceedings of the 35th ACL, pages 128- 135, Madrid, Spain.
Factored Translation Models. P Koehn, H Hoang, Proceedings of the EMNLP-CoNLL. the EMNLP-CoNLLPrague, Czech RepublicP. Koehn and H. Hoang. 2007. Factored Transla- tion Models. In Proceedings of the EMNLP-CoNLL, pages 868-876, Prague, Czech Republic.
Statistical Phrase-Based Translation. P Koehn, F J Och, D Marcu, Proceedings of the HLT-NAACL. the HLT-NAACLEdmonton, CanadaP. Koehn, F.J. Och, and D. Marcu. 2007. Statisti- cal Phrase-Based Translation. In Proceedings of the HLT-NAACL, pages 127-133, Edmonton, Canada.
Europarl: A Parallel Corpus for Statistical Machine Translation. P Koehn, Proceedings of the MT Summit X. the MT Summit XPhuket, ThailandP. Koehn. 2005. Europarl: A Parallel Corpus for Sta- tistical Machine Translation. In Proceedings of the MT Summit X, pages 79-86, Phuket, Thailand.
An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. S Niessen, F J Och, G Leusch, H Ney, Proc. of the 2nd LREC. of the 2nd LRECAthens, GreeceS. Niessen, F.J. Och, G. Leusch, and H. Ney. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. In Proc. of the 2nd LREC, pages 39-45, Athens, Greece.
A Systematic Comparison of Various Statistical Alignment Models. F J Och, H Ney, Computational Linguistics. 291F.J. Och and H. Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computa- tional Linguistics, 29(1):19-51.
Minimum Error Rate Training in Statistical Machine Translation. F J Och, Proceedings of the 41st ACL. the 41st ACLSapporo, JapanF.J. Och. 2003. Minimum Error Rate Training in Sta- tistical Machine Translation. In Proceedings of the 41st ACL, pages 160-167, Sapporo, Japan.
Introducing Translation Dictionary into phrase-based SMT. H Okuma, H Yamamoto, E Sumita, Proceedings of MT Summit XI. MT Summit XICopenhagen, DenmarkH. Okuma, H. Yamamoto, and E. Sumita. 2007. In- troducing Translation Dictionary into phrase-based SMT. In Proceedings of MT Summit XI, pages 361- 368, Copenhagen, Denmark.
BLEU: a Method for Automatic Evaluation of Machine Translation. K Papineni, S Roukos, T Ward, W Zhu, Proceedings of the 40th ACL. the 40th ACLPhiladelphia, USAK. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a Method for Automatic Evaluation of Ma- chine Translation. In Proceedings of the 40th ACL, pages 311-318, Philadelphia, USA.
Exploiting Phrasal Lexica and Additional Morpho-synatctic Language Resources for SMT with Scarce Training Data. M Popovic, H Ney, Proceedings of the EAMT. the EAMTBudapest, HungaryM. Popovic and H. Ney. 2005. Exploiting Phrasal Lex- ica and Additional Morpho-synatctic Language Re- sources for SMT with Scarce Training Data. In Pro- ceedings of the EAMT, pages 212-218, Budapest, Hungary.
SRILM -an extensible language modeling toolkit. A Stolcke, Proceedings of ICSLP. ICSLPDenverA. Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proceedings of ICSLP, pages 901-904, Denver.
The string-to-string correction problem. R W Wagner, Journal of the ACM. 211R.W. Wagner. 1974. The string-to-string correction problem. Journal of the ACM, 21(1):169-173. |
250,390,575 | UIC-NLP at SemEval-2022 Task 5: Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes | Misogynistic memes are rampant on social media, and often convey their messages using multimodal signals (e.g., images paired with derogatory text or captions). However, to date very few multimodal systems have been leveraged for the detection of misogynistic memes. Recently, researchers have turned to contrastive learning solutions for a variety of problems. Most notably, OpenAI's CLIP model has served as an innovative solution for a variety of multimodal tasks. In this work, we experiment with contrastive learning to address the detection of misogynistic memes within the context of SemEval-2022 Task 5. Although our model does not achieve top results, these experiments provide important exploratory findings for this task. We conduct a detailed error analysis, revealing promising clues and offering a foundation for follow-up work. | [
218973755,
102350787
] | UIC-NLP at SemEval-2022 Task 5: Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes
July 14-15, 2022
Charic Farinango Cuervo
Department of Computer Science
Natural Language Processing Laboratory
University of Illinois at Chicago
Natalie Parde parde@uic.edu
Department of Computer Science
Natural Language Processing Laboratory
University of Illinois at Chicago
UIC-NLP at SemEval-2022 Task 5: Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
July 14-15, 2022
Misogynistic memes are rampant on social media, and often convey their messages using multimodal signals (e.g., images paired with derogatory text or captions). However, to date very few multimodal systems have been leveraged for the detection of misogynistic memes. Recently, researchers have turned to contrastive learning solutions for a variety of problems. Most notably, OpenAI's CLIP model has served as an innovative solution for a variety of multimodal tasks. In this work, we experiment with contrastive learning to address the detection of misogynistic memes within the context of SemEval-2022 Task 5. Although our model does not achieve top results, these experiments provide important exploratory findings for this task. We conduct a detailed error analysis, revealing promising clues and offering a foundation for follow-up work.
Introduction
Hateful expressions on the Internet are widespread and mostly based on religion, gender, race, or physical attributes (Lippe et al., 2020). Such language exacerbates damaging societal problems such as racism, sexism, and other types of discrimination. In particular, misogynistic abuse has become very prevalent and poses a serious problem in cyberspace (Citron, 2014). Although extensive research on hate speech and misogyny has been conducted (Kumar et al., 2020), it has thus far centered on the analysis of text or images alone.
Memes, or social media images that communicate messages through the creative use of imagery understood to carry specific rhetorical value among members of a community, are a common platform for misogyny and hateful expressions. Approximately 78% of women use image-based social media multiple times per day (compared to 65% of men) (Fersini et al., 2022), making exposure to this harmful content alarmingly common.
Classifying memes is challenging because of the multimodal interplay between image and text, as well as their region-specific interpretation, and existing multimodal approaches do not perform very well in the classification of hateful memes (Kiela et al., 2020). The underlying goal of SemEval-2022 Task 5 was to address this research gap, tackling the challenge of identifying misogynistic memes using multimodal data by inviting researchers to experiment with a variety of approaches.
Our team, UIC-NLP, investigated an adapted version of the Contrastive Language-Image Pretraining (CLIP) technique recently established and applied with success to numerous other tasks (Radford et al., 2021; Conde and Turgutlu, 2021; Galatolo et al., 2021). Our model, recorded on the leaderboard in the 6th leaderboard cluster under the OpenReview username "Charicfc," ranked 71st out of 83 participants and obtained an average F1 score of 0.62. We analyze our model's predictions and offer insights and recommendations for improving upon it in the future.
Background
SemEval-2022 Task 5 (MAMI)
SemEval-2022 Task 5 was created to address the rise in the use of memes as a form of hate against women, which contributes to sexual stereotyping and gender inequality. The data used for this challenge is comprised of English-language memes collected from the web and manually annotated via crowdsourcing platforms. Each data sample has an image, its raw text in English (transcript), a binary annotation indicating the presence of misogyny, and (if applicable) the type of misogyny (shaming, objectification, stereotype, or violence). Our model addresses Subtask 1, which focuses on binary classification of memes as misogynist or not misogynist. The output for a given sample is a confidence score for the predicted class. Although no approaches have sought to perform multimodal detection of misogynistic memes to date, we review work on classifying misogynistic expressions in text and multimodal classification of hateful memes in the following sections.
Identifying Misogyny in Text
Previous approaches in the related IberEval 2018 Automatic Misogyny Identification (AMI) task for misogyny detection in tweets (Fersini et al., 2018) leveraged statistical classification models, including variations of Support Vector Machines (SVM) (Nina-Alcocer, 2018; Pamungkas et al., 2018). Other recent work towards identifying misogyny in text has leveraged CNNs, LSTMs, and combined representations from models like BERT (Basile et al., 2019; Parikh et al., 2021). More recently, in the Evalita 2020 AMI challenge, the best results were obtained by ensembles of fine-tuned BERT models (Lees et al., 2020).
Multimodal Classification of Hateful Memes
Multimodal learning has recently gained attention due to the poor performance of existing (unimodal) models on multimodal tasks (Lippe et al., 2020), with most recent (context-aware) solutions employing neural architectures such as CNNs, RNNs, and Transformer-based attention models like BERT (Afridi et al., 2020; Modi and Parde, 2019; Parde, 2020). Although existing work on hate speech detection has largely relied on text-based features, this has gradually started to shift with the introduction of multimodal datasets (Lippe et al., 2020). In general, the focus has shifted to BERT-based models (Afridi et al., 2020). In Facebook AI's Hateful Memes Challenge (Kiela et al., 2021), the top two models involved an ensemble of four Transformer-based models (Zhu, 2020; Muennighoff, 2020).
System Overview
Our system architecture is a variation of OpenAI's CLIP model (Radford et al., 2021). We refer readers to the original paper for a detailed overview and illustrations of the architecture, and summarize it here. Inspired by the idea of usability and generality, CLIP uses a contrastive learning objective to build a joint visual-linguistic space for learning visual concepts from natural language supervision. We describe the various components of our architecture below.
Encoders
ResNet-50: OpenAI's best CLIP performance was achieved using a pretrained encoder which the authors called ViT-L/14@336px. In our case, we experimented with several pretrained image encoders. Although Shariatnia (2021) used ResNet-50 by default, we also considered ViT-B/16@224px, ViT-L/16@224px, and ViT-L/16@384px (Dosovitskiy et al., 2020). Nonetheless, we empirically determined that ResNet-50 (He et al., 2016), a deep CNN trained on more than a million images from the ImageNet database with an objective of classifying images into 1000 categories, yielded the best performance.
DistilBERT: OpenAI's CLIP used a modified Transformer to encode text. We instead used a lighter version of BERT (Devlin et al., 2018) called DistilBERT (Sanh et al., 2019), which Shariatnia (2021) also uses. BERT is a large Transformer-based language model that has achieved strong performance in many NLP tasks, and DistilBERT uses a process known as knowledge distillation to reproduce its behavior by training a smaller model to replicate its probability distributions across class predictions.
Learning Objective
Radford et al. (2021) introduced contrastive objectives as a mechanism for learning multimodal representations from raw images and paired descriptions. In essence, the contrastive objective seeks to learn a multimodal embedding space where image embeddings and text embeddings are mapped to the same point if they describe the same thing, and different points otherwise. Cosine similarity is used to measure the distance between embeddings. Figure 1 shows the "logits" matrix obtained after applying the dot product between images and text embeddings. Each cell in the matrix (logits) is a measure of similarity between an image and a text caption in the dataset (N² pairs). It is expected that cells along the diagonal (N pairs), which contain the similarity between an image and its actual text, are maximized. Simultaneously, the N² - N incorrect pairs should be minimized.
The targets for the images and texts where the similarities are maximum are obtained by computing dot products between embedding matrices. These are averaged and passed through a SoftMax, with the result being a target matrix where the diagonal is close to 1.0 and the other pairs are close to 0.0. The loss for images and texts is obtained by calculating the cross-entropy between the target and the logits matrix.
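To make the objective concrete, the following is a minimal PyTorch sketch of this CLIP-style loss, modelled loosely on the open-source implementation the system is adapted from (Shariatnia, 2021); the function name, the temperature default, and the exact form of the soft targets are illustrative assumptions rather than the authors' code.

import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=1.0):
    # normalise so that dot products are cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # logits[i, j] = similarity between image i and caption j (an N x N matrix)
    logits = image_emb @ text_emb.t() / temperature
    # soft targets built from the image-image and text-text similarities;
    # the diagonal (matching pairs) ends up close to 1.0, the rest close to 0.0
    targets = F.softmax((image_emb @ image_emb.t() + text_emb @ text_emb.t()) / (2 * temperature), dim=-1)
    # cross-entropy between targets and logits, computed in both directions and averaged
    texts_loss = (-targets * F.log_softmax(logits, dim=-1)).sum(dim=1)
    images_loss = (-targets.t() * F.log_softmax(logits.t(), dim=-1)).sum(dim=1)
    return ((texts_loss + images_loss) / 2).mean()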
Training Procedures
The target output for the original CLIP model was the correct caption for an image. For example, one could input an image of a cat along with the captions: {"Image depicting a cat," "Image depicting a dog"} with the expectation that it would return the correct caption. We refer readers to the original paper for further implementation details.
We slightly modified this approach in our own model, such that the training text instead was the language content from the meme followed by the correct label. We separated the text and label using the [SEP] token as defined for BERT's next-sentence prediction objective. Therefore, we concatenated each instance with the following sentences depending on the example's class:
1. For class MISOGYNY: <text_1> + "[SEP] a misogynist meme"
2. For class NOT MISOGYNY: <text_1> + "[SEP] a meme"
When evaluating new samples using this architecture, each instance must be a paired image and text caption. Thus, we created two versions of each text caption: one for class MISOGYNY, and one for class NOT MISOGYNY. Figure 2 shows the procedure. The predicted label for the test instance was the one for which the model made its prediction with higher confidence.
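A small sketch of this pairing scheme (the helper names and the model.similarity call are placeholders, not part of the actual system):

LABEL_TEXT = {1: "a misogynist meme", 0: "a meme"}

def build_training_caption(meme_text, label):
    # training caption: OCR text from the meme, [SEP], then the label description
    return meme_text + " [SEP] " + LABEL_TEXT[label]

def predict(model, image, meme_text):
    # at test time each image is paired with one caption per class;
    # the class whose caption is matched with higher confidence is returned
    scores = {label: model.similarity(image, meme_text + " [SEP] " + desc)  # hypothetical scoring call
              for label, desc in LABEL_TEXT.items()}
    return max(scores, key=scores.get)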
Experimental Setup
Exploratory Analysis
Prior to conducting our experiments, we performed preliminary descriptive analyses on the training data, comprising 10,000 memes (image + paired text content) with a total vocabulary of 12,611 tokens. Of these, 6,649 tokens appear only once. Figure 3 shows a word cloud for the 100 most frequent words in the corpus. Interestingly, we determined that the vocabulary for non-misogynist memes is much richer, with 9,285 unique words compared to 7,348 for misogynist memes. We also observed that while the word "woman" is repeated 1,416 times in the misogyny class, it is repeated only 528 times in the not misogyny class.
Aiming to find greater insights, we computed the Pointwise Mutual Information (PMI) score for all words given each class. PMI is a feature scoring metric that can be used to estimate the association between a feature and a class (it has also traditionally been used to identify collocations in text). A close association indicates which features (words) are more important for a class. PMI is computed using the standard formula:
PMI(w, c) = log [ P(w, c) / (P(w) P(c)) ]
Table 1 shows the 15 words with the highest PMI for each class. As shown, despite its frequency, the word "woman" carries less value when measured via PMI. Some swear words are classified as important for the MISOGYNY class, but not the NOT MISOGYNY class.
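A generic sketch of how such per-class PMI scores can be estimated from token counts (a standard implementation of the formula above, not the authors' script):

from collections import Counter
from math import log2

def pmi_by_class(documents, labels):
    # documents: list of token lists; labels: parallel list of class labels
    word_counts = Counter()
    class_word_counts = {}
    for tokens, label in zip(documents, labels):
        word_counts.update(tokens)
        class_word_counts.setdefault(label, Counter()).update(tokens)
    total = sum(word_counts.values())
    scores = {}
    for label, counts in class_word_counts.items():
        p_class = sum(counts.values()) / total
        for word, count in counts.items():
            p_word = word_counts[word] / total
            p_joint = count / total
            scores[(word, label)] = log2(p_joint / (p_word * p_class))  # PMI(w, c)
    return scores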
Preprocessing
A large challenge in this task was that the text content from memes was extracted with an OCR tool and therefore contained substantial noise. We thus created the following preprocessing pipeline, which we applied to all texts before training and evaluating:
1. Case Normalization: All text was normalized by converting it to lowercase.
2. Part of Speech (POS) Tagging: POS tags were assigned to each word.
3. Proper Name Removal: Regular expressions were applied to convert some wordforms (e.g., URLs) to generic tokens.
4. Special Token Categorization: Words belonging to several word sets of interest, including celebrity names and profanity terms, were kept.
5. Lemmatization: Words were converted to their base forms.
6. Stopword Removal: Highly frequent words (e.g., "the") were removed.
7. Special Character Removal: Non-alphabetical characters (e.g., digits or punctuation) were removed.
Aspects of this preprocessing pipeline were similar to those reported by Cardoza (2022) and Kovács et al. (2020). POS tagging allowed us to target the "proper noun" and "other" tags. By excluding these tags, we resolved many lingering issues following application of regular expressions, such as remaining usernames and gibberish words. Removing specific instances of these terms aided the model in avoiding overfitting to superfluous names or unknown tokens that were irrelevant to the overarching task of recognizing misogyny. Since certain terms removed by our POS filtering may carry importance to the task (e.g., certain celebrity names or swear terms), we also searched for these terms in several predetermined word sets and kept them if and when they were found.
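The pipeline could be approximated with off-the-shelf NLTK components as sketched below; the exact tools, tag sets, and celebrity/profanity word lists used by the authors are not specified, so the KEEP_LIST and the individual regular expressions are illustrative assumptions.

import re
from nltk import pos_tag, word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

LEMMATIZER = WordNetLemmatizer()
STOPWORDS = set(stopwords.words("english"))
KEEP_LIST = {"beyonce", "damn"}  # placeholder celebrity/profanity terms to retain

def preprocess(text):
    text = re.sub(r"https?://\S+", "<url>", text)          # 3. map URLs to a generic token
    cleaned = []
    for word, tag in pos_tag(word_tokenize(text)):          # 2. POS tagging (before lowercasing)
        word = word.lower()                                  # 1. case normalization
        if tag in ("NNP", "NNPS") and word not in KEEP_LIST:
            continue                                         # 3/4. drop proper names unless whitelisted
        if word in STOPWORDS:
            continue                                         # 6. stopword removal
        word = LEMMATIZER.lemmatize(word)                    # 5. lemmatization
        word = re.sub(r"[^a-z]", "", word)                   # 7. strip non-alphabetical characters
        if word:
            cleaned.append(word)
    return cleaned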
Experimental Settings
The training set provided by the task organizers was divided into training (80%) and validation (20%) sets of 8000 and 2000 examples respectively. We primarily adapted our hyperparameter settings from Shariatnia (2021), using AdamW (Loshchilov and Hutter, 2017) as our optimizer with an initial learning rate (LR) of 1e-3 and a scheduler to reduce the LR on plateau. The batch size is left at 32 and the maximum number of epochs was set to 4. The model converges quickly, so this early stop helps to control overfitting. The learning rates for the image and text encoders were left at 1e-4 and 1e-5 respectively. All of the texts were tokenized using the DistilBERT base model with a max number of tokens set to 200. We experimented with CLIP's temperature hyperparameter, finding that the best results were achieved with a value of 1e-0.2.
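A sketch of the corresponding optimiser and scheduler set-up in PyTorch; the encoders below are stand-in modules, and the weight-decay, patience, and factor values are illustrative, since they are not reported above.

import torch
from torch import nn

image_encoder = nn.Linear(2048, 256)   # stand-in for ResNet-50 plus projection head
text_encoder = nn.Linear(768, 256)     # stand-in for DistilBERT plus projection head

optimizer = torch.optim.AdamW(
    [
        {"params": image_encoder.parameters(), "lr": 1e-4},  # image encoder learning rate
        {"params": text_encoder.parameters(), "lr": 1e-5},   # text encoder learning rate
    ],
    lr=1e-3,              # initial learning rate for any remaining parameter groups
    weight_decay=1e-3,    # illustrative value
)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", patience=1, factor=0.8)
# after each validation pass: scheduler.step(validation_loss)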
We selected the best epoch and the best hyperparameters as measured by F1 score and accuracy. In the test set evaluation, Subtask 1 systems were evaluated using macro-averaged F1; thus, the final score is the mean of the F1 for the two classes.
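For reference, the official Subtask 1 metric can be reproduced with scikit-learn on toy labels (a generic snippet, not the organisers' scorer):

from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0]   # toy gold labels (1 = misogynist)
y_pred = [1, 0, 0, 1, 1]   # toy system predictions
print(f1_score(y_true, y_pred, average="macro"))  # mean of the per-class F1 scores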
Results
During development we used the training data provided by the MAMI task. We divided it into training (90%) and test (10%) sets, and measured performance using F1, accuracy, precision, and recall on the 10% test set. We provide our results on the development data in Table 2. Once the official test set was released (Evaluation in Table 2), we computed the same metrics on that set. The macro-averaged F1 as returned by the task organizers was 0.62, with the system ranking 71st in performance.
Quantitative Analysis
We briefly analyze our best models' results on the test set for Subtask 1. In particular, we observe a fairly low recall for the NOT MISOGYNY class (0.43), indicating that the system may be struggling to capture all members of this likely more diverse class. As further highlighted in the confusion matrix in Figure 4, the model classifies 28.6% of the NOT MISOGYNIST memes as MISOGYNIST. Thus, most errors were false positives.
Error Analysis
When manually analyzing the misclassified instances, we observed that they were diverse, containing cartoons, animals, people, drawings, and more. Some images contained text that was unrelated to the caption, creating additional noise. Examples of these images are shown in Figure 5. Several recurring themes occurred among false positives and false negatives, which we summarize below:
False Positives: Most images that were incorrectly classified as misogynistic were primarily dominated by one or more people. For images where humans were not present, the text often contained swear words or synonyms for "woman," some of which were offensive although not employed with purposeful misogyny.
False Negatives: Most images that were incorrectly classified as not misogynistic contained cartoons, animals, storyboards or non-explicitly sexist images, although women were occasionally present.
To address these errors, we recommend actions leveraged in prior work for other tasks to improve the performance of future systems. For instance, Lippe et al. (2020) upsampled their dataset as a solution for poor performance on text confounders. Although the dataset they used (Facebook's Hateful Memes) was specially designed to introduce benign confounders, the same strategy might also work for this problem. Nozza et al. (2019) discuss biases introduced in the model by a set of identity terms that are frequently associated with the misogynistic class (e.g., "woman"). The authors propose to upsample the dataset with examples that contain the identity terms but belong to the alternate class.
Conclusion and Future Work
In this paper, we describe our system implementation for SemEval-2022 Task 5. Our model ranked 71st out of 83 participants' teams on Subtask 1. We comprehensively investigate the use of a state-of-the-art multimodal contrastive learning approach for the classification of misogynistic memes. More experiments and tests should be done to improve the model's performance on this task. In particular, upsampling the dataset and addressing the possible biases caused by identity terms should be investigated. Finally, at an architectural level, our current system encodes images as one form of input and encodes paired text content and labels as another form of input, similarly to text encoding strategies used for unimodal sequence prediction tasks. Exploring joint encodings of image and paired text content as a single form of input, with only labels as the other form of input, may be an additional design avenue worth pursuing. Overall, it is our hope that this work motivates additional interest in contrastive learning solutions for multimodal misogynistic meme detection. We make our source code (https://github.com/charicf/MAMI-CLIP) available to other researchers to facilitate follow-up work by others.

Figure 1: Learning stage. Adapted from Radford et al., 2021.
Figure 2: Evaluation stage. Adapted from Radford et al., 2021.
Figure 3: Word cloud of the 100 most frequent words in the corpus. Warning: This image includes language that may be offensive or upsetting.
Figure 4: Confusion matrices for the development and evaluation data.
Figure 5: False positives (top 10) and false negatives (bottom 10). Warning: This image includes content that may be offensive or upsetting.

Table 2: Results on the development (from training data) and evaluation (from test data).

Development     Precision  Recall  F1
Non-Misogynist  0.66       0.71    0.68
Misogynist      0.69       0.64    0.66
Accuracy                           0.67

Evaluation      Precision  Recall  F1
Non-Misogynist  0.73       0.43    0.54
Misogynist      0.60       0.84    0.70
Accuracy                           0.64

Acknowledgements

We thank the anonymous reviewers for their comments and suggestions.

References

Tariq Habib Afridi, Aftab Alam, Muhammad Numan Khan, Jawad Khan and Young-Koo Lee. (2020). A multimodal memes classification: A survey and open research issues. In Proceedings of the Third International Conference on Smart City Applications (pp. 1451-1466). Springer, Cham. https://doi.org/10.1007/978-3-030-66840-2_109

Apeksha Aggarwal, Vibhav Sharma, Anshul Trivedi, Mayank Yadav, Chirag Agrawal, Dilbag Singh, Vipul Mishra and Hassène Gritli. (2021). Two-way feature extraction using sequential and multimodal approach for hateful meme classification. Complexity, 2021. https://doi.org/10.1155/2021/5510253

Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso and Manuela Sanguinetti. (2019). SemEval-2019 Task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In 13th International Workshop on Semantic Evaluation (pp. 54-63). Association for Computational Linguistics. http://dx.doi.org/10.18653/v1/S19-2007

Piotr Bojanowski, Edouard Grave, Armand Joulin and Tomas Mikolov. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5, 135-146.

Maria Alejandra Cardoza Ceron. (2022). Using Word Embeddings to Analyze Protests News. arXiv preprint arXiv:2203.05875.

Danielle Keats Citron. (2014). Hate crimes in cyberspace. Harvard University Press.

Marcos V. Conde and Kerem Turgutlu. (2021). CLIP-Art: Contrastive pre-training for fine-grained art classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3956-3960).

Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit and Neil Houlsby. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

Elisabetta Fersini, Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Berta Chulvi, Paolo Rosso, Alyssa Lees and Jeffrey Sorensen. (2022). SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022).
Elisabetta Fersini, Paolo Rosso and Maria Anzovino. (2018). Overview of the Task on Automatic Misogyny Identification at IberEval 2018. IberEval@SEPLN, 2150, 214-228.

Federico A. Galatolo, Mario G.C.A. Cimino and Gigliola Vaglini. (2021). Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search. arXiv preprint arXiv:2102.01645.

Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).

Eftekhar Hossain, Omar Sharif and Mohammed Moshiul Hoque. (2021). NLP-CUET@DravidianLangTech-EACL2021: Investigating Visual and Textual Features to Identify Trolls from Multimodal Social Media Memes. arXiv preprint arXiv:2103.00466.

Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Casey A. Fitzpatrick, Peter Bull, Greg Lipstein, Tony Nelli, Ron Zhu, Niklas Muennighoff, Riza Velioglu, Jewgeni Rose, Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova, Helen Yannakoudakis, Vlad Sandulescu, Umut Ozertem, Patrick Pantel, Lucia Specia and Devi Parikh. (2021). The hateful memes challenge: Competition report. In NeurIPS 2020 Competition and Demonstration Track (pp. 344-360). PMLR.

Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia and Davide Testuggine. (2020). The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in Neural Information Processing Systems, 33, 2611-2624.

Ádám Kovács, Judit Ács, Andras Kornai and Gábor Recski. (2020). Better Together: Modern Methods Plus Traditional Thinking in NP Alignment. In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 3635-3639). Marseille, France. European Language Resources Association.

Ritesh Kumar, Atul Kr. Ojha, Bornini Lahiri, Marcos Zampieri, Shervin Malmasi, Vanessa Murdock and Daniel Kadar. (2020). Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying.

Alyssa Lees, Jeffrey Sorensen and Ian Kivlichan. (2020). Jigsaw@AMI and HaSpeeDe2: Fine-Tuning a Pre-Trained Comment-Domain BERT Model. In EVALITA.

Phillip Lippe, Nithin Holla, Shantanu Chandra, Santhosh Rajamanickam, Georgios Antoniou, Ekaterina Shutova and Helen Yannakoudakis. (2020). A multimodal framework for the detection of hateful memes. arXiv preprint arXiv:2012.12871.

Ilya Loshchilov and Frank Hutter. (2017). Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Yatri Modi and Natalie Parde. (2019). The Steep Road to Happily Ever After: An Analysis of Current Visual Storytelling Models. In Proceedings of the Second Workshop on Shortcomings in Vision and Language (pp. 47-57). Minneapolis, Minnesota. Association for Computational Linguistics.

Niklas Muennighoff. (2020). Vilio: State-of-the-art Visio-Linguistic models applied to hateful memes. arXiv preprint arXiv:2012.07788.

Victor Nina-Alcocer. (2018). AMI at IberEval2018: Automatic Misogyny Identification in Spanish and English Tweets. In IberEval@SEPLN (pp. 274-279).

Debora Nozza, Claudia Volpetti and Elisabetta Fersini. (2019). Unintended bias in misogyny detection. In IEEE/WIC/ACM International Conference on Web Intelligence (pp. 149-155). https://doi.org/10.1145/3350546.3352512

Endang Wahyu Pamungkas, Alessandra Teresa Cignarella, Valerio Basile and Viviana Patti. (2018). 14-ExLab@UniTo for AMI at IberEval2018: Exploiting lexical knowledge for detecting misogyny in English and Spanish tweets. In 3rd Workshop on Evaluation of Human Language Technologies for Iberian Languages, IberEval 2018 (Vol. 2150, pp. 234-241). CEUR-WS.

Natalie Parde. (2020). And, Action! Towards Leveraging Multimodal Patterns for Storytelling and Content Analysis. In Proceedings of the 2nd International Workshop on AI for Smart TV Content Production, Access and Delivery (AI4TV '20). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3422839.3423060

Pulkit Parikh, Harika Abburi, Niyati Chhaya, Manish Gupta and Vasudeva Varma. (2021). Categorizing Sexism and Misogyny through Neural Approaches. ACM Transactions on the Web (TWEB), 15(4), 1-31. https://doi.org/10.1145/3457189

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger and Ilya Sutskever. (2021). Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (pp. 8748-8763). PMLR.

Victor Sanh, Lysandre Debut, Julien Chaumond and Thomas Wolf. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

Moein Shariatnia. (2021). Moein-Shariatnia/OpenAI-CLIP: Simple implementation of OpenAI CLIP model in PyTorch. GitHub. Retrieved November 29, 2021, from https://github.com/moein-shariatnia/OpenAI-CLIP.

Riza Velioglu and Jewgeni Rose. (2020). Detecting hate speech in memes using multimodal deep learning approaches: Prize-winning solution to hateful memes challenge. arXiv preprint arXiv:2012.12975.

Ron Zhu. (2020). Enhance multimodal transformer with external label and in-domain pretrain: Hateful meme challenge winning solution. arXiv preprint arXiv:2012.08290. |
221,097,366 | [] | Adding linguistic information to parsed corpora
July 2019
Susan Pintzuk
University of York
Adding linguistic information to parsed corpora
LiLT volume 18, issue 4, July 2019
No matter how comprehensively corpus builders design their annotation schemes, users frequently find that information is missing that they need for their research. In this methodological paper I describe and illustrate five methods of adding linguistic information to corpora that have been morphosyntactically annotated (=parsed) in the style of Penn treebanks. Some of these methods involve manual operations; some are executed by CorpusSearch functions; some require a combination of manual and automated procedures. Which method is used depends almost entirely on the type of information to be added and the goals of the user. Of course the main goal, regardless of method, is to record within the corpus additional information that can be used for analysis and also retained through further searches and data processing.
1 The CS software and manual can be downloaded from its sourceforge site: http://corpussearch.sourceforge.net/.
Introduction
No matter how comprehensively corpus builders design their annotation schemes, users frequently find that information is missing that they need for their research, and so they must add it on their own. In this methodological paper I discuss and illustrate five methods of adding linguistic information of all types (lexical, phonological, morphological, syntactic, semantic, discourse) to corpora that have been morphosyntactically annotated (=parsed) in the style of Penn treebanks, and the advantages and disadvantages of each method. These five methods are the following: 1) adding information to the ur-text; 2) inserting CODE nodes into the token structure; 3) embedding in-formation in coding strings; 4) modifying node labels and structure; and 5) importing token information and other corpus data from the corpus into spreadsheets. Method 1 is necessarily manual, while methods 2 through 5 may involve a combination of manual and automated procedures, functions and tools. Of course the main goal, regardless of method, is to record within the corpus additional information that can be used for analysis and also retained through further searches and data processing. The search engine used for many treebanks, and the one used for the searches and the automated annotation described in this paper, is CorpusSearch (CS). 1 The manual addition of information may be the simplest procedure but, being manual, it is the most prone to error. Information can be added to the two areas of CS output that are reproduced each time CS is run under default conditions: 1) the token ur-text, which contains the token text and ID without any annotation, and 2) the token structure, including the lexical items. The main difference between the two locations is that material internal to the ur-text is not searchable by CS, while the token structure is the object that is searched and modified by CS queries.
A word of warning is appropriate here: annotation that is added manually cannot be reproduced except by repeating the same manual procedure. Annotation added by CS (i.e. coding strings, structure changes, label changes) can easily be reproduced -unless it is based on annotation that was previously added manually. The availability of automated reproduction is important for three reasons: 1) Files can be lost or damaged. Automated reproduction of annotation is relatively simple; manual reproduction is painful and time-consuming. 2) For most users, whenever we look at the output of a new CS query, we find problems, either in the query or else in the corpus; we then must find and fix the source of the problem and run CS again. One way to facilitate this repetition is to use annotated batch files so that the same processes can be documented and repeated. The use of batch files permits the effortless repetition of what may be a long and complex string of searches. An example of a batch file is given in Appendix. 3) We want other scholars to be able to reproduce our research. With this end in mind, it is encouraging to see that many researchers are making their CS queries available, either in an appendix or on the web, along with their search results.
In the remainder of this paper, I describe and evaluate the five methods listed above, presenting case studies for each method from my own recent collaborative research. 2 For readers who are not familiar with CS, some details of the search methodology will be given where space permits; interested readers are referred to the online CS manual. Because of space limitations, the background information and results for each case study are necessarily brief; interested readers are referred to the publications themselves for details and clarifications.
1 Method 1: Adding information manually to the ur-text.
The ur-text consists of the words of the token and the token ID without morphosyntactic annotation; CS outputs the ur-text above the structure for each token in the output file. As mentioned above, adding information manually to the ur-text is arguably the simplest procedure, at least in concept, but it has (at least) three major drawbacks: 1) because it is manual, it is prone to error; 2) it must be applied to CS output, not to the original corpus, because the original corpus does not contain ur-text to accompany the token structure; and 3) the ur-text is not searchable by CS, and therefore any added information can be used only by looking directly at the individual tokens in the data file, one by one. This method was used for some of the tokens in the database for Haeberli et al. (2017), described as Case study 1 below.
Case study 1: Haeberli et al. 2017, investigating verb second (V2) in Old English, looked at fronted pronominal objects to determine whether they can be analyzed as the result of Formal Movement (Frey 2006a,b; Light 2012). CS was used to retrieve all clauses with fronted pronominal objects, but the preceding context was needed to determine the topic type (familiar, aboutness, contrastive, as in Frascarelli and Hinterhölzl 2007). Examples (1) and (2) below show text manually inserted in the ur-text (the text in the area between '/~*' and '*~/'). In (1) below, the ur-text is enclosed in a box, and the information added manually is in red. The original Old English token, including the token ID, is in black.
In (1), the added information is the preceding context and its gloss and the gloss of the token itself. In (2), the added information is the gloss of the token and a comment about structure and word order in the token.
Haeberli et al. (2017) used the preceding context to determine the topic type and then manually added it to the coding string for each token. The counts of the different topic types are shown in Table 1 below; it is clear from these data that non-contrastive object pronouns that serve as familiar topics can be found clause-initially in early English in this way, contra Light 2012; Haeberli et al. (2017) showed that this pattern could be analysed as in Walkden 2017.
(3) Three-step process for coding and counting NP types
Step 1: Add CODE node with a CS corpus-revision query:
Step 1 Input:
( (IP-MAT (NP-SBJ (NPR Eue)) (VBD heold) (PP (P+NPR iparais)) (NP-OB1 (ADJ long) (N tale)) (PP (P wi+d) (NP (D +te) (N neddre))) (E_S .)) (ID CMANCRIW-1,II.54.519))
Step 1 Output: /~* Eue heold iparais long tale wi+d +te neddre. (CMANCRIW-1,II.54.519) *~/ (0 (1 IP-MAT (2 NP-SBJ (3 NPR Eue)) (5 VBD heold) (7 PP (8 P+NPR iparais)) (10 NP-OB1 (11 CODE <NPTYPE:>) (13 ADJ long) (15 N tale)) (17 PP (18 P wi+d) (20 NP (21 D +te) (23 N neddre))) (25 E_S .)) (27 ID CMANCRIW-1,II.54.519))
Step 2: Add NP characteristics manually to CODE node /~* Eue heold iparais long tale wi+d +te neddre. Eve held in-paradise (a) long conversation with the serpent (CMANCRIW-1,II.54.519) *~/ (0 (1 IP-MAT (2 NP-SBJ (3 NPR Eue)) (5 VBD heold) (7 PP (8 P+NPR iparais)) (10 NP-OB1 (11 CODE <NPTYPE:BSG-EXS>) (13 ADJ long) (15 N tale)) (17 PP (18 P wi+d) (20 NP (21 D +te) (23 N neddre))) (25 E_S .)) (27 ID CMANCRIW-1,II.54.519)) In Crisma and Pintzuk 2016, all nominal objects were coded in this way. Each type of NP was then 'counted' by searching for each type of CODE node; an example is given in Step 3 Query below. The quantitative results are shown in Table 3.
Step 3: Count NP types
Step 3 Query:
node: IP*
query: (NP-OB* idoms CODE) AND (CODE idoms <NPTYPE:*GNR*>)
The texts in Table 3 are arranged in chronological order. The columns are arranged left to right in order of increasing saliency of presupposition of existence: there is no presupposition of existence with GNR, NPE, EXS-SCOPE-nrw; there is a clear presupposition of existence with EXS-SCOPE-wd, EXS-SPC; and finally there is a 'gray' area in the middle: EXS-SCOPE-amb, EXS, AMB.
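The same tallies can also be reproduced outside of CS with a few lines of scripting over the coded output file; the sketch below assumes one <NPTYPE:...> CODE node per coded noun phrase, and the file name in the usage comment is hypothetical.

import re
from collections import Counter

def count_np_types(path):
    # tally the values of <NPTYPE:...> CODE nodes in a CorpusSearch output file
    with open(path, encoding="utf-8") as handle:
        values = re.findall(r"<NPTYPE:([^>]+)>", handle.read())
    return Counter(v for v in values if v.strip())

# e.g. count_np_types("ancriw.cod") -> Counter({'BSG-EXS': ..., 'BSG-GNR': ..., ...})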
According to Crisma 2015, an develops in three stages in the history of English. In Stage 1, an is the numeral 'one'; in Stage 2, an is an overt existential operator used when an indefinite noun phrase is interpreted as specific or when it takes wide scope over another operator; in Stage 3, an is an expletive used with all singular noun phrases. Crisma notes that in Stages 1 and 2, an is never used with generics.
The numbers are quite small in most of the cells in Table 3; presenting frequencies would be misleading. Nevertheless, clear patterns emerge. We can see that in the M1 period, an acts as an overt existential operator in the following types of nominals: 1) indefinite nominals that are interpreted as specific (EXS-SPC: 0 BSG, 14 AN); 2) nominals that take wide scope over some other operator (EXS-SCOPE-wd: 0 BSG, 1 AN). For nominals in the absence of other logical operators, an is favoured by about 2 to 1 over BSG (EXS: 17 BSG, 35 AN). For NPE nominals, either generic or narrow scope existential, as well as for existential nominatives taking narrow scope, BSG is favoured by about 2 to 1 over an (EXS SCOPE-nrw: 19 BSG, 11 AN; NPE: 26 BSG, 11 AN). In addition, we also see the first sign of change: in two texts, Ancrene Riwle and Hali Meidhad, there are two examples each of an used with generics (GNR).
In the M3 period, we see a number of changes: 1) for generics (GNR), a sharp reversal in the distribution of BSG (6 BSG, 47 AN); 2) for nominals with no presupposition of existence (NPE), there is also a reversal, with only 3 BSG and 55 AN; 3) similarly for existential nominals with narrow scope (EXS-SCOPE-nrw) and existential nominals in the absence of other logical operators (EXS), with all 11 and 17 tokens, respectively, using AN. Our conclusion is that in this period, the use of an with singular nouns has generalised to all contexts, with very few exceptions.
Method 3: Embedding information in coding strings
Coding strings are strings of characters, each character representing a linguistic or extralinguistic variable, which are inserted as nodes in the tokens of a corpus file. Method 3, the construction of coding strings, is the traditional and perhaps most widely used method of adding information to corpus data. Coding strings had their origin in quantitative sociolinguistic research and were used decades before the creation of parsed corpora. The CODING function of CS is used to construct coding strings based on the morphosyntactic annotation and the lexical content of the token; once created, coding strings may be manually extended to encode information that is not represented in the corpus. Since coding strings are part of the token structure, they may be searched and manipulated by CS. Coding strings may also be used as input to software for statistical analysis, like R; this is perhaps their most important function. Case study 3: Taylor and Pintzuk 2015 (T&P 2015) examine the position of objects in Old English and look at the effect of verb order and the length and information structure of the object to support their conclusion that there are two sources for post-verbal objects in Old English, object postposition and base-generation. As shown in (4) T&P 2015 present the following analysis of these data. They assume that in the Old English period, there was variation in underlying structure: head-initial/final IPs (AuxV/VAux) and VPs (VO/OV). V Aux O can be derived only from head-final IP/VP structure by postposition of O from preverbal position, as shown in (5)
(5) a. O V Aux: [TP ... [T0 [VP1 [VP2 O V] t_Aux] Aux+T]]
    b. V Aux O: [TP ... [T0 [VP1 [VP2 t_O V] t_Aux] Aux+T] O]
    c. Aux O V: [TP ... [T0 Aux+T [VP1 t_Aux [VP2 O V]]]]
    d. Aux V O: [TP ... [T0 Aux+T [VP1 t_Aux [VP2 t_O V]]] O]
    e. Aux V O: [TP ... [T0 Aux+T [VP1 t_Aux [VP2 V O]]]]
If all post-verbal objects were derived by postposition in both V Aux and Aux V clauses, i.e. if structure (5)e didn't exist, we would expect the factors influencing post-verbal position to be the same in both clause types. To test this null hypothesis, T&P 2015 looked at the influence of weight (as measured by the length of the object in words) and informational status (given vs. new) on the position of objects in AuxV and VAux clauses. This was a four-step process. As a first step, CS was used to code each token for three factors: the order of finite auxiliary and non-finite main verb (auxv vs. vaux); the position of the object with respect to the non-finite main verb (ov vs. vo); and the length of the object in words (1 ... 11). The coding query file is given below in (6); an example of a token coded for the first three factors is given in (7). The second step was to manually code the informational status of the object (given vs. new). Examples of tokens coded for all four factors are given in (8) through (11); (8) is the token in (7). The third step was to use the print_only function of CS to create an output file containing only the coding strings of the data file. The file is shown in (12) below. CS separates the factors by ':', and the user must manually insert a header naming the factors for input to statistical processing, the last step. The results for this study are shown in Table 5.
As shown in Table 5, the effect of weight is significant in both clause types, but slightly weaker in AuxV clauses: each additional word in VAux clauses increases the likelihood of VO order by 2.68, in AuxV clauses by 2.43. Informational status is significant only in VAux clauses: the distance between given and new is .9 in VAux clauses, but only .08 in AuxV clauses. T&P 2015 interpret these results as follows: VAuxO clauses are derived only by postposition of the object, and postposition is strongly influenced by weight and informational status: heavy objects and new objects are much more likely to postpose than light objects and given objects. Since AuxVO clauses are derived by two different processes, postposition and base-generation, the effects of weight and informational status are weakened; this is why the effect of weight is weaker in AuxV clauses and the effect of informational status is reduced to non-significance.
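Steps three and four can be automated once the coding strings are exported to a file of their own; the sketch below assumes one ':'-separated coding string per line and an illustrative header, and simply rewrites them as a CSV file for R or other statistical software (file names are hypothetical).

import csv

HEADER = ["verb_order", "obj_position", "obj_length", "info_status"]  # illustrative factor names

def coding_strings_to_csv(infile, outfile):
    # one coding string per line, factors separated by ':', becomes one CSV row per token
    with open(infile, encoding="utf-8") as src, open(outfile, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        writer.writerow(HEADER)
        for line in src:
            line = line.strip()
            if line:
                writer.writerow(line.split(":"))

# coding_strings_to_csv("tp2015_codes.txt", "tp2015_codes.csv")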
Method 4: Modifying corpus annotation (node labels and structure)
Method 4, the modification of corpus annotation, may be done manually, but it is much more efficient (and safer) to use the corpus-revision tool of CS. This tool enables the addition, deletion, and modification of annotation in the corpus, including not only node labels but also structure. Any search that can be made using CS can act as the basis for corpus revision; the output of corpus revision is a new version of the corpus, i.e., the original version of the corpus is not deleted, in case of catastrophic errors. Corpus revision can be used to build an annotated corpus starting from a straight text corpus with only part-of-speech tagging. I frequently find it useful to mark particular structures so that they are easy to identify, and also to evade some CS restrictions, as will be seen in Case study 4 below. Case study 4: Haeberli and Pintzuk 2017 (H&P) look at verb placement in 'true V2' contexts in Old English. H&P analyse in detail one particular clause type: clauses with an initial gif/þa/þonne 'if/when/when' subordinate clause, followed by a resumptive adverb (e.g. þa/þonne 'then') and the rest of the main clause; an example is given in (13). Note that initial þa/þonne in Old English main clauses is considered a 'true V2' context: 97.4% (6546/6719) of these clauses exhibit strict V2 order, with the verb in second position followed by the subject. In order to simplify the searches for and coding of these clauses, I wanted to flag the relevant IP-MATs and CP-ADVs by modifying the label. In addition, I wanted to 'remove' the subtrees of all subordinate clauses other than the IP-SUB dominated by the relevant CP-ADV. Three steps were necessary, as shown below; a red font is used for highlighting.
Step 1: Flag the relevant IP-MAT, CP-ADV, and IP-SUB using the query file below. The IP-MAT and CP-ADV are flagged by appending '-z' to the label; the IP-SUB is flagged by prepending 'x-' to the label. Notice that the first token (coeust . . . 26) contains an IP-SUB that is not dominated by the clause-initial CP-ADV; the second token (cobede . . . 3530) contains an IP-MAT that does not dominate a CP-ADV as the first constituent; and the third token (cocanedgX . . . 82) contains both a CP-ADV that is not the first constituent of the IP-MAT and an IP-SUB that is not dominated by the relevant CP-ADV. These are all nodes that are irrelevant to the investigation.
Step 1 Query:
node: IP-MAT*
nodes_only: t
remove_nodes: f
query: ({1}IP-MAT* idoms {2}CP-ADV*) AND (CP-ADV* idoms P) AND (P idoms GIF|THA|THONNE) AND (CP-ADV* idoms {3}IP*SUB*) ...
append_label{1}: -z
append_label{2}: -z
prepend_label{3}: x-
Step 1 Input:
(0 (1 IP-MAT (2 CP-ADV (3 P +Ta) (5 C 0) (7 IP-SUB (8 NP-ACC (9 D^A +t+at)) (11 NP-NOM (12 NR^N Placidas)) (14 VBDI geseah))) (16 , ,) (18 ADVP-TMP (19 ADV^T +ta)) (21 VBD gewilnode) (23 NP-NOM (24 PRO^N he)) (26 CP-THT (27 C +t+at) (29 IP-SUB (30 NP-NOM (31 PRO^N he)) (33 NP-ACC (34 PRO^A hine)) (36 VBDS gefenge))) (38 . ,)) (40 ID coeust,LS_8_[Eust]:31.26)) ( (3 IP-MAT (4 CP-ADV (5 P +Ta) (7 C 0) (9 IP-SUB (10 NP-NOM (11 PRO^N he)) (13 ADVP-TMP (14 ADV^T +da)) (16 NP (17 PRO$ his) (19 N scylde)) (21 VBD gehyrde))) (23 , ,) (25 ADVP-TMP (26 ADV^T +ta)) (28 VBDI cw+a+d) (30 NP-NOM (31 PRO^N he)) (33 , :) (35 IP-MAT-SPE (36 NP-NOM (37 Q^N Micel) (39 N^N wund)) (41 VBPI behofa+d) (43 NP-GEN (44 Q^G micles) (46 N^G l+acedomes))) (48 . :)) (50 ID cobede,Bede_4:26.350.19.3530)) ( (3 IP-MAT (4 CP-ADV (5 P Gyf) (7 C 0) (9 IP-SUB (10 NP-NOM (11 PRO^N he)) (13 NP-ACC (14 PRO$ his) (16 N^A lif)) (18 VBPS misfadige))) (20 , ,) (22 VBPS wanige) (24 NP-NOM (25 PRO$ his) (27 N^N wyr+dscipe)) (29 CP-ADV (30 P be) (32 D^D +dam) (34 C +te) (36 IP-SUB (37 NP-NOM (38 D^N seo)) (40 ADJP-NOM-PRD (41 ADJ^N d+ad)) (43 BEPS sy))) (45 . .)) (47 ID cocanedgX,WCan_1.1.2_[Fowler]:68.82))
Step 1 Output:
(0 (1 IP-MAT-z (2 CP-ADV-z (3 P +Ta) (5 C 0) (7 x-IP-SUB (8 NP-ACC (9 D^A +t+at)) (11 NP-NOM (12 NR^N Placidas)) (14 VBDI geseah))) (16 , ,) (18 ADVP-TMP (19 ADV^T +ta)) (21 VBD gewilnode) (23 NP-NOM (24 PRO^N he)) (26 CP-THT (27 C +t+at) (29 IP-SUB (30 NP-NOM (31 PRO^N he)) (33 NP-ACC (34 PRO^A hine)) (36 VBDS gefenge))) (38 . ,)) (40 ID coeust,LS_8_[Eust]:31.26)) ( (1 IP-MAT-z (2 CP-ADV-z (3 P +Ta) (5 C 0) (7 x-IP-SUB (8 NP-NOM (9 PRO^N he)) (11 ADVP-TMP (12 ADV^T +da)) (14 NP (15 PRO$ his) (17 N scylde)) (19 VBD gehyrde))) (21 , ,) (23 ADVP-TMP (24 ADV^T +ta)) (26 VBDI cw+a+d) (28 NP-NOM (29 PRO^N he)) (31 , :) (33 IP-MAT-SPE (34 NP-NOM (35 Q^N Micel) (37 N^N wund)) (39 VBPI behofa+d) (41 NP-GEN (42 Q^G micles) (44 N^G l+acedomes))) (46 . :)) (48 ID cobede,Bede_4:26.350.19.3530)) ( (1 IP-MAT-z (2 CP-ADV-z (3 P Gyf) (5 C 0) (7 x-IP-SUB (8 NP-NOM (9 PRO^N he)) (11 NP-ACC (12 PRO$ his) (14 N^A lif)) (16 VBPS misfadige))) (18 , ,) (20 VBPS wanige) (22 NP-NOM (23 PRO$ his) (25 N^N wyr+dscipe)) (27 CP-ADV (28 P be) (30 D^D +dam) (32 C +te) (34 IP-SUB (35 NP-NOM (36 D^N seo)) (38 ADJP-NOM-PRD (39 ADJ^N d+ad)) (41 BEPS sy))) (43 . .)) (45 ID cocanedgX,WCan_1.1.2_[Fowler]:68.82))
Step 2: Remove the nodes of the embedded IPs using the query file below; the output from Step 1 serves as the input to Step 2. Note that the nodes of the IP-SUB embedded under CP-ADV-z are not removed since its label begins with 'x-'.
Step 2 Query:
node: IP-MAT*-z
remove_nodes: t
query: (IP-MAT*-z exists)
Step 2 Output:
( (1 IP-MAT-z (2 CP-ADV-z (3 P +Ta) (5 C 0) (7 x-IP-SUB (8 NP-ACC (9 D^A +t+at)) (11 NP-NOM (12 NR^N Placidas))
(14 VBDI geseah))) (16 , ,) (18 ADVP-TMP (19 ADV^T +ta)) (21 VBD gewilnode) (23 NP-NOM (24 PRO^N he)) (26 CP-THT (27 C +t+at) (29 IP-SUB RMV:he_hine_gefenge...)) (38 . ,)) (40 ID coeust,LS_8_[Eust]:31.26)) ( (1 IP-MAT-z (2 CP-ADV-z (3 P +Ta) (5 C 0) (7 x-IP-SUB (8 NP-NOM (9 PRO^N he)) (11 ADVP-TMP (12 ADV^T +da)) (14 NP (15 PRO$ his) (17 N scylde)) (19 VBD gehyrde))) (21 , ,) (23 ADVP-TMP (24 ADV^T +ta)) (26 VBDI cw+a+d) Step 3: Remove 'x-' prepended to IP-SUB labels using the query file below; the output from Step 2 serves as the input to Step 3.
Step 3 Query:
node: IP-MAT*-z
query: (1x-IP* exists)
pre_crop_label1: -
Step 3 Output:
( (1 IP-MAT-z (2 CP-ADV-z (3 P +Ta) (5 C 0) (7 IP-SUB (8 NP-ACC (9 D^A +t+at)) (11 NP-NOM (12 NR^N Placidas)) (14 VBDI geseah))) (16 , ,) (18 ADVP-TMP (19 ADV^T +ta)) (21 VBD gewilnode) (23 NP-NOM (24 PRO^N he)) (26 CP-THT (27 C +t+at) (29 IP-SUB RMV:he_hine_gefenge...)) (31 . ,)) (33 ID coeust,LS_8_[Eust]:31.26)) ( (1 IP-MAT-z (2 CP-ADV-z (3 P +Ta) (5 C 0) (7 IP-SUB (8 NP-NOM (9 PRO^N he)) (11 ADVP-TMP (12 ADV^T +da)) (14 NP (15 PRO$ his) (17 N scylde)) (19 VBD gehyrde))) (21 , ,) (23 ADVP-TMP (24 ADV^T +ta)) (26 VBDI cw+a+d) (28 NP-NOM (29 PRO^N he)) ( The result of this three-step procedure is a file of tokens that are simple to search further and code, since the relevant IP-MATs and CP-ADVs have labels that end in '-z' and the relevant IP-SUB is the only IP-SUB in the token that has its content preserved intact.
Method 5: Copying coding strings from the corpus into a spreadsheet
Finally, Method 5 copies coding strings from the corpus into a spreadsheet, the content of which may be ordered, manipulated, and displayed in ways that corpus data cannot be. For example, the data in the cells of a spreadsheet can be interpreted as numbers and used for simple calculations like totals, means, and frequencies; in contrast, the content of coding strings within a corpus are characters, not numerical values, and cannot be used as numbers. From the spreadsheet users can create output, e.g. a csv file, that is formatted for statistical analysis. Method 5 provides perhaps the most flexible way of working with and analyzing corpus data, but it should be used with caution, for at least two obvious reasons: it involves manual manipulation of the data, and therefore is prone to error; in addition, it is not always possible to go from a spreadsheet back to a corpus format. Case study 5: Taylor and Pintzuk 2017 (T&P 2017) look at the effect of weight, among other variables, on split coordination in Old English. Almost all coordinated constituents in early stages of English can be split, as illustrated in (14). T&P 2017 focus on subjects, aiming to measure the effect of length (as measured in number of words) on splitting. They need the length of the first conjunct, the length of the second conjunct (which includes both the conjunction and the nominal), and the length of the entire coordinated nominal in order to determine which of the three, if any, has an effect on splitting. But because of the way coordination is annotated in the corpus, these measurements are not at all straightforward.
If the nominal is not split and the coordinated nouns are bare, with no modification, then the nominal is annotated as a flat structure, with the two nouns and the conjunction immediately dominated by the NP, as shown in (15). In these cases the length of the entire subject can be measured, since it is a constituent (NP-NOM). Although the lengths of the two conjuncts can't be measured individually, since they are not constituents, it can be assumed that the length of the first conjunct is 1. The tokens in this section are coded as follows: file name : token number : flat vs. non-flat : split vs. non.split : final vs. non.final (position within the clause) : length of 1st conjunct : length of 2nd conjunct : length of entire conjoined phrase. '/' is used when it is not possible to measure or assume the length. In split flat structures, we can measure the length of the 1st conjunct and the length of the 2nd conjunct, since each of these is a constituent; but we cannot measure the length of the entire subject. An example is given in (16).
Footnote 10: The reader might think that the length of the 2nd conjunct could be estimated as 2 (conjunction + noun). However, some conjoined constituents have more than two conjuncts, as shown below in (i); in these cases, the length of the 2nd and following conjuncts cannot be assumed or measured.
(i) (CP-ADV (P Gif) (C 0) (IP-SUB (NP-NOM (N^N fot) (CONJ o+d+de) (N^N cneow) (CONJ o+d+de) (N^N scancan)) (VBPS swellan)) ...
(0 (1 IP-MAT (2 CODING aelive:3418:flat:split:non.final:1:2:/) (4 CONJ ac) (6 NP-NOM (7 N^N foxunga) (9 CONJP *ICH*-1)) (11 BEDI w+aron) (13 VAG wunigende) (15 PP (16 P on) (18 NP-DAT (19 PRO^D him))) (21 , ,) (23 CONJP-1 (24 CONJ and) (26 N^N upahefednys)) (28 PP (29 P swilce) (31 CPX-CMP (32 IPX-SUB RMV:healice_fugelas...))) (34 . ,)) (36 ID coaelive,+ALS_[Memory_of_Saints]:160.3418))
In non-split non-flat structures, we can measure all three, since the two conjuncts and also the entire conjoined structure is a constituent, as shown in (17).
(17) (4 NP-NOM (5 NP-NOM (6 NR^N Eubolus) (8 NP-NOM-PRN (9 D^N se) (11 N^N u+dwyta))) (13 CONJP (14 CONJ and) (16 NP-NOM (17 D^N +ta) (19 ADJS^N yldostan) (21 N^N preostas)))) (23 VBDS stoden) (25 PP (26 P +at) (28 NP-DAT (29 D^D +t+ara) (31 N^D dura))) ... (48 ID coaelive,+ALS_[Basil]:132.537))
And finally, in split hierarchical structures, we can measure the length of the 1st conjunct and the length of the 2nd conjunct, since each of these is a constituent; but we cannot measure the length of the entire subject. An example is given in (18).
(18) (4 NP-NOM (5 NP-NOM (6 NR^N Eubolus) (8 NP-NOM-PRN (9 D^N se) (11 N^N u+dwyta))) (13 CONJP (14 CONJ and) (16 NP-NOM (17 D^N +ta) (19 ADJS^N yldostan) (21 N^N preostas)))) (23 VBDS stoden) (25 PP (26 P +at) (28 NP-DAT (29 D^D +t+ara) (31 N^D dura))) ...
The output file can then be read into a spreadsheet, as shown below in Excel spreadsheet 1, with the header ("TEXT | TOKEN | (NON)FLAT | . . . ") added manually as the first row. When the data are sorted by the missing length, as shown in spreadsheet 2, then the formulae for the missing lengths can be inserted manually in the empty cells, as shown in spreadsheet 3.
Excel spreadsheet 1: coding strings read from the CS output file, with header added.
(II.2) a. generic interpretation (CODE: <NPTYPE:BSG-GNR>)
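The arithmetic that is entered manually in the spreadsheet can also be done programmatically once the coding strings are in tabular form; the sketch below uses pandas and assumes that the missing length is simply computed from the two that are available (total = len1 + len2, or len2 = total - len1), with '/' already mapped to a missing value.

import pandas as pd

# toy rows in the format text:token:flat:split:final:len1:len2:total, '/' mapped to None
rows = [
    ["aelive", 1254, "flat", "non.split", "final", 1, None, 3],
    ["aelive", 3418, "flat", "split", "non.final", 1, 2, None],
]
df = pd.DataFrame(rows, columns=["text", "token", "flat", "split", "final", "len1", "len2", "total"])

df["len2"] = df["len2"].fillna(df["total"] - df["len1"])    # non-split flat: 2nd conjunct = total - 1st
df["total"] = df["total"].fillna(df["len1"] + df["len2"])   # split: total = sum of the two conjuncts
print(df)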
a-b. In contrast, Aux V O order can be derived in two different ways: a) head-initial IP, head-final VP structure with postposition of O, as shown in (5)c-d; b) head-initial IP/VP structure (i.e. O is merged in post-verbal position), as shown in (5)e.
(5) a. O V Aux:
1: { auxv: (IP* idoms verb-finite) AND (IP* idoms verb-non-finite) AND (verb-finite precedes verb-non-finite)
     vaux: (IP* idoms verb-finite) AND (IP* idoms verb-non-finite) AND (verb-non-finite precedes verb-finite)
     1x: ELSE }
2: { ov: (IP* idoms verb-non-finite) AND (IP* idoms oblique-argument) AND (oblique-argument precedes verb-non-finite)
     vo: (IP* idoms verb-non-finite) AND (IP* idoms oblique-argument) AND (verb-non-finite precedes oblique-argument)
     2x: ELSE }
3: { \1: (IP* idoms oblique-argument) AND (oblique-argument idomswords 1)
     \2: (IP* idoms oblique-argument) AND (oblique-argument idomswords 2)
     \3: (IP* idoms oblique-argument) AND (oblique-argument idomswords 3)
     \4: (IP* idoms oblique-argument) AND (oblique-argument idomswords 4)
     \5: (IP* idoms oblique-argument) AND (oblique-argument idomswords 5)
     \6: (IP* idoms oblique-argument) AND (oblique-argument idomswords 6)
     \7: (IP* idoms oblique-argument) AND (oblique-argument idomswords 7)
     \8: (IP* idoms oblique-argument) AND (oblique-argument idomswords 8)
     \9: (IP* idoms oblique-argument) AND (oblique-argument idomswords 9)
     \10: (IP* idoms oblique-argument) AND (oblique-argument idomswords 10)
     \11: (IP* idoms oblique-argument) AND (oblique-argument idomswords> 10) }
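A minimal Python sketch of the same classification logic is given below. It operates on a toy clause representation with token positions; the dictionary layout is an assumption made purely for illustration and is not how CS represents tokens.

def code_clause(clause):
    # clause: {"finite": pos, "nonfinite": pos, "object": (pos, n_words)}; values may be None.
    fin, nonfin = clause.get("finite"), clause.get("nonfinite")
    obj = clause.get("object")

    if fin is not None and nonfin is not None:
        order = "auxv" if fin < nonfin else "vaux"
    else:
        order = "1x"                      # the 'elsewhere' case

    if nonfin is not None and obj is not None:
        ov_vo = "ov" if obj[0] < nonfin else "vo"
    else:
        ov_vo = "2x"

    length = str(obj[1]) if obj is not None else "0"
    return ":".join([order, ov_vo, length])

print(code_clause({"finite": 5, "nonfinite": 4, "object": (2, 2)}))  # -> vaux:ov:2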
-IP-SUB (8 NP-NOM (9 PRO^N he)) (11 NP-ACC (12 PRO$ his) (14 N^A lif)) (16 VBPS misfadige))) (18 , ,) (20 VBPS wanige) (22 NP-NOM (23 PRO$ his) (25 N^N wyr+dscipe)) (27 CP-ADV (28 P be) (30 D^D +dam) (32 C +te) (34 IP-SUB RMV:seo_d+ad_sy...)) (43 . .)) (45 ID cocanedgX,WCan_1.1.2_[Fowler]:68.82))
-MAT (2 CODING aelive:537:non.flat:non.split:non.final:3:4:7)
(
ID colacnu,Med_3_[Grattan-Singer]:85.1.454))(CP-ADV (P Gif)
NP-NOM (22 Q^N ealle) (24 D^N +ta) (26 N^N tunnan)))) (28 ID coaelive,+ALS_[Julian_and_Basilissa]:332.1143))
Output:
aelive:1254:flat:non.split:final:1:/:3
aelive:3418:flat:split:non.final:1:2:/
aelive:537:non.flat:non.split:non.final:3:4:7
aelive:1143:non.flat:split:final:2:4:/
Iulianus +te +t+at eal wyste . to martine . mid micelre blisse. Then said Julianus who that all knew . to Martianus . with great joy.
(1)
/~*
Preceding context:
+Ta cw+a+d Gang into +tinum godum
Go unto your gods
+te hi clypia+d to him.
You they summon to themselves.
(coaelive,+ALS_[Julian_and_Basilissa]:160.1036)
*~/
( (1 IP-MAT-SPE (2 NP (3 PRO +te))
(5 NP-NOM (6 PRO^N hi))
(8 VBPI clypia+d)
(10 PP (11 P to)
(13 NP-DAT-RFL (14 PRO^D him)))
(16 . .))
(18 ID coaelive,+ALS_[Julian_and_Basilissa]:160.1036))
/~*
~Min modor Claudia, me h+af+d gebroht min h+alend Crist to his halgena blysse,
my mother Claudia, me has brought my lord Christ to his holy bliss
(coaelive,+ALS_[Eugenia]:415.445)
Note low subject with transitive verb
*~/
( (1 IP-MAT-SPE (4 NP-NOM-VOC (5 PRO$^N Min) (7 N^N modor)
(9 NP-NOM-PRN (10 NR^N Claudia)))
(12 , ,)
(14 NP (15 PRO me))
(17 HVPI h+af+d)
(19 VBN gebroht)
(21 NP-NOM (22 PRO$^N min) (24 N^N h+alend)
(26 NP-NOM-PRN (27 NR^N Crist)))
(29 PP (30 P to)
(32 NP (33 NP-GEN (34 PRO$ his) (36 N^G halgena))
(38 N blysse)))
(40 . ,))
(42 ID coaelive,+ALS_[Eugenia]:415.445))
TABLE 2: Coding for CODE-NPTYPE node
TABLE 3: The distribution of bare singular (BSG) nominals and nominals with an (AN) in Middle English (Crisma and Pintzuk 2016: Table 1)
below, objects can appear both before and after the verb cluster in clauses with non-finite main verbs before finite auxiliaries (O V Aux and V Aux O), and before and after non-finite verbs in clauses with auxiliary - main verb order (Aux O V and Aux V O). In these examples, finite auxiliaries are underlined, non-finite main verbs are italicised, and objects are in bold face.

(4) a. O V Aux
    gif heo þaet bysmor forberan wolde
    if she that disgrace tolerate would
    'if she would tolerate that disgrace' (coaelive,+ALS_[Eugenia]:185.305)

    b. V Aux O
    þaet he friðian wolde þa leasan wudewan
    that he make-peace-with would the false widow
    'that he would make peace with the false widow' (coaelive,+ALS_[Eugenia]:209.315)

    c. Aux O V
    þurh þa heo sceal hyre scippend understandan
    through which it must its creator understand
    'through which it must understand its creator' (coaelive,+ALS_[Christmas]:157.125)

    d. Aux V O
    swa þaet heo bið forloren þam ecan life
    so that it is lost the eternal life
    'so that it is lost to the eternal life' (coaelive,+ALS_[Christmas]:144.117)

Table 4 shows the distribution of objects in Old English texts that have more than 100 clauses with finite auxiliaries, non-finite main verbs and non-pronominal objects.

TABLE 4: Frequency of VO order by verb order in texts with more than 100 tokens (T&P 2015, Table 1)

Text                        VAux N   %VO     AuxV N   %VO
Orosius                     66       4.5     47       31.9
Bede                        58       6.9     46       10.9
Boethius                    74       8.1     49       53.1
Cura Pastoralis             51       21.6    72       55.6
Catholic Homilies I         49       10.2    95       47.4
Catholic Homilies II        42       7.1     80       46.3
Lives of Saints             33       45.5    91       62.6
Gregory's Dialogues (C)     36       27.8    66       68.2
total                       409      13.9    546      49.5
with informational status added as the fourth factor.
(8)
/~*
+t+at we god don sceolon
that we good do should
(coaelive,+ALS_[Auguries]:245.3644)
*~/
(0 (1 IP-SUB-SPE (2 CODING vaux:ov:1:new)
(4 NP-NOM (5 PRO^N we))
(7 NP-ACC (8 N^A god))
(10 VB don)
(12 MDPI sceolon))
(14 ID coaelive,+ALS_[Auguries]:245.3644))
(9)
/~*
Gif +du +t+as ecan lifes ges+al+te habban wylt
If you the eternal life's happiness have will
(coaelive,+ALS_[Alban]:65.4038)
*~/
(0 (1 IP-SUB-SPE (2 CODING vaux:ov:4:new)
(4 NP-NOM (5 PRO^N +du))
(7 NP (8 NP-GEN (9 D^G +t+as) (11 ADJ^G ecan)
(13 N^G lifes))
(15 N ges+al+te))
(17 HV habban)
(19 MDPI wylt))
(21 ID coaelive,+ALS_[Alban]:65.4038))
(10)
/~*
+t+at se l+ace sceolde asceotan +t+at geswell
that the leech should lance the tumour
(coaelive,+ALS_[+Athelthryth]:61.4177)
*~/
(0 (1 IP-SUB (2 CODING auxv:vo:2:given)
(4 NP-NOM (5 D^N se) (7 N^N l+ace))
(9 MDD sceolde)
(11 VB asceotan)
(13 NP-ACC (14 D^A +t+at) (16 N^A geswell)))
(18 ID coaelive,+ALS_[+Athelthryth]:61.4177))
(11)
/~*
for +tan +te he sylf sceal sw+arran witu +trowian
because he himself shall heavier torments suffer
(coaelive,+ALS_[Vincent]:89.7849)
*~/
(0 (1 IP-SUB-SPE (2 CODING auxv:ov:2:new)
(4 NP-NOM (5 PRO^N he)
(7 ADJP-NOM (8 ADJ^N sylf)))
(10 MDPI sceal)
(12 NP-ACC (13 ADJR^A sw+arran) (15 N^A witu)) (17 VB +trowian)) (19 ID coaelive,+ALS_[Vincent]:89.7849))
TABLE 5: Results of multivariate analysis, effects in log odds and odds ratios. Shaded cells indicate non-significant results (from T&P 2015)
a. DP subject
   oðþaet þaet ad waes forburnen, and ealle þa tunnan
   until the pile was burned and all the casks
   'until the pile and all the casks were burned up'
   (coaelive,+ALS_[Julian_and_Basilissa]:332.1143)

b. DP object
   God sende ða fyr on merigen and fulne swefel him to
   God sent then fire in morning and foul brimstone him to
   'God then sent fire and foul brimstone to him in the morning'
   (coaelive,+ALS[Pr_Moses]:211.2976)

c. PP
   & on sorhge leofodon & on geswincum siþþan
   and in grief lived and in torment afterwards
   'and [they] lived afterwards in grief and torment'
   (colsigewZ,+ALet_4_[SigeweardZ]:117.49)
(15) non-split flat structure
     Þaer is wop and wanung
     There is weeping and wailing
     (coaelive,+ALS_[Sebastian]:77.1254)

(0 (1 IP-MAT-SPE (2 CODING aelive:1254:flat:non.split:final:1:/:3)
   (4 ADVP-LOC (5 ADV^L +T+ar))
   (7 BEPI is)
   (9 NP-NOM (10 N^N wop) (12 CONJ and) (14 N^N wanung)))
   (16 ID coaelive,+ALS_[Sebastian]:77.1254))

(16) split 'flat' structure
     ac foxunga waeron wunigende on him, and upahefednys, swilce healice fugelas,
     but foxlike-wiles were dwelling in him, and haughtiness, like soaring birds,
     'but foxlike wiles and haughtiness were dwelling in him, like soaring birds'
     (coaelive,+ALS_[Memory_of_Saints]:160.3418)
In some cases the procedures documented in the case studies have been simplified for clarification purposes.
Of course CS output, being straight ASCII text, can be read by word-processing software such as Word and TextEdit, so these manual additions can be searched for
In fact, while the research for this project was carried out, both the CODE nodes and their contents were inserted manually; it would have been less work if CS had been used to insert the nodes and to determine whether or not the relevant NP contained the lexeme an 'one'. In the procedure presented below, the CODE node is inserted by CS, while all other information is inserted manually.
There are additional constraints on the object; see T&P 2015.
The terms 'verb-finite', 'verb-non-finite', and 'oblique-argument' are defined in a condition file, available to CS when queries are run.
An elsewhere condition is frequently used to find unexpected conditions and data in the corpus.
Setting the CS command-file option 'remove_nodes' to true results in removing recursive structure by removing the subtrees of all nodes that are a) embedded within an instance of the 'node' setting and b) are of the same type as the 'node' setting. The subtrees are replaced by the label 'RMV' and the first three lexical words of the node.
GIF, THA and THONNE are defined in a condition file as all of the variant spellings of these lexemes.
The node $ROOT is used to refer to the highest node in the token.
(4 NP-NOM (5 NP-NOM (6 D^N +t+at) (8 N^N ad)) (10 CONJP *ICH*-1)) (12 BEDI w+as) (14 VBN forburnen) (16 , ,) (18 (21 NP-NOM (22 Q^N ealle) (24 D^N +ta) (26 N^N tunnan)))) (28 ID coaelive,+ALS_[Julian_and_Basilissa]:332.1143))

Once the tokens are coded in this way, CS can be used to output the coding strings to a file as shown below:

Query:
node: $ROOT
print_only: CODING*

Input:
(0 (1 IP-MAT-SPE (2 CODING aelive:1254:flat:non.split:final:1:/:3)
   (4 ADVP-LOC (5 ADV^L +T+ar))
   (7 BEPI is)
   (9 NP-NOM (10 N^N wop) (12 CONJ and) (14 N^N wanung)))
   (16 ID coaelive,+ALS_[Sebastian]:77.1254))
(0 (1 IP-MAT (2 CODING aelive:3418:flat:split:non.final:1:2:/)
   (4 CONJ ac)
   (6 NP-NOM (7 N^N foxunga) (9 CONJP *ICH*-1))
   (11 BEDI w+aron)
   (13 VAG wunigende)
   (15 PP (16 P on) (18 NP-DAT (19 PRO^D him)))
   (21 , ,)
   (23 CONJP-1 (24 CONJ and) (26 N^N upahefednys))
   (28 PP (29 P swilce) (31 CPX-CMP (32 IPX-SUB RMV:healice_fugelas...)))
   (34 . ,))
   (36 ID coaelive,+ALS_[Memory_of_Saints]:160.3418))

Excel spreadsheet 2: coding strings sorted by which length is missing
Excel spreadsheet 3: coding strings with formulae for missing length

Once the missing lengths have been calculated in Excel spreadsheet 3 above, it is straightforward to calculate the average length of the 1st conjunct, 2nd conjunct and complete subject by sorting on column D and then adding the lengths of the split / non-split subjects and dividing by the total number of these tokens. T&P 2017 found that in the Old English data, the average total length of the coordination was 6.88 for split subjects and 5.99 for non-split subjects, a statistically significant difference. Similarly, the average length of the 2nd conjunct was 5.06 for split subjects and 4.08 for non-split subjects; again, a statistically significant difference. But for the 1st conjunct, the average length of the split subjects was less than the average length of the non-split subjects: 1.82 vs. 1.92.

Conclusions
In this article I have described and illustrated five methods for adding information to corpora that have been parsed in the Penn treebank style. These methods may involve manual operations, or they may be executed by CS functions, or they may require a combination of manual and automated procedures. Some of the methods overlap: for example, method 2 (inserting CODE nodes) is functionally equivalent to the method of embedding information in coding strings. Which method is used depends almost entirely on the type of information to be added and the goals of the user.

Acknowledgements
This paper was presented at the international symposium entitled "Exploiting Parsed Corpora: Application in Research, Pedagogy, and Processing" held at the National Institute for Japanese Language and Linguistics (NINJAL) on December 9-10, 2017. I thank the organizers for inviting me to that symposium, the audience for questions and comments, two anonymous reviewers for helpful suggestions, and all of my co-authors for giving me the opportunity to participate in so many interesting research projects.

Appendix: Partial batch file for Old English data
Within a UNIX platform, batch files are run by typing 'source [filename]'. Blank lines and lines that start with the hash character (#) are ignored; in this way comments can be added to describe the procedures and searches. In the file below, the executable lines are shown in red.
Crisma, P. 2015. The 'indefinite article' from cardinal to operator to expletive. In C. Gianollo, A. Jäger, and D. Penka, eds., Language Change at the Syntax-Semantics Interface, 125-151. Berlin: De Gruyter Mouton.
Crisma, P. and S. Pintzuk. 2016. An from Old to Middle English. In S. Vikner, H. Jørgensen, and E. van Gelderen, eds., Let Us Have Articles Betwixt Us: Papers in Historical and Comparative Linguistics in Honour of Johanna L. Wood, 31-53. Aarhus: Department of English, School of Communication and Culture, Aarhus University.
Frascarelli, M. and R. Hinterhölzl. 2007. Types of topics in German and Italian. In K. Schwabe and S. Winkler, eds., On Information Structure, Meaning and Form: Generalizations across Languages, 87-116. Amsterdam: John Benjamins.
Frey, W. 2006a. Contrast and movement to the German prefield. In V. Molnár and S. Winkler, eds., The Architecture of Focus (Studies in Generative Grammar 82), 235-264. Berlin, New York: Mouton de Gruyter.
Frey, W. 2006b. How to get an object-es into the German prefield. In P. Brandt and E. Fuss, eds., Form, Structure, and Grammar - A Festschrift Presented to Günther Grewendorf on Occasion of His 60th Birthday, 159-185. Berlin: Akademie Verlag.
Haeberli, E. and S. Pintzuk. 2017. V3 in true V2 contexts in Old English. Presented at the workshop V3 and Resumptive Adverbials, Universiteit Gent, 5 October 2017.
Haeberli, E., S. Pintzuk, and A. Taylor. 2017. Object pronoun fronting and the nature of V2 in early English. Ms., University of Geneva, University of York.
Light, C. 2012. The Syntax and Pragmatics of Fronting in Germanic. Ph.D. thesis, University of Pennsylvania.
Taylor, A. and S. Pintzuk. 2015. Verb order, object position and information status in Old English. In T. Biberauer and G. Walkden, eds., Syntax over Time: Lexical, Morphological and Information-Structural Interactions. Oxford: OUP.
Taylor, A. and S. Pintzuk. 2017. Split coordination in Early English. In B. Los and P. de Haan, eds., Word order change in acquisition and language contact: Essays in honour of Ans van Kemenade, 155-183. Amsterdam: John Benjamins.
Walkden, G. 2017. Language contact and V3 in Germanic varieties new and old. JCGL 20:49-81.
# 8. code P of CP-ADV, main|conjunct, order of verb and subject.
||
11,296,398 | Learning from Relevant Documents in Large Scale Routing Retrieval | The normal practice of selecting relevant documents for training routing queries is to either use all relevants or the 'best n' of them after a (retrieval) ranking operation with respect to each query. Using all relevants can introduce noise and ambiguities in training because documents can be long with many irrelevant portions. Using only the 'best n' risks leaving out documents that do not resemble a query. Based on a method of segmenting documents into more uniform size subdocuments, a better approach is to use the top ranked subdocument of every relevant. An alternative selection strategy is based on document properties without ranking. We found experimentally that short relevant documents are the quality items for training. Beginning portions of longer relevants are also useful. Using both types provides a strategy that is effective and efficient. | [] | Learning from Relevant Documents in Large Scale Routing Retrieval
K L Kwok
Computer Science Department Queens College
City University of New York Flushing
NY 11367
L Grunfeld
Computer Science Department Queens College
City University of New York Flushing
NY 11367
Learning from Relevant Documents in Large Scale Routing Retrieval
The normal practice of selecting relevant documents for training routing queries is to either use all relevants or the 'best n' of them after a (retrieval) ranking operation with respect to each query. Using all relevants can introduce noise and ambiguities in training because documents can be long with many irrelevant portions. Using only the 'best n' risks leaving out documents that do not resemble a query. Based on a method of segmenting documents into more uniform size subdocuments, a better approach is to use the top ranked subdocument of every relevant. An alternative selection strategy is based on document properties without ranking. We found experimentally that short relevant documents are the quality items for training. Beginning portions of longer relevants are also useful. Using both types provides a strategy that is effective and efficient.
INTRODUCTION
In ad hoc Information Retrieval (IR) one employs a user-supplied free-text query as a clue to match against a textbase and rank documents for retrieval. In a routing environment, one has the additional option to consult a user need's history to obtain a set of previously judged documents. This set may be used with an automatic learning algorithm to help refine or augment the user-supplied free-text query, or even to define the query without the user description. We focus on employing the judged relevant set in this paper. (Judged nonrelevant documents have not been found to be useful in our model.) For this option, one needs to consider two separate processes:
(1) selecting the appropriate relevant documents or portions of them for training; and
(2) selecting the appropriate terms from these documents, expand the query and then effectively weighting these terms for the query.
It is well-known from TREC and other experiments [1,2,3,4,5,6,7,8] that process (2) can improve routing results substantially.
However, process (1) is normally not given much consideration. One either uses all the relevant documents, or employs the best n of them after ranking with respect to the query under consideration. However, over time in a large scale environment, hundreds and thousands of such relevant documents may accumulate for each user need. A strategy for deciding which relevant documents, and what parts of them, are to be employed for training needs to be considered. Would portions of relevant documents be sufficient? One reason for using a portion is that many documents can be long and may contain extraneous paragraphs and sections that are irrelevant. Using them for learning may contribute ambiguities during the term selection, query expansion and weighting processes. The problem is that current relevance information gathering is for whole documents only, and not at a more specific level such as which sentence or paragraph is relevant. This problem would be alleviated if users were diligent and indicated the components of a document that are actually relevant. However, this could be a burden that some users may want to avoid. It is therefore useful to have an algorithm to locate the most useful relevant components for training purposes. Another reason to use only portions of the relevants is consideration of efficiency: one would like to avoid processing long documents when most of the content is irrelevant, or to decrease the number of documents to be processed.
This investigation concerns exploring ways to effectively choose a subset of documents for training a given set of routing queries.
PIRCS RETRIEVAL SYSTEM
PIRCS (acronym for Probabilistic Indexing and Retrieval -Components-System) is a network-based system implementing a Bayesian decision approach to IR [9,10] and extended with the concept of document components [11] as shown in Fig.1. The network [12] has three layers of nodes representing the queries (Q), terms (T) and documents (D), with edges connecting adjacent layers in a bidirectional fashion. Retrieval operation consists of initializing a document node d_i to activation 1 and spreading it via the edge weights to terms t_k and on to a query node q_a under focus; q_a receives activation sum_k w_ik*w_ka, which is regarded as the query-focused retrieval status value (RSV) of d_i for ranking purposes. If activation originates from a query q_a and spreads towards d_i, we accumulate the document-focused RSV, sum_k w_ak*w_ki, that is based on statistics of term usage different from before. Combining the two can cooperatively provide more effective results.
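As a rough illustration of this spreading-activation scoring, the Python sketch below computes a query-focused and a document-focused RSV as sums of products of edge weights over shared terms, and adds them as one possible way of combining the two. The weight values and dictionary layout are invented for illustration and are not the actual PIRCS weighting formulas.

def query_focused_rsv(doc, query, w_dt, w_tq):
    # Activation starts at the document node, spreads to its terms,
    # then on to the query node: sum over terms of w_ik * w_ka.
    return sum(w * w_tq.get((term, query), 0.0)
               for (d, term), w in w_dt.items() if d == doc)

def document_focused_rsv(doc, query, w_qt, w_td):
    # Activation starts at the query node and spreads toward the document.
    return sum(w * w_td.get((term, doc), 0.0)
               for (q, term), w in w_qt.items() if q == query)

# Toy example with made-up weights.
w_dt = {("d1", "routing"): 0.4, ("d1", "retrieval"): 0.2}
w_tq = {("routing", "q51"): 0.5, ("retrieval", "q51"): 0.3}
w_qt = {("q51", "routing"): 0.6, ("q51", "retrieval"): 0.1}
w_td = {("routing", "d1"): 0.4, ("retrieval", "d1"): 0.2}

combined = (query_focused_rsv("d1", "q51", w_dt, w_tq)
            + document_focused_rsv("d1", "q51", w_qt, w_td))
print(combined)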
The edge weights of the net are first initialized with default values using global and local term usage statistics. Later they can learn from experience as illustrated in Fig.2.
In particular for routing experiments, the edges on the query-term side of the net are first created based on the routing queries and the terms of the training collection, and given default values called self-learn relevant weights. Relevant training documents are then linked in on the document-term side of the net. Knowing which document is relevant to which query allows the edge weights on the term-query side to adapt according to the term usage statistics of the relevant sets via a learning rule that is borrowed from artificial neural network studies. New edges can also grow between queries and terms using, for example, the K highest activated terms of the relevant documents, a process we call level K query expansion. Thus, test documents are ranked with respect to each routing query based on term usage statistics seen in the training collection and the relevant documents.
RELEVANT SUBDOCUMENT SELECTION STRATEGIES
Our approach to uneven full text collections [3,6,8] has been to segment long documents on the next paragraph boundary after a run of 360 words, giving more uniform length subdocument units. Documents with unrelated multiple stories with detectable separation markers are also segmented at the markers. This approach may impact favorably on: 1) precision because shorter, more local units may diminish chance occurrence of terms used in senses different from what is intended; 2) term weighting because unrealistic probability estimates of term weights may be avoided; 3) query training and expansion because long documents may have unrelated and irrelevant topics and concepts that can add noise to these operations; 4) retrieval output display because one can narrow down to the relevant portion of a long document for the user; and 5) general efficiency because of handling multiple, more uniform subdocuments instead of one long document. In the TREC collections, documents of thousands of words long are not uncommon; really long documents can be found, for example, among the Disk1 items. With respect to 3) query training and expansion, having many of these long documents in the training set would not only overwhelm our system but also lead to ambiguity and imprecision.
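The segmentation rule described above (cut at the next paragraph boundary after a run of 360 words) can be sketched as follows; the paragraph delimiter and the exact counting details are assumptions made for illustration, not the authors' implementation.

def segment_document(text, max_run=360):
    # Keep adding paragraphs until the running word count reaches max_run,
    # then cut at that paragraph boundary and start a new subdocument.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    subdocs, current, count = [], [], 0
    for para in paragraphs:
        current.append(para)
        count += len(para.split())
        if count >= max_run:
            subdocs.append("\n\n".join(current))
            current, count = [], 0
    if current:
        subdocs.append("\n\n".join(current))
    return subdocs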
Segmenting them into subdocuments may provide us with strategies for selecting the appropriate relevant portions of documents for learning. In the next subsections we consider document selection methods that can be broadly classified into three types: approaches based on document properties only, approaches based on ranking, and approaches based on combinations of both.
Subdocument Selection Based on Document Properties
These selection methods employ some heuristics on the properties of documents. Because they are based solely on a list of known relevant subdocuments, they can bring in concepts that are not explicitly stated or related to the query. These methods are also efficient because no ranking operation is required. A risk of this type of approach is that if the selection method is not well designed, many irrelevant portions of relevant documents may be included for training, which becomes counter-productive. Four methods have been experimented with, and the rationale for each choice is given below:
(a) Use all subdocuments for learning and query expansion. This is the usual approach in small collections. In a large scale environment it may have the drawback of ambiguity, imprecision and inefficiency discussed in Section 1, but will serve as a basis for comparison.
(b) Use only relevant documents that 'break' into a maximum of max subdocuments. This effectively means eliminating long documents for learning, and may diminish ambiguities that come with them. Short documents should be more concentrated and focused in their content, and can be considered as quality items for training. In particular, max=l means employing only 'nonbreak' documents. This was the strategy used in the original submitted results of our TREC-2 experiments. However, if the given relevants are mostly long, we may artificially diminish the available number of relevants used for training.
(c) Many articles including scholarly documents, certain newspaper and magazine items introduce their themes by stating the most important concepts and contents at the beginning of a document. They also summarize at the end. Therefore another approach is to use only the first or last subdocuments for training. Because of the way we segment documents so that some last subdocuments may be only a few words long, and the fact that some Wall Street Journal articles can have multiple unrelated stories within a document, we can only approximate our intent with these experiments.
(d) A method labelled fmax=2 uses the first subdocument of max=2 items. This strategy will use the quality items of (b) but also include the beginning portions of documents, as in (c), that are about twice as long, and would remedy the fact that there may not be sufficient quality items for training; a small sketch of these property-based selections is given below.
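A minimal sketch of the property-based selections (a)-(d), assuming each relevant document is already represented as a list of its subdocuments in reading order; the function names are ours, introduced only for illustration.

def select_all(doc_subdocs):                 # method (a): all subdocuments
    return [s for subs in doc_subdocs for s in subs]

def select_max(doc_subdocs, max_parts=1):    # method (b): only documents that break into <= max_parts units
    return [s for subs in doc_subdocs if len(subs) <= max_parts for s in subs]

def select_first(doc_subdocs):               # method (c): first subdocument of every relevant
    return [subs[0] for subs in doc_subdocs if subs]

def select_fmax(doc_subdocs, max_parts=2):   # method (d), e.g. fmax=2:
    # first subdocument of documents that break into at most max_parts units
    return [subs[0] for subs in doc_subdocs if subs and len(subs) <= max_parts]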
Subdocument Selection Based on a Ranking Operation
These methods do a subdocument ranking operation with the routing queries first so that we can select the best ranking units for training. By design, best ranking subdocuments have a high probability of being 'truly relevant' to their queries and have been proven to work in user relevance feedback. By ignoring poorer ranked units one hopes to suppress the noise portions of documents for training. A drawback in this case is that the best ranked subdocuments by default share many or high-weighted terms with a query, so that learning may become limited to enhancing the given free-text representation of the query. Subdocuments that are relevant but do not resemble the query (and therefore are not ranked early) will not be used. Performing a ranking is also time-consuming compared with the methods in Section 3.1. We have experimented with two methods as given below:
(e) Select the bestn best-ranked relevant subdocuments for training after ranking with respect to the given routing query representations. A variant of this method is to enhance/expand the query representations first by using method (b) max=1 documents before doing the ranking. Selecting these bestnx best-ranked subdocuments would include more 'truly relevant' ones than before because the ranking operation is more sophisticated and has been shown to achieve improved performance in our initial TREC2 experiments [8].
(f) Select the topn highest ranked subdocuments of every relevant. Since our purpose is to try to avoid the noise portions of relevant documents, these top ranked units should have a high probability of being mostly the signal portions, as in (e). Moreover, because all relevant documents are used, this method may include the advantage of Section 3.1 that units not resembling the query would also be included for training. A variant is, as before, to enhance/expand the queries first before ranking for the topnx highest ranked subdocuments for later training. Both ranking-based selections are sketched below.
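The ranking-based selections (e) and (f) can be sketched in the same style; here `ranked` is assumed to be a list of (score, query, subdoc_id, doc_id) tuples produced by a prior retrieval run, which is an assumption about data layout rather than the actual PIRCS output format.

def select_bestn(ranked, query, n=300):      # method (e): best n ranked subdocuments overall
    hits = sorted((r for r in ranked if r[1] == query), reverse=True)
    return [subdoc for _, _, subdoc, _ in hits[:n]]

def select_topn_per_doc(ranked, query, n=1): # method (f): top n subdocuments of every relevant
    best = {}
    for score, q, subdoc, doc in ranked:
        if q != query:
            continue
        best.setdefault(doc, []).append((score, subdoc))
    selected = []
    for doc, items in best.items():
        items.sort(reverse=True)
        selected.extend(subdoc for _, subdoc in items[:n])
    return selected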
Subdocument Selection Based on Combination of Methods
By combining training document sets obtained from the best of the previous two subsections, we hope to improve on the individual approaches alone. Our objective is to define a training set of subdocuments that are specific to and resemble a query representation, as well as including overall subdocuments that are relevant. The following two methods have been tried:
(g) Merge documents obtained by method (e), bestn/bestnx retrieved, with those of method (b) using max=1. The rationale is that method (e) selects the best of those resembling the query, and method (b) uses short quality relevant documents in general.
(h) Merge documents obtained by method (e), bestn/bestnx retrieved, with those of method (f), topn/topnx=1 units of every document. This is similar to (g), except that instead of using short documents only, we now incorporate the best portions of every relevant.
EXPERIMENTS AND DISCUSSION OF RESULTS
For testing our various strategies of subdocument selection for training, we performed experiments exactly as those of TREC2 routing: Topics 51-100 retrieving on the 1 GB of documents on Disk3 of the TREC collection.
Topics 51-100 have relevant document information from Disk 1&2 totaling 2 GB. There are altogether 16400 relevant documents averaging out to 328 per query.
During our processing however, a small percentage of the relevants are lost, so that we in effect use only 16114 relevants that get segmented into 57751 subdocuments. This averages to about 1155 units per query. For the ranking strategies of Section 3.2, we have created a separate subcollection consisting only of the 57751 training relevants but using Disk 1&2 term statistics, and ranking for the first 2000 of each query is done. Various subsets of these ranked training documents are then used for weight learning for the query-term side of the network, with term expansion level K=40 terms as the standard. For some cases we also did term expansion of K=80. After freezing these trained edge weights, Disk3 subdocuments are linked in and routing retrievals are done. Results using the 'total number of relevants retrieved' (at 1000 retrieved cutoff) and 'average precision over all recall points' as measures of effectiveness, as well as the number of training units used, are summarized in Table 1. Some of the detailed precision-recall values are given in Table 2. The overall conclusion from these results is that for this TREC-2 routing experiment, where a large number of relevant documents of different sizes and quality is available, it is possible to define good subsets of the documents or portions of them for training.
From Table 1 and using the average precision (av-p) measure for comparison, it appears that the simple strategy (b) of just using short, 'nonbreak' max=1 relevant documents gives one of the best results, achieving av-p at the K=40 expansion level of 0.4050, about 6.7% better than the 0.3795 of our baseline strategy (a) which uses all the relevant units. Moreover it is very efficient, requiring only 5235 units, which is less than 10% of the total 57751 relevant subdocuments available and about 1/3 of the 16114 documents. Using longer documents that break into two and six units (max=2 and 6) successively leads to slightly worse results as well as more work (15103 and 32312 subdocuments). Thus, it appears that longer documents carry with them more noise, as discussed in the Introduction. Just using the first subdocument of every relevant (c) performs quite well, with av-p of 0.4001. Since the FR collection has many documents of thousands of words long, it is difficult to imagine that signal parts are all in the first subdocuments. A casual scan however shows that some FR documents, such as FR88107-0009 and FR88119-0018, carry a summary at the beginning. Moreover, FR documents constitute only a minority of the training relevants. Thus the first subdocuments apparently carry sufficient signals of documents for training in this experiment. Last subdocuments (results not shown) do not perform as well as first. One of the best results is fmax=2, achieving av-p of 0.4047, as good as the 'nonbreak' max=1 method, using 10,169 training units.
Surprisingly, using the best ranking bestnx=30, 100, 300, 2000 subdocuments (e) gives 0.3790, 0.3993, 0.3999 and 0.3877 average precision respectively, peaking around bestnx=300, but does not give better performance than the (b,c,d) strategies. For bestnx=30, employing only 1500 subdocuments apparently is not sufficient, and training may be limited to subdocuments resembling the original query. bestnx=100 uses 4945 units, similar to max=1, but with av-p about 1.5% worse, while bestnx=300 uses 13712, which is slightly less than first and performs about the same. In general, bestn results (not shown) are slightly less than those of bestnx, as expected. Using the topnx=1 subdocument of every relevant (f) achieves 0.4082, the best numerically. In (f) we have less than 16114 units for training because we only rank the top 2000 for each query, and so some subdocuments ranking below 2000 are not accounted for. It appears that including other overall relevants can help improve performance.
Strategies (g,h) of combining sets of subdocuments do not seem to lead to more improved results.
Using the relevants retrieved (r-r) as a measure, it appears that larger training set sizes, between 10000 and 16000, are needed to achieve good recall.
For example, max=1 and bestnx=100 employ about 5000 units for training and have r-r of 7646 and 7605. bestnx=300, max=2, first and topnx=1 have r-r values of 7703, 7783, 7805 and 7833, and training set sizes of 13712, 15103, 16114 and 15702. fmax=2 achieves a good r-r of 7827 with a training size of 10169. fmax=3 (results not shown) is inferior. For this collection, the best strategies of selecting subdocuments for training appear to be either fmax=2, with av-p/r-r values of 0.4047/7827, or topnx=1, with 0.4082/7833. fmax=2 has the advantage that a ranking is not done and the training set is smaller. The detailed recall-precision values in Table 2 also show that fmax=2 gives better precision at the low recall region. It appears that using document properties to select training documents in this routing experiment is both effective and efficient.
CONCLUSION
We explore several strategies of selecting relevant documents or portions of them for query training in the TREC-2 routing retrieval experiment. It confirms that using all relevants for training is not a good strategy because irrelevant noisy portions of documents would be included. Short relevants are the quality documents. Simple methods such as using only short documents, together with beginning portions of longer documents for training performs well and is also efficient. For this TREC2 routing, an average of about 200-300 subdocuments per query appears adequate, about 1/5-1/4 of all known relevant subdocuments available in this experiment. Selecting the bestn ranked relevants (as in relevance feedback) is not as effective as just selecting the top ranked unit of every document. This investigation also shows that breaking documents into subdocuments is useful for query training.
After learning, these query-term edges and weights are frozen, the training documents removed, and new unseen testing documents are then linked in for simulation of the routing operation.
TABLE 1: Relevants Retrieved (r-r), Average Precision Values (av-p) and Number of Training Subdocuments for Various Subdocument Selection Strategies

                                ------- Expansion Level K -------
                                40                    80
Strategy                        r-r/av-p    % inc     r-r/av-p    %         No. of Training Subdocs
a) all relev subdocs            7611/.3795  baseline  7563/.3746  baseline  57751
b) max=1
   max=2
   max=6
c) first
d) fmax=2
e) Best Ranked
   bestnx=30
   bestnx=100
   bestnx=300
   bestnx=2000
f) Top subdoc
   topn=1
   topnx=1
g) Merge max=1,bestn: mbestn100
h) Merge topn=1,bestn: tbestn100

TABLE 2: Average Precision Values at Interpolated Recall Points and at Number of Documents Retrieved for Six Subdocument Selection Strategies (Expansion Level K=40)

Interpolated Recall - Precision Averages:
Recall   all     max=1   first   fmax=2  bestnx=300  topnx=1
0.0      .8311   .8475   .8362   .8467   .8273       .8404
0.1      .6464   .6751   .6779   .6839   .6664       .6808
0.2      .5755   .6116   .5978   .6132   .6000       .6086
0.3      .5035   .5413   .5285   .5312   .5240       .5429
0.4      .4469   .4774   .4734   .4786   .4719       .4810
0.5      .3951   .4288   .4245   .4245   .4206       .4259
0.6      .3286   .3681   .3564   .3565   .3641       .3633
0.7      .2706   .2880   .2833   .2880   .2830       .2904
0.8      .2057   .1937   .2085   .2099   .2095       .2182
0.9      .1079   .1144   .1156   .1181   .1159       .1183
1.0      .0115   .0107   .0120   .0135   .0113       .0123

Average precision (non-interpolated) over all rel docs
         .3795   .4050   .4001   .4047   .3999       .4082

Precision:
At 5 docs     .6480   .7160   .6920   .7120   .6920   .6920
At 10 docs    .6460   .6860   .6940   .6968   .6820   .6960
At 20 docs    .6100   .6540   .6540   .6670   .6520   .6520
At 100 docs   .4706   .4930   .4854   .4890   .4970   .4926
At 500 docs   .2439   .2490   .2532   .2524   .2493   .2544
At 1000 docs  .1522   .1529   .1561   .1565   .1541   .1567

R-Precision (precision after R (=num rel for a query) docs retrieved):
Exact         .4036   .4283   .4218   .4228   .4201   .4274
ACKNOWLEDGMENT
This work is partially supported by a grant from ARPA via the TREC program.
Salton, G. & Buckley, C. Improving retrieval performance by relevance feedback. J. of American Society for Information Science. 41 (1990), 288-297.
Harman, D. Relevance feedback revisited. In: Proc. ACM SIGIR 15th Ann. Intl. Conf. on R&D in IR. Belkin, N.J., Ingwersen, P. & Pejtersen, A.M. (Eds.) ACM, NY. (1992), 1-10.
Kwok, K.L., Papadopolous, L. & Kwan, Y.Y. Retrieval experiments with a large collection using PIRCS. In: NIST Special Publication 500-267. Gaithersburg, M.D. 20899. (March 1993), 153-172.
Haines, D. & Croft, W.B. Relevance feedback and inference networks. In: Proc. ACM SIGIR 16th Ann. Intl. Conf. on R&D in IR. Korfhage, R., Rasmussen, E. & Willet, P. (Eds.) ACM, NY. (1993), 1-11.
Harman, D. (Ed.) The First Text REtrieval Conference (TREC-1). National Institute of Standards and Technology Special Publication 500-207, March 1993.
Kwok, K.L. A network approach to probabilistic information retrieval. Submitted for publication.
Harman, D. (Ed.) The Second Text REtrieval Conference (TREC-2). National Institute of Standards and Technology Special Publication, to be published.
Kwok, K.L. & Grunfeld, L. TREC-2 retrieval experiments using PIRCS. In: NIST Special Publication, to be published.
Robertson, S.E. & Sparck Jones, K. Relevance weighting of search terms. J. of American Society for Information Science. 27 (1976), 129-146.
van Rijsbergen, C.J. Information Retrieval, Second Edition. Butterworths, London. (1979).
Kwok, K.L. Experiments with a component theory of probabilistic information retrieval based on single terms as document components. ACM Trans. on Information Systems. 8 (1990), 363-386.
Kwok, K.L. A neural network for probabilistic information retrieval. In: Proc. ACM SIGIR 12th Ann. Intl. Conf. on R&D in IR. Belkin, N.J. & van Rijsbergen, C.J. (Eds.) ACM, NY. (1989), 21-30.
3,518,648 | Toward an Underspecifiable Corpus Annotation Scheme | The Wall Street Journal corpora provided for the Workshop on Cross-Framework and Cross-Domain Parser Evaluation Shared Task are investigated in order to see how the structures that are difficult for an annotator of dependency structure are encoded in the different schemes. Non-trivial differences among the schemes are found. The paper also investigates the possibility of merging the information encoded in the different corpora. | [
15779080,
2486369,
3102322
] | Toward an Underspecifiable Corpus Annotation Scheme
Manchester, August 2008. Copyright 2008.
Yuka Tateisi yucca@cc.kogakuin.ac.jp
Department of Informatics
Kogakuin University
1-24-2 Nishi-shinjuku, Shinjuku-ku, 163-8677 Tokyo, Japan
Toward an Underspecifiable Corpus Annotation Scheme
Coling
the workshop on Cross-Framework and Cross-Domain Parser Evaluation, Manchester, August 2008
The Wall Street Journal corpora provided for the Workshop on Cross-Framework and Cross-Domain Parser Evaluation Shared Task are investigated in order to see how the structures that are difficult for an annotator of dependency structure are encoded in the different schemes. Non-trivial differences among the schemes are found. The paper also investigates the possibility of merging the information encoded in the different corpora.
Background
This paper takes a look at several annotation schemes related to dependency parsing, from the viewpoint of a corpus annotator. The dependency structure is becoming a common criterion for evaluating parsers in biomedical text mining (Clegg and Shepherd, 2007;Pyssalo et al., 2007a), since their purpose in using parsers are to extract predicate-argument relations, which are easier to access from dependency than constituency structure. One obstacle in applying dependency-based evaluation schemes to parsers for biomedical texts is the lack of a manually annotated corpus that serves as a gold-standard. Aforementioned evaluation works used corpora automatically converted to the Stanford dependency scheme (de Marneffe et al., 2006) from gold-standard phrase structure trees in the Penn Treebank (PTB) (Marcus et al., 1993) format. However, the existence of errors in the automatic conversion procedure, which are not well-documented, makes the suitability of the resulting corpus for parser evaluation questionable, especially in comparing PTB-based parsers and parsers based on other formalisms such as CCG and HPSG (Miyao et al., 2007). To overcome the obstacle, we have manually created a dependency-annotated corpus in the biomedical field using the Rasp Grammatical Relations (Briscoe 2006) scheme (Tateisi et al., 2008). In the annotation process, we encountered linguistic phenomena for which it was difficult to decide the appropriate relations to annotate, and that motivated the investigation of the sample corpora provided for the Workshop on Cross-Framework and Cross-Domain Parser Evaluation Shared Task 1 , in which the same set of sentences taken from the Wall Street Journal section from Penn Treebank is annotated with different schemes.
The process of corpus annotation is assigning a label from a predefined set to a substring of the text. One of the major problems in the process is the annotator's lack of confidence in deciding which label should be assigned to a particular substring of the text, which results in inconsistency of annotation. The lack of confidence arises for several reasons, but typical situations can be classified into two types:
1) The annotator can think of two or more ways to annotate the text, and cannot decide which is the best way. In this case, the annotation scheme has more information than the annotator has. For example, the annotation guideline of Penn Treebank (Bies et al. 1995) lists alternatives for annotating structures involving null constituents that exist in the Treebank.
2) The annotator wants to annotate certain information that cannot be expressed properly with the current scheme. That is to say, the annotator has more information than the scheme can express.
1 http://www-tsujii.is.s.u-tokyo.ac.jp/pe08-st/
For example, Tateisi et al. (2000) report that, in the early version of the GENIA corpus, some cases of inter-annotator discrepancy occurred because the class of names to be assigned (e.g. PROTEIN) was too coarse-grained for annotators, and the result led to a finer-grained classification (e.g.
PROTEIN-FAMILY, PROTEIN-COMPLEX) of names in the published version of GENIA (Kim et al., 2003).
In practice, the corpus designers deal with these problems by deciding how to annotate the questionable cases, and describing them in the guidelines, often on an example-by-example basis. Still, these cases are sources of errors when the decision described in the guideline is against the intuition of the annotator.
If the scheme allowed the annotator to annotate the exact amount of information that (s)he has, (s)he would not be uncertain about how to annotate the information. However, because the information that an annotator has varies from annotator to annotator, it is not practical to define a scheme for each annotator. Moreover, the resulting corpus would not be very useful, for a corpus should describe a "common standard" that is agreed on by (almost) everyone.
One solution would be to design a scheme that is as information-rich as possible, in such a way that it can be "underspecified" to the amount of information that an annotator has. When the corpus is published, the annotation can be reduced to the "most-underspecified" level to ensure the uniformity and consistency of annotations, that is, to the level that all the annotators involved can agree on (or the corpus can be published as-is with underspecification left to the user). For example, annotators may disagree about whether the POS of "human" in the phrase "human annotator" is an NN (common noun) or a JJ (adjective), but everyone would agree that it is not, for example, a VBN (past participle of a verb). In that case, the word can be annotated with an underspecified label like "NN or JJ". The Penn Treebank POS corpus (Santorini, 1990) allows such underspecification (NN|JJ). In dependency structure annotation, Grammatical Relations (Briscoe 2006), for example, allows underspecification of dependency types by defining a class hierarchy of dependency types. The underspecified annotation is obviously better than discarding the annotation because of inconsistency, for the underspecified annotation has much more information than nothing at all, and can assure consistency over the entire corpus.
Defining an underspecification has another use. There are corpora in similar but different schemes for a given linguistic aspect (e.g. syntactic structure), each based on a formalism suited for the application that its developers have in mind. That makes a corpus difficult to use outside the group involved in its development. In addition to the difficulty of using the resources across research groups, the existence of different formalisms is an obstacle for users of NLP systems who want to compare and evaluate the systems. One scheme may acquire de facto status, as is the case with the Penn Treebank, but it is still unsuitable for applications that require information not encoded in the formalism, or for comparing systems based on widely different formalisms (e.g., CCG or HPSG in the case of syntactic parsing).
If some common aspects can be extracted from the schemes based on different formalisms, a corpus annotated with the common scheme can be used as a standard for (coarse-grained) evaluation and comparison between systems based on different formalisms. If an information-rich scheme can be underspecified to a "common" level, the rich information in the corpus can be used locally for system development, while the "common" information can be used by people outside the developers' group. The key issue for establishing the "common" level would be to provide a systematic way to underspecify the individual schemes.
In this paper, the schemes of the dependency corpora provided for the Shared Task are compared on the problematic linguistic phenomena encountered in annotating biomedical abstracts, in order to investigate the possibility of establishing such a "common, underspecified" level of annotation. The compared schemes are mainly CONLL shared task structures (CONLL), Rasp Grammatical Relations (GR), PARC 700 dependency structures (PARC) and Stanford dependency structures (Stanford; de Marneffe et al. 2006), with partial reference to UTokyo HPSG Treebank predicate-argument structures (HPSG; Miyao 2006) and CCGBank predicate-argument structures (CCG; Hockenmaier and Steedman 2005).
Underspecification
In dependency annotation, two types of information are annotated to sentences.
• Dependency structure: what is dependent on what
• Dependency type: how the dependent depends on the head
For the latter information, schemes like GR and Stanford incorporate a hierarchy of dependency types and allow systematic underspecification, but that does not totally solve the problem. A case of GR is addressed later. If a type hierarchy over different schemes can be established, it helps cross-scheme comparison. For the former information, in cases where some information in one corpus is omitted in another (e.g. head percolation), the corpus with less information can be considered an underspecification of the other; but when a different structure is assigned, no mechanism to form the underspecified structure has so far been proposed. In the following section, the sample corpora are investigated in order to find the differences in annotation, especially the structural differences.
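As an illustration of type underspecification over a hierarchy, the sketch below maps two annotators' labels to their most specific common ancestor in a small, invented hierarchy; the labels loosely follow GR-style names, but the tree itself is our own assumption, not the GR inventory.

# Toy dependency-type hierarchy: child -> parent.
HIERARCHY = {
    "ncsubj": "subj", "csubj": "subj", "subj": "arg",
    "dobj": "obj", "iobj": "obj", "obj": "arg",
    "arg": "dep", "mod": "dep",
}

def ancestors(label):
    chain = [label]
    while label in HIERARCHY:
        label = HIERARCHY[label]
        chain.append(label)
    return chain

def underspecify(label_a, label_b):
    # Most specific label that both annotations agree on.
    chain_b = ancestors(label_b)
    for a in ancestors(label_a):
        if a in chain_b:
            return a
    return "dep"

print(underspecify("ncsubj", "csubj"))  # -> "subj"
print(underspecify("ncsubj", "dobj"))   # -> "arg"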
How are problematic structures encoded in the sample corpora?
The Wall Street Journal corpora provided for the shared task are investigated in order to look for the structures that the annotator of our dependency corpus commented on as difficult, and to see how they are encoded in the different schemes. The subsections describe the non-trivial differences among the annotation schemes that were found. The subsections also discuss underspecifiable annotation where possible.
Multi-word Terms
The structure inside multi-word terms, or more broadly of noun-noun sequences in general, has been left unannotated in the Penn Treebank, and the later schemes follow this decision. Here, underspecification is realized in practice. In dependency schemes where dependency is encoded by a set of binary relations, the last element of the term is regarded as the head, and the rest of the elements of the term are regarded as dependent on the last. In the PARC annotation, proper names like "Los Angeles" and "Alex de Castro" are treated as one token. However, there are noun sequences in which the head is clearly not the last token. For example, there are a lot of names in the biomedical field where a subtype is specified (e.g. Human Immunodeficiency Virus Type I). If the sequence is considered as a name (of a type of virus in this example), it may be reasonable to assign a flat structure to it, wherever the head is. On the other hand, a flat structure is not adequate for analyzing a structure like "Human Immunodeficiency Virus Type I and Type II". Thus it is conventional to assign to a noun phrase "a flat structure unless coordination is involved" in the biomedical corpora, e.g., GENIA and Bioinfer (Pyssalo et al., 2007b). However, adopting this convention exposes the corpus to the risk that instances of the same name are analyzed differently depending on context. A possible solution is to annotate a certain noun sequence as a term with a non-significant internal structure, and, where needed, the internal structure may be annotated independently of the outside structure. The PARC annotation can be regarded as doing this kind of annotation by treating a multi-word term as one token and totally ignoring the internal structure. Going a step further, using IDs for the term and its sub-terms, the internal structure of a term can be annotated, and the whole term or a subcomponent can be used outside, retaining the information that the sequence refers to parts of the same name. For example, Figure 1 is a PARC-like annotation using name-IDs, where id(ID, name) is for assigning an ID to a name or a part of a name, and name0, name1, name2, name3, and name4 are IDs for "Human Immunodeficiency Virus Type I", "Human Immunodeficiency Virus", "Type I", "Type II", and "Human Immunodeficiency Virus Type II" respectively, and concat(a, b, c) means that strings b and c are concatenated to make string a.
Figure 1 (fragment): adjunct(name1, coord0)
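Such an annotation can be written down, for instance, as a set of facts over name IDs. The relation names below (id, concat, conj, adjunct) follow the description in the text, but the completed fact list is only our guess at what such a figure could contain, not a reproduction of the original figure.

facts = [
    ("id", "name0", "Human Immunodeficiency Virus Type I"),
    ("id", "name1", "Human Immunodeficiency Virus"),
    ("id", "name2", "Type I"),
    ("id", "name3", "Type II"),
    ("id", "name4", "Human Immunodeficiency Virus Type II"),
    # Internal structure of the two full names.
    ("concat", "name0", "name1", "name2"),
    ("concat", "name4", "name1", "name3"),
    # Coordination of the subparts "Type I" and "Type II".
    ("conj", "coord0", "name2"),
    ("conj", "coord0", "name3"),
    ("adjunct", "name1", "coord0"),
]

def names_coordinated(facts):
    # Recover the full names whose subparts are coordinated, via the concat facts.
    conj_parts = {f[2] for f in facts if f[0] == "conj"}
    return [f[1] for f in facts if f[0] == "concat" and f[3] in conj_parts]

print(names_coordinated(facts))  # -> ['name0', 'name4']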
Coordination
The example above suggests that the coordination is a problematic structure. In our experience, coordination structures, especially ones with ellipsis, were a major source of annotation inconsistency. In fact, there are significant differences in the annotation of coordination in the sample corpora, as shown in the following subsections.
What is the head?
Among the schemes used in the sample corpora, CCG does not explicitly annotate coordination but encodes it as if the coordinated constituents existed independently. (Three kinds of files for annotating sentence structures are provided in the original CCGbank corpus: the human-readable corpus files, the machine-readable derivation files, and the predicate-argument structure files. The coordinators are marked in the human-readable corpus files, but not in the predicate-argument structure files from which the sample corpus for the shared task was derived.) The remaining schemes can be divided according to how they determine the head of a coordination.
• GR, PARC, and HPSG make the coordinator (and, etc.) the head
• CONLL and Stanford make the preceding conjunct the head
For example, in the case of "makes and distributes", the former group encodes the relation as two binary relations in which "and" is the head (of both) and "makes" and "distributes" are its dependents. In the latter group, CONLL encodes the coordination as two binary relations: one in which "makes" is the head and "and" is the dependent, and another in which "and" is the head and "distributes" is the dependent. In the Stanford scheme, the coordinator is encoded in the type of the relation (conj_and), with "makes" as the head and "distributes" as the dependent. As for the CCG scheme, the information that the verbs are coordinated by "and" is omitted entirely. This difference in head policy leads to a structural discrepancy for which underspecification does not seem easy.
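For concreteness, the head policies for "makes and distributes" can be written as sets of (type, head, dependent) triples; the relation labels below are simplified placeholders rather than the exact inventories of the schemes.

# Coordinator as head (GR / PARC / HPSG style):
coordinator_head = [
    ("conj", "and", "makes"),
    ("conj", "and", "distributes"),
]

# Preceding conjunct as head (CONLL style):
first_conjunct_head = [
    ("coord", "makes", "and"),        # "and" depends on "makes"
    ("conj", "and", "distributes"),   # "distributes" depends on "and"
]

# Stanford style: the coordinator is folded into the relation type.
stanford_style = [
    ("conj_and", "makes", "distributes"),
]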
Distribution of the dependents
Another difference is in the treatment of dependents of the coordinated head. For example, the first sentence of the corpus can be simplified to "Bell makes and distributes products". The subject and object of the two verbs are shared: "Bell" is the subject of "makes" and "distributes", and "products" is their direct object. The subject is treated as dependent on the coordinator in GR, dependent on the coordinator as well as on both verbs in PARC, dependent on both verbs in HPSG and Stanford (and CCG), and dependent on "makes" in CONLL. As for the object, "products" is treated as dependent on the coordinator in GR and PARC, dependent on both verbs in HPSG (and CCG), and dependent on "makes" in CONLL and Stanford. The Stanford scheme thus treats subject and object differently: the subject is distributed among the coordinated verbs, while the object is treated as dependent on the first verb only.
A different phenomenon was observed for noun modifiers. For example, semantically, "electronic, computer and building products" in the first sentence should be read as "electronic products and computer products and building products" not as "products that have electronic and computer and building nature". That is, the coordination should be read distributively. The distinction between distributive and nondistributive reading is necessary for applications such as information extraction. For example, in the biomedical text, it must be determined whether "CD4+ and CD8+ T cells" denotes "T cells expressing CD4 and T cells expressing CD8" or "T cells expressing both CD4 and CD8".
Coordinated noun modifiers are treated differently across the corpora. The coordinated adjectives are dependent on the noun (as in the nondistributive reading) in GR, CONLL, and PARC, while the adjectives are treated as separately dependent on the noun in Stanford and HPSG (and CCG). In the PARC scheme, there is a relation named coord_level denoting the syntactic type of the coordinated constituents. For example, in the annotation of the first sentence of the sample corpus ("...electronic, computer and building products"), coord_level(coord~19, AP) denotes that the coordinated constituents are APs, since syntactically it is adjectives that are coordinated. It seems that distributive and nondistributive readings (the semantics) are not distinguished.
It can be said that GR and the like annotate the syntactic structure of the dependency, while HPSG and the like annotate a more semantic structure. Ideally, a mechanism for encoding the syntactic and semantic structure of the coordination separately should be provided, with an option to decide whether one of them is left unannotated.
For example, the second example shown in Figure 1 ("Human Immunodeficiency Virus Type I and Type II") can be viewed syntactically as a coordination of two modifiers ("Type I" and "Type II"), and semantically as a coordination of two names ("Human Immunodeficiency Virus Type I" and "Human Immunodeficiency Virus Type II"). Taking this into consideration, the structure shown in Figure 1 can be enhanced into the one shown in Figure 2, where conj_sem represents the semantic value of the coordination, and coord0_S denotes that the dependencies are related semantically to coord0. Providing two relations that work like coord_level in the PARC scheme, one for the syntactic level and the other for the semantic level, may be another solution: if a parallel of coord_level, say coord_level_sem, can be used in addition to encode the semantically coordinated constituents, the distributive reading of "electronic, computer and building products" mentioned above may be expressed by coord_level_sem(coord~19, NP), indicating that it is noun phrases with a shared head that are coordinated.
Coordinator
Two ways of expressing the coordination between three items are found in the corpora: retaining the surface form or not.
cotton, soybeans and rice
eggs and butter and milk
For example, the structures for the two phrases above are different in the CONLL corpus, while the other corpora ignore the fact that the former uses a comma where the latter uses "and". That is, the CONLL scheme encodes the surface structure, while the others encode the deeper structure, for semantically the comma in the former example means "and". The difference can be recovered by retrieving the surface form of the sentences in the corpora that ignore the surface structure. However, encoding both the surface form and the deeper structure would help capture maximal information and would make it easier to compare structures across different annotations.
Prepositional phrases
Another major source of inconsistency involved prepositional phrases. The PP-attachment problem (where the PP should be attached) is a problem traditionally addressed in parsing, but in the case of dependency, the type of attachment also becomes a problem.
Where is the head?
The focus of the PP-attachment problem is the head to which the PP should attach. In some cases, the correct attachment site can be determined from the broader context in which the problematic sentence appears, and in other cases the attachment ambiguity is "benign" in the sense that there is little or no difference in meaning between the attachment sites. However, in highly specialized domains like biomedical papers, annotators of grammatical structures do not always have full access to the meaning, and occasionally it is not easy to decide where to attach the PP, whether the ambiguity is benign, and so on. Yet it is not the case that the annotator of a problematic sentence has no information at all: the annotator can usually choose from the few candidates selected by his or her (partial) understanding of the sentence, rather than from all possible sites to which the PP could syntactically attach. None of the schemes provided for the task allows listing the candidate phrases to which a PP can attach (as is allowed in the Penn Treebank POS corpus). As with POS, a scheme for annotating ambiguous attachment should be incorporated. This can be realized more easily for dependency annotation, where the structure of a sentence is decomposed into a list of local dependencies, than for treebank annotation, where the structure is annotated as a whole. Simply listing the possible dependencies, with a flag for ambiguity, should work for this purpose. Preferably, the flag also encodes whether the annotator thinks the ambiguity is benign, i.e. believes that the ambiguity does not affect the semantics significantly.
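One possible realisation of such a listing is sketched below; the field names and the example sentence are invented for illustration.

```python
# A hypothetical record for an ambiguous PP attachment.  The annotator lists
# only the candidate heads (s)he finds plausible and flags whether the
# ambiguity is believed to be benign; all field names are invented.
pp_attachment = {
    "sentence": "She saw the man with the telescope",
    "pp": "with the telescope",
    "candidate_heads": ["saw", "man"],   # sites the annotator cannot decide between
    "ambiguous": True,
    "benign": False,                     # the choice is believed to affect the meaning
}
```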
Complement or Modifier
In dependency annotation, the annotator must decide whether the PP dependent of a verb or a verbal noun is an obligatory complement or an optional modifier. External resources (e.g. a dictionary) can be used for common verbs, but for technical verbs such resources are not yet widely available, and collecting and investigating a large set of actual uses of the verbal is not an easy task. Dependency types for encoding PP-attachment vary among the schemes. Schemes such as CONLL and Stanford do not distinguish between complements and modifiers; they simply annotate the relation that the phrase "attaches as a PP". HPSG can in theory distinguish complements and modifiers, but in the actual corpus all PPs appear as modifiers.5 GR does not mark the type of a non-clausal modifying phrase but does distinguish PP-complements (iobj), nominal complements (dobj) and modifiers. PARC makes finer distinctions of attachment type (e.g. obj, obl, adjunct).
If the inconsistency problem involving the type of PP attachment lies in the distinction between complements and modifiers, the treatment in CONLL and Stanford looks better than that in GR and PARC. However, an application may require the distinction (a candidate for such an application is relation extraction using predicate-argument structure), so schemes that cannot annotate the distinction at all are not suitable for such applications. On the other hand, GR does have type-underspecification (Briscoe, 2006), but the argument (complement)-modifier distinction is at the top level of the hierarchy, and underspecification cannot be done without discarding the information that the dependent is a PP.
A dependent of a verbal has two aspects of distinction: complement/modifier and grammatical category (whether it is an NP, a PP, an AP, etc.). A mechanism for encoding these aspects separately should be provided, with an option to decide if one is left unannotated. A possible annotation scheme using IDs is illustrated in Figure 3, where the type of the dependency and the type of the dependent are encoded separately. A slash indicates the alternatives from which to choose one (or more, in ambiguous cases).

5 The modifier becomes the head in HPSG and in CCG, unlike in the other formalisms.
Dependency(ID, verb, dependent)
Dependent_type(ID, MOD/ARG)
Dependent_form(ID, PP/NP/AP/...)
Figure 3: An illustration of attachment to a verbal head
Toward a Unified Scheme
These observations suggest that, for difficult linguistic phenomena, different aspects of the phenomena are annotated by different schemes. They also suggest that there are at least two problems in defining the types of dependencies: one is confusion about the level of analysis, and the other is that several aspects of a dependency are encoded into one label.
The confusion of the level of analysis means that, as seen in the case of coordination, the syntactic-level analysis and semantic-level analysis receive the same or similar label across the schemes. In each scheme only one level of analysis is provided, but it is not always explicit which level is provided in a particular scheme. Thus, it is inconvenient and annoying for an annotator who wants to annotate the other level or both levels at once.
As seen in the case of PP-dependents of verbals, because different aspects, or features, are encoded in one label, type-underspecification becomes a less convenient mechanism. If labels are properly decomposed into a set of feature values, and a hierarchy of values is provided for each feature, the annotation labels can be more flexible and it is easier for an annotator to choose a label that can encode the desired information. The distinction of syntax/semantics (or there may be more levels) can be incorporated into one of the features. Other possible features include the grammatical categories of head and dependent, argument/modifier distinction, and role of arguments or modifiers like the one annotated in Propbank (Palmer et al., 2005).
Decomposing labels into features has another use: it would make the mapping between one scheme and another more transparent.
As the dependency structure of a sentence is encoded as a list of local information in dependency schemes, taking the union of the annotations of different schemes can encode the union of the information that the individual schemes can encode, except for conflicting representations such as the head of coordinated structures and the head of modifiers in HPSG. If the current labels are decomposed into features, this would enable one to take a non-redundant union of information, and mapping from the union to a particular scheme would be more systematic. In many of the cases listed in the previous section, individual schemes could be obtained by systematically omitting some relations from the union, and the information common to the schemes (the structures on which all of the schemes concerned agree) could be retrieved by taking the intersection of the annotations. An annotator can annotate the maximal information (s)he knows within the framework of the union, and map it into a predefined scheme when needed.
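Because each scheme reduces a sentence to a set of local dependencies, the union and intersection operations suggested here are straightforward once annotations are represented as sets; a minimal sketch with invented triples:

```python
# Each annotation is a set of (head, relation, dependent) triples; the triples
# below are invented purely to demonstrate the set operations.
scheme_a = {("makes", "nsubj", "Bell"), ("makes", "dobj", "products")}
scheme_b = {("makes", "nsubj", "Bell"), ("distributes", "dobj", "products")}

union = scheme_a | scheme_b    # maximal information across the schemes
common = scheme_a & scheme_b   # the structure on which the schemes agree

print(sorted(union))
print(sorted(common))
```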
Also, a mechanism for annotating ambiguity should be provided. For dependency types, the hierarchy of feature values described above can help. For ambiguity of attachment site and other cases that involve the problem of what is dependent on what, listing the possible candidates with an ambiguity flag can help.
Figure 1: PARC-like annotation with explicit annotation of names
Figure 2: Annotation of coordinated names on syntactic and semantic levels
© 2008. Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved.
1 http://www.yr-bcn.es/conll2008/
2 http://www2.parc.com/isl/groups/nltt/fsbank/triplesdoc.html
According to one of the reviewers this is an error in the distributed version of the PARC corpus that is the result of the automatic conversion. The correct structure is the one in which the subject is only dependent on both verbs but not on the coordinator (an example is parc_23.102 in http://www2.parc.com/isl/groups/nltt/fsbank/parc700-2006-05-30.fdsc); the same would hold of the object.
Acknowledgments
I am grateful to the anonymous reviewers for their suggestions and comments.
Bies, Ann, Mark Ferguson, Karen Katz, Robert MacIntyre, Victoria Tredinnick, Grace Kim, Mary Ann Marcinkiewicz, and Britta Schasberger. 1995. Bracketing Guidelines for Treebank II Style Penn Treebank Project. Technical report, University of Pennsylvania.
Briscoe, Ted. 2006. An introduction to tag sequence grammars and the RASP system parser. Technical Report UCAM-CL-TR-662, Cambridge University Computer Laboratory.
Clegg, Andrew B. and Adrian J. Shepherd. 2007. Benchmarking natural-language parsers for biological applications using dependency graphs. BMC Bioinformatics 8:24.
Hockenmaier, Julia and Mark Steedman. 2005. CCGbank: User's Manual. Technical Report MS-CIS-05-09, University of Pennsylvania.
Kim, J-D., Ohta, T., Tateisi, Y., and Tsujii, J. 2003. GENIA corpus -- a semantically annotated corpus for bio-textmining. Bioinformatics 19(suppl. 1), pp. i180-i182.
de Marneffe, Marie-Catherine, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC 2006, Genoa, Italy.
Miyao, Yusuke. 2006. From Linguistic Theory to Syntactic Analysis: Corpus-Oriented Grammar Development and Feature Forest Model. PhD Thesis, University of Tokyo.
Miyao, Yusuke, Kenji Sagae, and Jun'ichi Tsujii. 2007. Towards Framework-Independent Evaluation of Deep Linguistic Parsers. In Proceedings of Grammar Engineering across Frameworks, Stanford, California, USA, pp. 238-258.
Palmer, Martha, Paul Kingsbury, and Daniel Gildea. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics 31(1):71-106.
Pyysalo, Sampo, Filip Ginter, Veronika Laippala, Katri Haverinen, Juho Heimonen, and Tapio Salakoski. 2007a. On the unification of syntactic annotations under the Stanford dependency scheme: A case study on BioInfer and GENIA. In Proceedings of the BioNLP Workshop at ACL 2007, Prague, Czech Republic.
Pyysalo, Sampo, Filip Ginter, Juho Heimonen, Jari Björne, Jorma Boberg, Jouni Järvinen, and Tapio Salakoski. 2007b. BioInfer: a corpus for information extraction in the biomedical domain. BMC Bioinformatics 8:50.
Santorini, Beatrice. 1990. Part-of-Speech Tagging Guidelines for the Penn Treebank Project. Technical report, University of Pennsylvania.
Tateisi, Yuka, Tomoko Ohta, Nigel Collier, Chikashi Nobata, and Jun'ichi Tsujii. 2000. Building an Annotated Corpus from Biology Research Papers. In Proceedings of the COLING 2000 Workshop on Semantic Annotation and Intelligent Content, Luxembourg, pp. 28-34.
GENIA-GR: a Grammatical Relation Corpus for Parser Evaluation in the Biomedical Domain. Yuka Tateisi, Yusuke Miyao, Kenji Sagae, ' Jun, Tsujii, the Proceedings of the Sixth International Language Resources and Evaluation (LREC'08. Marrakech, MoroccoTateisi,Yuka, Yusuke Miyao, Kenji Sagae, Jun'ichi Tsujii. 2008. GENIA-GR: a Grammatical Relation Corpus for Parser Evaluation in the Biomedical Domain. In the Proceedings of the Sixth Interna- tional Language Resources and Evaluation (LREC'08). Marrakech, Morocco. |
2,433,417 | Logarithmic Opinion Pools for Conditional Random Fields | Recent work on Conditional RandomFields (CRFs) has demonstrated the need for regularisation to counter the tendency of these models to overfit. The standard approach to regularising CRFs involves a prior distribution over the model parameters, typically requiring search over a hyperparameter space. In this paper we address the overfitting problem from a different perspective, by factoring the CRF distribution into a weighted product of individual "expert" CRF distributions. We call this model a logarithmic opinion pool (LOP) of CRFs (LOP-CRFs). We apply the LOP-CRF to two sequencing tasks. Our results show that unregularised expert CRFs with an unregularised CRF under a LOP can outperform the unregularised CRF, and attain a performance level close to the regularised CRF. LOP-CRFs therefore provide a viable alternative to CRF regularisation without the need for hyperparameter search. | [
8940645,
11664683,
840255,
5383504,
5906107,
13936575
] | Logarithmic Opinion Pools for Conditional Random Fields
June 2005
Andrew Smith
Trevor Cohn tacohn@csse.unimelb.edu.au
Miles Osborne
Division of Informatics
Department of Computer Science and Software Engineering
Division of Informatics
University of Edinburgh United Kingdom
University of Melbourne
Australia
University of Edinburgh
United Kingdom
Logarithmic Opinion Pools for Conditional Random Fields
Proceedings of the 43rd Annual Meeting of the ACL
the 43rd Annual Meeting of the ACLAnn ArborJune 2005
Recent work on Conditional Random Fields (CRFs) has demonstrated the need for regularisation to counter the tendency of these models to overfit. The standard approach to regularising CRFs involves a prior distribution over the model parameters, typically requiring search over a hyperparameter space. In this paper we address the overfitting problem from a different perspective, by factoring the CRF distribution into a weighted product of individual "expert" CRF distributions. We call this model a logarithmic opinion pool (LOP) of CRFs (LOP-CRFs). We apply the LOP-CRF to two sequencing tasks. Our results show that unregularised expert CRFs with an unregularised CRF under a LOP can outperform the unregularised CRF, and attain a performance level close to the regularised CRF. LOP-CRFs therefore provide a viable alternative to CRF regularisation without the need for hyperparameter search.
Introduction
In recent years, conditional random fields (CRFs) (Lafferty et al., 2001) have shown success on a number of natural language processing (NLP) tasks, including shallow parsing (Sha and Pereira, 2003), named entity recognition (McCallum and Li, 2003) and information extraction from research papers (Peng and McCallum, 2004). In general, this work has demonstrated the susceptibility of CRFs to overfit the training data during parameter estimation. As a consequence, it is now standard to use some form of overfitting reduction in CRF training.
Recently, there have been a number of sophisticated approaches to reducing overfitting in CRFs, including automatic feature induction (McCallum, 2003) and a full Bayesian approach to training and inference (Qi et al., 2005). These advanced methods tend to be difficult to implement and are often computationally expensive. Consequently, due to its ease of implementation, the current standard approach to reducing overfitting in CRFs is the use of a prior distribution over the model parameters, typically a Gaussian. The disadvantage with this method, however, is that it requires adjusting the value of one or more of the distribution's hyperparameters. This usually involves manual or automatic tuning on a development set, and can be an expensive process as the CRF must be retrained many times for different hyperparameter values.
In this paper we address the overfitting problem in CRFs from a different perspective. We factor the CRF distribution into a weighted product of individual expert CRF distributions, each focusing on a particular subset of the distribution. We call this model a logarithmic opinion pool (LOP) of CRFs (LOP-CRFs), and provide a procedure for learning the weight of each expert in the product. The LOP-CRF framework is "parameter-free" in the sense that it does not involve the requirement to adjust hyperparameter values.
LOP-CRFs are theoretically advantageous in that their Kullback-Leibler divergence with a given distribution can be explicitly represented as a function of the KL-divergence with each of their expert distributions. This provides a well-founded framework for designing new overfitting reduction schemes: look to factorise a CRF distribution as a set of diverse experts.
We apply LOP-CRFs to two sequencing tasks in NLP: named entity recognition and part-of-speech tagging. Our results show that combination of unregularised expert CRFs with an unregularised standard CRF under a LOP can outperform the unregularised standard CRF, and attain a performance level that rivals that of the regularised standard CRF. LOP-CRFs therefore provide a viable alternative to CRF regularisation without the need for hyperparameter search.
Conditional Random Fields
A linear chain CRF defines the conditional probability of a state or label sequence s given an observed sequence o via 1 :
p(s \mid o) = \frac{1}{Z(o)} \exp \sum_{t=1}^{T+1} \sum_{k} \lambda_k f_k(s_{t-1}, s_t, o, t) \qquad (1)
where T is the length of both sequences, λ k are parameters of the model and Z(o) is the partition function that ensures (1) represents a probability distribution. The functions f k are feature functions representing the occurrence of different events in the sequences s and o.
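For concreteness, the sketch below evaluates equation (1) by brute-force enumeration over all label sequences; it ignores the end-of-sequence position (summing over t = 1..T only), uses toy feature functions of our own, and is not how Z(o) is computed in practice (real implementations use dynamic programming, as noted below).

```python
import math
from itertools import product

LABELS = ["O", "B", "I"]

def features(prev, cur, word):
    """Toy feature functions f_k(s_{t-1}, s_t, o, t): one emission indicator per
    (label, word) pair and one transition indicator per label bigram."""
    return [("emit", cur, word), ("trans", prev, cur)]

def score(labels, obs, weights):
    """Unnormalised score: sum over positions of lambda_k * f_k (start symbol <s>)."""
    prev, total = "<s>", 0.0
    for word, cur in zip(obs, labels):
        for f in features(prev, cur, word):
            total += weights.get(f, 0.0)
        prev = cur
    return total

def crf_prob(labels, obs, weights):
    """p(s|o) as in equation (1), with Z(o) obtained by brute-force enumeration."""
    z = sum(math.exp(score(s, obs, weights)) for s in product(LABELS, repeat=len(obs)))
    return math.exp(score(labels, obs, weights)) / z

weights = {("emit", "B", "Bell"): 2.0, ("trans", "B", "I"): 1.0}
obs = ["Bell", "makes", "products"]
print(crf_prob(("B", "O", "O"), obs, weights))
```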
The parameters λ k can be estimated by maximising the conditional log-likelihood of a set of labelled training sequences. The log-likelihood is given by:
\mathcal{L}(\lambda) = \sum_{o,s} \tilde{p}(o, s) \log p(s \mid o; \lambda) = \sum_{o,s} \tilde{p}(o, s) \sum_{t=1}^{T+1} \lambda \cdot \mathbf{f}(s, o, t) - \sum_{o} \tilde{p}(o) \log Z(o; \lambda)
where \tilde{p}(o, s) and \tilde{p}(o) are empirical distributions defined by the training set. At the maximum likelihood solution the model satisfies a set of feature constraints, whereby the expected count of each feature under the model is equal to its empirical count on the training data:

E_{\tilde{p}(o,s)}[f_k] - E_{p(s \mid o)}[f_k] = 0, \quad \forall k
In general this cannot be solved for the λ k in closed form so numerical routines must be used. Malouf (2002) and Sha and Pereira (2003) show that gradient-based algorithms, particularly limited memory variable metric (LMVM), require much less time to reach convergence, for some NLP tasks, than the iterative scaling methods (Della Pietra et al., 1997) previously used for log-linear optimisation problems. In all our experiments we use the LMVM method to train the CRFs.
For CRFs with general graphical structure, calculation of E p(s|o) [ f k ] is intractable, but for the linear chain case Lafferty et al. (2001) describe an efficient dynamic programming procedure for inference, similar in nature to the forward-backward algorithm in hidden Markov models.
Logarithmic Opinion Pools
In this paper an expert model refers to a probabilistic model that focuses on modelling a specific subset of some probability distribution. The concept of combining the distributions of a set of expert models via a weighted product has previously been used in a range of different application areas, including economics and management science (Bordley, 1982), and NLP (Osborne and Baldridge, 2004).
In this paper we restrict ourselves to sequence models. Given a set of sequence model experts, indexed by α, with conditional distributions p α (s | o) and a set of non-negative normalised weights w α , a logarithmic opinion pool 2 is defined as the distribution:
p_{LOP}(s \mid o) = \frac{1}{Z_{LOP}(o)} \prod_{\alpha} [p_{\alpha}(s \mid o)]^{w_{\alpha}} \qquad (2)
with w α ≥ 0 and ∑ α w α = 1, and where Z LOP (o) is the normalisation constant:
Z_{LOP}(o) = \sum_{s} \prod_{\alpha} [p_{\alpha}(s \mid o)]^{w_{\alpha}} \qquad (3)
The weight w_α encodes our confidence in the opinion of expert α. Suppose that there is a "true" conditional distribution q(s | o) which each p_α(s | o) is attempting to model. Heskes (1998) shows that the KL divergence between q(s | o) and the LOP can be decomposed into two terms:
K(q, p_{LOP}) = E - A \qquad (4)
= \sum_{\alpha} w_{\alpha} K(q, p_{\alpha}) - \sum_{\alpha} w_{\alpha} K(p_{LOP}, p_{\alpha}) \qquad (5)
This tells us that the closeness of the LOP model to q(s | o) is governed by a trade-off between two terms: an E term, which represents the closeness of the individual experts to q(s | o), and an A term, which represents the closeness of the individual experts to the LOP, and therefore indirectly to each other. Hence for the LOP to model q well, we desire models p α which are individually good models of q (having low E) and are also diverse (having large A).
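The decomposition can be checked numerically on toy distributions. The sketch below builds a LOP over plain categorical distributions rather than CRFs and verifies that K(q, p_LOP) equals the E term minus the A term; all numbers are invented.

```python
import math

def lop(experts, w):
    """Weighted logarithmic opinion pool of categorical distributions."""
    unnorm = {k: math.prod(p[k] ** wa for p, wa in zip(experts, w)) for k in experts[0]}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

def kl(p, q):
    return sum(p[k] * math.log(p[k] / q[k]) for k in p)

q  = {"a": 0.5, "b": 0.3, "c": 0.2}   # the "true" distribution
p1 = {"a": 0.6, "b": 0.2, "c": 0.2}   # expert 1
p2 = {"a": 0.3, "b": 0.5, "c": 0.2}   # expert 2
w  = [0.7, 0.3]

p_lop = lop([p1, p2], w)
E = sum(wa * kl(q, p) for wa, p in zip(w, [p1, p2]))
A = sum(wa * kl(p_lop, p) for wa, p in zip(w, [p1, p2]))
print(kl(q, p_lop), E - A)  # the two values agree
```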
LOPs for CRFs
Because CRFs are log-linear models, we can see from equation (2) that CRF experts are particularly well suited to combination under a LOP. Indeed, the resulting LOP is itself a CRF, the LOP-CRF, with potential functions given by a log-linear combination of the potential functions of the experts, with weights w α . As a consequence of this, the normalisation constant for the LOP-CRF can be calculated efficiently via the usual forward-backward algorithm for CRFs. Note that there is a distinction between normalisation constant for the LOP-CRF, Z LOP as given in equation (3), and the partition function of the LOP-CRF, Z. The two are related as follows:
p_{LOP}(s \mid o) = \frac{1}{Z_{LOP}(o)} \prod_{\alpha} [p_{\alpha}(s \mid o)]^{w_{\alpha}}
= \frac{1}{Z_{LOP}(o)} \prod_{\alpha} \left[ \frac{U_{\alpha}(s \mid o)}{Z_{\alpha}(o)} \right]^{w_{\alpha}}
= \frac{\prod_{\alpha} [U_{\alpha}(s \mid o)]^{w_{\alpha}}}{Z_{LOP}(o) \prod_{\alpha} [Z_{\alpha}(o)]^{w_{\alpha}}}

where U_{\alpha}(s \mid o) = \exp \sum_{t=1}^{T+1} \sum_{k} \lambda_{\alpha k} f_{\alpha k}(s_{t-1}, s_t, o, t), and so

\log Z(o) = \log Z_{LOP}(o) + \sum_{\alpha} w_{\alpha} \log Z_{\alpha}(o)
This relationship will be useful below, when we describe how to train the weights w α of a LOP-CRF. In this paper we will use the term LOP-CRF weights to refer to the weights w α in the weighted product of the LOP-CRF distribution and the term parameters to refer to the parameters λ αk of each expert CRF α.
Training LOP-CRFs
In our LOP-CRF training procedure we first train the expert CRFs unregularised on the training data. Then, treating the experts as static pre-trained models, we train the LOP-CRF weights w α to maximise the log-likelihood of the training data. This training process is "parameter-free" in that neither stage involves the use of a prior distribution over expert CRF parameters or LOP-CRF weights, and so avoids the requirement to adjust hyperparameter values.
The likelihood of a data set under a LOP-CRF, as a function of the LOP-CRF weights, is given by:
L(w) = \prod_{o,s} p_{LOP}(s \mid o; w)^{\tilde{p}(o,s)} = \prod_{o,s} \left[ \frac{1}{Z_{LOP}(o; w)} \prod_{\alpha} p_{\alpha}(s \mid o)^{w_{\alpha}} \right]^{\tilde{p}(o,s)}
After taking logs and rearranging, the log-likelihood can be expressed as:

\mathcal{L}(w) = \sum_{o,s} \tilde{p}(o, s) \sum_{\alpha} w_{\alpha} \log p_{\alpha}(s \mid o) - \sum_{o} \tilde{p}(o) \log Z_{LOP}(o; w)
= \sum_{\alpha} w_{\alpha} \sum_{o,s} \tilde{p}(o, s) \log p_{\alpha}(s \mid o) + \sum_{\alpha} w_{\alpha} \sum_{o} \tilde{p}(o) \log Z_{\alpha}(o) - \sum_{o} \tilde{p}(o) \log Z(o; w)
For the first two terms, the quantities that are multiplied by w α inside the (outer) sums are independent of the weights, and can be evaluated once at the beginning of training. The third term involves the partition function for the LOP-CRF and so is a function of the weights. It can be evaluated efficiently as usual for a standard CRF.
Taking derivatives with respect to w β and rearranging, we obtain:
\frac{\partial \mathcal{L}(w)}{\partial w_{\beta}} = \sum_{o,s} \tilde{p}(o, s) \log p_{\beta}(s \mid o) + \sum_{o} \tilde{p}(o) \log Z_{\beta}(o) - \sum_{o} \tilde{p}(o) \, E_{p_{LOP}(s \mid o)}\!\left[ \sum_{t} \log U_{\beta t}(o, s) \right]
where U βt (o, s) is the value of the potential function for expert β on clique t under the labelling s for observation o. In a way similar to the representation of the expected feature count in a standard CRF, the third term may be re-written as:
- \sum_{o} \sum_{t} \sum_{s', s''} p_{LOP}(s_{t-1} = s', s_t = s'', o) \log U_{\beta t}(s', s'', o)
Hence the derivative is tractable because we can use dynamic programming to efficiently calculate the pairwise marginal distribution for the LOP-CRF. Using these expressions we can efficiently train the LOP-CRF weights to maximise the log-likelihood of the data set.3 We make use of the LMVM method mentioned earlier to do this. We will refer to a LOP-CRF with weights trained using this procedure as an unregularised LOP-CRF.

3 We must ensure that the weights are non-negative and normalised. We achieve this by parameterising the weights as functions of a set of unconstrained variables via a softmax transformation. The values of the log-likelihood and its derivatives with respect to the unconstrained variables can be derived from the corresponding values for the weights w_α.
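A minimal sketch of this weight-training procedure, simplified from sequence models to fixed categorical experts: the weights are parameterised by a softmax over unconstrained variables as in footnote 3, but the gradient is approximated numerically rather than computed with the expressions above. All experts and data below are invented.

```python
import math

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    return [x / sum(e) for x in e]

def lop(experts, w):
    """Toy LOP over categorical 'experts' standing in for the pre-trained CRFs."""
    unnorm = {k: math.prod(p[k] ** wa for p, wa in zip(experts, w)) for k in experts[0]}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

def log_likelihood(v, experts, data):
    """Log-likelihood of the data under the LOP; the weights are softmax(v)."""
    pooled = lop(experts, softmax(v))
    return sum(math.log(pooled[x]) for x in data)

experts = [{"a": 0.6, "b": 0.2, "c": 0.2}, {"a": 0.3, "b": 0.5, "c": 0.2}]
data = ["a", "a", "b", "a", "c", "b", "a"]   # invented training outcomes

v = [0.0, 0.0]                               # unconstrained variables
for _ in range(200):                         # plain gradient ascent, numerical gradients
    grad = []
    for i in range(len(v)):
        bumped = list(v)
        bumped[i] += 1e-5
        grad.append((log_likelihood(bumped, experts, data)
                     - log_likelihood(v, experts, data)) / 1e-5)
    v = [vi + 0.1 * g for vi, g in zip(v, grad)]

print(softmax(v))                            # the learned LOP weights
```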
Regularisation
The "parameter-free" aspect of the training procedure we introduced in the previous section relies on the fact that we do not use regularisation when training the LOP-CRF weights w α . However, there is a possibility that this may lead to overfitting of the training data. In order to investigate this, we develop a regularised version of the training procedure and compare the results obtained with each. We 3 We must ensure that the weights are non-negative and normalised. We achieve this by parameterising the weights as functions of a set of unconstrained variables via a softmax transformation. The values of the log-likelihood and its derivatives with respect to the unconstrained variables can be derived from the corresponding values for the weights w α . use a prior distribution over the LOP-CRF weights. As the weights are non-negative and normalised we use a Dirichlet distribution, whose density function is given by:
p(w) = \frac{\Gamma(\sum_{\alpha} \theta_{\alpha})}{\prod_{\alpha} \Gamma(\theta_{\alpha})} \prod_{\alpha} w_{\alpha}^{\theta_{\alpha} - 1}
where the θ α are hyperparameters.
Under this distribution, ignoring terms that are independent of the weights, the regularised log-likelihood involves an additional term:

\sum_{\alpha} (\theta_{\alpha} - 1) \log w_{\alpha}

We assume a single value θ across all weights. The derivative of the regularised log-likelihood with respect to weight w_β then involves an additional term (θ - 1)/w_β. In our experiments we use the development set to optimise the value of θ. We will refer to a LOP-CRF with weights trained using this procedure as a regularised LOP-CRF.
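A small sketch of the extra objective and gradient terms under the symmetric Dirichlet prior; the value of θ below is arbitrary.

```python
import math

def dirichlet_log_prior_term(w, theta):
    """Weight-dependent part of the Dirichlet log-density: sum_a (theta - 1) log w_a."""
    return sum((theta - 1.0) * math.log(wa) for wa in w)

def dirichlet_grad(w, theta):
    """Its derivative with respect to each weight w_b: (theta - 1) / w_b."""
    return [(theta - 1.0) / wa for wa in w]

print(dirichlet_log_prior_term([0.7, 0.3], theta=2.0))
print(dirichlet_grad([0.7, 0.3], theta=2.0))
```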
The Tasks
In this paper we apply LOP-CRFs to two sequence labelling tasks in NLP: named entity recognition (NER) and part-of-speech tagging (POS tagging).
Named Entity Recognition
NER involves the identification of the location and type of pre-defined entities within a sentence and is often used as a sub-process in information extraction systems. With NER the CRF is presented with a set of sentences and must label each word so as to indicate whether the word appears outside an entity (O), at the beginning of an entity of type X (B-X) or within the continuation of an entity of type X (I-X).
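For illustration, a made-up sentence labelled with this scheme (the entity choices are invented):

```python
# A made-up sentence labelled with the B-X / I-X / O scheme described above.
sentence = [
    ("United", "B-ORG"), ("Nations", "I-ORG"), ("official", "O"),
    ("Peter", "B-PER"), ("Smith", "I-PER"), ("visited", "O"),
    ("Baghdad", "B-LOC"), (".", "O"),
]
```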
All our results for NER are reported on the CoNLL-2003 shared task dataset (Tjong Kim Sang and De Meulder, 2003). For this dataset the entity types are: persons (PER), locations (LOC), organisations (ORG) and miscellaneous (MISC). The training set consists of 14, 987 sentences and 204, 567 tokens, the development set consists of 3, 466 sentences and 51, 578 tokens and the test set consists of 3, 684 sentences and 46, 666 tokens.
Part-of-Speech Tagging
POS tagging involves labelling each word in a sentence with its part-of-speech, for example noun, verb, adjective, etc. For our experiments we use the CoNLL-2000 shared task dataset (Tjong Kim Sang and Buchholz, 2000). This has 48 different POS tags. In order to make training time manageable 4 , we collapse the number of POS tags from 48 to 5 following the procedure used in . In summary:
• All types of noun collapse to category N.
• All types of verb collapse to category V.
• All types of adjective collapse to category J.
• All types of adverb collapse to category R.
• All other POS tags collapse to category O.
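The collapsing described in the list above amounts to a small mapping over Penn Treebank tag prefixes. The sketch below is our own reading of that list; the exact grouping used in the original procedure may differ in its details.

```python
def collapse_pos(tag):
    """Collapse Penn Treebank POS tags into the five classes listed above.
    The prefix tests are our own reading of that list; the exact grouping
    used in the original procedure may differ in its details."""
    if tag.startswith("NN"):
        return "N"   # all nouns
    if tag.startswith("VB"):
        return "V"   # all verbs
    if tag.startswith("JJ"):
        return "J"   # all adjectives
    if tag.startswith("RB"):
        return "R"   # all adverbs
    return "O"       # everything else

print([collapse_pos(t) for t in ["NNS", "VBD", "JJ", "RB", "DT", "IN"]])
```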
The training set consists of 7, 300 sentences and 173, 542 tokens, the development set consists of 1, 636 sentences and 38, 185 tokens and the test set consists of 2, 012 sentences and 47, 377 tokens.
Expert sets
For each task we compare the performance of the LOP-CRF to that of the standard CRF by defining a single, complex CRF, which we call a monolithic CRF, and a range of expert sets.
The monolithic CRF for NER comprises a number of word and POS tag features in a window of five words around the current word, along with a set of orthographic features defined on the current word. These are based on those found in (Curran and Clark, 2003). Examples include whether the current word is capitalised, is an initial, contains a digit, contains punctuation, etc. The monolithic CRF for NER has 450, 345 features.
The monolithic CRF for POS tagging comprises word and POS features similar to those in the NER monolithic model, but over a smaller number of orthographic features. The monolithic model for POS tagging has 188, 448 features.
Each of our expert sets consists of a number of CRF experts. Usually these experts are designed to focus on modelling a particular aspect or subset of the distribution. As we saw earlier, the aim here is to define experts that model parts of the distribution well while retaining mutual diversity. The experts from a particular expert set are combined under a LOP-CRF and the weights are trained as described previously.
We define our range of expert sets as follows:
• Simple consists of the monolithic CRF and a single expert comprising a reduced subset of the features in the monolithic CRF. This reduced CRF models the entire distribution rather than focusing on a particular aspect or subset, but is much less expressive than the monolithic model. The reduced model comprises 24, 818 features for NER and 47, 420 features for POS tagging.
• Positional consists of the monolithic CRF and a partition of the features in the monolithic CRF into three experts, each consisting only of features that involve events either behind, at or ahead of the current sequence position.
• Label consists of the monolithic CRF and a partition of the features in the monolithic CRF into five experts, one for each label. For NER an expert corresponding to label X consists only of features that involve labels B-X or I-X at the current or previous positions, while for POS tagging an expert corresponding to label X consists only of features that involve label X at the current or previous positions. These experts therefore focus on trying to model the distribution of a particular label.
• Random consists of the monolithic CRF and a random partition of the features in the monolithic CRF into four experts. This acts as a baseline to ascertain the performance that can be expected from an expert set that is not defined via any linguistic intuition.
Experiments
To compare the performance of LOP-CRFs trained using the procedure we described previously to that of a standard CRF regularised with a Gaussian prior, we do the following for both NER and POS tagging:
• Train a monolithic CRF with regularisation using a Gaussian prior. We use the development set to optimise the value of the variance hyperparameter.
• Train every expert CRF in each expert set without regularisation (each expert set includes the monolithic CRF, which clearly need only be trained once).
• For each expert set, create a LOP-CRF from the expert CRFs and train the weights of the LOP-CRF without regularisation. We compare its performance to that of the unregularised and regularised monolithic CRFs.
• To investigate whether training the LOP-CRF weights contributes significantly to the LOP-CRF's performance, for each expert set we create a LOP-CRF with uniform weights and compare its performance to that of the LOP-CRF with trained weights.
• To investigate whether unregularised training of the LOP-CRF weights leads to overfitting, for each expert set we train the weights of the LOP-CRF with regularisation using a Dirichlet prior. We optimise the hyperparameter in the Dirichlet distribution on the development set. We then compare the performance of the LOP-CRF with regularised weights to that of the LOP-CRF with unregularised weights.
Results
Experts
Before presenting results for the LOP-CRFs, we briefly give performance figures for the monolithic CRFs and expert CRFs in isolation. For illustration, we do this for NER models only. Table 1 shows F scores on the development set for the NER CRFs. We see that, as expected, the expert CRFs in isolation model the data relatively poorly compared to the monolithic CRFs. Some of the label experts, for example, attain relatively low F scores as they focus only on modelling one particular label. Similar behaviour was observed for the POS tagging models.
LOP-CRFs with unregularised weights
In this section we present results for LOP-CRFs with unregularised weights. Table 2 gives F scores for NER LOP-CRFs while Table 3 gives accuracies for the POS tagging LOP-CRFs. The monolithic CRF scores are included for comparison. Both tables illustrate the following points:
• In every case the LOP-CRFs outperform the unregularised monolithic CRF
• In most cases the performance of LOP-CRFs rivals that of the regularised monolithic CRF, and in some cases exceeds it.
We use McNemar's matched-pairs test (Gillick and Cox, 1989) on point-wise labelling errors to examine the statistical significance of these results. We test significance at the 5% level. At this threshold, all the LOP-CRFs significantly outperform the corresponding unregularised monolithic CRF. In addition, those marked with * show a significant improvement over the regularised monolithic CRF. Only the value marked with † in Table 3 significantly underperforms the regularised monolithic CRF. All other values do not differ significantly from those of the regularised monolithic CRF at the 5% level.
These results show that LOP-CRFs with unregularised weights can lead to performance improvements that equal or exceed those achieved from a conventional regularisation approach using a Gaussian prior. The important difference, however, is that the LOP-CRF approach is "parameter-free" in the sense that each expert CRF in the LOP-CRF is unregularised and the LOP weight training is also unregularised. We are therefore not required to search a hyperparameter space. As an illustration, to obtain our best results for the POS tagging regularised monolithic model, we re-trained using 15 different values of the Gaussian prior variance. With the LOP-CRF we trained each expert CRF and the LOP weights only once.
As an illustration of a typical weight distribution resulting from the training procedure, the positional LOP-CRF for POS tagging attaches weight 0.45 to the monolithic model and roughly equal weights to the other three experts.
LOP-CRFs with uniform weights
By training LOP-CRF weights using the procedure we introduce in this paper, we allow the weights to take on non-uniform values. This corresponds to letting the opinion of some experts take precedence over others in the LOP-CRF's decision making. An alternative, simpler, approach would be to combine the experts under a LOP with uniform weights, thereby avoiding the weight training stage. We would like to ascertain whether this approach will significantly reduce the LOP-CRF's performance. As an illustration, Table 4 gives accuracies for LOP-CRFs with uniform weights for POS tagging. A similar pattern is observed for NER. Comparing these values to those in Tables 2 and 3, we can see that in general LOP-CRFs with uniform weights, although still performing significantly better than the unregularised monolithic CRF, generally underperform LOP-CRFs with trained weights. This suggests that the choice of weights can be important, and justifies the weight training stage.
LOP-CRFs with regularised weights
To investigate whether unregularised training of the LOP-CRF weights leads to overfitting, we train the LOP-CRF with regularisation using a Dirichlet prior. The results we obtain show that in most cases a LOP-CRF with regularised weights achieves an almost identical performance to that with unregularised weights, and suggests there is little to be gained by weight regularisation. This is probably due to the fact that in our LOP-CRFs the number of experts, and therefore weights, is generally small and so there is little capacity for overfitting. We conjecture that although other choices of expert set may comprise many more experts than in our examples, the numbers are likely to be relatively small in comparison to, for example, the number of parameters in the individual experts. We therefore suggest that any overfitting effect is likely to be limited.
Choice of Expert Sets
We can see from Tables 2 and 3 that the performance of a LOP-CRF varies with the choice of expert set. For example, in our tasks the simple and positional expert sets perform better than those for the label and random sets. For an explanation here, we refer back to our discussion of equation (5). We conjecture that the simple and positional expert sets achieve good performance in the LOP-CRF because they consist of experts that are diverse while simultaneously being reasonable models of the data. The label expert set exhibits greater diversity between the experts, because each expert focuses on modelling a particular label only, but each expert is a relatively poor model of the entire distribution and the corresponding LOP-CRF performs worse. Similarly, the random experts are in general better models of the entire distribution but tend to be less diverse because they do not focus on any one aspect or subset of it. Intuitively, then, we want to devise experts that provide diverse but accurate views on the data.
The expert sets we present in this paper were motivated by linguistic intuition, but clearly many choices exist. It remains an important open question as to how to automatically construct expert sets for good performance on a given task, and we intend to pursue this avenue in future research.
Conclusion and future work
In this paper we have introduced the logarithmic opinion pool of CRFs as a way to address overfitting in CRF models. Our results show that a LOP-CRF can provide a competitive alternative to conventional regularisation with a prior while avoiding the requirement to search a hyperparameter space.
We have seen that, for a variety of types of expert, combination of expert CRFs with an unregularised standard CRF under a LOP with optimised weights can outperform the unregularised standard CRF and rival the performance of a regularised standard CRF.
We have shown how these advantages a LOP-CRF provides have a firm theoretical foundation in terms of the decomposition of the KL-divergence between a LOP-CRF and a target distribution, and how this provides a framework for designing new overfitting reduction schemes in terms of constructing diverse experts.
In this work we have considered training the weights of a LOP-CRF using pre-trained, static experts. In future we intend to investigate cooperative training of LOP-CRF weights and the parameters of each expert in an expert set.
Table 1 :
1Development set F scores for NER experts
Table 2: F scores for NER unregularised LOP-CRFs

Table 3: Accuracies for POS tagging unregularised LOP-CRFs

Expert set          Development set   Test set
Monolithic unreg.   97.92             97.65
Monolithic reg.     98.02             97.84
Simple              98.31 *           98.12 *
Positional          98.03             97.81
Label               97.99             97.77
Random              97.99             97.76 †
Table 4: Accuracies for POS tagging uniform LOP-CRFs
In this paper we assume there is a one-to-one mapping between states and labels, though this need not be the case.
Hinton (1999) introduced a variant of the LOP idea called Product of Experts, in which expert distributions are multiplied under a uniform weight distribution.
See(Cohn et al., 2005) for a scaling method allowing the full POS tagging task with CRFs.
Acknowledgements
We wish to thank Stephen Clark, our colleagues in Edinburgh and the anonymous reviewers for many useful comments.
Bordley, R. F. 1982. A multiplicative formula for aggregating probability assessments. Management Science, (28):1137-1148.
Cohn, T., A. Smith, and M. Osborne. 2005. Scaling conditional random fields using error-correcting codes. In Proc. ACL 2005.
Curran, J. and S. Clark. 2003. Language independent NER using a maximum entropy tagger. In Proc. CoNLL-2003.
Della Pietra, S., V. Della Pietra, and J. Lafferty. 1997. Inducing features of random fields. IEEE PAMI, 19(4):380-393.
Gillick, L. and S. Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In International Conference on Acoustics, Speech and Signal Processing, volume 1, pages 532-535.
Heskes, T. 1998. Selecting weighting factors in logarithmic opinion pools. In Advances in Neural Information Processing Systems 10.
Hinton, G. E. 1999. Product of experts. In ICANN, volume 1, pages 1-6.
Lafferty, J., A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML 2001.
Malouf, R. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proc. CoNLL-2002.
McCallum, A. and W. Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proc. CoNLL-2003.
McCallum, A., K. Rohanimanesh, and C. Sutton. 2003. Dynamic conditional random fields for jointly labeling multiple sequences. In NIPS-2003 Workshop on Syntax, Semantics and Statistics.
McCallum, A. 2003. Efficiently inducing features of conditional random fields. In Proc. UAI 2003.
Osborne, M. and J. Baldridge. 2004. Ensemble-based active learning for parse selection. In Proc. NAACL 2004.
Peng, F. and A. McCallum. 2004. Accurate information extraction from research papers using conditional random fields. In Proc. HLT-NAACL 2004.
Qi, Y., M. Szummer, and T. P. Minka. 2005. Bayesian conditional random fields. In Proc. AISTATS 2005.
Sha, F. and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL 2003.
Tjong Kim Sang, E. F. and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. CoNLL-2000.
Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. E F Tjong Kim Sang, F. De Meulder, Proc. CoNLL-2003. CoNLL-2003E. F. Tjong Kim Sang and F. De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. CoNLL-2003. |
61,863,536 | Reusing Parallel Corpora between Related Languages (invited talk) | Recent developments in statistical machine translation (SMT), e.g., the availability of efficient implementations of integrated open-source toolkits like Moses, have made it possible to build a prototype system with decent translation quality for any language pair in a few days or even hours. This is so in theory. In practice, doing so requires having a large set of parallel sentence-aligned bilingual texts (a bi-text) for that language pair, which is often unavailable. Large high-quality bi-texts are rare; except for Arabic, Chinese, and some official languages of the European Union (EU), most of the 6,500+ world languages remain resourcepoor from an SMT viewpoint. This number is even more striking if we consider language pairs instead of individual languages, e.g., while Arabic and Chinese are among the most resource-rich languages for SMT, the Arabic-Chinese language pair is quite resource-poor. Moreover, even resourcerich language pairs could be poor in bi-texts for a specific domain, e.g., biomedical text, conversa- | [] | Reusing Parallel Corpora between Related Languages (invited talk)
September 2011
Preslav Nakov nakov@comp.nus.edu.sg
National University of Singapore
Reusing Parallel Corpora between Related Languages (invited talk)
Proceedings of Recent Advances in Natural Language Processing
Recent Advances in Natural Language Processing, Hissar, Bulgaria, September 2011
Speaker's Bio: Dr. Preslav Nakov is a Research Fellow at the National University of Singapore. He received his PhD in Computer Science from the University of California at Berkeley in 2007. Dr. Nakov's research interests are in the areas of Web as a corpus, lexical semantics, machine translation, information extraction, and bioinformatics.
Recent developments in statistical machine translation (SMT), e.g., the availability of efficient implementations of integrated open-source toolkits like Moses, have made it possible to build a prototype system with decent translation quality for any language pair in a few days or even hours. This is so in theory. In practice, doing so requires having a large set of parallel sentence-aligned bilingual texts (a bi-text) for that language pair, which is often unavailable. Large high-quality bi-texts are rare; except for Arabic, Chinese, and some official languages of the European Union (EU), most of the 6,500+ world languages remain resource-poor from an SMT viewpoint. This number is even more striking if we consider language pairs instead of individual languages, e.g., while Arabic and Chinese are among the most resource-rich languages for SMT, the Arabic-Chinese language pair is quite resource-poor. Moreover, even resource-rich language pairs could be poor in bi-texts for a specific domain, e.g., biomedical text, conversational text, etc.

Due to the increasing volume of EU parliament debates and the ever-growing European legislation, the official languages of the EU are especially privileged from an SMT perspective. While this includes "classic SMT languages" such as English and French (which were already resource-rich), and some important international ones like Spanish and Portuguese, many of the rest have a limited number of speakers and were resource-poor until a few years ago. Thus, becoming an official language of the EU has turned out to be an easy recipe for getting resource-rich in bi-texts quickly.
Our aim is to tap the potential of the EU resources so that they can be used by other non-EU languages that are closely related to one or more official languages of the EU.
We propose to use bi-texts for resource-rich language pairs to build better SMT systems for resource-poor pairs by exploiting the similarity between a resource-poor language and a resourcerich one.
We are motivated by the observation that related languages tend to have (1) similar word order and syntax, and, more importantly, (2) overlapping vocabulary, e.g., casa (house) is used in both Spanish and Portuguese; they also have (3) similar spelling. This vocabulary overlap means that the resource-rich auxiliary language can be used as a source of translation options for words that cannot be translated with the resources available for the resource-poor language. In actual text, the vocabulary overlap might extend from individual words to short phrases (especially if the resource-rich language has been transliterated to look like the resource-poor one), which means that translations of whole phrases could potentially be reused between related languages. Moreover, the vocabulary overlap and the similarity in word order can be used to improve the word alignments for the resource-poor language by biasing the word alignment process with additional sentence pairs from the resource-rich language. We take advantage of all these opportunities: (1) we improve the word alignments for the resource-poor language, (2) we further augment it with additional translation options, and (3) we take care of potential spelling differences through appropriate transliteration.
Speaker's Bio
Dr. Preslav Nakov is a Research Fellow at the National University of Singapore. He received his PhD in Computer Science from the University of California at Berkeley in 2007. Dr. Nakov's research interests are in the areas of Web as a corpus, lexical semantics, machine translation, information extraction, and bioinformatics.
3,913,472 | Toward a Rational Model of Discourse Comprehension | [] | Toward a Rational Model of Discourse Comprehension
Jerry L Morgan
Center for the Study of Reading
University of Illinois at Urbana-Champaign
Toward a Rational Model of Discourse Comprehension
Introduction
I begin my tale with the moral: a quotation from the greatest English grammarian, Otto Jespersen (1965).
The essence of language is human activity-activity on the part of one individual to make himself understood by another, and activity on the part of that other to understand what was in the mind of the first. These two individuals, the producer and the recipient of language, or as we may more conveniently call them, the speaker and the hearer, and their relations to one another, should never be lost sight of if we want to understand the nature of language and of that part of language which is dealt with in grammar. But in former times this was often overlooked, and words and forms were often treated as if they were things or natural objects with an existence of their own--a conception which may have been to a great extent fostered through a too exclusive preoccupation with written or printed words, but which is fundamentally false, as will easily be seen with a little reflexion. (p. 17) But the temptation to think of language as pure form is great; Jespersen himself slips into this metaphor a few pages later:
. . . we always find that there is one word of supreme importance to which the others are joined as subordinates.
This chief word is defined (qualified, modified) by another word, which in its turn may be defined (qualified, modified) by a third word, etc. (p. 96) But words do not define, modify, or qualify other words.
Speakers define, qualify, and modify. This confusion is so tempting that it is pervasive in every field that studies language, at any level.
It is almost universal in linguistics. We find it, for example, in the following from Halliday and Hasan (1976), who probably know better:
Let us start with a simple and trivial example.
Suppose we find the following instructions in the cookery book:
[1:1] Wash and core six cooking apples. Put them into a fireproof dish.
It is clear that them in the second sentence refers back to (is ANAPHORIC to) the six cooking apples in the first sentence. This ANAPHORIC function of them gives cohesion to the two sentences, so that we interpret them as a whole; the two sentences together constitute a text.
Or rather, they form part of the same text; there may be more of it to follow.
The texture is provided by the cohesive RELATION that exists between them and six cooking apples.
It is important to make this point, because we shall be constantly focusing attention on the items, such as them, which typically refer back to something that has gone before; but the cohesion is effected not by the presence of the referring item alone but by the presence of both the referring item and the item it refers to (P. 2).
There are two serious confusions here.
First, words do not refer; speakers refer to things by using words.
The word them does not refer to anything at all, obviously so since it can be used to refer to any set one wants to refer to. There is no particular set of entities that one can say the word them refers to. But one can use it to refer to sets of things, when one's intended referent will be recoverable in some way by the hearer.
The second confusion is the idea that words "refer back" to other words.
The muddle here is obvious.
Whether it is people or words that refer, it is things, not (usually) other words, that they refer to. Thus in Halliday and Hasan's example [1:1], it is not the words six cooking apples that them is used to refer to; one is not being instructed to put three words in a fireproof dish.
The word them is used to refer to certain apples that were previously referred to by use of the words six cooking apples.
My objection to such descriptions is not based merely on a niggling concern with sloppy language.
If it were, one might respond that it's clear what Halliday and Hasan mean here, so my complaint is beside the point.
Rather, I think the pervasive confusion on just this point is a symptom of a serious conceptual confusion that renders a lot of the related work useless. This is the case with the passage from Halliday and Hasan.
They say that it is some relation between sentences in a text that gives it "cohesion", that renders it coherent, "so that we interpret them as a whole; the two sentences together constitute a text." The relation that gives rise to this cohesion is that them in one sentence "refers back" to the six cooking apples in a previous sentence.
If we interpret this phrase charitably, then the question arises, how do we know what them refers to? How do we know that it refers to the apples, and not to two of the writer's bachelor uncles? We can't know such a thing.
We can only assume that the writer is rational, and that the recipe is coherent.
If it is coherent, we are justified in assuming that it is the apples that are referred to by them.
But there is a vicious circularity here.
The recipe has cohesion, is a coherent text, just in case them refers to the apples.
But we are only justified in inferring that them refers to the apples if we assume that the text is coherent.
Thus, in spite of Halliday and Hasan's claim, it is not the anaphoric facts that give rise to cohesion; rather, the assumption that the text is coherent gives rise to the inference that them refers to the apples.
This kind of confusion, it seems to me, arises from the linguist's habit of looking at every aspect of language in terms of linguistic forms and relations between them.
Thus in this case the mistaken characterization of reference as a relation between words, and of coherence as a property of an abstract linguistic object called a text.
In the rest of this brief paper I want to sketch an opposing view, and to claim that notions like "reference," "text structure," "relevance" and "coherence" are best treated, at least in part, in terms of communicative acts and the plans and goals of speakers/writers who perform such acts.
II. Three Ways of Looking at a Text
Assume for the moment that we know what a text (oral or written) is, and can tell a coherent text from a random transcription of English sentences (I will return to what counts as a coherent text later).
Then there are (at least) three kinds of things and relations involved in a text.
1. Sentences.
First, conventional wisdom in linguistics has it that texts consist of sentences.
I shall accept this for the moment, though a bit later I will show cause to modify it.
But what kind of "thing" is a sentence? It is, if anything is, an abstract linguistic object, a unit of form.
It is not a proposition, nor a fact, though it is a means by which such things are asserted, denied, questioned, etc. Nor is a sentence a speech act, though a speech act will usually be performed by means of the utterance of a sentence.
But a sentence and an utterance of a sentence are different kinds of things.
A sentence is not the kind of thing that is true or false; "facts," or "propositions," that sentences can be used to express, are true or false.
Or perhaps it would be more appropriate to speak of assertions as being true or false. At any rate, it is quite clear that it is nonsense to speak of sentences as true or false, as evidenced by the familiar problem of indexical expressions.
A sentence, then, may be used to assert that something is true, or false, or has occurred, but the sentence itself is not true or false, and does not occur.
Thus relations like causation, order in time, entailment, and so on, do not hold between sentences.
It is not clear what kind of relation can accurately be said to hold between the sentences of a text.
"Facts."
The second kind of "thing" involved in a text is what I shall call "facts."
(Notice that I do not say texts consist of or contain facts; merely that they somehow involve facts.) The term "fact" is a bit misleading--though I can think of no better term--in that I wish to include as facts events, states, and so forth that do not actually hold in the real world; "propositions," more or less.
Relations among the "facts" involved in a text, then, consist of two classes: first, the same relations that hold between facts in the real world--causation, relations of temporal order, motivation, and so forth; second, those relations that have to do with logic and hypothetical facts, like entailment and contradiction.
It may be necessary to distinguish facts on the one hand and propositions on the other, on grounds that relations between facts are of a kind different from relations between propositions, but I will ignore the problem here.
3. Speech acts.
The third kind of thing involved in texts is the "speech act" (by this term I mean to include as a sub-case acts of linguistic communication by writing).
Speech acts are not sentences, nor "logical forms," nor propositions, in spite of occasional attempts to define them in these terms.
They are acts, just as the term implies.
Relations between the speech acts involved in a text are just those that can hold between acts in general.
First, since an act is a subtype of event, the relations that can hold between events can, in general, hold between acts, thus between speech acts: relations of temporal order, for example.
Second, since a speech act is a sub-type of act, relations that can hold between acts can, in general, hold between speech acts.
The most important relation in this regard is the relation of purpose:
one does such-and-such in order that such-and-such; or one performs a certain act in order thereby to perform a second act.
Long chains of these relations can hold between acts.
I may throw a switch in order to turn on a light in order to frighten away a burglar in order to save the family jewels. I may tell my friend that there is a charging bull behind him in order that he realize that he is in danger, in order that he get out of the way. It is a mistake to ask whether my speech act was an assertion or a warning, since this presupposes that the two are mutually exclusive.
It was both; I asserted something and thereby warned somebody, just as I threw the switch and thereby turned on the lights. I may make a certain mark on a piece of paper, thereby marking my ballot for Smith, thereby casting a vote for Smith.
I may assert that I will do the dishes, thereby volunteering to do the dishes. And so on.
It is commonly the case that acts are linked by complex relations of purpose and goal, including the case where one act is performed by means of performing another act. This is especially true of communicative acts.
There are several subvarieties of speech acts, for which several taxonomies have been proposed; Austin (1962) and McCawley (1977), for example. One important distinction in kind is the distinction between the act of saying a sentence, and the act one thereby performs.
In performing an act of saying the English sentence "Your hair is in my yogurt" I may, in the right circumstances, thereby inform someone that their hair is in my yogurt.
The first kind of act, the act of saying, includes making sounds in a way that conforms to the conventions for what counts as a saying of a sentence, or making visible marks in a way that counts as a saying of a sentence.
Texts, then, do not really consist of sentences, but of sayings ("uses") of sentences; or in the case of written texts, of a permanent kind of record of uses of sentences.
III. The Interpretation of Texts
A. Speech acts.
The interpretation of a text, then, consists of the interpretation of this record of sayings of sentences.
Each saying is interpreted in terms of some speech act(s) performed by saying a given sentence.
(Henceforth by "speech act" I will mean the communicative act one performs by saying a sentence, as opposed to the act of saying itself.)
There are three aspects to the interpretation of speech acts:
the interpretation of what speech acts are performed-assertion, promising, denial, questioning, warning, etc.--by the saying of the sentence; the interpretation of what "facts" are asserted, denied, etc.; and the interpretation of the speaker's purpose and goal in performing the speech act.
As an aside I should mention the special instance where nothing is directly asserted, denied, etc.: the case of speech acts of reference.
An act of asserting, etc. (for brevity I will henceforth use assertion as representative of all speech acts types), will usually include an act of referring as a subpart; a reference to the entity of which something is asserted. But acts of referring can occur independently.
For example, I might say "The door!" to someone under a number of circumstances, to get them to open it, close it, shoot the bad guy standing in it, or merely observe what beautiful hardwood it is made of.
It would be a mistake to say that "The door!" means any of these things, or that I have performed (directly) any kind of speech act beyond merely referring.
I have only referred to the door, thereby to call my hearer's attention to it, with the expectation that when he turns his attention to the door he will realize what it is I want him to do about it.
The typical immediate goal associated with speech acts of all kinds is the same: that the hearer modify his model of a certain "world" (in the sense of "possible worlds") in a way that involves the "facts" that are asserted, etc. in the speech act.
The world involved may be the real world, or, in the case of story-telling, for example, some imaginary world.
The modification may include the construction ex nihilo of some hypothetical or imaginary world.
The relation between the "facts" of the speech act and the intended modification varies with the nature of the speech act; but in all cases some modification is involved.
The simplest case is that of assertion; normally the immediate goal of an assertion is that the hearer modify the world under discussion in a fashion that makes the asserted fact true in that world.
In the case of yes-no questions, the goal is that the hearer modify his model of the world such that in that world the speaker wants the hearer to tell him whether the fact questioned is true.
In the case of imperatives, the goal is that the hearer modify his model such that in that world the speaker wants the hearer to bring about the truth of the ordered fact, and that certain social consequences will follow from non-compliance.
The raw datum of comprehension, then, is not the sentence or the proposition, but the fact that a certain speech act has occurred.
In comprehension, people do not process sentences as abstract formulae; they observe that someone has said something to them, and attempt to interpret that act and its consequences, which may include modification of their model of the world.
The process of modifying the model according to what is said is not direct, but the result of several steps of evaluation.
Interpretation of an assertion might go roughly like this, from the viewpoint of the hearer (where S is the speaker, A the addressee; addressee and hearer may be identical):
S has said x to A.
Saying x counts as asserting p.
S knows that saying x counts as asserting p.
S knows that therefore his saying x is likely to be interpreted by A as an assertion of p.
S has done nothing to prevent A from making this conclusion.
Therefore S's intention is that his saying x be taken by A as an assertion of p.
Then if S is sincere, S believes that p is true.
A must conclude that S has asserted p because he wants A to take p as true and modify his model of the world accordingly.
But the decision to believe p, i.e. modify his model of the world to include p, is a matter of choice on H's part, not an automatic consequence of processing the "sentence."
The steps involved in making this decision are equally complex, involving the ability to construct a hypothetical world just like the real one except that p is true, to evaluate the consistency and plausibility of that world, and so on.
Some of the facts that are asserted will relate to this decision-making process.
For example, in saying (1) my goal is most likely that the hearer come to believe that both facts asserted are true.
(1) John is here.
He has a dog with him.
But in the case of (2), I am not so much concerned with the second asserted fact in itself, but with the goal that from concluding that it is true, the hearer will be more likely to believe the first, since I intend that he take the second fact as evidence that my source is reliable.
(2) The world is flat. It says so in the Encyclopedia.
Matters that are sometimes construed as rhetorical relations between sentences fall into this category.
Some fact is asserted not because it is important in itself, but because it bears on H's evaluation of some other asserted fact.
Thus the relation is not one between sentences, but between speech acts.
One speech act is performed in order to influence the interpretation and evaluation of another.
At any rate, my point here is that in comprehending a text in the serious sense, comprehension proceeds not from some disembodied abstract object called a "sentence," nor from a "proposition," but from the perceived fact that S has said such-and-such, and that so saying counts as a speech act of a certain type.
There is another way in which modification of the world model is not a direct function of the asserted fact:
the widely studied problem of inference.
Given the hearer's acceptance of what the speaker has asserted, incorporation of the facts into the model of the world may involve more than merely adding the asserted facts. There is, for example, a general principle of ceteris paribus that comes into play in consideration of alternative worlds.
Roughly, when constructing a model of a world alternative to some point-of-reference world (usually the real one), the hearer will assume, lacking evidence (from assertion or inference) to the contrary, that the alternative world is consistent with the point-of-reference world in all relevant respects.
To take an extreme example, if someone is telling me about life on Arcturus, I will assume that the laws of physics are the same there as on earth, unless something the speaker says leads me to believe otherwise.
In the same way, hearers will assume, lacking counter-evidence, that what is typical in the point-of-reference (e.g. real) world is also typical in the alternative world.
They will also assume that things of a given type have the properties typical of things of that type. Gricean rules of conversation support these inferential strategies in the following way:
The hearer knows that the speaker knows the hearer is likely to make inferences according to these and other strategies.
The speaker has done nothing to prevent the hearer from making them.
Therefore the hearer is justified in inferring that the speaker intends for the inference to be made.
Using these and other strategies, then, the hearer modifies his model of one or more worlds, based not on detached sentences or propositions floating in some abstract semantic space, but on his observation that a certain person has performed a certain speech act.
B. Relations among speech acts.
But there is more to the interpretation of a text than just the interpretation of individual speech acts. A speech act is performed for some purpose, with some goal in mind. And complete understanding of a text involves the ability to infer such goals and purposes at every level, from inferring the purpose of referring expressions to inferring the speaker's overall goal in constructing the text.
One can understand every sentence in a text, yet come away puzzled at what it was the speaker was trying to say, or what the parts of the text had to do with each other. To understand the purpose of a speech act is to understand how it relates to a goal, how it is a step toward the achievement of that goal.
The most appropriate kind of theory for this aspect of a text is a theory of plans, in which purposes, goals, acts, and intentions play a crucial role.
There are a large number of goals a speaker can have in constructing a text, including many that are irrelevant to comprehension:
to derive royalties, for example, or to confuse an enemy by furnishing misinformation.
A proper theory of text comprehension must distinguish goals like these from those that are central to communication and comprehension, probably by means of conditions like those Grice (1957) proposes as criteria for meaning.
C. What can go wrong.
Then we can sketch the task of text comprehension as follows:
1. From [...]
2. [...]
3. From each speech act H must recover what facts are being asserted, denied, promised, etc.
4. From this H must infer what modifications he is intended to make in his model of the world, and how to make them in the most consistent way; this is not a direct function of the facts, as discussed earlier.
5. For each speech act H must infer a purpose that is consistent with the purposes he inferred for earlier speech acts; or he must revise earlier hypotheses about purposes accordingly. Questions H must infer answers to are, "Why did the speaker perform this particular speech act, at this particular point in the text?" and "Why does he want me to have this particular fact just now?"
6. From speech acts and their purposes taken jointly, he must construct a hypothesis of the speaker's goal in the text, and of the plan that the speaker is following in advancing toward that goal. At each step the purpose of a given speech act must somehow be construed as consistent with, and actually advancing that plan, or the plan hypothesis must be modified so that it can.
7. From hypotheses about the speaker's plans and goals in the text, the hearer will form expectations: hypotheses about what the speaker is likely to do next in advancing toward the goal of the text.
These matters do not proceed in separate compartments, of course, but feed each other. The plan one has constructed so far can influence decisions about what speech act is performed in a given utterance, for example, and the interpretation of pronouns can be influenced by hypotheses about the speaker's goals, just as a decision about what a referring expression is being used to refer to can influence the process of inferring a plan, and expectations about what the speaker will do next can influence the interpretation of what he actually does.
From this sketch we can derive a picture of where things can go wrong in comprehension, giving some insight perhaps into notions like "text structure," "relevance," and "coherence."
The hearer can have difficulty in tasks 1 through 3, of course, but the matter seems straightforward, so I will not discuss it. Difficulties can arise in task 4 in at least two ways. First, the world described may be so factually or logically bizarre, or so inconsistent with the hearer's beliefs (a description of ping pong in a black hole, for example), that the hearer is unable to construct a consistent model with any degree of detail.
The term "incoherent" might be applied to such cases, but I think this is not what linguists mean by "textual coherence," which I will discuss below.
A second kind of difficulty with task 4 arises when the facts are consistent, but the hearer lacks the knowledge necessary to figure out how to construct a consistent model that incorporates those facts.
For example, if I describe in detail a walk through the South Side of Chicago, a person who has been there before will be able to construct a much more richly detailed model of my walk than a person who has not. Difficulties can arise with task 5, insofar as the hearer is able to understand clearly what's being asserted, but unable to determine the speaker's purpose in asserting it. Here is the place to look for an adequate definition of relevance.
Actually there are two senses of the word in ordinary usage.
One can speak of relevance as a relation between facts.
One fact is relevant to another when the truth of one depends in some way on the truth of the other. But I think more often, linguists who speak of "relevance" as a problem of text comprehension have in mind a problem that is best treated in terms of purposes behind speech acts.
Given a hypothesis about the goal and plans of a speaker in a text, a given "sentence" (i.e. speech act) is taken to be irrelevant when the hearer is unable to see how it functions within the plan to advance toward the goal.
Relevance under this interpretation, then, is a relation between an act and a goal, not a relation between sentences. If in recounting my recipe for Wienerschnitzel I describe my new driveway, it's not that the sentences are irrelevant; rather, I have done something irrelevant.
The same passage may count as full of irrelevancies, relative to one goal, but uniformly relevant, relative to another goal.
Task 6 is probably the most complex and difficult, and the one we know least about. But I suspect that it is a likely source of progress in understanding such important but elusive notions as "coherence," "text structure," and "topic."
In understanding a text, the hearer unconsciously searches out a primary goal behind the text, and tries to construe every part of the text as a purposeful step toward that goal, according to some plan.
If the hearer is unable to reconstruct the goal or plan, or indeed decides there is none, the text will be judged "incoherent."
Coherence is not a formal property of texts, nor of "logical structures" of texts, but a function of the hearer's ability to relate parts of the text to a plan for achieving some goal.
If it should turn out that the coherence of texts correlates with the number of pronouns, it would be a mistake to conclude that lots of pronouns makes a text coherent. Rather, it would show that coherent texts tend to be ones where the speaker says a lot about one or two topics, rather than saying one thing about 32 topics. It is the coherence of what the speaker is doing in the text that gives rise to the abundance of pronouns; the formal property of having a lot of pronouns does not give rise to coherence.
At least some aspects of "text structure" can also be treated in these terms.
An ideal unified paragraph, for example, is a unit of function, not of form; the speaker formulates a subgoal as a step toward the primary goal of the text, and sets about to achieve, that goal in a series of speech acts. Insofar as the hearer is able to discover this, the series of speech acts will be judged to be a unit; but a unit of function, not of form, defined not in terms of sentences or propositions, but communicative acts of some person, who uses those sentences to convey those propositions.
It is likely that an understanding of task 6 will lead to an understanding of "topic" as well.
At present, there are nearly as many definitions of "topic" as there are linguists, and none of the definitions is clear enough to be usable.
For some linguists the topic is a certain NP in a sentence; for others a topic is something a sentence has, though the NP may not be present in the sentences.
For some every sentence has a topic; for others, only some sentences have topics.
But I suspect that all of these attempts miss by a wide mark.
First, it is not NP's that are topics, but the things in the world they refer to. Second, I suspect that such definitions can never be made sense of in that it is speakers, not sentences or even texts, that have topics.
If so, then the proper theoretical treatment of "topic" would be framed in terms of a theory of complex communicative acts, not formal linguistic properties.
IV. Conclusion
In this speculative paper I have proposed a way of looking at the comprehension of connected text that is counter to the linguist's usual way of looking at language.
My main point is that certain notions are more likely to receive adequate treatment in a theory that incorporates a theory of speech acts, a theory of plans and goals, and a theory of inference, in place of a theory that looks for answers in terms of formal properties of texts.
It remains, of course, to develop such theories to a level where my claims can be rigorously tested. The construction of such theories should be a prime goal of theoretical linguistics.
Austin, J. How to do things with words. Oxford: Oxford University Press, 1962.
Grice, H. P. Meaning. Philosophical Review, 1957, 66, 377-388.
Halliday, M. A. K., & Hasan, R. Cohesion in English. London: Longman, 1976.
Jespersen, O. The philosophy of grammar. New York: Norton, 1965.
McCawley, J. Remarks on the lexicography of performative verbs. In A. Rogers, R. Wall, and J. Murphy (Eds.), Proceedings of the Austin Conference on Performatives, Presuppositions, and Implicatures. Arlington, Va.: Center for Applied Linguistics, 1977.
Footnote: This research was supported by the National Institute of Education under Contract No. US-NIE-C-400-76-0116. |
|
251,402,038 | Know Better -A Clickbait Resolving Challenge | In this paper, we present a new corpus of clickbait articles annotated by university students along with a corresponding shared task: clickbait articles use a headline or teaser that hides information from the reader to make them curious to open the article. We therefore propose to construct approaches that can automatically extract the relevant information from such an article, which we call clickbait resolving. We show why solving this task might be relevant for end users, and why clickbait can probably not be defeated with clickbait detection alone. Additionally, we argue that this task, although similar to question answering and some automatic summarization approaches, needs to be tackled with specialized models. We analyze the performance of some basic approaches on this task and show that models fine-tuned on our data can outperform general question answering models, while providing a systematic approach to evaluate the results. We hope that the data set and the task will help in giving users tools to counter clickbait in the future. | [
201646309,
207901226,
204960716,
237491981,
52013710,
11816014,
215768182,
7164502,
964287
] | Know Better -A Clickbait Resolving Challenge
June 2022
Benjamin Hättasch benjamin.haettasch@cs.tu-darmstadt.de
Technical University of Darmstadt (TU Darmstadt), Germany
Carsten Binnig carsten.binnig@cs.tu-darmstadt.de
Technical University of Darmstadt (TU Darmstadt), Germany
Know Better -A Clickbait Resolving Challenge
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. Language Resources Association (ELRA), licensed under CC-BY-NC-4.0, page 515. Keywords: Information Extraction, Question Answering, Corpus, Summarization, Natural Language Generation, Clickbait, Shared Task
In this paper, we present a new corpus of clickbait articles annotated by university students along with a corresponding shared task: clickbait articles use a headline or teaser that hides information from the reader to make them curious to open the article. We therefore propose to construct approaches that can automatically extract the relevant information from such an article, which we call clickbait resolving. We show why solving this task might be relevant for end users, and why clickbait can probably not be defeated with clickbait detection alone. Additionally, we argue that this task, although similar to question answering and some automatic summarization approaches, needs to be tackled with specialized models. We analyze the performance of some basic approaches on this task and show that models fine-tuned on our data can outperform general question answering models, while providing a systematic approach to evaluate the results. We hope that the data set and the task will help in giving users tools to counter clickbait in the future.
Introduction
Nearly everyone knows headlines like "The hidden secret of Kermit the Frog" or "15 hacks that will change your life". If you open the article, you will usually find completely trivial and well-known information, or you will find that the headline was completely exaggerating or misleading. In short, you have fallen for clickbait, a term that refers to a certain style of a headline or other teaser text designed to "bait" the reader into clicking a link to a full article. Unfortunately, clickbait is annoying but effective. Just as with tabloid headlines, people's curiosity is exploited to get them to open the article and read it. And since their curiosity is so strong, they spend time reading the articles--even though they usually know very well that the article is clickbait and that the "shocking" news will turn out to be unsurprising in the end. A good analysis of how marketing experts use the fear of missing out on information (Loewenstein, 1994) and turn this into a curiosity gap by intentionally making headlines really irresistible, can be found in the recent work by Scott (2021). She identifies patterns in these headlines and shows that, e.g., certain groups of adjectives are used significantly more often in clickbait headlines than in general. Websites that use such techniques usually focus on attracting people to the page and, in the best case, making them stay as long as possible in order to "play out" as many ads as possible. As such, clickbait is a billion-dollar business today and thus many more advanced techniques are being developed. For example, with the help of social media and article recommender systems, owners of these websites "made for advertising" try to get the largest possible share from the estimated $115 billion that will be spent on displaying ads in the US in 2022 alone (Barwick, 2021). In many of these cases, clickbait is not used to (legitimately) refinance the costs of running a news page or a social network, but these websites are solely built to generate profit by tricking their users. Sadly, clickbait not only causes people to waste time. A study by Gabielkov et al. (2016) shows that almost 60 percent of the links shared on social media were not clicked before being shared. Clickbait articles with provocative titles (containing, e.g., exaggerations or rhetorical questions) might spread false information to those who do not read the full articles even if the misinformation is corrected in the article itself at some point (which might be page 30 of an image gallery with ads on every other page). Sometimes, clickbait headings do not point to an article at all but to webpages distributing malware or collecting personal data (e.g., with fake sweepstakes). So can the NLP community help reduce the amount of clickbait on the web? Current scientific work on clickbait mainly focuses on detecting clickbait using linguistic analyses of headlines, learned models for classification, or regression approaches to determine the degree to which an article is clickbait. This information is then used to hide clickbait articles from webpages and timelines. Completely hiding contents is effective, but like with every other filter approach, there might be false positives. Simply marking articles as clickbait but still showing them solves the issue of filtering out important articles, but may not be sufficient, since many people open clickbait articles despite knowing they are clickbait.
We therefore suggest working on approaches that can automatically extract the teased information from the article text and thus fill the curiosity gap by displaying this information next to the headline without requiring the user to click the link.
Contributions In this paper, we hence present a dataset and a shared task to complement existing approaches on clickbait detection to enable the development of new approaches that will allow users to better deal with clickbait. To be more precise, we present a corpus of clickbait samples (titles/teasers and texts) together with their resolutions, which were annotated by university students. We define a clickbait resolving task based on the dataset and will maintain a leaderboard for approaches submitted to that task. To allow a direct application of the resulting approaches, submissions have to provide/implement an interface that allows running the model on new data (i.e., resolving a given clickbait article). We also expect authors to open source their code and pretrained models (after they successfully published their work). The leaderboard, evaluation scripts and other code, instructions how to submit and obtain the corpus (including additional silver data), and baseline implementations including pre-trained models can be found at: https://link.tuda.systems/ clickbait-resolving-challenge Finally, it is important to note that this paper is meant as a starting point. We hope that the resulting approaches (e.g., browser plugins or the adaption by social networks) will ultimately reduce the amount of clickbait articles and additionally help to increase media literacy.
Outline We first give an overview of related work, previous tasks, and datasets related to clickbait in Section 2. Afterwards, we describe the construction process (Section 3) and the properties of our corpus (Section 4). Then we define the task and important metrics in Section 5. Finally, we analyze the usefulness of our data using several baseline approaches (Section 6) before wrapping up our contribution in Section 7.
Related Work & Existing Datasets
"The term clickbait refers to social media messages that are foremost designed to entice their readers into clicking an accompanying link to the posters' website, at the expense of informativeness and objectiveness" (Potthast et al., 2018a). Early work by Vijgen (2014) and Blom and Hansen (2015) studied the phenomenon from a linguistic perspective and detected homogeneous structures (e.g., headlines starting with a number leading to listicles, i.e., articles only consisting of long lists or image galleries) and the use of certain patterns and expressions (e.g., "This will blow your mind."). Current work in this area mostly focuses on detecting clickbait articles to warn users or hide those headlines and teasers from them. Agrawal (2016) presented a convolutional neural network for classification whether headlines are using clickbait techniques or not. Chakraborty et al. (2016) evaluated different techniques to create such a model, too. Additionally, they propose to ask users to mark contents they perceive as clickbait and infer from that to block similar contents. Finally, they integrated this approach into a browser extension that can mark and hide clickbait contents on several media sites and ran a field study. Rony et al. (2017) trained embeddings for classification based on a large corpus of social media posts. To evaluate all these and some other approaches, different corpora containing clickbait articles were created by the authors, a good overview of these corpora can be found in Potthast et al. (2018b). In that paper, the authors describe how they constructed a new corpus of clickbait teasers based on Twitter posts for the clickbait challenge 2017 (Potthast et al., 2018a). That challenge is the first that phrases the problem as a regression task which tries to assign a score for the strength of clickbait. It received 13 submissions during the original shared task period, but is still open for further submissions. The best scoring submission was able to improve on the F1 score by nearly 20 percentage points compared to the baseline. After these classification and regression approaches, we now propose to go one step further by creating models that generate or extract text. Our task is related to several NLP disciplines: one of them are topic-focused single document summarization approaches, that try to extract the most important parts of a text with regard to a certain topic (the so-called content selection). A systematic evaluation of such approaches was already performed in the DUC2005 challenge (Dang, 2006), more recent ways to frame this task were, e.g., proposed by Narayan et al. (2018) or Deutsch and Roth (2019). Moreover, our task can be seen as a specialized question answering (QA) task, particularly as textual QA which aims to answer a given question based on unstructured textual documents. That field in turn integrates with neural machine reading comprehension (MRC). Traditional approaches tried to tackle the problem using different components dealing with question analysis, classification, document retrieval and answer extraction but are nowadays mostly replaced by neural end-to-end models. A good overview of existing approaches in these fields can be found in the recent paper by Zhu et al. (2021), an extensive analysis of transformer based language models and their preparation for different downstream task was recently presented by Kalyan et al. (2021). 
Even though the general task description (find a resolution to a short text snippet in a longer text) matches the one of textual question answering, there are differences: Teasers might be formulated as (rhetoric) questions but usually do not have a question format, and they may contain certain expressions like "will change your life" that are not actually useful to find the resolution. Furthermore, the presented texts do not always contain a real resolution or the resolution may at least not match the detail level promised in the teaser. We will further discuss these challenges in the next sections. Taking this into account, we think it is reasonable to consider clickbait resolving as a task that should be tackled with dedicated approaches.
Corpus Construction
The two most straightforward approaches to construct a corpus of clickbait articles and their resolutions are a) annotating clickbait articles with the resolution manually or b) finding combinations of articles and resolutions and manually checking and confirming them. We decided to go with the second way, hoping that this will result in a higher linguistic variability of the resolutions since they are written by lots of different authors, and also a higher quality since the authors wrote them with the intrinsic motivation of helping other people. This however required finding suitable data sources as well as a careful checking of the resulting samples. We evaluated multiple possible sources, starting with Twitter accounts like @SavedYouAClick and @WeHateClickbait that post resolutions to clickbait articles. Unfortunately (at least for our use case) they often only include a screenshot of the headline but no link to the original source, the overall amount of tweets is limited to a few hundred tweets, or many of the tweets deal with other topics. Therefore, we settled on the Subreddit Saved you a click (not related to the Twitter account) which was created in June 2014 and has nearly 1.8 million members. From April to September 2021, we downloaded 4870 posts containing article links from that Subreddit and determined Web Archive links for the URLs if necessary. We then crawled and parsed the pages to extract full article texts. On purpose, we did not remove sentences like "Get the latest from xyz Sign up for our newsletter" since they will be contained when retrieving page contents in real world usage, too. However, it might be interesting to train or adapt models that remove these parts and detect the real content automatically, and one might use them as part of one's processing pipeline when submitting to the shared task (see Section 5 for details). We will curate a public list of preprocessing models and other systems working on/with the data on the webpage for the challenge, and welcome submissions to this independently of a submission to the task itself.
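As a purely illustrative sketch of what such a content-extraction component could look like, the snippet below uses the trafilatura library; this is our example choice and not the parser that was actually used to build the corpus.

```python
# Illustrative sketch of a page-text extraction step; trafilatura is only an
# example choice and is not the parser that was used to build the corpus.
import trafilatura

def fetch_article_text(url: str) -> str:
    """Download a page and return its main text content (or '' on failure)."""
    downloaded = trafilatura.fetch_url(url)
    if downloaded is None:
        return ""
    extracted = trafilatura.extract(downloaded, include_comments=False)
    return extracted or ""
```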
To compile a corpus out of the raw data, university students manually checked and annotated the data following a list of guidelines. They first checked whether the resolution from Reddit is suitable to answer the kind of question raised in the teaser and afterwards determined their correctness based on teaser, resolution and the full text. Thereby, they also determined whether the text contains the relevant information to produce the proposed resolution. More than half of the samples had to be discarded in this step due to quality problems. Even though we wanted to keep the linguistic variability high, the students were advised to reformulate the resolutions in some cases, e.g., to remove long sequences of exclamation marks, sarcastic comments regarding the articles beyond the resolution, and other additions like the amount of clicks the author needed to get to the teased information. Also, for articles which could simply be summarized with "yes" or "no", we marked this as additional information and replaced resolutions like "Nope" or "Yeah". Finally, we split the resulting samples into train, dev and test set. As a result of our approach, we created a corpus with articles from many different sources, which contains resolutions written by different authors and consists of manually confirmed entries only. The input texts correspond to what one could expect when running a crawler on an arbitrary page to resolve the clickbait for a user that just spotted the headline of that article.
Dataset Statistics
Our corpus consists of 2635 samples for English clickbait articles with their resolutions. It is split into a public training set (2108 samples / 80 %), a public dev set (264 samples / 10 %) and a test set (263 samples / 10 %) that we keep private for evaluation. For each sample, we provide a title/teaser, the article text and the resolution, as well as some meta information: the URL to the full article, a timestamp when the resolution was created, and the score the resolution post on Reddit achieved (which might hint on the quality of the uncleaned answer but is also influenced by the overall interest in the topic and other subjective factors). Finally, we manually annotated whether a clickbait headline is hinting at a simple yes/no answer and normalized the resolution for those cases. This applies to about 3.9 % of the samples. The average title has a length of 68.7 characters or 13.4 words. The texts are on average 3582.2 characters or 716.6 words long. They were written in the years 2016 to 2021.
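For illustration, a single corpus entry can be thought of as a record like the following; all field names and values here are invented for this sketch and need not match the released files exactly.

```python
# Invented example record; field names and values are illustrative only.
sample = {
    "title": "You won't believe what this man found in his burger",
    "text": "Full article text as crawled from the page ...",
    "resolution": "A piece of plastic.",
    "url": "https://example.com/some-clickbait-article",
    "created": "2019-07-14",   # timestamp of the resolution post
    "reddit_score": 412,       # score of the resolution post on Reddit
    "is_yes_no": False,        # True if the teaser hints at a yes/no answer
}
```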
Task definition
On top of the proposed dataset, we define a task for finding resolutions to clickbait articles, which is evaluated using the public dev and private test set. We first discuss the metrics used for evaluation, and afterwards describe details for task and submission.
Metrics
Already the first shared task on question answering (Voorhees and Tice, 2000) raised the question whether metrics known from the field of information retrieval are suitable for this kind of task (i.e., really resemble human judgement of correctness and equivalence). This is particularly a problem for free form question answering, where simple metrics like precision or recall cannot be employed directly but the semantic equivalence of strings has to be (automatically) evaluated to estimate the correctness of an answer. Like in many tasks of the NLP community, this is a hard problem, e.g., due to ambiguity, synonyms, and context-depending meanings. Even human annotators might disagree, particularly because of different previous knowledge or different interpretations of the question. Another problem is different levels of granularity, which poses in fact a typical issue with clickbait: the headline promises a detailed answer, but the article then just presents somewhat common knowledge or things that could be easily guessed (e.g., "you won't believe what the other kids call Prince George" and the answer is just "George"). But what effect does this have on the evaluation of resolution correctness-should answers of another detail level be treated as similar or not? These difficulties lead to the development of a range of metrics with different properties: metrics working only on the syntactical level might both under-and overestimate similarity (e.g., sentences consisting of synonyms will get a low similarity score, but sentences like "They said yes" and "They said no" or "We did start the fire" and "We did not start the fire" will get high scores even though they express the absolute opposite). Trained models might be able to better capture semantic meanings, but this highly depends on the data they were trained on, and it still cannot be guaranteed that the contexts are correctly interpreted and that the background knowledge incorporated in the language model is valid for this specific pair of texts. Recent papers like Chen et al. (2019) and Si et al. (2021) evaluated different classic and transformer based metrics, but found that none of them can resemble human judgement in every case. We therefore decided to measure and report multiple metrics in our task at once. That way, we can take different aspects of similarity (e.g., syntactic equivalence and semantic correspondence) into account. Potential users of the resulting models can then choose the model to use for their application based on these aspects. The evaluation of the task will use the following metrics:
Exact Match This metric measures whether the predicted resolution matches the human written one character by character.
Recall-Oriented Understudy for Gisting Evaluation ROUGE (Lin, 2004) was developed to evaluate the quality of a summary by comparing it to human created gold summaries. There are different variants: ROUGE-N measures the n-gram recall, precision and F1 score between a text and the gold standard. We use ROUGE-2 (based on bigrams) and report the F1 score. We also report the ROUGE-L F1 scores, which are based on the longest common subsequence.
Bilingual Evaluation Understudy BLEU (Papineni et al., 2001) was developed to score the similarity between machine and human translations of a given text but may also be used to evaluate text similarity in general. This metric was one of the first to show a high correlation with human judgment. It is precision-based, uses cumulative n-grams, and works best if there are multiple reference translations (which is unfortunately not the case for our data). We use BLEU-2 which incorporates the unigram and bigram overlap in one single score.
Metric for Evaluation of Translation with Explicit ORdering METEOR (Banerjee and Lavie, 2005) again was originally designed for evaluation of machine translation. It works on unigram level but allows generalization by taking not only the surface forms but also stemmed forms and meanings into account. METEOR creates an alignment between the tokens of the texts to compare, scores that alignment using precision and recall, and combines these scores in an F-measure with a higher weight on recall.
BERTScore BERTScore (Zhang* et al., 2020) was developed as a robust automatic metric for text generation. Similar to the previous metrics, it scores the similarity of the tokens in candidate and reference texts. But to do so, it uses the cosine similarity between pretrained contextual BERT embeddings instead of the surface forms and sums them up to a single score. Studies show that this score better aligns with human judgment than other metrics in many cases.
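The evaluation script published on the project page is authoritative; the sketch below only illustrates how the individual scores could be computed with common Python packages (rouge-score, NLTK, and bert-score), and choices such as tokenization and smoothing are assumptions that may differ from the official implementation.

```python
# Sketch only: the official evaluation script on the project page is
# authoritative; tokenization and smoothing choices here are assumptions.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from bert_score import score as bert_score

def evaluate_pair(prediction: str, reference: str) -> dict:
    results = {"exact_match": float(prediction.strip() == reference.strip())}

    # ROUGE-2 and ROUGE-L F1
    scorer = rouge_scorer.RougeScorer(["rouge2", "rougeL"], use_stemmer=True)
    rouge = scorer.score(reference, prediction)
    results["rouge2_f1"] = rouge["rouge2"].fmeasure
    results["rougeL_f1"] = rouge["rougeL"].fmeasure

    # Cumulative BLEU-2 (unigrams + bigrams), smoothed because texts are short
    results["bleu2"] = sentence_bleu(
        [reference.split()], prediction.split(),
        weights=(0.5, 0.5), smoothing_function=SmoothingFunction().method1)

    # METEOR on token lists (requires the NLTK WordNet data)
    results["meteor"] = meteor_score([reference.split()], prediction.split())

    # BERTScore F1 from contextual embeddings
    _, _, f1 = bert_score([prediction], [reference], lang="en")
    results["bertscore_f1"] = f1.item()
    return results
```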
Task Details
A submission to the clickbait resolving challenge should produce resolutions for given texts and teasers of clickbait articles. We do not give constraints on the implementation. Both generative models, which produce a new text to do this, and extractive models, which extract part of the original text, can be used. There is also no specification for the maximum length of the resolution, but since the expected resolutions are always only a few words to a few sentences at most, unnecessarily long predictions will automatically result in poor scores. The meta-information described in Section 4 can also be taken into account, but the approach should be robust to a lack of certain information (e.g., the Reddit score). It is allowed to use other resources (e.g., ontologies, models for pre- and post-processing). Approaches that access online resources to produce the resolution (e.g., API calls to lexical resources) are listed in a separate leaderboard.
As described in Section 3, we call for supporting models and approaches (e.g., for pre-processing) to be submitted to us as well, so that we can promote them prominently on the project page.
In a recent paper (Hättasch et al., 2021) we discussed the importance of not only reproducibility but applicability of results from shared tasks. We therefore require the implementation of a small python interface that can be used to predict resolutions for single or multiple new samples. That interface should be published together with code and pre-trained model dumps (if necessary).
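As a rough, non-authoritative sketch, such a wrapper could look like the following; the class and method names are our own placeholders, and the exact interface definition is the one given on the project website.

```python
# Placeholder sketch of a submission wrapper; names and signatures here are
# assumptions, the official interface is defined on the project website.
from abc import ABC, abstractmethod
from typing import List, Optional

class ClickbaitResolver(ABC):
    @abstractmethod
    def resolve(self, title: str, text: str,
                reddit_score: Optional[int] = None) -> str:
        """Return the resolution for a single clickbait article.
        Meta information such as the Reddit score may be missing."""

    def resolve_batch(self, samples: List[dict]) -> List[str]:
        # Default batch implementation on top of the single-sample method.
        return [self.resolve(s["title"], s["text"], s.get("reddit_score"))
                for s in samples]
```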
Leaderboard & Submission
All details regarding evaluation and submission procedure can be found on the project website. We publish an evaluation script that can be used to evaluate an approach on the dev set. The test set is kept privately by us and used to finally evaluate submissions. Each approach will be evaluated on the test set only once. We will maintain a leaderboard for models/approaches for our task as part of the webpage. It will report both the dev and test scores for all submitted approaches. Submissions under review may show up as anonymous on the board, but we place great value on reproducibility. It is therefore required to open source code, model dumps and the above-mentioned code snippet to generate new predictions to stay in the leaderboard once an approach was successfully published somewhere.
Experimental Results & Discussions
Baselines
To prove the usefulness of the data for the proposed task but also show that the task cannot be trivially solved with existing approaches, we evaluated our data using the following approaches:
First & Last Sentence As two trivial baselines, we extract the first or last sentence of an article and treat that sentence as resolution. This approach neither uses the training data nor incorporates the teaser information and is mainly used to "calibrate" the scores.
Longformer2Roberta Summarization 1 This is an encoder-decoder model based on Longformers 2 and the RoBERTa-base 3 model, fine-tuned for summarization. The use of longformer models with a maximum input size of 4096 tokens allows us to load the full text of nearly all input samples. This approach again does not make use of the teaser information.
BART SQuADv2 4 A BART-LARGE (Lewis et al., 2020) model fine-tuned on the second version of the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). This is a seq2seq model that can handle sequences of up to 1024 tokens; longer input was truncated. Both the teaser/title and the text are used for generation.
Sentence Transformers (S-BERT) QA 5 This sentence transformer model (Reimers and Gurevych, 2019) was fine-tuned on 215 M question-answer pairs. It works in an extractive fashion and uses a vector space designed for semantic search.

1 https://huggingface.co/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16
2 https://huggingface.co/allenai/longformer-base-4096
3 https://huggingface.co/roberta-base
4 https://huggingface.co/a-ware/bart-squadv2
5 https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1
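A possible way to set up this extractive semantic-search baseline with the sentence-transformers library is sketched below; the concrete pre- and post-processing of our runs may differ.

```python
# Sketch of the extractive semantic-search baseline: embed teaser and article
# sentences with a QA-tuned model and return the most similar sentence.
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-cos-v1")

def sbert_extractive_resolution(title: str, text: str) -> str:
    sentences = sent_tokenize(text)
    if not sentences:
        return ""
    query_emb = model.encode(title, convert_to_tensor=True)
    sentence_embs = model.encode(sentences, convert_to_tensor=True)
    similarities = util.cos_sim(query_emb, sentence_embs)[0]
    return sentences[int(similarities.argmax())]
```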
T5 SQuAD 6 The T5 base model (Raffel et al., 2020) fine-tuned on the SQuAD dataset. This is again a seq2seq model incorporating both the teaser and the text for producing the resolution.
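Running such a T5 QA checkpoint on a sample can be sketched as follows; note that the "question: ... context: ..." prompt format is an assumption about how the footnoted checkpoint was trained, and the generation settings are arbitrary.

```python
# Sketch of resolving one sample with a T5 QA checkpoint; the prompt format
# and generation settings are assumptions, not the exact baseline setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "valhalla/t5-base-squad"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def t5_resolve(title: str, text: str) -> str:
    prompt = f"question: {title} context: {text}"
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```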
T5 SQuAD fine-tuned For the final two baselines, we fine-tuned existing models using our training set. First, we took the T5-based QA model (see above) for that and ran a standard fine-tuning on all 2108 training data samples.
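A rough sketch of such a fine-tuning run with the Hugging Face libraries is given below; `train_samples` stands for the loaded training records, and all hyperparameters are placeholders rather than the settings actually used for the baseline.

```python
# Rough fine-tuning sketch; `train_samples` is assumed to hold the training
# records (dicts with "title", "text", "resolution"); hyperparameters are
# placeholders and not the settings used for the reported baseline.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "valhalla/t5-base-squad"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def preprocess(batch):
    inputs = [f"question: {t} context: {c}"
              for t, c in zip(batch["title"], batch["text"])]
    enc = tokenizer(inputs, truncation=True, max_length=512)
    enc["labels"] = tokenizer(batch["resolution"], truncation=True,
                              max_length=64)["input_ids"]
    return enc

raw = Dataset.from_list(train_samples)  # train_samples: loaded corpus records
train_dataset = raw.map(preprocess, batched=True,
                        remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="t5-clickbait-resolver",
                                  num_train_epochs=3,
                                  per_device_train_batch_size=8),
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```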
T5 SQuAD augmented + fine-tuned To better balance the size of the SQuAD dataset the model was originally trained on against the amount of data available for fine-tuning, we additionally created silver data using an augmentation step and fine-tuned the model with both the training data and the automatically created silver data. For the augmentation step, we used NLPAug (Ma, 2019) to create two artificial teasers for each training sample based on WordNet. The texts were not modified. Hence, that model was fine-tuned on 2108 + 2 · 2108 = 6324 samples.
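The augmentation step can be sketched with nlpaug's WordNet-based synonym augmenter as follows; only the teaser is paraphrased, text and resolution are copied unchanged.

```python
# Sketch of the silver-data augmentation: paraphrase only the teaser with a
# WordNet-based synonym augmenter; text and resolution stay unchanged.
import nlpaug.augmenter.word as naw

augmenter = naw.SynonymAug(aug_src="wordnet")

def augment_sample(sample: dict, n: int = 2) -> list:
    """Create n silver samples per training sample by varying the title."""
    new_titles = augmenter.augment(sample["title"], n=n)
    return [{**sample, "title": title} for title in new_titles]
```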
Results
The resulting scores from running the baselines on dev and test set can be found in Table 1. In Table 2 we show the resulting ranks of the different approaches for each metric.
Most importantly, it can be seen that the approach using our dataset the most (namely using title and context for the prediction itself, and being fine-tuned on the training data) performs better than all other approaches regardless of the metric. This is also remarkable because the underlying (non-refined) model performs worse than the BART model also trained on the SQuAD data for most metrics (except Exact Match, and BERTScore on the dev set), i.e., this result is achieved even though the model that was fine-tuned was not the strongest pre-trained model to begin with.
For the most part, the extractive baselines do not perform well and, depending on the metric, are even beaten by the LongformerSummary model which does not include the title when generating the response. This is certainly partly due to the fact that the resolutions were written by hand and not generated by selecting existing text blocks. When applying metrics that primarily measure the overlap of tokens or n-grams, it may thus not even be possible to achieve a perfect score. We therefore also report the extractive upper bound in Table 1, i.e., the highest possible score that could be achieved if always the one sentence best matching the answer was selected. As described earlier, we also have to assume that the information needed to resolve a clickbait can often not be found in a single sentence, but rather spans a longer range, which might be another stumbling block for extractive approaches.
A qualitative analysis of the cases where our augmented and fine-tuned T5 baseline fails on the dev set according to the Rouge-2 metric shows several patterns: in some cases, the metric does not reflect that the most important aspect of the answer was correctly extracted (e.g., for entry 17669 with the gold answer "'The Intelligent Investor' by Benjamin Graham, it advises to buy stocks when they are low and hold them and also ways to avoid huge mistakes." the answer "The Intelligent Investor" was produced). Yet, since it is a considerably shorter subset, the score is low. In many other cases where only a subset of the expected answer is returned, this low score seems however to be justified. "130/80" is indeed an important part of the resolution to entry 13522, but without the information that this is the new definition for high blood pressure, it will probably not be understandable on its own. The same applies for "cold" (entry 10009) with the expected answer "his Burger King food was cold". Similarly, the baseline approach often extracts something that is related to the answer, but on another detail level and thus may be too generic to really satisfy the information need. For example, it returns "fear" as "the sad reason half of Americans don't take all their paid vacation", which is true but not as detailed as "they believe they'll be replaced" (entry 15745). Finally, the approach often only repeats a central phrase from the title, e.g., "the Iron Throne" for "I sat on the actual Iron Throne from 'Game of Thrones'-here's what it was like" (entry 21787). To summarize: Measured with different metrics, patterns in rankings emerge, but the differences make the use of different scores seem justified and provide intuition about the aspects (e.g., customized wording) on which certain approaches perform particularly well. Generative approaches (seq2seq) seem more promising than extractive approaches. Most neural models clearly outperform the trivial baselines, but there are still a lot of cases where they are not able to produce the correct answer, and several patterns for such cases can be found. For the models tested, generic QA approaches cannot match the quality of an approach specifically refined on the data. Finally, by means of augmentation, the quality of the approach can be boosted even more, without having to manually annotate further data.
Conclusion
In this paper, we presented a new corpus for clickbait resolving and established a corresponding task. We showed that our dataset is suitable to train approaches for that task using several baselines and evaluating with different metrics; building metrics for free-form question answering evaluation is treated as an orthogonal problem, but we will happily include new metrics resembling human judgment into our evaluation procedure as they are developed over the next years. The leaderboard and all important details to train, evaluate, and submit own approaches can be found on the project webpage. We hope that our dataset and our task will help to keep people from wasting their time, improve their media literacy, and in the end reduce the amount of clickbait on the internet.
Acknowledgements
We thank Max Doll, Kathrin Ferring, Martin Otterbein, and Jan-Hendrik Schmidt for the countless hours they spent reading hardly bearable texts. This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1, as well as the German Federal Ministry of Education and Research and the State of Hesse through the National High-Performance Computing Program.
Bibliographical References
Table 1: Results of the baseline models evaluated with different metrics (Exact Match, Rouge-2 & Rouge-L F1, Meteor, BLEU-2, and BERTScore) on the dev and test sets of our corpus. Additionally, we show the scores for the extractive upper bound (i.e., selecting the one sentence corresponding best with the manually written answer). Values between 0 and 1, and higher is better for all metrics. (S) marks Seq2Seq models, (E) marks models working extractively. The fine-tuned versions of T5 outperform all other approaches regardless of the metric.

Dev
Approach | ExactMatch | Rouge-2 | Rouge-L | Meteor | BLEU-2 | BERTScore
Extractive Upper Bound | .0000 | .1351 | .2090 | .1852 | .0957 | .1760
First Sentence (E) | .0000 | .0151 | .0657 | .0638 | .0128 | .0214
Last Sentence (E) | .0000 | .0070 | .0386 | .0455 | .0058 | .0553
Longformer Summary (S) | .0000 | .0298 | .1109 | .0612 | .0123 | .0533
BART SQuADv2 (S) | .0038 | .0565 | .1030 | .1164 | .0508 | .0476
S-BERT QA (E) | .0000 | .0178 | .0747 | .0697 | .0131 | .0472
T5 (S) | .0189 | .0394 | .0907 | .1074 | .0423 | .0730
T5 fine-tuned (S) | .0455 | .0716 | .1568 | .1891 | .0737 | .1567
T5 augmented+fine-tuned (S) | .0720 | .0870 | .1846 | .2296 | .0910 | .2089

Test
Approach | ExactMatch | Rouge-2 | Rouge-L | Meteor | BLEU-2 | BERTScore
Extractive Upper Bound | .0000 | .1029 | .1616 | .1456 | .0686 | .1662
First Sentence (E) | .0000 | .0187 | .0517 | .0582 | .0125 | .0514
Last Sentence (E) | .0000 | .0043 | .0306 | .0404 | .0035 | .0045
Longformer Summary (S) | .0000 | .0202 | .0816 | .0450 | .0064 | .0594
BART SQuADv2 (S) | .0038 | .0600 | .1034 | .1089 | .0610 | .1276
S-BERT QA (E) | .0000 | .0161 | .0592 | .0573 | .0094 | .0705
T5 (S) | .0114 | .0275 | .0849 | .0952 | .0250 | .1124
T5 fine-tuned (S) | .0342 | .0716 | .1534 | .1790 | .0681 | .2137
T5 augmented+fine-tuned (S) | .0456 | .0688 | .1702 | .2038 | .0690 | .2523
Table 2: Resulting ranks of the baseline approaches based on the different metrics. 1 is the best rank. The two fine-tuned versions of T5 rank on the first two ranks regardless of the metric. The summary approach that does not take the teaser into account as well as the extractive approaches land on the back ranks for all metrics.

Dev
Approach | ExactMatch | Rouge-2 | Rouge-L | Meteor | BLEU-2 | BERTScore
First Sentence (E) | 5 | 7 | 7 | 6 | 6 | 8
Last Sentence (E) | 5 | 8 | 8 | 8 | 8 | 6
Longformer Summary (S) | 5 | 5 | 3 | 7 | 7 | 7
BART SQuADv2 (S) | 4 | 3 | 4 | 3 | 3 | 4
S-BERT QA (E) | 5 | 6 | 6 | 5 | 5 | 5
T5 (S) | 3 | 4 | 5 | 4 | 4 | 3
T5 fine-tuned (S) | 2 | 2 | 2 | 2 | 2 | 2
T5 augmented+fine-tuned (S) | 1 | 1 | 1 | 1 | 1 | 1

Test
Approach | ExactMatch | Rouge-2 | Rouge-L | Meteor | BLEU-2 | BERTScore
First Sentence (E) | 5 | 6 | 7 | 5 | 5 | 7
Last Sentence (E) | 5 | 8 | 8 | 8 | 8 | 8
Longformer Summary (S) | 5 | 5 | 5 | 7 | 7 | 6
BART SQuADv2 (S) | 4 | 3 | 3 | 3 | 3 | 3
S-BERT QA (E) | 5 | 7 | 6 | 6 | 6 | 5
T5 (S) | 3 | 4 | 4 | 4 | 4 | 4
T5 fine-tuned (S) | 2 | 2 | 2 | 2 | 2 | 2
T5 augmented+fine-tuned (S) | 1 | 1 | 1 | 1 | 1 | 1
https://huggingface.co/valhalla/t5-base-squad
14,694,982 | Learning Verbs on the Fly | To answer the question "What are the duties of a medical doctor?", one would require knowledge about verb-based relations. A lot of effort has been invested in developing relation learners, however to our knowledge there is no repository (or system) which can return all verb relations for a given term. This paper describes an automated procedure which can learn and produce such information with minimal effort. To evaluate the performance of our verb harvesting procedure, we have conducted two types of evaluations: (1) in the human based evaluation we found that the accuracy of the described algorithm is .95 at rank 100; (2) in the comparative study with existing relation learner and knowledge bases we found that our approach yields 12 times more verb relations. | [
11311232,
2725774,
9792804,
6987562,
7749804,
648239,
14624577,
3179848,
9672153,
1560925,
10318045,
14680675,
1453540,
14061182,
226541,
12893808,
2443507,
743925,
10084087,
15455102
] | Learning Verbs on the Fly
December 2012
Zornitsa Kozareva kozareva@isi.edu
USC Information Sciences Institute
4676 Admiralty Way, Marina del Rey, CA 90292-6695
Learning Verbs on the Fly
Proceedings of COLING 2012: Posters
COLING 2012: Posters, Mumbai, December 2012. Keywords: verb harvesting, relation learning, information extraction, knowledge acquisition
To answer the question "What are the duties of a medical doctor?", one would require knowledge about verb-based relations. A lot of effort has been invested in developing relation learners, however to our knowledge there is no repository (or system) which can return all verb relations for a given term. This paper describes an automated procedure which can learn and produce such information with minimal effort. To evaluate the performance of our verb harvesting procedure, we have conducted two types of evaluations: (1) in the human based evaluation we found that the accuracy of the described algorithm is .95 at rank 100; (2) in the comparative study with existing relation learner and knowledge bases we found that our approach yields 12 times more verb relations.
Introduction
To be able to answer the questions "What causes ebola?", "What are the duties of a medical doctor?", "What are the differences between a terrorist and a victim?", "Which are the animals that have wings but cannot fly?" one requires knowledge about verb-based relations. Over the years, researchers have developed various relation learning algorithms. Some (Ravichandran and Hovy, 2002; Bunescu and Mooney, 2007) targeted specific relations like BornInYear, CorporationAcquired, others (Wu and Weld, 2010; Fader et al., 2011) extracted any phrase denoting a relation in an English sentence. (Banko, 2009) used labeled data to learn relations, (Suchanek et al., 2007) used information encoded in the structured Wikipedia documents, (Riloff and Jones, 1999) bootstrapped patterns. As a result various knowledge bases have been produced like TopicSignatures (Agirre and Lacalle, 2004), ConceptNet (Liu and Singh, 2004), Yago (Suchanek et al., 2007), NELL (Carlson et al., 2009) and ReVerb (Fader et al., 2011).
Despite the many efforts to date, there is still no universal repository (or even a system) that, for a given term, can immediately return all verb relations related to the term. However, one would still like to dispose of an automated procedure which, on the fly, can accurately and quickly produce such information for any term. If available, such a resource can aid different natural language processing tasks such as preposition sense disambiguation (Litkowski and Hargraves, 2007), selectional preferences (Resnik, 1996; Ritter et al., 2010), question answering (Ferrucci et al., 2010) and textual entailment (Szpektor et al., 2004).
The question we address in this paper is: Is it possible to create a procedure which will go beyond existing techniques and learn in a semi-supervised manner for a given term all verb relations associated with it?
The main contributions of the paper are:
• We develop an automatic procedure, which on the fly can learn a diverse set of verb and verb-preposition relations for a given term.
• We establish the effectiveness of our approach through human-based evaluation.
• We conduct a comparative study with the verb-based relation extraction system ReVerb (Fader et al., 2011) and show that our approach accurately extracts more verb-based relations.
• We also compare the verb relations produced by our system with those available in existing knowledge bases, and observe that despite their completeness these repositories lack many verb-based relations.
The rest of the paper is organized as follows. Next, we present related work. Section 3 outlines the verb-based relation learner. Section 4 describes the data collection process. Section 5 reports on the experimental results. Finally, we conclude in Section 6.
Related Work
A lot of attention has been paid to learning is-a and part-of relations (Hearst, 1992; Girju et al., 2003; Pasca, 2004; Etzioni et al., 2005; Kozareva et al., 2008; Pantel and Pennacchiotti, 2006; Carlson et al., 2009; Talukdar et al., 2008). Others (Ravichandran and Hovy, 2002; Bunescu and Mooney, 2007) have focused on learning specific relations like BornInYear, EmployedBy and CorporationAcquired. However, building a system that can learn a richer set of relations is not trivial, because labeled training data is often required (Kim and Moldovan, 1993; Soderland et al., 1999) and most methods do not scale to corpora where the number of relations is very large or where the relations are not specified in advance (Fader et al., 2011).
However, recently developed OpenIE systems like TextRunner (Banko et al., 2007; Banko, 2009) and ReVerb (Fader et al., 2011) avoid the need for labeled data by extracting arbitrary phrases denoting relations in English sentences. (Banko et al., 2007; Banko, 2009) define a relation to be any verb-preposition or adjective-noun construction. While such systems are great at learning general relations, they are not guided but simply gather, in an undifferentiated way, whatever happens to be contained in their input. In order to extract all verb relations associated with a given term, such systems need to part-of-speech tag and parse a large document collection, then extract all verb constructions and all arguments matching specific sets of patterns written by humans (or experts). Finally, they must filter the information and retrieve only those verb relations that are associated with the specific term. Once compiled, the repository is straightforward to query and use; however, if a term is not present in the compiled repository, repeating the whole process on a new document collection becomes time consuming and impractical. The main objective and contribution of our research is the development of a dynamic and flexible knowledge harvesting procedure, which for any given term can learn on the fly the verb-based relations associated with that term in a fast and accurate manner.
Learning Verb-based Relations
Problem Formulation
We define our task as follows: given a term, a relation expressed by a verb, and a set of prepositions, (1) learn in a bootstrapping fashion new relations (i.e., verbs) associated with the initial term and filter out erroneous extractions;
(2) form triples of the term, the harvested verbs, and the initial set of prepositions to learn additional relations (i.e., verb-prepositions) and their argument fillers.

Figure 1: Verb-based Relation Learning.

Figure 1 shows an example for the input term terrorists, the verb relation bomb, and the recursive pattern "terrorists bomb and *". The algorithm learns on the * position verbs like kill, murder, threaten, burn, and assassinate. We denote this phase as verb extraction. Then each learned verb is used to form triples of the type term-verb-preposition to learn new verb-preposition relations and their argument fillers. For instance, "terrorists kill with *" extracts arguments like {bombs, suicide, impunity}. We denote this phase as verb-preposition extraction. Finally, the learned relations and arguments are ranked and arranged by their ranking score. The output of this harvesting procedure is triples of the kind "terrorists kill people", "terrorists kill on purpose", and "terrorists bomb buildings", among others.
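To make the two templates concrete, the following sketch instantiates them in Python; the exact quoting and wildcard handling of the web queries used in the paper are not documented in the text, so the formatting below is an assumption.

# Hedged sketch of the two query templates described above.
# The 17 SemEval-2007 prepositions are not listed in the paper text,
# so the few prepositions below are only illustrative.
ILLUSTRATIVE_PREPOSITIONS = ["with", "for", "on", "in", "without"]

def dap_query(term: str, verb: str) -> str:
    """Phase 1 pattern: '<seed-term> <seed-verb> and *'."""
    return f'"{term} {verb} and *"'

def verb_prep_queries(term: str, verb: str, prepositions=ILLUSTRATIVE_PREPOSITIONS):
    """Phase 2 patterns: '<seed-term> <verb> <prep> *' for each preposition."""
    return [f'"{term} {verb} {prep} *"' for prep in prepositions]

print(dap_query("terrorists", "bomb"))          # "terrorists bomb and *"
print(verb_prep_queries("terrorists", "kill"))  # one query per preposition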
Algorithm Description
Because of their fixed nature, pattern-based methods often fail to extract information from a small corpus or a single document. However, nowadays we dispose of an almost endless amount of easily accessible data, which makes it possible for such systems to work successfully by scanning billions of Web pages to extract the necessary information. Many of the existing and most accurate is-a relation learners rely on lexico-syntactic patterns (Hearst, 1992; Pasca, 2004; Etzioni et al., 2005); we therefore decided to use patterns for the verb extraction procedure as well.
PHASE1: Learning Verb Relations.
The first phase of the algorithm focuses on verb extraction. We use the recursive DAP pattern of (Kozareva et al., 2008) for is-a relation learning and adapt it to verb extraction as follows: "<seed-term> <seed-verb> and *", where <seed-term> is any term (noun) given by the user or taken from an existing knowledge base, <seed-verb> is a seed relation expressed through a verb, and * indicates the position on which new verbs will be extracted. The generated patterns are submitted to the search engine as web queries and all retrieved snippets are kept. The algorithm extracts all verb constructions on the position of the * and, if they were not previously explored by the algorithm, they are placed on the <seed-verb> position of DAP and used as seeds in the subsequent verb extraction iteration. The harvesting terminates when there are no more verbs to be explored. Following (Kozareva et al., 2008), we filter out erroneous extractions using graph ranking. We build a directed graph G = (V, E), where each node v ∈ V is an extracted verb candidate and (u, v) ∈ E is an edge between two verb nodes indicating that the verb u led to the extraction of the verb v. Each node u in the graph is ranked as u = Σ_{∀(u,v)∈E} (u, v). Confidence in u increases when u extracts more verbs.
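A compact sketch of this bootstrapping loop and of the graph-based ranking is given below; search_snippets and extract_candidates stand in for the web search step and the POS-based verb extraction, respectively, and are assumptions rather than the paper's actual implementation.

# Hedged sketch of phase 1: bootstrap new verbs with the DAP pattern and rank
# every verb by the number of verbs it led to (its out-degree in the graph).
from collections import defaultdict

def harvest_verbs(term, seed_verb, search_snippets, extract_candidates):
    explored, frontier = set(), [seed_verb]
    edges = []  # (u, v) means: verb u extracted verb v
    while frontier:
        verb = frontier.pop()
        if verb in explored:
            continue
        explored.add(verb)
        query = f'"{term} {verb} and *"'
        for snippet in search_snippets(query):
            for candidate in extract_candidates(snippet, f"{term} {verb} and"):
                edges.append((verb, candidate))
                if candidate not in explored:
                    frontier.append(candidate)
    out_degree = defaultdict(int)
    for u, _v in edges:
        out_degree[u] += 1
    ranked = sorted(explored, key=lambda v: out_degree[v], reverse=True)
    return ranked, edges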
PHASE2: Learning Verb-Preposition Relations.
In the second phase, the learned verbs are paired with an initial set of 17 prepositions to learn new relations and argument fillers. The prepositions were taken from the SemEval-2007 task on preposition disambiguation (Litkowski and Hargraves, 2007). To extract more relations, the algorithm uses the pattern "<seed-term> <verb> <prep> *", where <seed-term> is the initial term for which we want to learn verb-based relations, <verb> are the learned verbs from the previous phase, and * is the position of the argument fillers. Given the relation kill for the term terrorists, new relations like terrorists kill on, terrorists kill with, terrorists kill for and terrorists kill without are instantiated 1. Similarly to the verb extraction phase, we rank terms by building a bipartite graph G′ = (V′, E′) with two types of nodes. One set represents the verbs and verb-prepositions V′, and the other set represents the arguments A. An edge (v, a) ∈ E′ between v ∈ V′ and a ∈ A shows that the verb (or verb-prep) v extracted the argument a. Each argument is ranked as a = Σ_{∀(v,a)∈E′} (v, a). Confidence in a increases when a is extracted multiple times by different verbs.
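The argument ranking of this second phase can be sketched in the same spirit: an argument's score is the number of distinct verb (or verb-preposition) relations that extracted it. The example pairs below are illustrative, not actual system output.

# Hedged sketch of phase 2 ranking on the bipartite graph: arguments extracted
# by many different verb / verb-preposition relations receive higher scores.
from collections import defaultdict

def rank_arguments(extractions):
    """extractions: iterable of (verb_or_verb_prep, argument) pairs."""
    extractors = defaultdict(set)
    for relation, argument in extractions:
        extractors[argument].add(relation)
    scored = [(argument, len(relations)) for argument, relations in extractors.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

pairs = [("kill with", "bombs"), ("attack with", "bombs"),
         ("threaten with", "bombs"), ("kill with", "impunity")]
print(rank_arguments(pairs))  # [('bombs', 3), ('impunity', 1)]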
Data Collection
It is impossible to collect and report results for all terms in the world. Still, to evaluate the effectiveness of our verb-based relation learner, we have randomly selected 36 terms, ranging from daily activities like going to a restaurant to unpleasant events like bombing. For the purpose of visualization, we have organized the terms into the following groups (topics): Bombing, Diseases, Elections, Restaurants, and Animals. Table 1 lists the terms and seed verbs used to initiate the harvesting process; #Verbs Unique gives the number of unique verbs after merging expressions like (were killed, are killed, killed). For each domain, we also show the total number of verbs used to initiate the harvesting process and the total amount of learned information. In total, we have submitted ∼101,559 queries and collected 10.3GB of snippets, which were cleaned, part-of-speech tagged (Schmid, 1994) and used for the extraction of the verb-based relations and arguments. In total, for all terms, the algorithm extracted 26,678 candidate relations and 1,040,651 candidate arguments, of which 26,086 have rank a > 5.
Evaluation and Results
In this section, we evaluate the results of the verb-based relation learning procedure, which is extremely challenging because there is no universal knowledge repository against which one can compare performance in terms of precision and recall. To the extent possible, we conduct a human-based evaluation and compare results to knowledge bases that have been built in a similar way (i.e., through pattern application over unstructured text).
Human-based Evaluation
One of the most common approaches to evaluating the correctness of harvested information is to use human annotators (Pantel and Pennacchiotti, 2006; Navigli et al., 2011). Conducting such evaluations is very important, because the harvested information is often used by QA, machine reading and IE systems (Ferrucci et al., 2010; Freedman et al., 2011).
Since the evaluation of all 1,067,329 harvested terms is time consuming and costly, we decided to annotate 100 verb relations and 100 argument fillers for each term. We conducted two separate annotations for the verbs and the arguments, which resulted in 7200 annotations. We used two annotators who were instructed to mark as incorrect those verbs (and argument fillers) that do not correspond to the term. For instance, "drugs affect" is marked as correct, while "drugs discuss" is marked as incorrect. We compute Accuracy as the number of Correct terms divided by the total number of terms used in the annotation. Table 2 shows the accuracy of each domain at different ranks. The overall performance of our relation learner is .95 at rank 100 for the learned verbs and argument fillers. Tables 3 and 4 show examples of the harvested information.
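For completeness, the accuracy-at-rank measure used here can be written down in a few lines; the boolean list below is illustrative annotation data, not the actual judgments.

# Minimal sketch of the accuracy measure: the fraction of the top-k ranked
# items (verbs or argument fillers) that the annotators judged correct.
def accuracy_at_rank(judgments, k):
    """judgments: list of booleans ordered by rank (True = judged correct)."""
    top = judgments[:k]
    return sum(top) / len(top) if top else 0.0

print(accuracy_at_rank([True, True, False, True, True], 5))  # 0.8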
Comparison with Existing Knowledge Bases
In this evaluation, we measure the ability of our system to learn the verb-based relations of a term with respect to already existing knowledge bases, which have been created in a similar way. However, such comparative evaluations are not always possible to perform, because researchers have not fully explored the same terms and relations we have studied. When we compared results against existing knowledge bases, we noticed that Yago (Suchanek et al., 2007) has more detailed information about the arguments of the verb relations than about the verb relations themselves. Repositories like ConceptNet 2 (Liu and Singh, 2004) contain 1.6 million assertions, but these belong to only twenty relation types such as is-a, part-of, made-of and effect-of, among others. The only repository we found with a diverse set of verb relations is the never-ending language learner NELL 3 (Carlson et al., 2009). However, there were only 11 verb relations for bomb and 2 verb relations for virus. This analysis shows that despite their completeness and richness, existing knowledge repositories can be further enriched with the verb-based relations produced by our learning procedure.
Comparison with Existing Relation Learner
For our comparative study with existing systems, we used ReVerb 4 (Fader et al., 2011), which, similarly to our approach, was specifically designed to learn verb-based relations from unstructured texts. Currently, ReVerb has extracted relations from ClueWeb09 5 and Wikipedia, and these extractions have been freely distributed to the public. ReVerb learns relations by taking any document as input and applying POS tagging, NP chunking and a set of rules over all sentences in the document to generate triples containing the verbs and the arguments associated with them. According to (Fader et al., 2011), ReVerb outperforms TextRunner (Banko et al., 2007) and the open Wikipedia extractor WOE (Wu and Weld, 2010) in terms of the quantity and quality of the learned relations. For comparison, we took five terms from our experiment (ant, bomb, president, terrorists, virus) and collected all verbs found by ReVerb in the ClueWeb09 and Wikipedia triples.
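As an illustration, the following sketch filters a ReVerb extraction dump for the relation phrases whose first argument matches a given term; the file name and the assumption that each line holds tab-separated (arg1, relation, arg2) fields are our own and not the documented format of the distributed files.

# Hedged sketch: collecting the relation phrases ReVerb extracted for a term.
from collections import Counter

def verbs_for_term(path, term):
    counts = Counter()
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                continue
            arg1, relation = fields[0], fields[1]
            if arg1.lower() == term.lower():
                counts[relation.lower()] += 1
    return counts

# e.g. verbs_for_term("reverb_clueweb_extractions.tsv", "terrorists")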
Conclusion
Our key contribution is the development of a semi-supervised procedure which starts with a term and a verb and learns from Web documents a large and diverse set of verb relations. We have conducted an experimental evaluation with 36 terms and have collected 26,678 unique candidate verbs and 1,040,651 candidate argument fillers. We have evaluated the accuracy of our approach using human-based evaluation and have compared results against the ReVerb (Fader et al., 2011) system and existing knowledge bases like NELL (Carlson et al., 2009), Yago (Suchanek et al., 2007) and ConceptNet (Liu and Singh, 2004). Our study showed that despite their completeness these resources lack verb-based information, and there is plenty of room for improvement since they can be further enriched with verbs using our harvesting procedure. In the future, we would like to test the usefulness of the generated resources in NLP applications.
Table 1: Tested Terms for Verb-based Relation Learning and Extracted Information. The table shows the terms and seed verbs used to initiate the verb-based relation learning process and summarizes the obtained results, including the total number of iterations run to extract the verbs; #Verbs Unique is the number of unique verbs after merging expressions like (were killed, are killed, killed).

Seed Term | Seed Verb | #Verbs Learned | #Verbs Unique | #Iter. | #Args. Learned | #Args. with a>5
BOMBING
authorities | say | 3049 | 1805 | 14 | 7284 | 151
bomb | explodes | 1020 | 705 | 11 | 13454 | 451
bombers | explode | 265 | 224 | 19 | 9097 | 344
killers | kill | 178 | 163 | 14 | 6906 | 217
soldiers | die | 4588 | 2533 | 10 | 34330 | 1010
terrorists | kill | 1401 | 941 | 10 | 13698 | 468
victims | suffer | 1861 | 1263 | 13 | 21982 | 767
total Domain | 6 | 12362 | 7632 | - | 106751 | 3408
DISEASE
bacteria | caused | 1439 | 853 | 10 | 39573 | 1261
cancer | caused | 1389 | 848 | 7 | 42640 | 1585
diseases | caused | 792 | 582 | 12 | 38307 | 1387
doctors | cure | 2700 | 1611 | 10 | 56935 | 1050
drugs | caused | 1936 | 1242 | 9 | 60393 | 1890
nurses | help | 1882 | 1167 | 8 | 39305 | 675
patient | lives | 1631 | 923 | 9 | 78946 | 1668
virus | caused | 1835 | 992 | 10 | 43481 | 1372
total Domain | 4 | 13604 | 8218 | - | 399580 | 9838
ELECTION
candidates | vote | 2116 | 1299 | 8 | 55009 | 1078
congressmen | say | 92 | 86 | 9 | 5601 | 123
senators | vote | 718 | 510 | 16 | 12385 | 340
presidents | run | 717 | 535 | 11 | 18476 | 420
voters | vote | 1400 | 935 | 13 | 38298 | 785
total Domain | 3 | 5043 | 3365 | - | 129769 | 2746
RESTAURANT
drinks | tasted | 881 | 591 | 11 | 39086 | 1088
food | tasted | 984 | 664 | 8 | 74399 | 1740
meals | tasted | 775 | 562 | 10 | 48474 | 1144
menu | looks | 1479 | 870 | 11 | 51278 | 1041
restaurants | serve | 711 | 532 | 8 | 36120 | 776
waiters | serve | 123 | 107 | 9 | 8457 | 151
total Domain | 3 | 4953 | 3326 | - | 257814 | 5940
ANIMALS
ants | eat | 827 | 607 | 12 | 25046 | 753
birds | eat | 3623 | 2064 | 8 | 62031 | 1465
dinosaurs | eat | 544 | 386 | 11 | 11013 | 345
jellyfish | eat | 12 | 11 | 4 | 1120 | 20
lice | eat | 42 | 42 | 8 | 3330 | 131
mammals | eat | 338 | 272 | 10 | 14224 | 527
otters | eat | 190 | 159 | 8 | 5051 | 159
sharks | eat | 697 | 500 | 12 | 16942 | 598
slugs | eat | 60 | 60 | 11 | 5223 | 89
vultures | eat | 36 | 36 | 5 | 2757 | 67
total Domain | 1 | 6369 | 4137 | - | 146737 | 4154
Table 2: Accuracy of the Harvested Information.

Table 3: Examples of Learned Verbs.
Term | Learned Verbs
diseases | spread, develop, treat, come, kill, mutate, diagnose, evolve, are caught, survive, grow, occur, carry, cause, are cured, affect, are identified, start, prevent, propagate, are transmitted, thrive, sicken, change, flourish
meals | are prepared, are served, are cooked, are delivered, are planned, are eaten, are tasted, are provided, look, are made, are consumed, are offered, are created, are frozen, are bought, are packed, are paid, smell, are designed, are purchased, are sold, are produced, are prepped, are shared, are catered
soldiers | kill, shoot, beat, fought, fell, destroyed, fired, attacked, are trained, died, took, said, laughed, kicked, die, were humiliating, cheered, mocked, raised, drummed, captured, looted, ran, arrested, buried, defended
Table 5 summarizes the total number of unique verb extractions found by ReVerb in ClueWeb09, since the Wikipedia ones had low coverage. We have also manually validated the correctness of the verbs found by ReVerb and have seen that their accuracy is 100%. With respect to our extractions, ReVerb has lower recall.

Table 4: Examples of Learned Arguments.
Term-Verb | Preposition | Learned Arguments
terrorists communicate | through | violence, micro technology, orkut, secure channels, email, internet, internet networks, cellphones
terrorists communicate | with | their contacts, each other, the world, other terrorists, US citizens, Korea, governments, America
terrorists communicate | in | brief, code, VW, Russian, French, various ways, secret, English
terrorists communicate | by | mail, phone, fax, email
terrorists communicate | without | detection, tapping calls
birds fly | above | earth, castles, our heads, trees, lake, field, river, cloud, city
birds fly | through | air, night, sky, park, country club, wind, storm, region, city
birds fly | around | her, fish, house, my head, bird feeder, home, your city, ruins, place
birds fly | across | sky, gulf, screen, rainbow, sunset, horizon, african savanna, our path, street, hometown
birds fly | into | windows, walls, power lines, towers, sun, sea, darkness, mist, house
killers kill | for | power, thrill, sexual reasons, money, fun, the sake, rush, sport, cash, fame
killers kill | in | ridiculous ways, patterns, cold blood, silence, groups, conflict with, series, certain periods, captivity, sequence
killers kill | with | some criteria, knife, brutality, hands, motive, intention, impunity, stealth, purpose, violence
killers kill | to | relieve themselves, symbolize, show others, make a statement, just kill, gain money, gain identity, gain control, gain material
killers kill | over | a period, time, robberies, course, many months, multiple time
Table 5: Comparison of Verb-based Relation Learners.
Term | ClueWeb (ReVerb) | Web (DAP)
ants | 32 | 607
bomb | 46 | 535
presidents | 32 | 705
terrorists | 96 | 941
virus | 128 | 992
1 Some verbs cannot be paired with all prepositions; we filter out those for which no results were found.
2 http://web.media.mit.edu/hugo/conceptnet/#overview
3 Comparison done in March 2012 with http://rtw.ml.cmu.edu/rtw/kbbrowser/
4 http://reverb.cs.washington.edu/
5 http://lemurproject.org/clueweb09.php/
Acknowledgements
We would like to thank Ed Hovy for initial comments on the work and the anonymous reviewers.
References
Agirre, E. and Lacalle, O. L. D. (2004). Publicly available topic signatures for all WordNet nominal senses.
Alfonseca, E., Pasca, M., and Robledo-Arnuncio, E. (2010). Acquisition of instance attributes via labeled and related instances. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '10, pages 58-65.
Banko, M. (2009). Open information extraction from the web. Ph.D. dissertation, University of Washington.
Banko, M., Cafarella, M. J., Soderland, S., Broadhead, M., and Etzioni, O. (2007). Open information extraction from the web. In IJCAI, pages 2670-2676.
Buitelaar, P., Cimiano, P., and Magnini, B., editors (2005). Ontology Learning from Text: Methods, Evaluation and Applications, volume 123 of Frontiers in Artificial Intelligence and Applications. IOS Press, Amsterdam.
Bunescu, R. and Mooney, R. (2007). Learning to extract relations from the web using minimal supervision. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 576-583.
Carlson, A., Betteridge, J., Hruschka Jr., E. R., and Mitchell, T. M. (2009). Coupling semi-supervised learning of categories and relations. In Proceedings of the NAACL HLT 2009 Workshop on Semi-supervised Learning for Natural Language Processing.
Cuadros, M. and Rigau, G. (2008). KnowNet: A proposal for building highly connected and dense knowledge bases from the web. In Semantics in Text Processing: STEP 2008 Conference Proceedings, volume 1 of Research in Computational Semantics, pages 71-84.
Etzioni, O., Cafarella, M., Downey, D., Popescu, A.-M., Shaked, T., Soderland, S., Weld, D. S., and Yates, A. (2005). Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165(1):91-134.
Fader, A., Soderland, S., and Etzioni, O. (2011). Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, pages 1535-1545.
Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A. A., Lally, A., Murdock, J. W., Nyberg, E., Prager, J., Schlaefer, N., and Welty, C. (2010). Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59-79.
Freedman, M., Ramshaw, L. A., Boschee, E., Gabbard, R., Kratkiewicz, G., Ward, N., and Weischedel, R. M. (2011). Extreme extraction - machine reading in a week. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, pages 1437-1446.
Girju, R., Badulescu, A., and Moldovan, D. (2003). Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 1-8.
Girju, R., Nakov, P., Nastase, V., Szpakowicz, S., Turney, P., and Yuret, D. (2007). SemEval-2007 task 04: Classification of semantic relations between nominals. In SemEval 2007.
Hearst, M. (1992). Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics, pages 539-545.
Igo, S. and Riloff, E. (2009). Corpus-based semantic lexicon induction with web-based corroboration. In Proceedings of the Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics.
Jain, A. and Pantel, P. (2010). FactRank: Random walks on a web of facts. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 501-509.
Katz, B. and Lin, J. (2003). Selectively using relations to improve precision in question answering. In Proceedings of the EACL-2003 Workshop on Natural Language Processing for Question Answering, pages 43-50.
Kim, J. and Moldovan, D. (1993). Acquisition of semantic patterns for information extraction from corpora. In Proceedings of the Ninth IEEE Conference on Artificial Intelligence for Applications, pages 171-176.
Kozareva, Z. and Hovy, E. (2010). Learning arguments and supertypes of semantic relations using recursive patterns. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 1482-1491.
Kozareva, Z., Riloff, E., and Hovy, E. (2008). Semantic class learning from the web with hyponym pattern linkage graphs. In Proceedings of ACL-08: HLT, pages 1048-1056.
Lin, C.-Y. and Hovy, E. (2000). The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th Conference on Computational Linguistics - Volume 1, COLING '00, pages 495-501.
Lin, D. and Pantel, P. (2002). Concept discovery from text. In Proceedings of the 19th International Conference on Computational Linguistics, pages 1-7.
Litkowski, K. C. and Hargraves, O. (2007). SemEval-2007 task 06: Word-sense disambiguation of prepositions. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 24-29.
Liu, H. and Singh, P. (2004). Focusing on ConceptNet's natural language knowledge representation. In Commonsense Reasoning in and over Natural Language, Proceedings of the 8th International Conference on Knowledge-Based Intelligent Information and Engineering Systems (KES 2004), pages 71-84.
Navigli, R., Velardi, P., and Faralli, S. (2011). A graph-based algorithm for inducing lexical taxonomies from scratch. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, pages 1872-1877.
Pantel, P. and Pennacchiotti, M. (2006). Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, ACL 2006.
Pasca, M. (2004). Acquisition of categorized named entities for web search. In Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management, pages 137-145.
Ravichandran, D. and Hovy, E. (2002). Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 41-47.
Resnik, P. (1996). Selectional constraints: An information-theoretic model and its computational realization.
Riloff, E. (1996). Automatically generating extraction patterns from untagged text. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2, AAAI'96, pages 1044-1049.
Riloff, E. and Jones, R. (1999). Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference, AAAI '99/IAAI '99.
Ritter, A., Mausam, and Etzioni, O. (2010). A latent Dirichlet allocation method for selectional preferences. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010, pages 424-434.
Schmid, H. (1994). Probabilistic part-of-speech tagging using decision trees.
Sekine, S. (2006). On-demand information extraction. In Proceedings of the COLING/ACL Main Conference Poster Sessions, COLING-ACL '06, pages 731-738.
Snow, R., Jurafsky, D., and Ng, A. Y. (2006). Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, ACL.
Soderland, S., Cardie, C., and Mooney, R. (1999). Learning information extraction rules for semi-structured and free text. In Machine Learning, pages 233-272.
Suchanek, F. M., Kasneci, G., and Weikum, G. (2007). Yago: A core of semantic knowledge. In WWW '07: Proceedings of the 16th International Conference on World Wide Web, pages 697-706.
Szpektor, I., Tanev, H., Dagan, I., and Coppola, B. (2004). Scaling web-based acquisition of entailment relations. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP).
Talukdar, P. P., Reisinger, J., Pasca, M., Ravichandran, D., Bhagat, R., and Pereira, F. (2008). Weakly-supervised acquisition of labeled class instances using graph random walks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, pages 582-590.
Widdows, D. (2003). Unsupervised methods for developing taxonomies by combining syntactic and statistical information. In Proceedings of HLT-NAACL.
Wu, F. and Weld, D. S. (2010). Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 118-127.