corpusid | title | abstract | citations | full_paper
---|---|---|---|---
1,269,169 | A Statistical Parser for Czech* | This paper considers statistical parsing of Czech, which differs radically from English in at least two respects: (1) it is a highly inflected language, and (2) it has relatively free word order. These differences are likely to pose new problems for techniques that have been developed on English. We describe our experience in building on the parsing model of (Collins 97). Our final results (80% dependency accuracy) represent good progress towards the 91% accuracy of the parser on English (Wall Street Journal) text. | [608, 1345, 12227075, 330, 12615602, 252796, 3262717] | A Statistical Parser for Czech*
Michael Collins, mcollins@research.att.com
AT&T Labs-Research, Shannon Laboratory, 180 Park Avenue, Florham Park, NJ 07932
Jan Hajič
Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic
Lance Ramshaw, lramshaw@bbn.com
BBN Technologies, 70 Fawcett St, Cambridge, MA 02138
Christoph Tillmann
Lehrstuhl für Informatik VI, RWTH Aachen, D-52056 Aachen, Germany
A Statistical Parser for Czech*
This paper considers statistical parsing of Czech, which differs radically from English in at least two respects: (1) it is a highly inflected language, and (2) it has relatively free word order. These differences are likely to pose new problems for techniques that have been developed on English. We describe our experience in building on the parsing model of (Collins 97). Our final results (80% dependency accuracy) represent good progress towards the 91% accuracy of the parser on English (Wall Street Journal) text.
annotated for dependency structure). Czech differs radically from English in at least two respects:
• It is a highly inflected (HI) language. Words in Czech can inflect for a number of syntactic features: case, number, gender, negation and so on. This leads to a very large number of possible word forms, and consequent sparse data problems when parameters are associated with lexical items. On the positive side, inflectional information should provide strong cues to parse structure; an important question is how to parameterize a statistical parsing model in a way that makes good use of inflectional information.
• It has relatively free word order (F-WO). For example, a subject-verb-object triple in Czech can generally appear in all 6 possible surface orders (SVO, SOV, VSO etc.).
Other Slavic languages (such as Polish, Russian, Slovak, Slovene, Serbo-Croatian, Ukrainian) also show these characteristics. Many European languages exhibit F-WO and HI phenomena to a lesser extent. Thus the techniques and results found for Czech should be relevant to parsing several other languages.
This paper first describes a baseline approach, based on the parsing model of (Collins 97), which recovers dependencies with 72% accuracy. We then describe a series of refinements to the model, giving an improvement to 80% accuracy, with around 82% accuracy on newspaper/business articles. (As a point of comparison, the parser achieves 91% dependency accuracy on English (Wall Street Journal) text.)
Data and Evaluation
The Prague Dependency Treebank PDT (Hajič, 1998) has been modeled after the Penn Treebank (Marcus et al. 93), with one important exception: following the Praguian linguistic tradition, the syntactic annotation is based on dependencies rather than phrase structures. Thus instead of "nonterminal" symbols used at the non-leaves of the tree, the PDT uses so-called analytical functions capturing the type of relation between a dependent and its governing node. Thus the number of nodes is equal to the number of tokens (words + punctuation) plus one (an artificial root node with a rather technical function is added to each sentence). The PDT also contains a traditional morpho-syntactic annotation (tags) at each word position (together with a lemma, uniquely representing the underlying lexical unit). As Czech is a HI language, the size of the set of possible tags is unusually high: more than 3,000 tags may be assigned by the Czech morphological analyzer. The PDT also contains machine-assigned tags and lemmas for each word (using a tagger described in (Hajič and Hladká, 1998)).
For evaluation purposes, the PDT has been divided into a training set (19k sentences) and a development/evaluation test set pair (about 3,500 sentences each). Parsing accuracy is defined as the ratio of correct dependency links to the total number of dependency links in a sentence (which, with the one artificial root node added, equals the number of tokens in the sentence). As usual, with the development test set being available during the development phase, all final results have been obtained on the evaluation test set, which nobody could see beforehand.
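As a concrete illustration of this metric, here is a minimal Python sketch (hypothetical code, not from the paper) that scores predicted head attachments against the gold standard:

```python
# Hypothetical sketch of the dependency-accuracy metric described above.
# Each parse is a list of head indices, one per token, with 0 standing
# for the artificial root node.

def dependency_accuracy(gold_heads, predicted_heads):
    """Fraction of tokens whose predicted head matches the gold head."""
    assert len(gold_heads) == len(predicted_heads)
    correct = sum(1 for g, p in zip(gold_heads, predicted_heads) if g == p)
    return correct / len(gold_heads)

# "I saw the man": I -> saw, saw -> root, the -> man, man -> saw
gold = [2, 0, 4, 2]
pred = [2, 0, 2, 2]          # "the" wrongly attached to "saw"
print(dependency_accuracy(gold, pred))   # 0.75
```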
A Sketch of the Parsing Model
The parsing model builds on Model 1 of (Collins 97); this section briefly describes the model. The parser uses a lexicalized grammar: each nonterminal has an associated head-word and part-of-speech (POS). We write non-terminals as X(x): X is the non-terminal label, and x is a ⟨w, t⟩ pair where w is the associated head-word and t is the POS tag. See figure 1 for an example lexicalized tree, and a list of the lexicalized rules that it contains.
Each rule has the form¹:

P(h) → L_n(l_n) ... L_1(l_1) H(h) R_1(r_1) ... R_m(r_m)

¹ With the exception of the top rule in the tree, which has the form TOP → H(h).
H is the head-child of the phrase, which inherits the head-word h from its parent P; L_1(l_1)...L_n(l_n) and R_1(r_1)...R_m(r_m) are left and right modifiers of H. Either n or m may be zero, and n = m = 0 for unary rules. The model can be considered to be a variant of Probabilistic Context-Free Grammar (PCFG). In PCFGs each rule α → β in the CFG underlying the PCFG has an associated probability P(β|α). In (Collins 97), P(β|α) is defined as a product of terms, by assuming that the right-hand side of the rule is generated in three steps:
1. Generate the head constituent label of the phrase, with probability P_H(H | P, h).
2. Generate modifiers to the left of the head with probability ∏_{i=1..n+1} P_L(L_i(l_i) | P, h, H), where L_{n+1}(l_{n+1}) = STOP. The STOP symbol is added to the vocabulary of non-terminals, and the model stops generating left modifiers when it is generated.
3. Generate modifiers to the right of the head with probability ∏_{i=1..m+1} P_R(R_i(r_i) | P, h, H), where R_{m+1}(r_{m+1}) = STOP.

Other rules in the tree contribute similar sets of probabilities. The probability for the entire tree is calculated as the product of all these terms. (Collins 97) describes a series of refinements to this basic model: the addition of "distance" (a conditioning feature indicating whether or not a modifier is adjacent to the head); the addition of subcategorization parameters (Model 2) and parameters that model wh-movement (Model 3); and estimation techniques that smooth various levels of back-off (in particular using POS tags as word-classes, allowing the model to learn generalizations about POS classes of words). Search for the highest probability tree for a sentence is achieved using a CKY-style parsing algorithm.
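To make the three-step decomposition concrete, the following is a minimal Python sketch (hypothetical code, not the authors' implementation) that scores a single rule, with the distributions P_H, P_L, and P_R abstracted as caller-supplied functions:

```python
# Hypothetical sketch of the Model 1 rule probability; P_H, P_L, P_R stand
# in for the estimated parameter tables and are supplied by the caller.

STOP = ("STOP", None)

def rule_probability(P, h, H, left_mods, right_mods, P_H, P_L, P_R):
    """P_H(H|P,h) * prod_i P_L(L_i(l_i)|P,h,H) * prod_i P_R(R_i(r_i)|P,h,H),
    where each modifier sequence is terminated by a STOP symbol."""
    prob = P_H(H, P, h)
    for mod in list(left_mods) + [STOP]:
        prob *= P_L(mod, P, h, H)
    for mod in list(right_mods) + [STOP]:
        prob *= P_R(mod, P, h, H)
    return prob

# Toy check with uniform dummy distributions:
uniform = lambda *args: 0.5
print(rule_probability("S", ("bought", "VBD"), "VP",
                       [("NP", ("IBM", "NNP")), ("NP", ("yesterday", "NN"))],
                       [], uniform, uniform, uniform))   # 0.5 ** 5 = 0.03125
```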
Parsing the Czech PDT
Many statistical parsing methods developed for English use lexicalized trees as a representation (e.g., (Jelinek et al. 94)), and several emphasize the use of parameters associated with dependencies between pairs of words. The Czech PDT contains dependency annotations, but no tree structures. For parsing Czech we considered a strategy of converting dependency structures in training data to lexicalized trees, then running the parsing algorithms originally developed for English. A key point is that the mapping from lexicalized trees to dependency structures is many-to-one. As an example, figure 2 shows an input dependency structure, and three different lexicalized trees with this dependency structure. The choice of tree structure is crucial in determining the independence assumptions that the parsing model makes. There are at least 3 degrees of freedom when deciding on the tree structures:
1. How "flat" should the trees be? The trees could be as flat as possible (as in figure 2(a)), or binary branching (as in trees (b) or (c)), or somewhere between these two extremes.
2. What non-terminal labels should the internal nodes have?
3. What set of POS tags should be used?
A Baseline Approach
To provide a baseline result we implemented what is probably the simplest possible conversion scheme:
1. The trees were as flat as possible, as in figure 2(a).
2. The non-terminal labels were "XP", where X is the first letter of the POS tag of the head-word for the constituent. See figure 3 for an example.
3. The part of speech tags were the major category for each word (the first letter of the Czech POS set, which corresponds to broad category distinctions such as verb, noun etc.).
The baseline approach gave a result of 71.9% accuracy on the development test set.
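A rough Python sketch of this style of conversion (hypothetical code; the data layout and the wrapping of every dependent are illustrative choices, not the paper's exact scheme) is:

```python
# Hypothetical sketch of a baseline-style conversion. Every dependent here
# projects its own XP; the paper's actual scheme (figure 2(a)) differs in
# detail, e.g. it leaves the determiner as a bare leaf.

def flat_tree(words, tags, heads, i):
    """Flat lexicalized tree rooted at token i (1-based indices)."""
    deps = [j for j, h in enumerate(heads, start=1) if h == i]
    parts = [words[j - 1] if j == i else flat_tree(words, tags, heads, j)
             for j in sorted(deps + [i])]
    return (tags[i - 1][0] + "P", parts)   # "XP" from head's POS letter

# "I saw the man"; heads: I->saw, saw->root(0), the->man, man->saw
words, tags, heads = ["I", "saw", "the", "man"], ["N", "V", "D", "N"], [2, 0, 4, 2]
print(flat_tree(words, tags, heads, heads.index(0) + 1))
# ('VP', [('NP', ['I']), 'saw', ('NP', [('DP', ['the']), 'man'])])
```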
[Figures 2 and 3 omitted: flat and binary-branching lexicalized trees for the example sentence "I saw the man"; see the figure captions below.]
4.2 Modifications to the Baseline Trees
While the baseline approach is reasonably successful, there are some linguistic phenomena that lead to clear problems. This section describes some tree transformations that are linguistically motivated, and lead to improvements in parsing accuracy.
Relative Clauses
In the PDT the verb is taken to be the head of both sentences and relative clauses. Figure 4 illustrates how the baseline transformation method can lead to parsing errors in relative clause cases. Figure 4(c) shows the solution to the problem: the label of the relative clause is changed to SBAR, and an additional VP level is added to the right of the relative pronoun. Similar transformations were applied for relative clauses involving Wh-PPs (e.g., "the man to whom I gave a book"), Wh-NPs (e.g., "the man whose book I read") and Wh-Adverbials (e.g., "the place where I live").
Coordination
The PDT takes the conjunct to be the head of coordination structures (for example, and would be the head of the NP dogs and cats). In these cases the baseline approach gives tree structures such as that in figure 5(a). The non-terminal label for the phrase is JP (because the head of the phrase, the conjunct and, is tagged as J). This choice of non-terminal is problematic for two reasons: (1) the JP label is assigned to all coordinated phrases, for example hiding the fact that the constituent in figure 5(a) is an NP; (2) the model assumes that left and right modifiers are generated independently of each other, and as it stands will give unreasonably high probability to two unlike phrases being coordinated. To fix these problems, the non-terminal label in coordination cases was altered to be the same as that of the second conjunct (the phrase directly to the right of the head of the phrase). See figure 5.

Punctuation

A similar transformation was made for cases where a comma was the head of a phrase. Figure 6 shows an additional change concerning commas: a new non-terminal is introduced when a comma is the left-most child of a phrase. This change increases the sensitivity of the model to punctuation.
Model Alterations
This section describes some modifications to the parameterization of the model.
Preferences for dependencies that do not cross verbs
The model of (Collins 97) had conditioning variables that allowed the model to learn a preference for dependencies which do not cross verbs. From the results in table 3, adding this condition improved accuracy by about 0.9% on the development set.
Punctuation for phrasal boundaries
The parser of (Collins 96) used punctuation as an indication of phrasal boundaries. It was found that if a constituent Z → ⟨.. X Y ..⟩ has two children X and Y separated by a punctuation mark, then Y is generally followed by a punctuation mark or the end of sentence marker. The parsers of (Collins 96, 97) encoded this as a hard constraint. In the Czech parser we added a cost of -2.5 (log probability)² to structures that violated this constraint.
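In scoring terms this soft constraint is just a fixed additive penalty; a minimal hypothetical sketch:

```python
# Hypothetical sketch of the soft punctuation constraint: a fixed
# log-probability cost is added when the configuration is violated.

PUNCT_PENALTY = -2.5      # log probability; tuned on the development set

def penalized_score(log_prob, violates_constraint):
    """`violates_constraint` is true when a constituent Z -> <.. X Y ..>
    has a punctuation mark between children X and Y, yet Y is not followed
    by punctuation or the end-of-sentence marker."""
    return log_prob + (PUNCT_PENALTY if violates_constraint else 0.0)
```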
First-Order (Bigram) Dependencies
The model of section 3 made the assumption that modifiers are generated independently of each other.
This section describes a bigram model, where the context is increased to consider the previously generated modifier ((Eisner 96) also describes the use of bigram statistics). The right-hand side of a rule is now assumed to be generated in the following three-step process:
1. Generate the head label, with probability P_H(H | P, h).

2. Generate left modifiers with probability

∏_{i=1..n+1} P_L(L_i(l_i) | L_{i-1}, P, h, H)

where L_0 is defined as a special NULL symbol. Thus the previous modifier, L_{i-1}, is added to the conditioning context (in the previous model the left modifiers had probability ∏_{i=1..n+1} P_L(L_i(l_i) | P, h, H)).

3. Generate right modifiers using a similar bigram process.
Introducing bigram dependencies into the parsing model improved parsing accuracy by about 0.9% (as shown in Table 3).
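A minimal hypothetical sketch of the bigram left-modifier sequence probability:

```python
# Hypothetical sketch of the first-order (bigram) modifier model: each
# left modifier is conditioned on the previously generated one. P_L is a
# caller-supplied estimate of P_L(L_i(l_i) | L_{i-1}, P, h, H).

STOP, NULL = ("STOP", None), ("NULL", None)

def left_sequence_probability(left_mods, P, h, H, P_L):
    prob, previous = 1.0, NULL        # L_0 is the special NULL symbol
    for mod in list(left_mods) + [STOP]:
        prob *= P_L(mod, previous, P, h, H)
        previous = mod
    return prob
```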
² This value was optimized on the development set.
Alternative Part-of-Speech Tagsets
Part of speech (POS) tags serve an important role in statistical parsing by providing the model with a level of generalization as to how classes of words tend to behave, what roles they play in sentences, and what other classes they tend to combine with. Statistical parsers of English typically make use of the roughly 50 POS tags used in the Penn Treebank corpus, but the Czech PDT corpus provides a much richer set of POS tags, with over 3000 possible tags defined by the tagging system and over 1000 tags actually found in the corpus. Using that large a tagset with a training corpus of only 19,000 sentences would lead to serious sparse data problems. It is also clear that some of the distinctions being made by the tags are more important than others for parsing. We therefore explored different ways of extracting smaller but still maximally informative POS tagsets.
Description of the Czech Tagset
The POS tags in the Czech PDT corpus (Hajič and Hladká, 1997) are encoded in 13-character strings. Table 1 shows the role of each character. For example, the tag NNMP1-----A-- would be used for a word that had "noun" as both its main and detailed part of speech, that was masculine, plural, nominative (case 1), and whose negativeness value was "affirmative". Within the corpus, each word was annotated with all of the POS tags that would be possible given its spelling, using the output of a morphological analysis program, and also with the single one of those tags that a statistical POS tagging program had predicted to be the correct tag (Hajič and Hladká, 1998). Table 2 shows a phrase from the corpus, with the alternative possible tags and machine-selected tag for each word. In the training portion of the corpus, the correct tag as judged by human annotators was also provided.
Selection of a More Informative Tagset
In the baseline approach, the first letter, or "main part of speech", of the full POS strings was used as the tag. This resulted in a tagset with 13 possible values. A number of alternative, richer tagsets were explored, using various combinations of character positions from the tag string. The most successful alternative was a two-letter tag whose first letter was always the main POS, and whose second letter was the case field if the main POS was one that displays case, while otherwise the second letter was the detailed POS. (The detailed POS was used for the main POS values D, J, V, and X; the case field was used for the other possible main POS values.) This two-letter scheme resulted in 58 tags, and provided about a 1.1% parsing improvement over the baseline on the development set.
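The two-letter reduction is easy to state as code; in this hypothetical sketch the character positions of the PDT tag string are assumed from Table 1:

```python
# Hypothetical sketch of the "two letter" tagset reduction. Position 0 of
# the 13-character PDT tag is the main POS, position 1 the detailed POS,
# and position 4 the case field (indices assumed for illustration).

DETAILED_POS_MAIN = {"D", "J", "V", "X"}   # from the text above

def two_letter_tag(full_tag):
    main = full_tag[0]
    if main in DETAILED_POS_MAIN:
        return main + full_tag[1]      # main POS + detailed POS
    return main + full_tag[4]          # main POS + case field

print(two_letter_tag("NNMP1-----A--"))   # 'N1': noun, nominative (case 1)
print(two_letter_tag("VpMP---XR-AA-"))   # 'Vp': verb, detailed POS 'p'
```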
Even richer tagsets that also included the person, gender, and number values were tested without yielding any further improvement, presumably because the damage from sparse data outweighed the value of the additional information present.
Explorations toward Clustered Tagsets
An entirely different approach, rather than searching by hand for effective tagsets, would be to use clustering to derive them automatically. We explored two different methods, bottom-up and top-down, for automatically deriving POS tagsets based on counts of governing and dependent tags extracted from the parse trees that the parser constructs from the training data. Neither tested approach resulted in any improvement in parsing performance compared to the hand-designed "two letter" tagset, but the implementations of each were still only preliminary, and a clustered tagset more adroitly derived might do better.
Dealing with Tag Ambiguity
One final issue regarding POS tags was how to deal with the ambiguity between possible tags, both in training and test. In the training data, there was a choice between using the output of the POS tagger or the human annotator's judgment as to the correct tag. In test data, the correct answer was not available, but the POS tagger output could be used if desired. This turns out to matter only for unknown words: for words that it has seen in training at least 5 times, the parser is designed to do its own tagging, ignoring any tag supplied with the input. For "unknown" words (seen less than 5 times), the parser can be set either to believe the tag supplied by the POS tagger or to allow equally any of the dictionary-derived possible tags for the word, effectively allowing the parse context to make the choice. (Note that the rich inflectional morphology of Czech leads to a higher rate of "unknown" word forms than would be true in English; in one test, 29.5% of the words in test data were "unknown".)
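A hypothetical sketch of this policy (the threshold is from the text; the function interfaces are invented for illustration):

```python
# Hypothetical sketch of the tag-ambiguity policy. Frequent words are
# tagged by the parser itself; "unknown" words either trust the external
# tagger or expose all dictionary tags to the parse context.

RARE_THRESHOLD = 5

def candidate_tags(word, counts, own_tags, tagger_tag, dict_tags,
                   trust_tagger=True):
    if counts.get(word, 0) >= RARE_THRESHOLD:
        return own_tags(word)              # parser does its own tagging
    if trust_tagger:
        return [tagger_tag(word)]          # believe the POS tagger
    return dict_tags(word)                 # let the parse context decide
```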
Our tests indicated that if unknown words are treated by believing the POS tagger's suggestion, then scores are better if the parser is also trained on the POS tagger's suggestions, rather than on the human annotator's correct tags. Training on the correct tags results in 1% worse performance. Even though the POS tagger's tags are less accurate, they are more like what the parser will be using in the test data, and that turns out to be the key point. On the other hand, if the parser allows all possible dictionary tags for unknown words in test material, then it pays to train on the actual correct tags.
In initial tests, this combination of training on the correct tags and allowing all dictionary tags for unknown test words somewhat outperformed the alternative of using the POS tagger's predictions both for training and for unknown test words. When tested with the final version of the parser on the full development set, those two strategies performed at the same level.
Results
We ran three versions of the parser over the final test set: the baseline version, the full model with all additions, and the full model with everything but the bigram model. Table 3 shows the relative improvement of each component of the model.⁴ Table 4 shows the results on the development set by genre. It is interesting to see that the performance on newswire text is over 2% better than the averaged performance. The Science section of the development set is considerably harder to parse (presumably because of longer sentences and more open vocabulary).

³ The parser fails to give an analysis on some sentences, because the search space becomes too large. The baseline system missed 5 sentences; the full system missed 21 sentences; the full system minus bigrams missed 2 sentences. To score the full system we took the output from the full system minus bigrams when the full system produced no output (to prevent a heavy penalty due to the 21 missed sentences). The remaining 2 unparsed sentences (5 in the baseline case) had all dependencies attached to the root.
⁴ We were surprised to see this slight drop in accuracy for the punctuation tree modification. Earlier tests on a different development set, with less training data and fewer other model alterations, had shown a good improvement for this feature.
Comparison to Previous Results
The main piece of previous work on parsing Czech that we are aware of is described in (Kuboň 99). This is a rule-based system, based on a manually designed set of rules. The system's accuracy is not evaluated on a test corpus, so it is difficult to compare our results to theirs. We can, however, make some comparison of the results in this paper to those on parsing English. (Collins 99) describes results of 91% accuracy in recovering dependencies on section 0 of the Penn Wall Street Journal Treebank, using Model 2 of (Collins 97). This task is almost certainly easier for a number of reasons: there was more training data (40,000 sentences as opposed to 19,000), and the Wall Street Journal may be an easier domain than the PDT, as a reasonable proportion of sentences come from a sub-domain, financial news, which is relatively restricted. Unlike Model 1, Model 2 of the parser takes subcategorization information into account, which gives some improvement on English and might well also improve results on Czech. Given these differences, it is difficult to make a direct comparison, but the overall conclusion seems to be that the Czech accuracy is approaching results on English, although it is still somewhat behind.

Conclusions

The 80% dependency accuracy of the parser represents good progress towards English parsing performance. A major area for future work is likely to be an improved treatment of morphology; a natural approach to this problem is to consider more carefully how POS tags are used as word classes by the model. We have begun to investigate this issue, through the automatic derivation of POS tags through clustering or "splitting" approaches. It might also be possible to exploit the internal structure of the POS tags, for example through incremental prediction of the POS tag being generated; or to exploit the use of word lemmas, effectively splitting word-word relations into syntactic dependencies (POS tag-POS tag relations) and more semantic (lemma-lemma) dependencies.
For example, the probability of

S(bought,VBD) → NP(yesterday,NN) NP(IBM,NNP) VP(bought,VBD)

is defined as

P_h(VP | S, bought, VBD)
× P_l(NP(IBM,NNP) | S, VP, bought, VBD)
× P_l(NP(yesterday,NN) | S, VP, bought, VBD)
× P_l(STOP | S, VP, bought, VBD)
× P_r(STOP | S, VP, bought, VBD)
Figure 1: A lexicalized parse tree, and a list of the rules it contains.
Figure 2: Converting dependency structures to lexicalized trees with equivalent dependencies. The trees (a), (b) and (c) all have the input dependency structure: (a) is the "flattest" possible tree; (b) and (c) are binary branching structures. Any labels for the non-terminals (marked X) would preserve the dependency structure.

Figure 3: The baseline approach for non-terminal labels. Each label is XP, where X is the POS tag for the head-word of the constituent.
Figure 4: (a) The baseline approach does not distinguish main clauses from relative clauses: both have a verb as the head, so both are labeled VP. (b) A typical parsing error due to relative and main clauses not being distinguished (note that two main clauses can be coordinated by a comma, as in John likes Mary, Mary likes Tim). (c) The solution to the problem: a modification to relative clause structures in training data.

Figure 5: An example of coordination. The baseline approach (a) labels the phrase as a JP; the refinement (b) takes the second conjunct's label as the non-terminal for the whole phrase.

Figure 6: An additional change, triggered by a comma that is the left-most child of a phrase: a new non-terminal NPX is introduced.
For example, in S(bought,VBD) → NP(yesterday,NN) NP(IBM,NNP) VP(bought,VBD): n = 2, m = 0, P = S, H = VP, L_1 = NP, L_2 = NP, l_1 = ⟨IBM, NNP⟩, l_2 = ⟨yesterday, NN⟩, h = ⟨bought, VBD⟩.
Input: a sentence with part of speech tags: I/N saw/V the/D man/N (N = noun, V = verb, D = determiner), and dependencies (word → parent): (I → saw), (saw → START), (the → man), (man → saw). Output: a lexicalized tree.
Table 1: The 13-character encoding of the Czech POS tags.
Form | Dictionary Tags | Machine Tag
---|---|---
poslanci | NNMP1-----A--, NNMP5-----A--, NNMP7-----A--, NNMS3-----A--, NNMS6-----A-- | NNMP1-----A--
Parlamentu | NNIS2-----A--, NNIS3-----A--, NNIS6-----A-- | NNIS2-----A--
schválili | VpMP---XR-AA- | VpMP---XR-AA-

Table 2: Corpus POS tags for "the representatives of the Parliament approved".
Modification | Improvement
---|---
Coordination | +2.6%
Relative clauses | +1.5%
Punctuation | -0.1% ??
Enriched POS tags | +1.1%
Punctuation | +0.4%
Verb crossing | +0.9%
Bigram | +0.9%
Total change | +7.4%
Total relative error reduction | 26%

Table 3: A breakdown of the results on the development set.
Genre | Newspaper | Business | Science
---|---|---|---
Proportion (Sentences/Dependencies) | 50%/44% | 25%/19% | 25%/38%
Accuracy | 81.4% | 81.4% | 76.0%
Table 4: Breakdown of the results by genre. Note that although the Science section only contributes 25% of the sentences in test data, it contains much longer sentences than the other sections and therefore accounts for 38% of the dependencies in test data.

The baseline system on the final test set achieved 72.3% accuracy. The final system achieved 80.0% accuracy³: a 7.7% absolute improvement and a 27.8% relative improvement. The development set showed very similar results: a baseline accuracy of 71.9% and a final accuracy of 79.3%.
References

E. Charniak. 1997. Statistical Parsing with a Context-free Grammar and Word Statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence. AAAI Press/MIT Press, Menlo Park.

M. Collins. 1996. A New Statistical Parser Based on Bigram Lexical Dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 184-191.

M. Collins. 1997. Three Generative, Lexicalised Models for Statistical Parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 16-23.

M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.

J. Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of COLING-96, pages 340-345.

Jan Hajič. 1998. Building a Syntactically Annotated Corpus: The Prague Dependency Treebank. In Issues of Valency and Meaning (Festschrift for Jarmila Panevová), pages 106-132. Carolina, Charles University, Prague.

Jan Hajič and Barbora Hladká. 1997. Tagging of Inflective Languages: a Comparison. In Proceedings of ANLP'97, pages 136-143, Washington, DC.

Jan Hajič and Barbora Hladká. 1998. Tagging Inflective Languages: Prediction of Morphological Categories for a Rich, Structured Tagset. In Proceedings of ACL/COLING'98, pages 483-490, Montreal, Canada, August 5-9.

F. Jelinek, J. Lafferty, D. Magerman, R. Mercer, A. Ratnaparkhi, and S. Roukos. 1994. Decision Tree Parsing using a Hidden Derivation Model. In Proceedings of the 1994 Human Language Technology Workshop, pages 272-277.

V. Kuboň. 1999. A Robust Parser for Czech. Technical Report 6/1999, ÚFAL, Matematicko-fyzikální fakulta Karlovy univerzity, Prague.

D. Magerman. 1995. Statistical Decision-Tree Models for Parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283.

M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.

A. Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, Brown University, Providence, Rhode Island. |
17,321,205 | COMPUTATIONAL COMPLEXITY IN TWO-LEVEL MORPHOLOGY | Morphological analysis must take into account the spelling-change processes of a language as well as its possible configurations of stems, affixes, and inflectional markings. The computational difficulty of the task can be clarified by investigating specific models of morphological processing. The use of finite-state machinery in the "two-level" model by Kimmo Koskenniemi gives it the appearance of computational efficiency, but closer examination shows the model does not guarantee efficient processing. Reductions of the satisfiability problem show that finding the proper lexical/surface correspondence in a two-level generation or recognition problem can be computationally difficult. The difficulty increases if unrestricted deletions (null characters) are allowed. | [17321205] | COMPUTATIONAL COMPLEXITY IN TWO-LEVEL MORPHOLOGY
G. Edward Barton, Jr.
M.I.T. Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139
COMPUTATIONAL COMPLEXITY IN TWO-LEVEL MORPHOLOGY
Morphological analysis must take into account the spelling-change processes of a language as well as its possible configurations of stems, affixes, and inflectional markings. The computational difficulty of the task can be clarified by investigating specific models of morphological processing. The use of finite-state machinery in the "two-level" model by Kimmo Koskenniemi gives it the appearance of computational efficiency, but closer examination shows the model does not guarantee efficient processing. Reductions of the satisfiability problem show that finding the proper lexical/surface correspondence in a two-level generation or recognition problem can be computationally difficult. The difficulty increases if unrestricted deletions (null characters) are allowed.
INTRODUCTION
The "dictionary lookup" stage in a natural-language system can involve much more than simple retrieval. Inflectional endings, prefixes, suffixes, spelling-change processes, reduplication, non-concatenative morphology, and clitics may cause familiar words to show up in heavily disguised form, requiring substantial morphological analysis. Superficially, it seems that word recognition might potentially be complicated and difficult. This paper examines the question more formally by investigating the computational characteristics of the "twolevel" model of morphological processes. Given the kinds of constraints that can be encoded in two-level systems, how difficult could it be to translate between lexical and surface forms? Although the use of finite-state machinery in the two-level model gives it the appearance of computational efficiency, the model itself does not guarantee efficient processing. Taking the Kimmo system (Karttunen, 1983) for concreteness, it will be shown that the general problem of mapping between ]exical and surface forms in two-level systems is computationally difficult in the worst case; extensive backtracking is possible. If null characters are excluded, the generation and recognition problems are NP-complete in the worst case. If null characters are completely unrestricted, the problems is PSPACEcomplete, thus probably even harder. The fundamental difficulty of the problems does not seem to be a precompilation effect.
In addition to knowing the stems, affixes, and cooccurrence restrictions of a language, a successful morphological analyzer must take into account the spelling-change processes that often accompany affixation. In English, the program must expect love+ing to appear as loving, fly+s as flies, lie+ing as lying, and big+er as bigger. Its knowledge must be sufficiently sophisticated to distinguish such surface forms as hopped and hoped. Crosslinguistically, spelling-change processes may span either a limited or a more extended range of characters, and the material that triggers a change may occur either before or after the character that is affected. (Reduplication, a complex copying process that may also be found, will not be considered here.)
The Kimmo system described by Karttunen (1983) is attractive for putting morphological knowledge to use in processing. Kimmo is an implementation of the "two-level" model of morphology that Kimmo Koskenniemi proposed and developed in his Ph.D. thesis.¹ A system of lexicons in the dictionary component regulates the sequence of roots and affixes at the lexical level, while several finite-state transducers in the automaton component (20 transducers for Finnish, for instance) mediate the correspondence between lexical and surface forms. Null characters allow the automata to handle insertion and deletion processes. The overall system can be used either for generation or for recognition.
The finite-state transducers of the automaton component serve to implement spelling changes, which may be triggered by either left or right context and which may ignore irrelevant intervening characters. As an example, the following automaton describes a simplified "Y-change" process that changes y to i before suffix es:

"Y-Change" 5 5
 y y + s =    (lexical characters)
 i y = s =    (surface characters)
state 1: 2 4 1 1 1    (normal state)
state 2. 0 0 3 0 0    (require +s)
state 3. 0 0 0 1 0    (require s)
state 4: 2 4 5 1 1    (forbid +s)
state 5: 2 4 1 0 1    (forbid s)

The details of this notation will not be explained here; basic familiarity with the Kimmo system is assumed. For further introduction, see Barton (1985), Karttunen (1983), and references cited therein.
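Stepping such an automaton over a hypothesized lexical/surface pair sequence is mechanically simple; a minimal hypothetical sketch in Python:

```python
# Hypothetical sketch of verifying one lexical/surface pair sequence with
# a single two-level automaton. Transitions map (state, lexical, surface)
# to the next state; a missing entry corresponds to the 0 ("blocked")
# entries in tables like the one above. Wildcard '=' matching is omitted
# for brevity.

def accepts(transitions, start_state, final_states, lexical, surface):
    state = start_state
    for lex, surf in zip(lexical, surface):
        state = transitions.get((state, lex, surf))
        if state is None:
            return False               # the automaton blocked
    return state in final_states
```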
THE SEEDS OF COMPLEXITY
At first glance, the finite-state machines of the two-level model appear to promise unfailing computational efficiency. Both recognition and generation are built on the simple process of stepping the machines through the input. Lexical lookup is also fast, interleaved character by character with the quick left-to-right steps of the automata. The fundamental efficiency of finite-state machines promises to make the speed of Kimmo processing largely independent of the nature of the constraints that the automata encode:
The most important technical feature of Koskenniemi's and our implementation of the Two-level model is that morphological rules are represented in the processor as automata, more specifically, as finite state transducers .... One important consequence of compiling [the grammar rules into automata] is that the complexity of the linguistic description of a language has no significant effect on the speed at which the forms of that language can be recognized or generated. This is due to the fact that finite state machines are very fast to operate because of their simplicity .... Although Finnish, for example, is morphologically a much more complicated language than English, there is no difference of the same magnitude in the processing times for the two languages .... [This fact] has some psycholinguistic interest because of the common sense observation that we talk about "simple" and "complex" languages but not about "fast" and "slow" ones. (Karttunen, 1983:166f) For this kind of interest in the model to be sustained, it must be the model itself that wipes out processing difficulty, rather than some accidental property of the encoded morphological constraints.
Examined in detail, the runtime complexity of Kimmo processing can be traced to three main sources. The recognizer and generator must both run the finite-state machines of the automaton component; in addition, the recognizer must descend the letter trees that make up a lexicon. The recognizer must also decide which suffix lexicon to explore at the end of an entry. Finally, both the recognizer and the generator must discover the correct lexical-surface correspondence.
All these aspects of runtime processing are apparent in traces of implemented Kimmo recognition, for instance when the recognizer analyzes the English surface form spiel (in 61 steps) according to Karttunen and Wittenburg's (1983) analysis (Figure 1). The stepping of transducers and letter-trees is ubiquitous. The search for the lexical-surface correspondence is also clearly displayed; for example, before backtracking to discover the correct lexical entry spiel, the recognizer considers the lexical string spy+ with y surfacing as i and + as e. Finally, after finding the putative root spy the recognizer must decide whether to search the lexicon I that contains the zero verbal ending of the present indicative, the lexicon AG storing the agentive suffix +er, or one of several other lexicons inhabited by inflectional endings such as +ed.
The finite-state framework makes it easy to step the automata; the letter-trees are likewise computationally well-behaved. It is more troublesome to navigate through the lexicons of the dictionary component, and the current implementation spends considerable time wandering about. However, changing the implementation of the dictionary component can sharply reduce this source of complexity; a merged dictionary with bit-vectors reduces the number of choices among alternative lexicons by allowing several to be searched at once (Barton, 1985).
More ominous with respect to worst-case behavior is the backtracking that results from local ambiguity in the construction of the lexical-surface correspondence. Even if only one possibility is globally compatible with the constraints imposed by the lexicon and the automata, there may not be enough evidence at every point in processing to choose the correct lexical-surface pair. Search behavior results.
In English examples, misguided search subtrees are necessarily shallow because the relevant spelling-change processes are local in character. Since long-distance harmony processes are also possible, there can potentially be a long interval before the acceptability of a lexical-surface pair is ultimately determined. For instance, when vowel alternations within a verb stem are conditioned by the occurrence of particular tense suffixes, the recognizer must sometimes see the end of the word before making final decisions about the stem. Ignoring the problem of choosing among alternative lexicons, it is easy to see that the use of finite-state machinery helps control only one of the two remaining sources of complexity. Stepping the automata should be fast, but the finite-state framework does not guarantee speed in the task of guessing the correct lexical-surface correspondence. The search required to find the correspondence may predominate. In fact, the Kimmo recognition and generation problems bear an uncomfortable resemblance to problems in the computational class NP. Informally, problems in NP have solutions that may be hard to guess but are easy to verify; this is just the situation that might hold in the discovery of a Kimmo lexical-surface correspondence, since the automata can verify an acceptable correspondence quickly but may need search to discover one.
THE COMPLEXITY OF TWO-LEVEL MORPHOLOGY
The Kimmo algorithms contain the seeds of complexity, for local evidence does not always show how to construct a lexical-surface correspondence that will satisfy the constraints expressed in a set of two-level automata. These seeds can be exploited in mathematical reductions to show that two-level automata can describe computationally difficult problems in a very natural way. It follows that the finite-state two-level framework itself cannot guarantee computational efficiency. If the words of natural languages are easy to analyze, the efficiency of processing must result from some additional property that natural languages have, beyond those that are captured in the twolevel model. Otherwise, computationally difficult problems might turn up in the two-level automata for some natural language, just as they do in the artificially constructed languages here. In fact, the reductions are abstractly modeled on the Kimmo treatment of harmony processes and other long-distance dependencies in natural languages.
The reductions use the computationally difficult Boolean satisfiability problems SAT and 3SAT, which involve deciding whether a CNF formula has a satisfying truth-assignment. It is easy to encode an arbitrary SAT problem as a Kimmo generation problem, hence the general problem of mapping from lexical to surface forms in Kimmo systems is NP-complete.² Given a CNF formula φ, first construct a string σ by notational translation: use a minus sign for negation, a comma for conjunction, and no explicit operator for disjunction. Then the σ corresponding to the formula (¬x ∨ y) & (¬y ∨ z) & (x ∨ y ∨ z) is -xy,-yz,xyz.
² Membership in NP is also required for this conclusion. A later section ("The Effect of Nulls") shows membership in NP by sketching how a nondeterministic machine could quickly solve Kimmo generation and recognition problems.
The notation is unambiguous without parentheses because φ is required to be in CNF. Second, construct a Kimmo automaton component A in three parts. (A varies from formula to formula only when the formulas involve different sets of variables.) The alphabet specification should list the variables in σ together with the special characters T, F, minus sign, and comma; the equals sign should be declared as the Kimmo wildcard character, as usual. The consistency automata, one for each variable in σ, should be constructed on the following model:

"x-consistency" 3 3
 x x =    (lexical characters)
 T F =    (surface characters)
state 1: 2 3 1    (x undecided)
state 2: 2 0 2    (x true)
state 3: 0 3 3    (x false)
The consistency automaton for variable x constrains the mapping from variables in the lexical string to truth-values in the surface string, ensuring that whatever value is assigned to x in one occurrence must be assigned to x in every occurrence. Finally, use the following satisfaction automaton, which does not vary from formula to formula:
"satisfaction" 3 4 = = , (lexical characters} T F , (surface characters} 1. 2 1 3 0 (no true seen in this group) 2: 2 2 2 1 (true seen in this group} 3. 1 2 0 0 (-F counts as true)
The satisfaction automaton determines whether the truth-values assigned to the variables cause the formula to come out true. Since the formula is in CNF, the requirement is that the groups between commas must all contain at least one true value. The net result of the constraints imposed by the consistency and satisfaction automata is that some surface string can be generated from σ just in case the original formula has a satisfying truth-assignment. Furthermore, A and σ can be constructed in time polynomial in the length of φ; thus SAT is polynomial-time reduced to the Kimmo generation problem, and the general case of Kimmo generation is at least as hard as SAT. Incidentally, note that it is local rather than global ambiguity that causes trouble; the generator system in the reduction can go through quite a bit of search even when there is just one final answer. Figure 2 traces the operation of the Kimmo generation algorithm on a (uniquely) satisfiable formula.
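The per-variable consistency automata are also easy to mechanize; this hypothetical Python sketch builds the transition table for one variable (the wildcard '=' is expanded explicitly for illustration):

```python
# Hypothetical sketch of the SAT-to-generation construction: one
# three-state consistency automaton per variable. Transitions map
# (state, lexical, surface) to the next state; missing entries block.

def consistency_automaton(var, variables):
    """States: 1 = undecided, 2 = var true, 3 = var false (all final)."""
    trans = {}
    for state in (1, 2, 3):
        # Occurrences of var must all carry one consistent truth-value.
        if state in (1, 2):
            trans[(state, var, "T")] = 2
        if state in (1, 3):
            trans[(state, var, "F")] = 3
        # Other variables may carry either value; punctuation is ignored.
        for other in variables:
            if other != var:
                trans[(state, other, "T")] = state
                trans[(state, other, "F")] = state
        for punct in ("-", ","):
            trans[(state, punct, punct)] = state
    return trans
```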
Like the generator, the Kimmo recognizer can also be used to solve computationally difficult problems. One easy reduction treats 3SAT rather than SAT, uses negated alphabet symbols instead of a negation sign, and replaces the satisfaction automaton with constraints from the dictionary component; see Barton (1985) for details.
[Figure 2 trace residue omitted.]

Though only one truth-assignment will satisfy the formula, it takes quite a bit of backtracking to find it. The notation used here for describing generator actions is similar to that used to describe recognizer actions in Figure 1, but a surface rather than a lexical string is the goal. A *-entry in the backtracking column indicates backtracking from an immediate failure in the preceding step, which does not require the full backtracking mechanism to be invoked.
THE EFFECT OF PRECOMPILATION
Since the above reductions require both the language description and the input string to vary with the SAT/3SAT problem to be solved, there arises the question of whether some computationally intensive form of precompilation could blunt the force of the reduction, paying a large compilation cost once and allowing Kimmo runtime for a fixed grammar to be uniformly fast thereafter. This section considers four aspects of the precompilation question.
First, the external description of a Kimmo automaton or lexicon is not the same as the form used at runtime. Instead, the external descriptions are converted to internal forms: RMACHINE and GMACHINE forms for automata, letter trees for lexicons (Gajek et al., 1983). Hence the complexity implied by the reduction might actually apply to the construction of these internal forms; the complexity of the generation problem (for instance) might be concentrated in the construction of the "feasible-pair list" and the GMACHINE. This possibility can be disposed of by reformulating the reduction so that the formal problems and the construction specify machines in terms of their internal forms rather than their external descriptions. The GMACHINEs for the class of machines created in the construction have a regular structure, and it is easy to build them directly instead of building descriptions in "external" format. As traces of recognizer operation suggest, it is runtime processing that makes translated SAT problems difficult for a Kimmo system to solve.
Second, there is another kind of preprocessing that might be expected to help. It is possible to compile a set of Kimmo automata into a single large automaton (a BIGMACHINE) that will run faster than the original set. The system will usually run faster with one large automaton than with several small ones, since it has only one machine to step and the speed of stepping a machine is largely independent of its size. Since it can take exponential time to build the BIGMACHINE for a translated SAT problem, the reduction formally allows the possibility that BIGMACHINE precompilation could make runtime processing uniformly efficient. However, an expensive BIGMACHINE precompilation step does not help runtime processing enough to change the fundamental complexity of the algorithms. Recall that the main ingredients of Kimmo runtime complexity are the mechanical operation of the automata, the difficulty of finding the right lexical-surface correspondence, and the necessity of choosing among alternative lexicons. BIGMACHINE precompilation will speed up the mechanical operation of the automata, but it will not help in the difficult task of deciding which lexical-surface pair will be globally acceptable. Precompilation oils the machinery, but accomplishes no radical changes.
Third, BIGMACHINE precompilation also sheds light on another precompilation question. Though BIGMACHINE precompilation involves exponential blowup in the worst case (for example, with the SAT automata), in practice the size of the BIGMACHINE varies, thus naturally raising the question of what distinguishes the "explosive" sets of automata from those with more civilized behavior. It is sometimes suggested that the degree of interaction among constraints determines the amount of BIGMACHINE blowup. Since the computational difficulty of SAT problems results in large measure from their "global" character, the size of the BIGMACHINE for the SAT system comes as no surprise under the interaction theory. However, a slight change in the SAT automata demonstrates that BIGMACHINE size is not a good measure of interaction among constraints. Eliminate the satisfaction automaton from the generator system, leaving only the consistency automata for the variables. Then the system will not search for a satisfying truth-assignment, but merely for one that is internally consistent. This change entirely eliminates interactions among the automata; yet the BIGMACHINE must still be exponentially larger than the collection of individual automata, for its states must distinguish all the possible truth-assignments to the variables in order to enforce consistency. In fact, the lack of interactions can actually increase the size of the BIGMACHINE, since interactions constrain the set of reachable state-combinations.
Finally, it is worth considering whether the nondeterminism involved in constructing the lexical-surface correspondence can be removed by standard determinization techniques. Every nondeterministic finite-state machine has a deterministic counterpart that is equivalent in the weak sense that it accepts the same language; aren't Kimmo automata just ordinary finite-state machines operating over an alphabet that consists of pairs of ordinary characters? Ignoring subtleties associated with null characters, Kimmo automata can indeed be viewed in this way when they are used to verify or reject hypothesized pairs of lexical and surface strings. However, in this use they do not need determinizing, for each cell of an automaton description already lists just one state. In the cases of primary interest (generation and recognition) the machines are used as genuine transducers rather than acceptors.
The determinizing algorithms that apply to finite-state acceptors will not work on transducers, and in fact many finite-state transducers are not determinizable at all. Upon seeing the first occurrence of a variable in a SAT problem, a deterministic transducer cannot know in general whether to output T or F. It also cannot wait and output a truthvalue later, since the variable might occur an unbounded number of times before there was sufficient evidence to assign the truth-value. A finite-state transducer would not be able in general to remember how many outputs had been deferred.
THE EFFECT OF NULLS
Since Kimmo systems can encode NP-complete problems, the general Kimmo generation and recognition problems are at least as hard as the difficult problems in NP. But could they be even harder? The answer depends on whether null characters are allowed. If nulls are completely forbidden, the problems are in NP, hence (given the previous result) NP-complete. If nulls are completely unrestricted, the problems are PSPACE-complete, thus probably even harder than the problems in NP. However, the full power of unrestricted null characters is not needed for linguistically relevant processing.
If null characters are disallowed, the generation problem for Kimmo systems can be solved quickly on a nondeterministic machine. Given a set of automata and a lexical string, the basic nondeterminism of the machine can be used to guess the lexical-surface correspondence, which the automata can then quickly verify. Since nulls are not permitted, the size of the guess cannot get out of hand; the lexical and surface strings will have the same length. The recognition problem can be solved in the same way except that the machine must also guess a path through the dictionary.
If null characters are completely unrestricted, the above argument fails; the lexical and surface strings may differ so radically in length that the lexical-surface correspondence cannot be proposed or verified in time polynomial in input length. The problem becomes PSPACE-complete: as hard as checking for a forced win from certain N x N Go configurations, for instance, and probably even harder than NP-complete problems (cf. Garey and Johnson, 1979:171ff). The proof involves showing that Kimmo systems with unrestricted nulls can easily be induced to work out, in the space between two input characters, a solution to the difficult Finite State Automata Intersection problem.
The PSPACE-completeness reduction shows that if two-level morphology is formally characterized in a way that leaves null characters completely unrestricted, it can be very hard for the recognizer to reconstruct the superficially null characters that may lexically intervene between two surface characters. However, unrestricted nulls surely are not needed for linguistically relevant Kimmo systems. Processing complexity can be reduced by any restriction that prevents the number of nulls between surface characters from getting too large. As a crude approximation to a reasonable constraint, the PSPACE-completeness reduction could be ruled out by forbidding entire lexicon entries from being deleted on the surface. A suitable restriction would make the general Kimmo recognition problems only NP-complete.
Both of the reductions remind us that problems involving finite-state machines can be hard. Determining membership in a finite-state language may be easy, but using finite-state machines for different tasks such as parsing or transduction can lead to problems that are computationally more difficult.
Figure 1: These traces show the steps that the KIMMO recognizer for English goes through while analyzing the surface form spiel. Each line of the table on the left shows the lexical string and automaton states at the end of a step. If some automaton blocked, the automaton states are replaced by an XXX entry. An XXX entry with no automaton name indicates that the lexical string could not be extended because the surface character and lexical letter tree together ruled out all feasible pairs. After an XXX or *** entry, the recognizer backtracks and picks up from a previous choice point, indicated by the parenthesized step number before the lexical string. The tree on the right depicts the search graphically, reading from left to right and top to bottom, with vertical bars linking the choices at each choice point. The figures were generated with a KIMMO implementation written in an augmented version of MACLISP, based initially on Karttunen's (1983:182ff) algorithm description; the dictionary and automaton components for English were taken from Karttunen and Wittenburg (1983) with minor changes. This implementation searches depth-first as Karttunen's does, but explores the alternatives at a given depth in a different order from Karttunen's.
Figure 2: The generator system for deciding the satisfiability of Boolean formulas in x, y, and z goes through these steps when applied to the encoded version of the (satisfiable) formula (¬x ∨ y) & (¬y ∨ z) & (¬y ∨ ¬z) & (x ∨ y ∨ z).
¹ University of Helsinki, Finland, circa Fall 1983.
[Figure 2, left side: generating from lexical form "-xy,-yz,-y-z,xyz"; the 38-step generator trace is omitted.]
ACKNOWLEDGEMENTS

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505. A version of this paper was presented to the Workshop on Finite-State Morphology, Center for the Study of Language and Information, Stanford University, July 29-30, 1985; the author is grateful to Lauri Karttunen for making that presentation possible. This research has benefited from guidance and commentary from Bob Berwick, and Bonnie Dorr and Eric Grimson have also helped improve the paper.
Barton, E. (1985). "The Computational Complexity of Two-Level Morphology," A.I. Memo No. 856, M.I.T. Artificial Intelligence Laboratory, Cambridge, Mass.

Gajek, O., H. Beck, D. Elder, and G. Whittemore (1983). "LISP Implementation [of the KIMMO system]," Texas Linguistic Forum 22:187-202.

Garey, M., and D. Johnson (1979). Computers and Intractability. San Francisco: W. H. Freeman and Co.

Karttunen, L. (1983). "KIMMO: A Two-Level Morphological Analyzer," Texas Linguistic Forum 22:165-186.

Karttunen, L., and K. Wittenburg (1983). "A Two-Level Morphological Analysis of English," Texas Linguistic Forum 22:217-228. |
5,459,895 | INCREMENTAL LL(1) PARSING IN LANGUAGE-BASED EDITORS | This paper introduces an efficient incremental LL(1) parsing algorithm for use in language-based editors that use the structure recognition approach. It features very fine-grained analysis and a unique approach to parse control and error recovery. It also presents incomplete LL(1) grammars as a way of dealing with the complexity of full language grammars and as a mechanism for providing structured editor support for task languages that are only partially structured. The semantics of incomplete grammars are presented and it is shown how incomplete LL(1) grammars can be transformed into complete LL(1) grammars. The algorithms presented have been implemented in the fred language-based editor.
John J. Shilling shilling@cc.gatech.edu
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280

INCREMENTAL LL(1) PARSING IN LANGUAGE-BASED EDITORS

This paper introduces an efficient incremental LL(1) parsing algorithm for use in language-based editors that use the structure recognition approach. It features very fine-grained analysis and a unique approach to parse control and error recovery. It also presents incomplete LL(1) grammars as a way of dealing with the complexity of full language grammars and as a mechanism for providing structured editor support for task languages that are only partially structured. The semantics of incomplete grammars are presented and it is shown how incomplete LL(1) grammars can be transformed into complete LL(1) grammars. The algorithms presented have been implemented in the fred language-based editor.
INTRODUCTION
This paper introduces an efficient incremental LL(1) parsing algorithm for use in language-based editors that use the structure recognition approach. It is motivated by a style of interaction that parses the user input at intervals of very small granularity. A second motivation for the algorithm is the problem of changes internal to the editing buffer. Because incremental analysis can occur after each keystroke, an unrestricted parser will attempt to include too much into its focus before a change is complete, causing the editor to detect erroneous states that will become irrelevant as the user completes the change. The parsing algorithms presented in this paper use the user focus as a guide in restricting parsing. The algorithm presented has been implemented in the fred language-based editor [Shi83, Shi85].
Incomplete LL(1) grammars are introduced as a way of dealing with the complexity of full language grammars and as a mechanism for providing structured editor support for task languages that are only partially structured. Incomplete grammars were introduced by Orailoglu [Ora83] for the fred editor [Shi85, Shi86] as a method of dealing with the complexity of full language grammars. Incomplete grammars allow incremental refinement of language grammars and also allow grammars to be defined for languages that are not LL(1). Defining an incomplete grammar for a non-LL(1) language allows the editor to give structured support for the LL(1) subset of the language rather than disallowing the language completely. Another useful application of incomplete grammars is in providing structured support for tasks whose languages are only partially structured. An example of this is a grammar that facilitates structured support for editing LaTeX documents. A LaTeX document contains structured elements but much of the document can be treated as unstructured text.
This paper introduces incomplete LL(1) grammars and characterizes their parsing semantics. It then shows how the grammars can be translated into conventional LL(1) grammars, eliminating the need for specialized parsing algorithms.
INCREMENTAL LL(1) PARSING
The goal of incremental parsing is to re-establish a correct structuralization of the user's editing buffer after changes have been made. The approach taken must differ from straightforward once-only top-down parsing because a once-only parser never needs to reverse decisions after they are made. In incremental parsing, decisions are unmade and sections of the parse tree are deleted, transformed, and grafted into new locations.

At the same time, the amount of parsing actually done must be limited if the algorithms are going to provide real-time response to a user. The algorithms must first establish the scope of modifications and efficiently restructure the parse tree within this scope.

The parsing method described in this paper is more fine-grained than previous methods. The goal is to restructure the editing buffer after each text-modifying keystroke of a user. The challenge is that it is often not possible to achieve a complete, correct structuralization because the user is in the process of making a change that is not yet complete. On the other hand, the user should be notified at the earliest possible moment if an error is made. The solution to this conflict is to implement what is called follow-the-cursor parsing with soft templates. As a user makes changes the method will parse only up to (and including) the token that contains the cursor. This keeps it from trying to parse past the cursor when a user has not yet completed a change. Unsatisfied elements of a production are indicated to the user as soft templates. Soft templates visually show the user what is missing in the parse tree. They are templates in that they show a valid production at the point they appear, but they are soft because they do not constrain the user in any way. Further text is brought into consideration through cursor movement. The incremental LL(1) parsing algorithms presented here are a generalization of the table-driven LL(1) parsing algorithms presented by Lewis, Rosenkrantz, and Stearns [PLRS76] and use Select, Nullable and Follows tables.

THE CHANGE-UPDATE LOOP

As a user changes a program the editor executes the loop illustrated in figure 1 to achieve a correct restructuralization. The localized region of change must be retokenized, the tree prepared, and the new tree state incrementally parsed. The data structures of the non-incremental algorithm are extended to facilitate incremental parsing. The parsing queue is modified to handle both tokens and non-terminals so that subtrees from the parse tree do not always have to be broken down into tokens as they are moved to the parse queue. This means that the parsing tables must be expanded to take account of non-terminals. We now assume that both the Select table and the Follows table cross-reference non-terminals with both tokens and non-terminals.

TOKENIZATION

We will regard the tokenization phase as a black box process that produces a series of tokens from the localized region of change. It is assumed that incremental tokenization produces a queue of tokens and two markers in the parse tree denoted the Lexical Left Boundary and the Lexical Right Boundary. These markers point out the region along the frontier of the parse tree (inclusive) that has become invalid as a result of the new tokenization.

TREE PREPARATION - SWEEP

The next step in the change-update loop is the tree preparation process called Sweep. This is the process that breaks down the affected region of the parse tree and prepares the tree for the parsing algorithm. Two nodes of the parse tree have special meaning in this process. They are called the Common Ancestor and the Royal Node and are defined as follows:

• The Common Ancestor is the lowest node in the parse tree that is an ancestor of both the Lexical Left Boundary and the Lexical Right Boundary.

• The Royal Node is the highest node in the parse tree such that the Lexical Left Boundary is the first token of the production. If there is no such node then the Royal Node is the Lexical Left Boundary.
Two basic ideas drive the tree preparation. The first is that the region of the tree defined by Lexical Left Boundary, Lexical Right Boundary and Common Ancestor is invalidated because the tokens along its frontier have been recalculated. The second is that the subtree of the parse tree rooted at Royal Node is suspect because it was instantiated on the basis of a token that has been altered.

Figure 2 shows the Sweep algorithm. It begins by identifying the Common Ancestor and the Royal Node and then cleans the region modified by the lexical tokenization. This is a wedge in the parse tree that is bounded by the path from the Lexical Left Boundary to the Common Ancestor to the Lexical Right Boundary. All nodes on the interior of the modified region are deleted except the direct sons of the nodes along the boundary.

The algorithm must now decide what to do about the Royal Node. We distinguish two cases in dealing with the Royal Node, based on the relationship between the Royal Node and the Common Ancestor. If the Royal Node is a descendent of the Common Ancestor then there is no conflict because there are no tokens in the subtree rooted at Royal Node. If Royal Node is the same as, or an ancestor of, the Common Ancestor then the subtree rooted at the leftmost son of Common Ancestor is clipped. This will in general leave parts of the parse tree intact that may not be valid with the new tokenization.
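As a small illustration, the Common Ancestor can be located with two walks up the tree; the sketch below (Python) assumes a hypothetical Node type with a parent link, which the paper does not spell out:

    def common_ancestor(left, right):
        """Lowest node that is an ancestor of both boundary nodes."""
        ancestors = set()
        node = left
        while node is not None:
            ancestors.add(id(node))
            node = node.parent
        node = right
        while node is not None:
            if id(node) in ancestors:
                return node
            node = node.parent
        return None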
Before exiting, the Sweep algorithm pushes the current parse pointer back to the left in the parse tree (ignoring non-significant nodes such as error nodes and, usually, white space) as far as it can until it hits a token. The first non-terminal to the right of that token becomes the location of the current parsing position.
INCREMENTAL PARSING
We now enter the actual incremental parsing algorithm. The idea of the algorithm is similar to straightforward LL(1) parsing with several major differences. The incremental algorithm must decide how to handle the situations when it advances to a satisfied token element but has a non-empty parsing queue, and conversely when it empties the parsing queue but has unsatisfied productions in the parse tree. The second situation is handled in follow-the-cursor parsing by essentially doing nothing. We do not want to remove any further tokens from the parse tree, so the algorithm simply leaves unsatisfied productions in the tree and displays them to the user as soft templates. In the first situation the algorithm needs to open up space in the parse tree to accommodate the elements of the parsing queue. This is done by invoking a conflict resolution algorithm described below. Following the description of the conflict resolution algorithm we will present two algorithms that together accomplish the incremental parsing desired. The first is the inner parsing algorithm that does most of the work, and the second is the outer parsing algorithm that provides high level control.
CONFLICT RESOLUTION
In our parsing algorithm we will need to resolve a conflict if the element at the front of the parse queue cannot be parsed at the current parse position. The conflict can exist because there is already a token at Parse Position as described above, or it can exist simply because the Queue Element does not fit into the terminal or non-terminal symbol at the Parse Position. The general algorithm would have grafted such an element as an error. That is not satisfactory here for two reasons. The first is that there are now non-terminal rooted subtrees on the Parse Queue as well as tokens. A subtree may not be parsable at this point but the tokens along its frontier may be. The second reason is that the algorithm does not have the guarantee that the subtree rooted at Parse Position is properly prepared to be parsed, because it may not have deleted the entire subtree rooted at Royal Node in the Sweep algorithm.

The goal is to parse the elements of the parse queue by disrupting as small a region of the parse tree as possible. There is a conflict here because we want to parse the tokens in the parsing queue but we would like to keep the tokens that are on the tree intact if possible. Our solution to this is to give priority to the parsing of tokens before the cursor. This may mean dislocating tokens on the parse tree. If tokens are displaced, they are grafted to the tree as error nodes rather than moving them to the parsing queue.
We first present some definitions.
• As a generalization of the previous definition, Royal Node is defined to be the highest node in the tree that has Parse Position as the first leaf of its frontier. If no such node exists then Royal Node is defined to be the node at Parse Position.
• Decision Node is defined to be the lowest node on the path from Parse Position to Royal Node that has the element at the front of the Parse Queue in its first set. If no such node exists then Decision Node is defined to be NULL.
• List Node is defined to be a node on the path from the Decision Node to the Royal Node (inclusive) that is a list structured production. If no such node exists then List Node is defined to be NULL.
• Nullable Node is defined to be a node along the path from the Parse Position to the Royal Node that is nullable and has the element at the front of the Parse Queue in its follow set. If no such node exists then Nullable Node is defined to be NULL.
The Royal Node is the highest point in the parse tree where the token at Parse Position (or the token that previously was the first token of Parse Position) caused a decision to be made. The Decision Node, if it exists, is the lowest production along the path from Parse Position to Royal Node that the front of the Parse Queue can belong to. If the Decision Node exists then we can try to find a List Node. List Node is a place in the parse tree where a list production can be found. This makes it a place where we can wedge in a new production without tearing down any existing parse tree. At most one list node can be found because if there were two or more, then there would be an ambiguous parse. Finally, Nullable Node is a node that can be nulled while still allowing the element at the front of the Parse Queue to be correctly parsed.
The algorithm for resolving the conflict is presented in figure 3. It first finds the four nodes described above. If List Node exists then the list production is expanded by an additional element using the GraftNewList subroutine. In the StealProduction subroutine the tokens in the subtree rooted at the node of the first parameter are grafted to the right as error nodes. The (tokenless) subtree rooted at the node is then deleted, leaving an open non-terminal that is either nullable or has the element at the front of the parse queue in its first set. The final chance to avoid grafting an error token is if there is a non-terminal subtree at the front of the parse queue. In this case the non-terminal is removed and replaced with its children in the Reduce subroutine. This process continues until the algorithm has freed up a non-terminal in the parse tree or has emptied the parse queue.
INNER PARSING ALGORITHM

Figure 4 shows the inner parsing algorithm. This algorithm iterates through its parsing decisions until it runs out of tokens and/or runs out of open parse tree. If the front of the parse queue and the predicted parse tree element at the current parsing position agree then the queue element is simply grafted onto the tree at the current position. The parse queue is then popped and the parse position advanced. It may be that there is not an exact match but that the queue element is in the select set of Parse Position. In that case the production indicated is instantiated (there can be only one by the LL(1) restrictions) and the Parse Position is advanced to the first element of the new production.

If neither of the above cases holds then the element at the front of the parse queue does not fit at the current position. The algorithm checks to see if there is a non-terminal subtree at the front of the parse queue that can be reduced. If this is not the case then it checks to see if Parse Position is nullable with Queue Element as a correct follow. If this is the case then the non-terminal at Parse Position is nulled and Parse Position advances. If none of the above cases holds then the conflict resolution algorithm is invoked.
OUTER PARSING ALGORITHM
The outer parsing algorithm provides high level control over the inner parsing algorithm. It resolves conflicts when Parse Position is advanced to a token and Parse Queue is not empty, or Parse Queue is empty but Parse Position is a non-satisfied production element. The former case is handled by the conflict resolution algorithm. The latter case is allowed as a legal state in follow-the-cursor parsing because tokens to the right of the cursor are not taken to satisfy the parse position.

At the end of the normal parsing loop an error recovery algorithm is called. The Error Recovery algorithm is the only algorithm that is allowed to parse past the cursor. In follow-the-cursor parsing it is sometimes necessary to invoke the StealProduction process that grafts tokens as errors to the right of the current parse position. It is also possible that a token has been inserted which would resolve an error in the syntax of the user buffer if it were included in the parse. The idea of the Error Recovery algorithm is to probe into the error tokens directly past the cursor to see if these tokens can be parsed correctly.
An outline of the error recovery algorithm is presented in figure 6. The algorithm begins by saving the current parse tree status, called the initial consistent parse. Each error token is then considered in turn. If the error token can be parsed correctly then that is done. If parsing the token completes a production in the parse tree then the consistent parse is updated to be the current parse state. The loop terminates when it runs out of error tokens or it encounters an error token that cannot be parsed correctly. It then backs up the state of the parse tree to the last consistent parse and exits.
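A minimal sketch of this probing loop in Python, assuming hypothetical parser operations (save_state, restore_state, can_parse, parse, completed_production) rather than fred's actual interfaces:

    def error_recovery(parser, error_tokens):
        consistent = parser.save_state()          # initial consistent parse
        for token in error_tokens:
            if not parser.can_parse(token):
                break                             # stop at the first bad token
            parser.parse(token)
            if parser.completed_production():
                consistent = parser.save_state()  # record the new consistent parse
        parser.restore_state(consistent)          # back up to the last consistent parse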
INCOMPLETE GRAMMARS
Incomplete grammars as presented here introduce two new non-terminal classes, unstructured (Orailoglu refers to this non-terminal class as Unknowns) and preferred non-terminals, into language description grammars. Preferred non-terminals are the left-hand-sides of a special production class called preferred productions. Intuitively, the unstructured non-terminal class allows the language designer to have a production that escapes the structuralization process. A preferred production is a way of finding structure within the lack of structure of the unstructured non-terminal.
A conventional LL(1) grammar can be described as a tuple [PLRS76] G = (S, T, N, P) where S is the start symbol of G, S ∈ N; T is a finite set of terminal symbols; N is a finite set of non-terminal symbols; and P is a set of production rules.

An incomplete LL(1) grammar is described as a tuple G = (S, T, N, U, P, Pu) where S, T, N, and P have their conventional meaning and U is a distinguished set of non-terminal symbols denoted unstructured, U ⊆ N.
Pu is a distinguished set of production rules denoted preferred productions, Pu ⊆ P. An unstructured non-terminal can occur at any point in the right-hand-side of a production rule. For the purpose of constructing the select sets of normal non-terminals (non-terminals that are not unstructured non-terminals), each occurrence of an unstructured non-terminal is treated as a unique, distinguished terminal symbol u, u ∉ T. Thus a non-terminal's select set will contain an entry for each terminal symbol in its first set and an entry for any unstructured element that can be derived from it. This is similar to the way that non-terminals are treated in incremental parsing. For parsing purposes we do not construct the first set of an unstructured element, but we do construct the follow set of an unstructured element in the normal way. We do not construct the first sets for unstructured elements because their first sets vary at parse-time, depending on the shape of the parse tree. Intuitively, the run-time first sets vary because we want the unstructured element to act as a wild-card non-terminal and accept any token that is not otherwise accepted at the point where the unstructured element occurs.
Consider, for example, the grammar:
    A → a | C
    B → b | C
    C → Unstructured
If we are currently focused at non-terminal A, we want any token except "a" to lead into production C. If we are focused at non-terminal B, then we want any token but "b" to be accepted by C. Thus, the meaning of the same unstructured element (and, by side-effect, of C) will change at run-time depending on the current parsing context when it is encountered.
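The wild-card behaviour can be sketched directly; in the Python fragment below, the token set and first sets are illustrative, matching the small grammar above:

    TOKENS = {"a", "b", "ERROR"}

    def runtime_first(sibling_first_sets):
        """Run-time first set of an unstructured alternative: every token
        not claimed by the sibling alternatives of the current non-terminal."""
        claimed = set().union(*sibling_first_sets) if sibling_first_sets else set()
        return TOKENS - claimed

    print(runtime_first([{"a"}]))   # focused at A: C accepts {'b', 'ERROR'}
    print(runtime_first([{"b"}]))   # focused at B: C accepts {'a', 'ERROR'}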
A preferred production is a production that can find structure within an unstructured non-terminal. Its first set is calculated as for normal production rules. Because the preferred production can be followed by the resumption of the unstructured non-terminal, the follow set should be anything that does not cause conflict with the preferred production. Thus if p ∈ Pu and y = left-hand-side(p),

    Follow(y) = Can-Legally-Follow(y)

where Can-Legally-Follow is a relation that generates the set of all tokens that can follow a non-terminal without causing a parsing conflict with that non-terminal.
TRANSFORMATIONS
Orailoglu devised specialized algorithms to parse based on incomplete grammars. This section will show how to transform an incomplete grammar into a complete grammar that can be parsed with conventional LL(1) algorithms. The obstacle to the traditional parsing of incomplete grammars has been that the first set of an unstructured element effectively changes at run-time depending on the state of the parse tree where the unstructured element is introduced. It will be shown that the decisions in Orailoglu's implementation which are made at run-time can be predicted at the time the incomplete grammar is analyzed. This allows the incomplete grammar to be transformed into a complete grammar that recognizes the same language.
A simple example is presented to show the flavor of the material that will follow. Consider the incomplete grammar:
    A → ac
      | bc
      | U    (an unstructured element)

The token set of the grammar is {a, b, c, ERROR}. The intent of the grammar writer is clearly that a leading token of a will invoke the first right-hand-side, a leading token of b will invoke the second right-hand-side, and any other token will invoke the third right-hand-side because of the unstructured element. Thus the first set of the unstructured element is effectively {c, ERROR} and, as a result, the first set of non-terminal A is the entire token set. Now consider the grammar:

    A → bB | cC
    B → b  | D
    C → c  | D
    D → d  | U    (an unstructured element)

The token set of this grammar is {b, c, d, ERROR}.
The intent of the unstructured element in the grammar varies with the shape of the parse tree. If the current non-terminal is B then any token in the set {c, ERROR} will derive the unstructured element in D, but if the non-terminal is C then any token in the set {b, ERROR} will derive the unstructured element. The thing to note is that this can be predicted at the time that the grammar is analyzed.
The above grammar can be transformed into the grammar:
    A  → bB | cC
    B  → b  | D
    C  → c  | D
    D  → d  | Ut
    Ut → Us (Ub)*
    Us → b | c | ERROR
    Ub → b | c | d | ERROR
This grammar has the same token set as the previous grammar. The only difference is that three new productions are introduced to represent the structure of the incomplete element. The first production gives the conceptual structure of the incomplete element. The second production represents tokens that can occur first in the unstructured element, and the third production represents what may follow the first element as the body of the unstructured element. Notice that Us contains any token that is not otherwise in the first set of D. This causes the grammar to be ambiguous because the token b is in the first set of both alternatives of non-terminal B and the token c is in the first set of both alternatives of non-terminal C. The key to the transformation method is to resolve the conflict in each case in favor of the alternative that does not derive the unstructured element.
With this method of resolving the parsing ambiguity, the transformed grammar recognizes exactly the same language as the untransformed grammar.
The above example illustrates the spirit of the transformation method on a very simple grammar. The remainder of this section will show that the method can be applied to any incomplete grammar of the form described by Orailoglu [Ora83].
For parse table calculations each unstructured non-terminal is recognized as a separate production but is treated somewhat differently when checking the LL(1) grammar restrictions. Although they are technically different elements, unstructured elements must satisfy some restrictions as if they were the same terminal.
Two distinct unstructured elements cannot both occur in the first set of a production or in the follow set of a production. There are also restrictions to avoid ambiguity. An incomplete element cannot be followed by another incomplete element, and incomplete elements can neither start nor end preferred productions. If a token is both in the first set of a preferred production and the follow set of an unstructured element then the conflict will be resolved in favor of the follow set. No token may appear in the first set of more than one preferred production because this would cause a grammar ambiguity.
An unstructured element may be legally derived at run-time if all of the following conditions apply:

• The current parsing position is a non-terminal that can derive the unstructured element in the grammar.

• The current parse queue element is a token that is not in the select set of the current non-terminal.

• The current non-terminal is not nullable with the input token in its follow set. (This slight variation from Orailoglu's implementation is introduced to give a more consistent treatment of unstructured elements.)
If all of the above conditions apply then the tree is expanded to derive the unstructured element and the algorithm enters unstructured parsing mode. While in unstructured mode the parser accepts any token as part of the incomplete element until it receives a member of the follow set of the incomplete element or a member of the first set of a preferred production. If a member of the follow set is encountered then the incomplete element is closed. If a member of the first set of a preferred production is encountered then the preferred production is instantiated and parsed normally, and unstructured parsing is resumed when it completes.
The transformation approach will be to replace each unstructured element U by a non-terminal Ut which is the left-hand-side of a production rule of the form

    Ut → Us (Ub)*

where Us derives the tokens and preferred non-terminals that may start the unstructured element, and Ub derives the set of tokens and non-terminals that may be in the body of the unstructured element.
The production rule for Ub is the easier of the two to calculate. The first step is to calculate the follow set of U in the normal manner. This calculation is already performed by the existing algorithms. This tells what not to include in the token set derivable from Ub. Let the set of preferred non-terminals be denoted P = p1, ..., pn and let

    F = T − follow(U) − first(p1) − ... − first(pn)

Then the production rule for Ub is

    Ub → t1 | ... | tj | p1 | ... | pn

where t1, ..., tj are the elements of F.
This production correctly parses the internal part of the incomplete element because it derives all the preferred productions and all tokens not in the first set of a preferred production or in its follow set. If there is a conflict between the first set of some pi and the follow set of U then, as before, the conflict is resolved in favor of the follow set.
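The computation of F can be sketched in a few lines; in the Python fragment below the token set and the first/follow sets are assumed to be precomputed, since the existing grammar-analysis algorithms already provide them:

    def body_token_set(tokens, follow_U, preferred_firsts):
        """F = T - follow(U) - first(p1) - ... - first(pn)."""
        F = set(tokens) - set(follow_U)
        for first_p in preferred_firsts:
            F -= set(first_p)
        return F   # Ub derives every token in F plus every preferred non-terminal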
The calculation of how unstructured elements can be derived involves not only the calculation of the production rule for Us but also the rules for resolving conflicts that arise in the select tables of the grammar. An unstructured element occurs in the right-hand-side of a production of the form

    A → w U x | rhs2 | ... | rhsn

where w and x may each be empty and where n ≥ 1.
Thus the simplest production rule containing an unstructured element is of the form

    A → U

The first step in calculating a production rule for Us is determining whether w is nullable. Let F be the set of tokens that can occur in the first set of U. If w is not nullable then set F to the entire token set; any parsing conflicts with w will be resolved in the parse table construction phase. If w is nullable then F must be calculated so that it does not cause a parsing conflict with w or with any other right-hand-side of the production rule. Thus, the lead-in to U can be

    F = T − first(w) − first(rhs2) − ... − first(rhsn)

The set F is the select set of U for parsing purposes. This will keep members of the first set of a preferred production that are not in F from interfering with the calculation of the select table. The set of tokens that can lead directly to U is then

    F − first(p1) − ... − first(pn) = t1, ..., tj

and the production rule for Us is

    Us → t1 | ... | tj | p1 | ... | pn

where some of the pi may not be derivable because no member of their first set is a member of F. This is allowable because the first set of U has already been calculated.
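The lead-in set admits the same kind of sketch; as before, the first sets are assumed precomputed and the function name is illustrative:

    def lead_in_set(tokens, first_w, other_rhs_firsts):
        """F = T - first(w) - first(rhs2) - ... - first(rhsn), for nullable w."""
        F = set(tokens) - set(first_w)
        for first_rhs in other_rhs_firsts:
            F -= set(first_rhs)
        return F   # the select set of U; Us derives F minus the preferred firsts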
Using F as the first set for Us guarantees that the production Ut will not cause a parsing conflict with the first sets of the right-hand-sides of the production in which it occurs, but it may still cause a conflict in productions that can derive A. The key to the transformation method is to always resolve the ambiguity against the alternative that derives the unstructured element. The first step of this is to calculate the select table and follow sets in the usual manner, using the designated first sets for the transformed elements. Next comes the grammar validity check.
If there is a first-first conflict in the grammar then check to see if one of the alternatives derives a transformed unstructured element. If so, resolve the conflict by selecting the other alternative. If there is a first-follow conflict caused by the first set of an unstructured element in the follow set, remove the conflicting token from the first set of the following non-terminal that derives the unstructured element. If there is a first-follow conflict caused by an unstructured element in the first set of a non-terminal, then remove the token from the first set of the non-terminal that derives the unstructured element. The first-first conflicts should be resolved before the first-follow conflicts so that the problem of multiple conflicts does not arise. Note that none of these conflicts occur in the parse table construction for a parser that treats incomplete grammars specially, because the unstructured elements are treated essentially as distinguished unique tokens in the grammar analysis.
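The resolution rule itself is simple to state in code; the Select mapping below is an assumed representation of the parse table, not the editor's actual data structure:

    def resolve_first_first(select, nonterminal, token, alt_a, alt_b, derives_ut):
        """Always keep the alternative that does not derive the transformed
        unstructured element Ut."""
        select[(nonterminal, token)] = alt_b if derives_ut(alt_a) else alt_a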
The purpose of the above conflict resolution strategy is to make, when the parse tables are built, the decisions that the parser would make at run-time in a parser for incomplete grammars. To see that this is true, first consider the production Us in the case where w in the grammar above is non-nullable. In an unstructured parser the incomplete element will be encountered and instantiated when w completes, i.e., when the parser encounters a legal follow of w. This is exactly what happens in the transformed grammar.
Suppose that w is nullable. Then the unstructured element can be derived directly by A and indirectly by productions that derive A. Assume that the current non-terminal is A. The unstructured element will be directly derived if the current token is not in the first set of w or the first set of any other right-hand-side of A, and if A is not nullable with the current token in the follow set. The same action is taken in the transformed grammar because Ut does not have any members of the first set of w or of the other right-hand-sides in its first set.
Now assume that the current non-terminal is not A but one that can derive A. In the unstructured parser, the unstructured element in A can be derived if the current token is not in the first set of the current non-terminal and if the current non-terminal is not nullable with the token in its follow set. These are exactly the conditions under which Ut can be derived in the transformed grammar. Tokens that would not derive the unstructured element above will not do so in the transformed grammar because of the manner in which parsing conflicts are resolved in the select table. The tokens that are left are those that do not cause conflicts, and they derive the unstructured element.
The last point to establish is the validity of the grammar model in which the incomplete element was introduced. The model is valid because only one unstructured element needs to be concentrated on at a time. This is true because:

• A non-terminal cannot have two separate unstructured elements in its first set.

• An unstructured element cannot have an unstructured element in its follow set.

• A preferred production cannot start or end with an unstructured element.
It has been shown that an incomplete grammar may be transformed into an equivalent complete grammar. Is there any advantage in doing so? The grammar transformation introduces new productions and thus causes the parsing tables to increase in size. This will in turn cause the run-time parse tree data structures to grow in size. The transformed grammar will introduce approximately one extra parse tree node for each token that is parsed as part of an unstructured element. The transformation process also significantly increases the complexity of the grammar analysis process. The real advantage of the algorithm is that it allows the incomplete grammar to be parsed by a conventional LL(1) parser. This is an advantage because it makes the grammars more easily adapted to other parsers and because it reduces the complexity of the parsing algorithm.
PREVIOUS WORK
Syntax-directed editors such as the Cornell Synthesizer [RT84, TR81] allow phrases to be entered as text below some level in the syntax. Textual input is parsed by a stand-alone bottom-up parser that begins with the non-terminal represented by the current placeholder. The parsed text must be able to be grafted onto the parse tree as a complete, correct subtree.
Carlo Ghezzi and Dino Mandrioli have developed a bottom-up parsing algorithm which is based on the use of grammars that are both LR and RL [GM79b, GM79a]. The authors have also published an algorithm that is more complex but operates on a more general class of LR grammars [GM80]. The BABEL editor [Hor81] is based on the Ghezzi and Mandrioli symmetric algorithm. Programs are not permitted to be incomplete, and it is not possible to place unexpanded placeholders in the tree. Kirslis [CK84, Kir85] has extended the Ghezzi and Mandrioli LR(0) algorithm to LR(1), has modified the parsing algorithm to handle comments, and has introduced explicit error handling routines.
An editor dubbed SRE, for Syntax Recognizing Editor, has been developed at the University of Toronto [BHZ85]. This editor provides flexible error handling by dividing the parser function into two levels. A low-level parser guarantees that the user's program consists of a sequence of syntactically correct lines. A high-level parser guarantees that the syntactically legal lines form a syntactically legal program. Only low-level syntactic correctness is enforced while text is being entered. Syntax errors within lines are pointed out immediately and the user is forced to correct them before proceeding. Syntax errors between lines are only pointed out when the user requests a high-level parse. Morris and Schwartz [MS81] published an LL(1) parsing algorithm that maintains a sequence of syntactically correct parse trees.
Orailoglu implemented an LL(1) incremental parsing algorithm as part of the restructuring programmable display editor (RPDE, now called Fred) at the University of Illinois [Ora83, Shi85]. The algorithm maintains a single parse tree but allows multiple errors with unrestricted parsing by invoking a simple context (and history) sensitive error recovery algorithm. The key disadvantage of the algorithm is that it lacks an effective means of limiting parsing and tends to parse forward too far, recovering from errors along the way, when changes are made to the internal structure of a program. Orailoglu [Ora83] provided the original implementation of incomplete grammars.
CONCLUSION
This paper presents an incremental LL(1) parsing algorithm that is suitable for use in language-based editors and that has been implemented in Fred, a structured, screen-based editor. A keystroke-intensive mode of user interaction motivates the follow-the-cursor style of parsing, in which parsing is normally halted at the cursor, leaving suspensions in the parse tree that are indicated to the user as soft templates. Algorithms for tree preparation, incremental parsing, and error recovery are presented. The algorithms implement a style of user interaction that is both efficient and convenient. It is efficient because the editor only needs to perform limited parsing after changes. It is convenient because the user is able to enjoy the benefit of structuralization while retaining complete freedom of program entry.
Figure 2: The Sweep algorithm.
Figure 3: Conflict Resolution Algorithm

    ResolveConflict(ParsePosition, ParseQueue)
        while (...) do
            DecisionNode = FindDecision(ParsePosition, RoyalNode, QueueElement);
            ListNode = FindList(DecisionNode, RoyalNode);
            NullableNode = FindNullable(ParsePosition, RoyalNode, QueueElement);
            if (ListNode != NULL) then
                ParsePosition = GraftNewList(ListNode, ParsePosition);
            elseif (DecisionNode != NULL) then
                ParsePosition = StealProduction(DecisionNode, ParsePosition);
            elseif (NullableNode != NULL) then
                ParsePosition = StealProduction(NullableNode, ParsePosition);
            elseif (IsNonterm(QueueElement)) then
                Reduce(ParseQueue);
            else
                GraftError(ParsePosition);
            endif
        endwhile

Figure 4: Inner Parsing Algorithm

    InnerParse(ParsePosition, ParseQueue)
        while ((NOT Empty(ParseQueue)) AND (NOT Satisfied(ParsePosition))) do
            QueueElement = Front(ParseQueue);
            if (QueueElement matches ParsePosition) then
                Graft(QueueElement, ParsePosition);
                Pop(ParseQueue);
                Advance(ParsePosition);
            elseif (Select[ParsePosition, QueueElement] != ERROR) then
                Instantiate(ParsePosition, Select[ParsePosition, QueueElement]);
                Advance(ParsePosition);
            elseif (QueueElement not a terminal) then
                Reduce(ParseQueue);
            elseif (Nullable(ParsePosition) AND Follows(ParsePosition, QueueElement)) then
                NullProduction(ParsePosition);
                Advance(ParsePosition);
            else
                ResolveConflict(ParsePosition, ParseQueue);
            endif
        endwhile

Figure 5: Outer Parse for Follow-the-Cursor Parsing

    OuterParse
        while (NOT Empty(ParseQueue)) do
            InnerParse(ParsePosition, ParseQueue);
            if ((Satisfied(ParsePosition)) AND (NOT Empty(ParseQueue))) then
                ResolveConflict(ParsePosition);
            endif
        endwhile
        ErrorRecovery();
Figure 6: Error Recovery.
Figure 1: Change-Update Loop

    while (TRUE)
        <user change>
        <retokenization>
        <preparation of Parse Tree (Sweep)>
        <incremental parse>
        <semantic update>
Incomplete LL(1) grammars are presented as a way of dealing with the complexity of full language grammars and as a mechanism for providing structured editor support for task languages that are only partially structured. Orailoglu devised specialized algorithms for parsing based on incomplete grammars. This work shows how the grammars can be translated into conventional LL(1) grammars, eliminating the need for specialized parsing algorithms.

REFERENCES
Frank J. Budinski, Richard C. Holt, and Safwat B. Zaky. SRE - a syntax recognizing editor. Software-Practice and Experience, 15(5):489-497, May 1985.
Roy H. Campbell and Peter A. Kirslis. The SAGA project: A system for software development. In Peter Henderson, editor, Proceedings of the ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, April 1984. (Released as ACM Software Engineering Notes 9(3) and ACM SIGPLAN Notices 19(5).)

[GM79a] C. Ghezzi and D. Mandrioli. |
220,837,074 | [] | Textual Style Transfer using Weakly Supervised Learning
Damien Sileo damien.sileo@synapse-fr.com
Synapse Développement
5 Rue du Moulin Bayard, 31000 Toulouse

Camille Pradel camille.pradel@synapse-fr.com
Synapse Développement
5 Rue du Moulin Bayard, 31000 Toulouse

Philippe Muller philippe.muller@irit.fr
Synapse Développement
5 Rue du Moulin Bayard, 31000 Toulouse

Tim Van de Cruys tim.van-de-cruys@irit.fr
Synapse Développement
5 Rue du Moulin Bayard, 31000 Toulouse

Textual Style Transfer using Weakly Supervised Learning

(2) IRIT, Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse. (*) Equal contributions.
ABSTRACT

Several natural language processing tasks, such as sentence paraphrasing, compression, or simplification, consist of sentence modifications that aim to preserve the global sentence meaning. Most existing methods rely on specific data and models tuned towards a particular task. We introduce a general method that is capable of tackling those problems using simpler data: a set of sentences paired with their stylistic features, such as their lengths for compression. The method relies on unsupervised representation learning with a variational auto-encoder, and then changing the input text representation to match a given style. The method is evaluated both qualitatively and quantitatively on Microsoft's dataset for sentence compression, with encouraging results.

KEYWORDS: variational auto-encoder, weakly supervised learning, style transfer.
Introduction
Text generation is a central task for the interaction between an intelligent system and its users (conversational agent responses, text summarization, article generation, ...). During this interaction, it is desirable to control the generated text so that it respects constraints imposed by the context. One may thus want to act on the length of a generated sentence, its register, its politeness, its polarity, and other characteristics that can be decorrelated, at least in part, from semantics. We group them under the umbrella term of "style". Transforming texts to modify their style and address this problem is an active research area, notably using dedicated models and data (Pitler, 2010) (Shardlow, 2014) (Xu et al., 2012). The data used are aligned pairs of sentences in the original style and the "target" style.
Take sentence compression as an example. One may want to go automatically from the source sentence They also, by law, have to be held in Beirut to a compressed, so-called target sentence: They have to be held in Beirut. However, data in the form of such aligned pairs are sometimes too scarce to learn the task directly, and creating them is costly. We place ourselves here in a setting that both unifies style-transfer problems and requires weaker supervision. Instead of aligned sentences in two different styles, the proposed method makes do with a set of sentences and indicators of their style. These indicators may come from the sentences' origin (for instance the year of writing) or be computed (such as sentence length). Our model uses an indicator as a signal to modify the generations so that they match a desired value. To this end, we first introduce neural models of sentence generation (section 2), then propose a style-transfer method (section 3).
Conditioned generative models
We first present the models capable of learning a sentence representation that will allow modifications preserving certain properties: recurrent variational auto-encoders.
Recurrent auto-encoders
Language models define a probability distribution over sequences of words x = w_1 ... w_n.
    p(x) = p(w_1 ... w_n) = ∏_{i=1}^{n} p(w_i | w_1 ... w_{i−1})    (1)
Being able to predict the probability of a word given what precedes it, such a language model is a generative model (Bengio et al., 2003).
In order to generate sentences that correspond precisely to a message, we assume that p is conditioned on a latent representation z ∈ R^d of the sentence (d is the dimension of the latent space).
    p(x|z) = ∏_{i=1}^{n} p(w_i | w_1 ... w_{i−1}, z)    (2)
z is meant to capture the semantic and stylistic dimensions of the sentence.
Here p(x|z) is parameterized by a recurrent neural network conditioned on z, and z is obtained by encoding the sentence as z = q(x), where q is another recurrent neural network. This is the seq2seq architecture (Sutskever et al., 2014), made up of this encoder and a decoder. Thus, in the auto-encoder paradigm, a generative model is learned by minimizing the reconstruction error on the data. The encoder turns the sentence into a signal that helps the decoder restore it.

Variational auto-encoders

We now take z to be a random variable z ∼ N(0, I), here a multivariate Gaussian. The encoder then learns to condition this variable by estimating the posterior P(z|x) with q(z|x), which makes it possible to determine a sample to present to the decoder. Concretely, the encoder here predicts, for each x, the means µ of z as well as a covariance matrix Σ, which define the sample. Figure 1 gives an overview of the system. Always setting Σ = 0 brings us back to the case of section 2.1.
We then have q(z|x) = N(z − µ, Σ). The decoder p estimates the probability p(x|z) of the sentence from a sample of z.
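As an illustration, the sample of z can be drawn with the usual reparameterization trick; the sketch below assumes PyTorch and a diagonal covariance, and is not the authors' code:

    import torch

    def sample_z(mu, logvar):
        # z = mu + sigma * eps, eps ~ N(0, I): differentiable w.r.t. mu and sigma
        sigma = torch.exp(0.5 * logvar)
        return mu + sigma * torch.randn_like(sigma)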
In this setting, the likelihood of the data cannot be computed directly, but the following lower bound exists:
    L(x, Θ) = E_{q(z|x)}[log p(x|z)] − KL(q(z|x) || p(z))    (3)

where Θ denotes the parameters of p and q.
KL denotes the Kullback-Leibler divergence, which measures the gap between two probability distributions. Maximizing L maximizes the likelihood of the data, and its two terms are interpretable: the first corresponds to the reconstruction term and the second to a regularization that pulls the posterior towards the prior. This is the variational auto-encoder (VAE) model. Optimization is possible by gradient descent.
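For concreteness, the bound can be computed as follows; this sketch assumes PyTorch, a diagonal Gaussian posterior, and a single sample of z for the reconstruction term, and is not the authors' code:

    import torch

    def elbo(recon_log_prob, mu, logvar):
        # Analytic KL( N(mu, diag(exp(logvar))) || N(0, I) )
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return recon_log_prob - kl   # maximize this lower bound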
Style transfer procedure
We now introduce a variable y corresponding to a style value. We assume a graphical model in which z conditions both x and y, as shown in figure 2. Removing y brings us back to the classical VAE.
The variable y can be the number of words in a sentence, its sentiment, its verb tense, etc. To find z, we now optimize P(z|y, x), the probability of the representation z given a text x and the style y we want to impose on it. By Bayes' rule, conditioning on x, we have:

    P(z|y, x) ∝ P(y|z, x) P(z|x)    (4)

Since y depends only on z once z is known, we have P(y|z, x) = P(y|z). Suppose that y depends linearly on µ and that the linear model θµ deviates from y with variance ε². We then approximate P(y|z) by r(y|z) = N(y − θµ, ε²), where µ is the mean of z. Moreover, q(z|x) = N(z − µ, Σ), described in section 2.2, estimates P(z|x). Hence, approximately:

    P(z|y, x) ∝ r(y|z) q(z|x)    (5)
It then suffices to estimate r and q to find the desired representation z corresponding to both x and y, by maximizing P(z|y, x). We first estimate q and then r on training data, following the procedure presented in the next section.
Training
We take as a starting point a set of texts U, a set S of texts for which y_s, s ∈ S, is defined, and another set T of texts to be modified according to y_t, t ∈ T. These sets may overlap. We train a variational auto-encoder to learn q and p by gradient descent on the texts x_u, u ∈ U. We then extract the latent representations z_s of the texts s ∈ S through the means µ_s and covariances Σ_s. To determine r, we train a linear model θ to predict y_s from µ_s by maximizing the likelihood ∏_{i∈S} N(y_i − θµ_i, ε²), holding out part of S for validation. The estimate of y_i by θµ_i exhibits an empirical variance ε² estimated on the validation data, and θ is then kept fixed.
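A possible implementation of this step, sketched with numpy (array names are illustrative; the paper does not specify the exact fitting code):

    import numpy as np

    def fit_style_regressor(mus, ys, val_frac=0.1):
        """Least-squares fit of theta on the means, then the empirical
        variance eps^2 of the residuals on held-out data."""
        split = int(len(ys) * (1 - val_frac))
        theta, *_ = np.linalg.lstsq(mus[:split], ys[:split], rcond=None)
        resid = mus[split:] @ theta - ys[split:]
        return theta, resid.var()   # theta is then kept fixed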
Inference
To change the style of the texts t ∈ T so that they match y_t, we seek to maximize the likelihood of their representations z_t:
    L = ∏_{i∈T} N(y_i | θz_i, ε²) N(z_i − µ_i, Σ_i)    (6)
The style-transfer cost function is then the following (the negative log-likelihood, up to constants):

    C(z, y, µ, Σ) = ∑_{i∈T} ( (y_i − θz_i)² + ε² (z_i − µ_i)ᵀ Σ_i⁻¹ (z_i − µ_i) )    (7)

We recover a Tikhonov regularization, with coefficients dictated by the VAE. Intuitively, z_i moves away from the original representation µ_i so that it has a better chance of matching a style y_i; this deviation is penalized more heavily on the dimensions where the predicted variance is low, which matter more for preserving the meaning of the sentence. The representation z̃ of a text that minimizes C has thus undergone a style change, and decoding z̃ into x̃ with p returns a sentence matching the style y.
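The minimization of C for a single text can be sketched with plain gradient descent (numpy; the paper does not specify the optimizer, and a closed-form solution also exists since C is quadratic in z):

    import numpy as np

    def shift_representation(mu, Sigma_inv, theta, y, eps2, lr=0.05, steps=500):
        """Trade off matching the target style y against staying close to mu."""
        z = mu.copy()
        for _ in range(steps):
            grad = -2.0 * (y - theta @ z) * theta \
                   + 2.0 * eps2 * (Sigma_inv @ (z - mu))
            z -= lr * grad
        return z   # decode z with p to obtain the restyled sentence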
Unsupervised training uses 22M English posts from the social network Reddit that are between 4 and 40 symbols long and contain no hyperlinks.
Configuration
We use a 32k-word vocabulary and the Treebank tokenizer (Bird et al., 2009); [unk] denotes out-of-vocabulary words. The recurrent networks q and p are Recurrent Highway Networks (Zilly et al., 2016) of dimension 600 and depth 5. Word representations are initialized with ConceptNet Numberbatch (Speer & Chin, 2016) and are of dimension 300. z has dimension d = 64. The input and output word representations are shared, with an affine projection used on the last layer of the network (Press & Wolf, 2017). Optimization is carried out with the Adam algorithm (Kingma & Ba, 2015), a learning rate of 10⁻⁴, and batches of size 128, for 2 epochs. We use word dropout (Bowman et al., 2016) and a minimum information constraint of 4 on the KL divergence (Kingma et al., 2016). In the qualitative analysis the decoder generates 2000 sentences per example, of which the two best are selected according to the likelihood obtained. In the quantitative analysis, since the examples are far more numerous, only 50 sentences are generated for each example and the best one is kept.

We evaluate on a sentence compression task with the Microsoft dataset (Toutanova & Brockett, 2016), composed of source sentences aligned with sentences shortened by humans. We keep the 3423 sentences whose target has fewer than 16 words, and source sentences with fewer than 40 words, corresponding to x_t, t ∈ T.

We take for y the number of words per sentence, and learn θ from a normalized version of y and 800k examples, 10% of which are used for validation and for determining ε.
Results
Table 1 shows sentences x chosen as inputs, a length y that we want to impose on them, and, for each, two sentences generated by the procedure of section 3.1 from these two inputs. The generations globally preserve the meaning of the source sentence while bringing the number of words closer to the objective. To evaluate compression, we generate sentences from the source sentences and truncate them so that they have the same length as the target sentence of the dataset. We compare the generation produced by our model, then truncated, with the ground-truth compression. A word-level recall measure estimates whether the information of the source sentence could be preserved. This metric is not optimal for evaluating the system (Toutanova & Brockett, 2016) but it is simple and interpretable. The baseline z = 0 corresponds to sentence generation that is not conditioned on the source sentence. The case z = µ corresponds to generation conditioned on the source sentence x. Finally, the case z = z̃ corresponds to generation conditioned on x and on the length y of the target sentence, minus the standard deviation of y in order to compress further. The results presented in Table 2 show that the model shortens sentences while preserving words present in the target sentences, which are therefore a priori important. The results are of course weaker than those of models dedicated to the task, but our approach is very general and is evaluated on data different from those used to train the auto-encoder.

TABLE 2 - Compression results and confidence intervals

             recall         generation length
    z = 0    5.26 ± 0.16    10.91 ± 0.015
    z = µ    6.65 ± 0.15    11.14 ± 0.029
    z = z̃    6.88 ± 0.15    11.09 ± 0.029
Related work
This work belongs to the field of textual style transfer (Shardlow, 2014; Xu et al., 2012), but takes a radically different, more general and more flexible approach. Analogies can be drawn with style-transfer methods applied to image processing (Gatys et al., 2015), where the textures of an image are brought closer to desired ones.
Conditioned VAEs have been used in other work. (Kingma et al., 2014) consider, in their M2 model, that y is a latent variable concatenated to z, while z is used to predict y in the M1 model, but without a probabilistic interpretation and with semi-supervised learning in mind.
The model closest to ours is (Hu et al., 2017), recently pre-published, which tackles the same problem with a less decoupled procedure (taking a new indicator into account is much more costly) and by attaching the style to the latent variable z. Their evaluation does not address an established task such as compression, but rather the polarity change of movie reviews. Finally, (Suzuki et al., 2016) use a comparable graphical model for multimodal learning, a setting also explored by (Sohn et al., 2015). Our approach is the only one applicable to text that allows new style changes to be added so simply.
FIGURE 2 - Proposed graphical model
TABLE 1 - Examples of style transfer: compression and lengthening.

    source sentence (x)              y    generated sentences (x̃)
    yes, i totally agree with you    1    yeah , i know .
                                     1    agreed , it is .
    i don't know                     18   i do n't think it counts .
                                     18   i do n't know they did .
    the sky is blue                  4    the [unk] is [unk]
                                     4    [unk] [unk] [unk] [unk]
We have introduced a weakly supervised framework for style change that unifies several tasks, and shown evidence of its feasibility. The approach is very general and can help understand latent representations. Concrete applications can now be envisaged with better unsupervised learning models, or by using heuristics to select the best sentences generated by the model.
References

BENGIO Y., DUCHARME R., VINCENT P. & JANVIN C. (2003). A Neural Probabilistic Language Model. The Journal of Machine Learning Research, 3, 1137-1155.
BIRD S., KLEIN E. & LOPER E. (2009). Natural Language Processing with Python. O'Reilly Media, Inc.
BOWMAN S. R., VILNIS L., VINYALS O., DAI A. M., JOZEFOWICZ R. & BENGIO S. (2016). Generating sentences from a continuous space. In Proceedings of the Twentieth Conference on Computational Natural Language Learning, p. 10-21, Berlin, Germany: Association for Computational Linguistics.
GATYS L. A., ECKER A. S. & BETHGE M. (2015). A Neural Algorithm of Artistic Style. arXiv preprint, p. 1-16.
HU Z., YANG Z., LIANG X., SALAKHUTDINOV R. & XING E. P. (2017). Controllable Text Generation. arXiv preprint, p. 1-10.
KINGMA D. P. & BA J. (2015). Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations.
KINGMA D. P., REZENDE D. J., SHAKIR M. & WELLING M. (2014). Semi-supervised Learning with Deep Generative Models. In Advances in Neural Information Processing Systems 27, p. 3581-3589.
KINGMA D. P., SALIMANS T., JOZEFOWICZ R., CHEN X., SUTSKEVER I. & WELLING M. (2016). Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems 29, p. 4743-4751.
KINGMA D. P. & WELLING M. (2014). Auto-Encoding Variational Bayes. In International Conference on Learning Representations.
PITLER E. (2010). Methods for Sentence Compression. Technical report, University of Pennsylvania.
PRESS O. & WOLF L. (2017). Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, p. 157-163, Valencia, Spain: Association for Computational Linguistics.
SHARDLOW M. (2014). A Survey of Automated Text Simplification. International Journal of Advanced Computer Science and Applications, (Special Issue on Natural Language Processing), 58-70.
SOHN K., LEE H. & YAN X. (2015). Learning Structured Output Representation using Deep Conditional Generative Models. In Advances in Neural Information Processing Systems, p. 3483-3491.
SPEER R. & CHIN J. (2016). An Ensemble Method to Produce High-Quality Word Embeddings. arXiv preprint, p. 1-12.
SUTSKEVER I., VINYALS O. & LE Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, p. 3104-3112.
SUZUKI M., NAKAYAMA K. & MATSUO Y. (2016). Joint Multimodal Learning With Deep Generative Models. arXiv preprint, p. 1-12.
TOUTANOVA K. & BROCKETT C. (2016). A Dataset and Evaluation Metrics for Abstractive Compression of Sentences and Short Paragraphs. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16), p. 340-350.
XU W., RITTER A., DOLAN B., GRISHMAN R. & CHERRY C. (2012). Paraphrasing for style. In Proceedings of COLING 2012, p. 2899-2914, Mumbai, India: The COLING 2012 Organizing Committee.
ZHANG B. & XIONG D. (2016). Variational Neural Machine Translation. p. 521-530.
ZILLY J. G., SRIVASTAVA R. K., KOUTNÍK J. & SCHMIDHUBER J. (2016). Recurrent Highway Networks. arXiv preprint, p. 1-12.
||
252,819,332 | MANTIS at SMM4H'2022: Pre-Trained Language Models Meet a Suite of Psycholinguistic Features for the Detection of Self-Reported Chronic Stress | This paper describes our submission to the Social Media Mining for Health (SMM4H) 2022 Shared Task 8, aimed at detecting selfreported chronic stress on Twitter. Our approach leverages a pre-trained transformer model (RoBERTa) in combination with a Bidirectional Long Short-Term Memory (BiLSTM) network trained on a diverse set of psycholinguistic features. We handle the class imbalance issue in the training dataset by augmenting it by another dataset used for stress classification in social media. | [
207870937,
14068874
] | MANTIS at SMM4H'2022: Pre-Trained Language Models Meet a Suite of Psycholinguistic Features for the Detection of Self-Reported Chronic Stress
MANTIS at SMM4H'2022: Pre-Trained Language Models Meet a Suite of Psycholinguistic Features for the Detection of Self-Reported Chronic Stress

Sourabh Zanwar sourabh.zanwar@rwth-aachen.de (RWTH Aachen University)
Daniel Wiechmann d.wiechmann@uva.nl (University of Amsterdam)
Yu Qiao yu.qiao@rwth-aachen.de (RWTH Aachen University)
Elma Kerz elma.kerz@ifaar.rwth-aachen.de (RWTH Aachen University)

Proceedings of the 29th International Conference on Computational Linguistics, October 12-17, 2022
This paper describes our submission to the Social Media Mining for Health (SMM4H) 2022 Shared Task 8, aimed at detecting selfreported chronic stress on Twitter. Our approach leverages a pre-trained transformer model (RoBERTa) in combination with a Bidirectional Long Short-Term Memory (BiLSTM) network trained on a diverse set of psycholinguistic features. We handle the class imbalance issue in the training dataset by augmenting it by another dataset used for stress classification in social media.
Introduction
The global increase in social media use over the past decade has afforded researchers new opportunities to mine health-related information that can ultimately be used to improve public health. The Social Media Mining for Health Applications (SMM4H) Shared Task involved ten natural language processing challenges of using social media data for health research (Weissenbacher et al., 2022). In our submission to the task targeting the classification of self-reported chronic stress on Twitter (Task 8), we built hybrid models that combine pre-trained transformer language models with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a diverse set of psycholinguistic features.
Data
The Twitter data provided by the organizers of Task 8 comprised a total of 4,195 tweets, whose distributions over the training, development and testing sets are shown in Table 3 of the supplementary material (see Appendix A). About 37% of the tweets are positive (self-disclosure of chronic stress, Pos) and 63% are negative (non-self-disclosure of chronic stress, Neg). The only preprocessing step applied was the removal of HTML and links from the text. To address the class imbalance in the data, we augmented it with 1,000 positively labeled and 200 negatively labeled items from the Dreaddit dataset (Turcan and McKeown, 2019).
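A minimal sketch of this augmentation step (file and column names below are assumptions, not the released pipeline):

```python
import pandas as pd

smm4h = pd.read_csv("smm4h_task8_train.csv")   # columns assumed: text, label
dreaddit = pd.read_csv("dreaddit_train.csv")   # same assumed schema

extra_pos = dreaddit[dreaddit["label"] == 1].sample(n=1000, random_state=0)
extra_neg = dreaddit[dreaddit["label"] == 0].sample(n=200, random_state=0)
train = pd.concat([smm4h, extra_pos, extra_neg], ignore_index=True)
```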
Measurement of Psycholinguistic Features
A set of 435 psycholinguistic features used in our approach falls into the following four categories: (1) features of morpho-syntactic complexity (N=19), (2) features of lexical richness, diversity and sophistication (N=77), (3) readability features (N=14), and (4) lexicon features designed to detect sentiment, emotion and/or affect (N=325). Measurements of these features were obtained using an automated text analysis system (for its recent applications, see e.g. Wiechmann et al. (2022) for predicting eye-movement patterns during reading and Kerz et al. (2022) for the detection of Big Five personality traits and Myers-Briggs types). Tokenization, sentence splitting, part-of-speech tagging, lemmatization and syntactic PCFG parsing were performed using Stanford CoreNLP (Manning et al., 2014).
Description of System Architecture
We conducted experiments with a total of five models: (1) a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2018), (2) a fine-tuned RoBERTa model (Robustly Optimized BERT pre-training Approach) (Liu et al., 2019), (3) a bidirectional LSTM (BiLSTM) classifier trained on measurements of the psycholinguistic features described in Section 2.2, and (4)-(5) two hybrid models integrating BERT and RoBERTa predictions with the psycholinguistic features. For each model, we performed experiments with and without data augmentation.
For (1) and (2) we used the pretrained 'bert-base-uncased' and 'roberta-base' models from the Huggingface Transformers library (Wolf et al., 2020), each with an intermediate BiLSTM layer with 256 hidden units (Al-Omari et al., 2020). For (3), the model based solely on psycholinguistic features, we constructed a 2-layer BiLSTM with a hidden state dimension of 32. The input to that model is a sequence $CM_1^N = (CM_1, CM_2, \ldots, CM_N)$, where $CM_i$, the output of CoCoGen for the $i$-th sentence of a document, is a 435-dimensional vector and $N$ is the sequence length. To predict the labels of a sequence, we concatenate the last hidden states of the last layer in the forward ($\overrightarrow{h_n}$) and backward ($\overleftarrow{h_n}$) directions. The resulting vector $h_n = [\overrightarrow{h_n} \,|\, \overleftarrow{h_n}]$ is then transformed through a 2-layer feedforward neural network with the Rectified Linear Unit (ReLU) as activation function. The output of this network is passed to a dense fully connected (FC) layer with a dropout of 0.2 and finally to a terminal fully connected layer; the output is a $K$-dimensional vector, where $K$ is the number of class labels. The architecture of the hybrid classification models, models (4) and (5), consists of two parts: (i) a pre-trained Transformer-based model with a BiLSTM layer and FC layer on top of it, and (ii) the psycholinguistic features of the text fed into a BiLSTM layer and an FC layer. We concatenate the outputs of these layers before passing them into a final FC layer with a sigmoid activation function. Since the evaluation is based on the sensitivity of the system, i.e., the classification of positive labels is more important here, we reduced the threshold for positive labels from 0.5 to 0.3. The model used to generate predictions for the test set was the RoBERTa-PsyLing hybrid model with the following configuration: BiLSTM-PsyLing with 2 layers, a hidden size of 512 and a dropout of 0.2. We trained this model for 12 epochs, keeping the model with the best performance on the development set. The optimizer used is AdamW with a learning rate of 2e-5 and a weight decay of 1e-4. We trained 5 models on 5 random 80% splits of the training data and superimposed a meta-learner to obtain the final predictions.
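To make the hybrid architecture concrete, below is a simplified PyTorch sketch. Layer sizes follow the text where stated (256 BiLSTM units over RoBERTa token states; a 2-layer, 512-unit BiLSTM with dropout 0.2 over the 435-dimensional feature sequences; a sigmoid output scored against the lowered 0.3 threshold); the FC dimensions and pooling details are assumptions, not the released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridStressClassifier(nn.Module):
    """Sketch of models (4)/(5): a transformer branch plus a feature branch."""
    def __init__(self, psyling_dim=435):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("roberta-base")
        # Text branch: BiLSTM with 256 hidden units over token states.
        self.text_lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
        self.text_fc = nn.Linear(2 * 256, 128)
        # Feature branch: 2-layer BiLSTM, hidden size 512, dropout 0.2.
        self.psy_lstm = nn.LSTM(psyling_dim, 512, num_layers=2, dropout=0.2,
                                batch_first=True, bidirectional=True)
        self.psy_fc = nn.Linear(2 * 512, 128)
        self.out = nn.Linear(2 * 128, 1)  # final FC with sigmoid output

    def forward(self, input_ids, attention_mask, psyling_feats):
        tokens = self.encoder(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        _, (h_t, _) = self.text_lstm(tokens)
        _, (h_p, _) = self.psy_lstm(psyling_feats)
        # Concatenate the last forward/backward hidden states of each branch.
        text_vec = torch.relu(self.text_fc(torch.cat([h_t[-2], h_t[-1]], dim=-1)))
        psy_vec = torch.relu(self.psy_fc(torch.cat([h_p[-2], h_p[-1]], dim=-1)))
        return torch.sigmoid(self.out(torch.cat([text_vec, psy_vec], dim=-1))).squeeze(-1)

# Decision rule with the lowered threshold described above:
# prediction = (model(ids, mask, feats) > 0.3).long()
```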
Results
An overview of the performance metrics on the validation set is presented in Table 1. We found that the proposed hybrid models consistently outperformed the standard transformer-based baseline models, with an improvement in F1 of up to 3%. Training the models on the augmented data led to an average increase in performance of 3.8% F1 over the non-augmented data. The highest performance on the validation set (F1 = 81) was achieved by the RoBERTa hybrid model. Error inspection indicated that the majority of errors are False Positives (see Table 5 2). This behavior was intentionally evoked by lowering the threshold for positive labels, as the task scoring focused on the F1 assessment of these labels. Manual inspection of the errors also revealed that some predictions were incorrect due to labeling errors in the development set. Examples of such cases can be found in Table 6 2.
Conclusion
We developed hybrid classification systems for the detection of self-reported chronic stress that integrate pre-trained transformer language models with BiLSTM networks trained on a diverse set of psycholinguistic features. Our experiments show that such hybrid models significantly outperform base transformer models on both augmented and non-augmented data.
Figure 1: Proposed hybrid model architecture.
Table 1: Results on the validation set

Model          Aug  Acc  Prec  Rec  F1
PsyLing        No   69   67    67   67
BERT           No   75   75    77   74
RoBERTa        No   71   77    76   71
BERT-Hybrid    No   78   77    79   78
RoBERTa-Hyb.   No   79   78    79   78
PsyLing        Yes  71   69    67   68
BERT           Yes  79   78    80   78
RoBERTa        Yes  78   79    81   78
BERT-Hybrid    Yes  81   80    81   80
RoBERTa-Hyb.   Yes  82   81    83   81
Table 2: Results on the test set (courtesy of challenge organizers)

Model            Acc   Prec  Rec  F1
RoBERTa-Hybrid   79.8  72    76   75
2 Tables 5 and 6 in the supplementary material
A Supplementary material

Supplementary material can be found at https://bit.ly/3xq68bx.
Hani Al-Omari, Malak A. Abdullah, and Samira Shaikh. 2020. EmoDet2: Emotion detection in English textual dialogue using BERT and BiLSTM models. In 2020 11th International Conference on Information and Communication Systems (ICICS), pages 226-232.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Elma Kerz, Yu Qiao, Sourabh Zanwar, and Daniel Wiechmann. 2022. Pushing on personality detection from verbal behavior: A transformer meets text contours of psycholinguistic features. arXiv preprint arXiv:2204.04629.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.
Elsbeth Turcan and Kathy McKeown. 2019. Dreaddit: A Reddit dataset for stress analysis in social media. In Proceedings of the Tenth International Workshop on Health Text Mining and Information Analysis (LOUHI 2019), pages 97-107. Association for Computational Linguistics.
Davy Weissenbacher, Ari Z. Klein, Luis Gascó, Darryl Estrada-Zavala, Martin Krallinger, Yuting Guo, Yao Ge, Abeed Sarker, Ana Lucia Schmidt, Raul Rodriguez-Esteban, Mathias Leddin, Arjun Magge, Juan M. Banda, Vera Davydova, Elena Tutubalina, and Graciela Gonzalez-Hernandez. 2022. Overview of the Seventh Social Media Mining for Health Applications SMM4H shared tasks at COLING 2022. In Proceedings of the Seventh Social Media Mining for Health (SMM4H) Workshop and Shared Task, pages 33-40.
Daniel Wiechmann, Yu Qiao, Elma Kerz, and Justus Mattern. 2022. Measuring the impact of (psycho-)linguistic and readability features and their spill over effects on the prediction of eye movement patterns. arXiv preprint arXiv:2203.08085.
Thomas Wolf, Julien Chaumond, Lysandre Debut, Victor Sanh, Clement Delangue, Anthony Moi, Pierric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.
196,211,550 | Scaling Up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title | Supplementing product information by extracting attribute values from title is a crucial task in e-Commerce domain. Previous studies treat each attribute only as an entity type and build one set of NER tags (e.g., BIO) for each of them, leading to a scalability issue which unfits to the large sized attribute system in real world e-Commerce. In this work, we propose a novel approach to support value extraction scaling up to thousands of attributes without losing performance: (1) We propose to regard attribute as a query and adopt only one global set of BIO tags for any attributes to reduce the burden of attribute tag or model explosion; (2) We explicitly model the semantic representations for attribute and title, and develop an attention mechanism to capture the interactive semantic relations in-between to enforce our framework to be attribute comprehensive. We conduct extensive experiments in real-life datasets. The results show that our model not only outperforms existing state-of-the-art NER tagging models, but also is robust and generates promising results for up to 8,906 attributes. | [
6300165,
2747277,
6042994,
2844020,
10489017
] | Scaling Up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title
Scaling Up Open Tagging from Tens to Thousands: Comprehension Empowered Attribute Value Extraction from Product Title

Huimin Xu (School of Computer Science and Software Engineering, East China Normal University; Alibaba Group)
Wenting Wang (Alibaba Group)
Xin Mao (School of Computer Science and Software Engineering, East China Normal University; Alibaba Group)
Xinyu Jiang xyjiang@stu.ecnu.edu.cn (School of Computer Science and Software Engineering, East China Normal University)
Man Lan mlan@cs.ecnu.edu.cn (School of Computer Science and Software Engineering, East China Normal University)

Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 28 - August 2, 2019. © 2019 Association for Computational Linguistics
Supplementing product information by extracting attribute values from title is a crucial task in e-Commerce domain. Previous studies treat each attribute only as an entity type and build one set of NER tags (e.g., BIO) for each of them, leading to a scalability issue which unfits to the large sized attribute system in real world e-Commerce. In this work, we propose a novel approach to support value extraction scaling up to thousands of attributes without losing performance: (1) We propose to regard attribute as a query and adopt only one global set of BIO tags for any attributes to reduce the burden of attribute tag or model explosion; (2) We explicitly model the semantic representations for attribute and title, and develop an attention mechanism to capture the interactive semantic relations in-between to enforce our framework to be attribute comprehensive. We conduct extensive experiments in real-life datasets. The results show that our model not only outperforms existing state-of-the-art NER tagging models, but also is robust and generates promising results for up to 8,906 attributes.
Introduction
Product attributes are vital to e-Commerce as platforms need attribute details to make recommendations and customers need attribute information to compare products and make purchase decisions. However, attribute information is often noisy and incomplete because of the inevitable hurdles posed to retailers by the extremely huge and complex e-Commerce attribute system. On the other hand, product titles, which are carefully designed by retailers, are packed tightly with details to highlight all important aspects of products. Figure 1 shows the product page of a 'dress' from AliExpress (https://www.aliexpress.com/), an emerging and fast-growing global e-Commerce platform. The product title "2019 Summer Women Button Decorated Print Dress Off-shoulder Party Beach Sundress Boho Spaghetti Long Dresses Plus Size FICUSRONG" contains attribute values: (1) already listed in Item Specifics, such as 'Women' for Gender and 'Summer' for Season; (2) missing in Item Specifics, such as '2019' for Year and 'Plus Size' for Size. In this paper, we are interested in supplementing attribute information from product titles, especially for the real world e-Commerce attribute system with thousands of attributes built in and new attributes and values popping out every day.
Previous work (Ghani et al., 2006; Ling and Weld, 2012; Sheth et al., 2017) on attribute value extraction suffered from the Closed World Assumption, as it heavily depends on certain pre-defined attribute value vocabularies. These methods were unable to distinguish polysemous values such as 'camel', which could be the Color for a sweater rather than its Brand Name, or find new attribute values which have not been seen before. More recently, many research works (More, 2016; Zheng et al., 2018) formulate the attribute value extraction problem as a special case of the Named Entity Recognition (NER) task (Bikel et al., 1999; Collobert et al., 2011). They adopted sequence tagging models from NER as an attempt to address the Open World Assumption purely from the attribute value point of view. However, such tagging approaches still fail to resolve two fundamental challenges in the real world e-Commerce domain:
Challenge 1. Need to scale up to fit the large sized attribute system in the real world. The product attribute system in e-Commerce is huge and may overlap across domains because each industry designs its own standards. The attribute size typically falls into the range from tens of thousands to millions, conservatively. For example, the Sports & Entertainment category from AliExpress alone contains 344,373 products (may vary daily) with 77,699 attributes and 482,780 values. Previous NER tagging models have to introduce one set of entity tags (e.g., BIO tags) for each attribute. Thus, the large attribute size in reality renders previous works infeasible for modeling attribute extraction. Moreover, the distribution of attributes is severely skewed. For example, 85% of attributes appear in fewer than 100 Sports & Entertainment products. Model performance can be significantly degraded for such rarely occurring attributes (e.g., Sleeve Style, Astronomy) due to insufficient data.
Challenge 2. Need to extend Open World Assumption to include new attributes. With the rapid development of e-Commerce, both new attributes and values for newly launched products are emerging every day. For example, with the recent announcement of the 'foldable mobile phone', a new attribute Fold Type is created to describe how the mobile phone can be folded, with corresponding new attribute values 'inward fold', 'outward fold', etc. Previous NER tagging models view each attribute as a separate entity type and neglect the hidden semantic connections between attributes. Thus, they all fail to identify new attributes with zero manual annotations.
In this paper, to address the above two issues, we propose a novel attribute-comprehension based approach. Inspired by Machine Reading Comprehension (MRC), we regard the product title and product attribute as 'context' and 'query' respectively; the 'answer' extracted from the 'context' then equals the attribute value wanted. Specifically, we model the contexts of title and attribute respectively, capture the semantic interaction between them by an attention mechanism, and then use Conditional Random Fields (CRF) (Lafferty et al., 2001) as the output layer to identify the corresponding attribute value. The main contributions of our work are summarized as follows:
• Model. To our knowledge, this is the first framework to treat the attribute beyond an NER type alone, leveraging its contextual representation and its interaction with the title to extract the corresponding attribute value.

• Learning. Instead of the common BIO setting where each attribute has its own BIO tags, we adopt a novel BIO schema with only one output tag set for all attributes. This is enabled by our model's design to embed the attribute contextually rather than as an attribute tag alone. Learning to extract thousands of attributes thus becomes feasible for the first time.

• Experiments. Extensive experiments on a real world dataset are conducted to demonstrate the efficacy of our model. The proposed attribute-comprehension based model outperforms state-of-the-art models by 3% in F1 score on average. Moreover, the proposed model scales up to 8,906 attributes with an overall F1 score of 79.12%. This proves its ability to produce stable and promising results not only for low and rare frequency attributes, but also for new attributes with zero extra annotations.
To the best of our knowledge, this is the first framework to address the two fundamental real world issues for open attribute value extraction: scalability and new attributes. Our proposed model does not make any assumptions on attribute size, attribute frequencies or the amount of additional annotations needed for new attributes.

The rest of the paper is organized as follows. Section 2 gives a formal problem statement for this task. Section 3 depicts our proposed model in detail. Section 4 lists the experimental settings of this work. Section 5 reports the experimental results and analysis. Section 6 summarizes the related work, followed by a conclusion in Section 7.
Problem Statement
In this section, we formally define the attribute value extraction task. Given product title T and attribute A, our goal is to extract the corresponding attribute value for A from T. For example, the title and attributes from Figure 1 are given as below:
• Product Title: 2019 Summer Women Button Decorated Print Dress Off-shoulder Party Beach Sundress Boho Spaghetti Long Dresses Plus Size FICUSRONG.
• Attributes: Season, Gender, Neckline
Considering the three attributes of interest, i.e., Season, Gender and Neckline, we aim to obtain 'Summer' for Season, 'Women' for Gender and 'NULL' for Neckline, where the former two attributes are described in the title but the latter is not.
Formally, given the product title $T = \{x^t_1, x^t_2, \ldots, x^t_m\}$ of length $m$ and attribute $A = \{x^a_1, x^a_2, \ldots, x^a_n\}$ of length $n$, our model outputs the tag sequence $y = \{y_1, y_2, \ldots, y_m\}$, $y_i \in \{B, I, O\}$, where B and I denote the beginning and inside tokens of the extracted attribute value respectively, and O denotes tokens outside the value.
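A toy illustration (ours, not the paper's) of how a single global BIO tag set serves every attribute treated as a query:

```python
# The same three tags serve every attribute; the query attribute determines
# which title span gets marked.
title = ["2019", "Summer", "Women", "Button", "Decorated", "Print", "Dress"]

examples = {
    "Season": ["O", "B", "O", "O", "O", "O", "O"],   # value: "Summer"
    "Gender": ["O", "O", "B", "O", "O", "O", "O"],   # value: "Women"
    "Neckline": ["O"] * 7,                            # value absent -> NULL
}

def extract(tags, tokens):
    value = [t for t, y in zip(tokens, tags) if y in ("B", "I")]
    return " ".join(value) if value else "NULL"

for attr, tags in examples.items():
    print(attr, "->", extract(tags, title))
```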
Attribute-Comprehension Open Tagging Model
Previous work on sequence tagging built one model for every attribute with a corresponding set of attribute-specific tags. Such approach is unrealistic on real-life large sized attribute set because of two reasons: (1) it is computationally inefficient to model thousands of attributes;
(2) very limited data samples are available for most attributes, resulting in non-guaranteed performance. To tackle the two challenges raised in Section 1, we propose a novel attribute-comprehension based open tagging approach to attribute value extraction. Figure 2 shows the architecture of our proposed model. At first glance, our model, adopting BiLSTM, attention and CRF components, looks similar to previous sequence tagging systems including BiLSTM (Huang et al., 2015) and OpenTag (Zheng et al., 2018). But in fact our model is fundamentally different from previous works: unlike their strategy of regarding the attribute as only a tag, we model the attribute semantically, capture its semantic interaction with the title via an attention mechanism, and then feed the attribute-comprehension title representation to the CRF for final tagging. Next we describe the architecture of our model in detail.
Word Representation Layer. We map each word in the title and attribute to a high-dimensional vector space through the pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018), which is the state-of-the-art language representation model. For each word in a sentence, BERT generates a particular word representation which considers the specific context. Formally, BERT encodes the title $T$ and attribute $A$ into sequences of word representations $\{w^t_1, w^t_2, \ldots, w^t_m\}$ and $\{w^a_1, w^a_2, \ldots, w^a_n\}$.
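For illustration, contextual word vectors of this kind can be obtained roughly as follows with the Huggingface library (a later API than the paper used; note that BERT operates on subword pieces, so per-word vectors would additionally require subword pooling):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

title = "2019 Summer Women Button Decorated Print Dress"
attribute = "Season"
enc_t = tokenizer(title, return_tensors="pt")
enc_a = tokenizer(attribute, return_tensors="pt")
with torch.no_grad():
    w_t = bert(**enc_t).last_hidden_state  # (1, m', 768) subword vectors for the title
    w_a = bert(**enc_a).last_hidden_state  # (1, n', 768) subword vectors for the attribute
```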
Contextual Embedding Layer. Long-Short Term Memory (LSTM) Neural Network (Hochreiter and Schmidhuber, 1997) addresses the vanishing gradient problems and is capable of modeling long-term contextual information along the sequence. Bidirectional LSTM (BiLSTM) captures the context from both past and future time steps jointly while vanilla LSTM only considers the contextual information from the past.
In this work, we adopt two BiLSTMs to model the title and attribute representations individually. One BiLSTM produces hidden states as the contextual representation of the title, $H^t = \{h^t_1, h^t_2, \ldots, h^t_m\}$, where

$$h^t_i = [\overrightarrow{h^t_i}; \overleftarrow{h^t_i}] = \mathrm{BiLSTM}(\overrightarrow{h^t_{i-1}}, \overleftarrow{h^t_{i+1}}, w^t_i)$$
Another BiLSTM is used to obtain the attribute representation. Slightly different from the design for the title, we only use the last hidden state of the BiLSTM as the attribute representation $h^a$, since the length of the attribute is normally much shorter (i.e., no more than 5 tokens).
$$h^a = [\overrightarrow{h^a_n}; \overleftarrow{h^a_n}]$$

Attention Layer. In Natural Language Processing (NLP), the attention mechanism was first used in Neural Machine Translation (NMT) (Bahdanau et al., 2014) and has achieved great success. It is designed to highlight the important information in a sequence, instead of paying attention to everything.
OpenTag (Zheng et al., 2018) uses self-attention (Vaswani et al., 2017) to capture the important tokens in the title, but treats the attribute only as a type and neglects its semantic information. Thus, OpenTag has to introduce one set of tags ($B_a$, $I_a$) for each attribute $a$, which makes it inapplicable in e-Commerce settings with tens of thousands of attributes. Different from their work, our model takes the hidden semantic interaction between attribute and title into consideration by computing the similarities between the attribute and each word in the title. This means different tokens in the title are attended to when extracting values for different attributes, resulting in different weight matrices. Thus, our model is able to handle huge numbers of attributes with only one set of tags (B, I, O). Even for attributes that have never been seen before, our model is able to identify tokens associated with them in the title by modeling their semantic information.
We first compute the similarity between the attribute and each word in the title to obtain the attention vector $S = \{\alpha_1, \alpha_2, \ldots, \alpha_m\}$. The attribute-comprehension title is $C = S \odot H^t$, where $\odot$ denotes element-wise multiplication. This vector indicates the weighted sum of words in the title with respect to the attribute. The similarity between two vectors is measured by cosine similarity:
$$\alpha_i = \mathrm{cosine}(h^t_i, h^a)$$
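A small numpy sketch of this attention step (an illustration, not the released code):

```python
import numpy as np

def attribute_attention(H_t, h_a):
    """H_t: (m, d) title hidden states; h_a: (d,) attribute vector."""
    S = H_t @ h_a / (np.linalg.norm(H_t, axis=1) * np.linalg.norm(h_a) + 1e-8)
    C = S[:, None] * H_t          # element-wise weighting of each title state
    return S, C

H_t = np.random.randn(7, 1024)    # m = 7 title tokens, d = 2 * 512 (BiLSTM)
h_a = np.random.randn(1024)
S, C = attribute_attention(H_t, h_a)
```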
Output Layer. The goal of this task is to predict a tag sequence that marks the positions of attribute values in the title. CRF is often used in sequence tagging models because it captures the dependency between output tags in a neighborhood. For example, if we already know that the tag of a token is I, this decreases the probability that the next token is B.
We concatenate the title representation $H^t$ and the attribute-comprehension title $C$ to obtain a matrix $M = [H^t; C]$, which is passed into the CRF layer to predict the tag sequence. Each column vector of $M$ is expected to contain contextual information about the word with respect to the title and attribute. The joint probability distribution of tags $y$ is given by:

$$\Pr(y \mid T; \psi) \propto \prod_{i=1}^{m} \exp\left(\sum_{k=1}^{K} \psi_k f_k(y_{i-1}, y_i, M_i)\right)$$
where $\psi_k$ is the corresponding weight, $f_k$ is the feature function, and $K$ is the number of features. The final output is the best label sequence $y^*$ with the highest conditional probability:

$$y^* = \operatorname*{argmax}_{y} \Pr(y \mid T; \psi)$$
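For intuition, decoding the best tag sequence over the three global tags can be done with standard Viterbi dynamic programming; the toy scores below stand in for the learned CRF potentials over $M$:

```python
import numpy as np

TAGS = ["B", "I", "O"]

def viterbi(emissions, transitions):
    """emissions: (m, 3) per-token tag scores; transitions: (3, 3) tag-to-tag scores."""
    m, k = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((m, k), dtype=int)
    for i in range(1, m):
        total = score[:, None] + transitions + emissions[i]  # rows: prev, cols: cur
        back[i] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for i in range(m - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return [TAGS[t] for t in reversed(path)]

emissions = np.log(np.array([[0.1, 0.1, 0.8],
                             [0.7, 0.1, 0.2],
                             [0.1, 0.7, 0.2]]))
transitions = np.log(np.array([[0.1, 0.8, 0.1],    # from B
                               [0.2, 0.6, 0.2],    # from I
                               [0.5, 0.1, 0.4]]))  # from O
print(viterbi(emissions, transitions))  # -> ['O', 'B', 'I']
```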
Training. For training this network, we use the maximum conditional likelihood estimation:
$$L(\psi) = \prod_{i=1}^{N} \Pr(y_i \mid T_i; \psi)$$
where $N$ is the number of training instances.

In the initial dataset, there are 513,564 positive triples (15%) whose value is included in the title; the remainder are negative triples whose value is marked as 'NULL' since it is missing from the title. We randomly select 143,846 negative triples, then combine them with all positive triples to compose the dataset AE-650K, whose positive-negative ratio is about 4:1. This set of 657,410 triples is then partitioned into training, development and test sets with a ratio of 7:1:2. In total, the AE-650K dataset contains 8,906 types of attributes and their distributions are extremely uneven. In order to gain deeper insight into the attribute distribution, we categorize them into five groups (i.e., High, Sub-high, Medium, Low and Rare frequency) according to their occurrences. Table 1 shows the number of unique attributes in each frequency group together with some examples. We observe that high frequency attributes are more general (e.g., Gender, Material), while low and rare frequency attributes are more product specific (e.g., Sleeve Style, Astronomy). For example, one Barlow lens product has value 'Telescope Eyepiece for Astronomy' (https://www.aliexpress.com/item/32735772355.html). In addition, we find these attributes exhibit a 'long tail' phenomenon, that is, a small number of general attributes can basically define a product, while a large number of specific attributes define products in more detail. These details are important for accurate product recommendation and other personalized services.
In order to make a fair comparison between our model and previous sequence tagging models, which cannot handle huge numbers of attributes, we pick the four frequent attributes (i.e., Brand Name, Material, Color and Category) to compose the second dataset AE-110K with a total of 117,594 triples. Table 2 shows the statistics and distributions of attributes in AE-110K.
Moreover, since the dataset is automatically constructed based on the Exact Match criterion by pairing product titles with the attributes and values present in Item Specifics, it may involve some noise in the positive triples. For example, if the title of a 'dress' contains 'long dresses', the word 'long' may be tagged as the value for attributes Sleeve Length and Dresses Length simultaneously. Thus we randomly sampled 1,500 triples from AE-650K for manual evaluation, and the accuracy of the automatic labeling is 95.6%. This shows that the dataset is of high quality.
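The Exact Match construction described above can be sketched as follows (our own illustration):

```python
# A triple (title, attribute, value) becomes a BIO-tagged training example by
# locating the value tokens in the title; triples whose value is absent get
# all-O tags and a 'NULL' label.
def bio_tags(title_tokens, value_tokens):
    tags = ["O"] * len(title_tokens)
    v = len(value_tokens)
    for i in range(len(title_tokens) - v + 1):
        if title_tokens[i:i + v] == value_tokens:
            tags[i] = "B"
            for j in range(i + 1, i + v):
                tags[j] = "I"
            break
    return tags

title = "2019 Summer Women Print Dress Plus Size".split()
print(bio_tags(title, "Plus Size".split()))
# -> ['O', 'O', 'O', 'O', 'O', 'B', 'I']
```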
Evaluation Metrics
We use precision, recall and F1 score as evaluation metrics, denoted as P, R and F1. We follow the Exact Match criterion, in which the full sequence of the extracted value needs to be correct. Clearly, this is a strict criterion, as an example gets credit only when the tag of each word is correct.
Baselines
To make the comparison reliable and reasonable, three sequence tagging models serve as baselines, chosen for their reported superior tagging results (OpenTag; Zheng et al., 2018) or as typical representatives of the state of the art (Huang et al., 2015).
• BiLSTM uses the pre-trained BERT model to represent each word in title, then applies BiLSTM to produce title contextual embedding. Finally, a softmax function is exploited to predict the tag for each word.
• BiLSTM-CRF (Huang et al., 2015) is considered to be the pioneer and the state-of-the-art sequence tagging model for NER which uses CRF to model the association of predicted tags. In this baseline, the hidden states generated by BiLSTM are used as input features for CRF layer.
• OpenTag (Zheng et al., 2018) is the recent sequence tagging model for this task which adds self-attention mechanism to highlight important information before CRF layer. Since the source code of OpenTag is not available, we implement it using Keras.
Implementation Details
All models are implemented with Tensorflow (Abadi et al., 2016) and Keras (Chollet et al., 2015). Optimization is performed using Adam (Kingma and Ba, 2014) with default parameters. We train up to 20 epochs for each model. The model that performs the best on the development set is then used for the evaluation on the test set. For all models, the word embeddings are pre-trained via BERT and the dimension is 768. The dimension of the hidden states in BiLSTM is set to 512 and the minibatch size is fixed to 256. The BIO tagging strategy is adopted. Note that only one global set of BIO tags for any attributes is used in this work.
Results and Discussion
We conduct a series of experiments under various settings with the purposes to (1) make comparison of attribute extraction performance on frequent attributes with existing state-of-the-art models; (2) explore the scalability of our model up to thousands of attributes; and (3) examine the capability of our model in discovering new attributes which have not been seen before.
Results on Frequent Attributes
The first experiment is conducted on the four frequent attributes (i.e., with sufficient data) on the AE-110K and AE-650K datasets. Table 3 reports the comparison results of our two models (on the AE-110K and AE-650K datasets) and three baselines. It is observed that our models are consistently ranked the best over all competing baselines. This indicates that our idea of regarding 'attribute' as 'query' successfully models the semantic information embedded in the attribute, which has been ignored by previous sequence tagging models. Besides, different from the self-attention mechanism only inside the title adopted by OpenTag, our interacted similarity between attribute and title does attend to words which are more relevant to the current extraction.

Figure 3: Performance of our model on 8,906 attributes in the AE-650K dataset (Micro-P(%), Micro-R(%), Micro-F1(%)). 'All' stands for all attributes while 'High', 'Sub-high', 'Medium', 'Low' and 'Rare' denote the five frequency groups of attributes defined in Table 1, respectively.
In addition, our model is the only one that can be applied to the AE-650K dataset, which contains 8,906 types of attributes. From Table 3, we compare the performance of our two models trained on different sizes of triples. It is interesting to find that extra training data on other attributes boosts the performance on the target four attributes, outperforming the best baseline by 3% in F1 score on average. We believe the main reason is that all the other attributes in AE-650K can be viewed as relevant tasks from a Multi-task (Caruana, 1997) perspective. Usually, a model risks over-fitting if it is only optimized on the target attributes, due to unavoidable noise in the dataset. However, Multi-task learning implicitly increases the training data of other relevant tasks having different noise patterns, and can average these noise patterns to obtain a more general representation and thus improve the generalization of the model.
Results on Thousands of Attributes
The second experiment explores the scalability of models up to thousands of attributes. Clearly, previous sequence tagging models fail to report results on large numbers of attribute tags. Using a single model to handle large numbers of attributes is one advantage of our model. To verify this characteristic, we compute Micro-P, Micro-R and Micro-F1 on the entire test set of AE-650K, as shown in the leftmost set of columns of Figure 3. The performances of our model on 8,906 attributes reach 84.13%, 76.08% and 79.12%, respectively. In order to validate the robustness of our model, we also perform experiments on the five attribute frequency groups defined in Table 1. Their results are shown in Figure 3. We observe that our model achieves Micro-F1 of 84.60% and 79.79% for frequent attributes in the 'High' and 'Sub-high' groups respectively. But more importantly, our model achieves good performance (i.e., Micro-F1 of 66.06% and 53.94% respectively) for less frequent attributes in the 'Medium' and 'Low' groups, and even a promising result (i.e., Micro-F1 of 35.70%) for 'Rare' attributes which appear fewer than 10 times. Thus, we are confident to conclude that our model has the ability to handle large numbers of attributes with only a single model.
Results of Discovering New Attributes
To further examine the ability of our model to discover new attributes which have never been seen before, we select 5 attributes with relatively low occurrences: Frame Color, Lenses Color, Shell Material, Wheel Material, and Product Type. We shuffle the AE-650K dataset to make sure they are not in the training and development sets, and evaluate the performance on these 5 attributes. Table 4 reports the results of discovering the 5 new attributes. It is not surprising to see that our model still achieves acceptable performance (i.e., an average F1 of 50.85%) on new attributes with no additional training data. We believe that some data in the training set are semantically related to the unseen attributes and provide hints that help the extraction.
To further confirm this hypothesis, we map the attribute features $h^a$ generated by the contextual embedding layer into two-dimensional space by t-SNE (Rauber et al., 2016), as shown in Figure 4. In Figure 4, the four colors of circles represent Color-related, Type-related, Material-related and other attributes respectively ('a-related' denotes all attributes whose text contains the substring a), and the areas are proportional to the frequency of attributes. An interesting observation is that Color-related and Material-related attributes are each clustered into a small and concentrated area of the two-dimensional space. Meanwhile, although Type and Product Type are very close, the distribution of all Type-related attributes is scattered in general. This may be because Type is not a specifically defined concept compared to Color or Material; the meaning of a Type-related attribute is determined by the word paired with Type. Therefore, we select two Type-related attributes adjacent to Material and find they are Fabric Type and Plastic Type. In fact, these two attributes are indeed relevant to the material of products.
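The projection step can be reproduced roughly as follows (sklearn's t-SNE as a stand-in for the Rauber et al. implementation; the attribute vectors here are random placeholders):

```python
import numpy as np
from sklearn.manifold import TSNE

h_a = np.random.randn(500, 1024)   # placeholder attribute vectors h^a
xy = TSNE(n_components=2, random_state=0).fit_transform(h_a)
# xy can then be scattered with marker areas proportional to attribute frequency.
```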
To verify the ability of our model to handle a larger number of new attributes, we collect an additional 20,532 products from the new category Christmas, and form 46,299 triples as a test set. The Christmas test set contains 1,121 types of attributes, 708 of which are new. Our model achieves Micro-F1 of 66.37% on this test set. This proves that our model generalizes well and is able to transfer to other domains with a large number of new attributes.
Attention Visualizations
To illustrate the attention learned for the product in Figure 1, we plot the heat map of the attention vectors S for three attributes (Year, Color and Brand Name), where the lighter the color, the higher the weight. Since each bar in the heat map represents the importance of a word in the title for each attribute, it indirectly affects the prediction decision. Observing Figure 5, we see that our model indeed adjusts the attention vector according to different attributes to highlight the value.
Related Work
Previous work on attribute value extraction uses rule-based extraction techniques (Vandic et al., 2012; Gopalakrishnan et al., 2012) which use a domain-specific seed dictionary to spot key phrases. Ghani et al. (2006) predefine a set of product attributes and utilize a supervised learning method to extract the corresponding attribute values. An NER system was proposed by Putthividhya and Hu (2011) for extracting product attributes and values; in this work, supervised NER and bootstrapping technology are combined to expand the seed dictionary of attribute values. However, these methods suffer from the Closed World Assumption. More (2016) builds a similar NER system which leverages existing values to tag new values.
With the development of deep neural networks, several neural network methods have been proposed and applied successfully to sequence tagging. Huang et al. (2015) were the first to apply the BiLSTM-CRF model to the sequence tagging task, but their work employs heavy feature engineering to extract character-level features. Lample et al. (2016) utilize BiLSTMs to model both word-level and character-level information rather than hand-crafted features, thus constructing an end-to-end BiLSTM-CRF model for the sequence tagging task. A convolutional neural network (CNN) (LeCun et al., 1989) is employed to model character-level information in Chiu and Nichols (2016), which achieved competitive performance on two sequence tagging tasks at that time. Ma and Hovy (2016) propose an end-to-end LSTM-CNNs-CRF model.
Recently, several approaches employ sequence tagging models for attribute value extraction. Kozareva et al. (2016) adopt the BiLSTM-CRF model to tag several product attributes from search queries with hand-crafted features. Furthermore, Zheng et al. (2018) propose an end-to-end tagging model utilizing BiLSTM, CRF and attention, without any dictionary or hand-crafted features. Besides extracting attribute values from titles, other related tasks have been defined: Nguyen et al. (2011), Sheth et al. (2017) and Qiu et al. (2015) extract attribute-value pairs from product descriptions.
Conclusion
To extract product attribute values in the e-Commerce domain, previous sequence tagging models face two challenges, i.e., the huge number of product attributes and the emerging new attributes and new values that have not been seen before. To tackle the above issues, we present a novel sequence tagging architecture that integrates attribute semantics. Even if the attribute size reaches tens of thousands or even millions, our approach only trains a single model for all attributes instead of building one specific model for each attribute. When labeling new attributes that have not been encountered before, this model is able to extract the new values for new attributes by leveraging the information learned from existing attributes which have similar semantic distributions to the new ones. Experiments on a large dataset prove that this model is able to scale up to thousands of attributes, and outperforms state-of-the-art NER tagging models.
Figure 1: Snapshot of a product page.
Figure 2: Architecture of the proposed attribute-comprehension open tagging model.
Figure 4: Distribution of semantically related new and existing attributes. E.g., Shell Material and Wheel Material are new attributes while Material is a frequently occurring known attribute.
Figure 5: The heat map of attention vector S.
2 https://www.aliexpress.com/item/32956754932.html

Table 2: Statistics of dataset AE-110K.

Attributes   Train   Dev    Test
Brand Name   50,413  5,601  14,055
Material     22,814  2,534  6,355
Color        5,594   621    1,649
Category     5,906   590    1,462
Total        84,727  9,346  23,521
Table 3: Performance comparison between our model and three baselines on four frequent attributes. For baselines, only the performance on AE-110K is reported since they do not scale up to large sets of attributes; for our model, the performances on both AE-110K and AE-650K are reported.
Table 4: Performance of our model in discovering values for new attributes.
Acknowledgements

The authors wish to thank all reviewers for their helpful comments and suggestions. This work was supported by Alibaba Group through the Alibaba Innovative Research (AIR) Program. This work was completed during Huimin Xu and Xin Mao's internship at Alibaba Group.
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2016, Savannah, GA, USA, November 2-4, 2016, pages 265-283.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
Daniel M. Bikel, Richard M. Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what's in a name. Machine Learning, 34(1-3):211-231.
Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.
Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.
François Chollet et al. 2015. Keras. https://keras.io.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew E. Fano. 2006. Text mining for product attribute extraction. SIGKDD Explorations, 8(1):41-48.
Vishrawas Gopalakrishnan, Suresh Parthasarathy Iyengar, Amit Madaan, Rajeev Rastogi, and Srinivasan H. Sengamedu. 2012. Matching product titles using web-based enrichment. In 21st ACM International Conference on Information and Knowledge Management, CIKM'12, Maui, HI, USA, October 29 - November 02, 2012, pages 605-614.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Zornitsa Kozareva, Qi Li, Ke Zhai, and Weiwei Guo. 2016. Recognizing salient entities in shopping queries. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 107-111. Association for Computational Linguistics.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001, pages 282-289.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270. Association for Computational Linguistics.
Yann LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551.
Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 22-26, 2012, Toronto, Ontario, Canada.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074. Association for Computational Linguistics.
Attribute extraction from product titles in ecommerce. Ajinkya More, abs/1608.04670CoRRAjinkya More. 2016. Attribute extraction from product titles in ecommerce. CoRR, abs/1608.04670.
Synthesizing products for online catalogs. Hoa Nguyen, Ariel Fuxman, Stelios Paparizos, Juliana Freire, Rakesh Agrawal, 10.14778/1988776.1988777PVLDB. 47Hoa Nguyen, Ariel Fuxman, Stelios Paparizos, Juliana Freire, and Rakesh Agrawal. 2011. Synthesizing products for online catalogs. PVLDB, 4(7):409- 418.
Bootstrapped named entity recognition for product attribute extraction. Duangmanee Putthividhya, Junling Hu, Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. the 2011 Conference on Empirical Methods in Natural Language ProcessingEdinburgh, UKJohn McIntyre Conference Centre2011A meeting of SIGDAT, a Special Interest Group of the ACLDuangmanee Putthividhya and Junling Hu. 2011. Bootstrapped named entity recognition for produc- t attribute extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1557-1567.
DEX-TER: large-scale discovery and extraction of product specifications on the web. Disheng Qiu, Luciano Barbosa, Xin Luna Dong, Yanyan Shen, Divesh Srivastava, 10.14778/2831360.2831372PVLDB8Disheng Qiu, Luciano Barbosa, Xin Luna Dong, Yanyan Shen, and Divesh Srivastava. 2015. DEX- TER: large-scale discovery and extraction of prod- uct specifications on the web. PVLDB, 8(13):2194- 2205.
Visualizing time-dependent data using dynamic t-sne. Paulo E Rauber, Alexandre X Falcão, Alexandru C Telea, 10.2312/eurovisshort.20161164Eurographics Conference on Visualization. Groningen, The Netherlands, 6-10Short PapersPaulo E. Rauber, Alexandre X. Falcão, and Alexan- dru C. Telea. 2016. Visualizing time-dependent da- ta using dynamic t-sne. In Eurographics Confer- ence on Visualization, EuroVis 2016, Short Papers, Groningen, The Netherlands, 6-10 June 2016., pages 73-77.
P Amit, Axel Sheth, Yin Ngonga, Elizabeth Wang, Dominik Chang, Bogdan Slezak, Rainer Franczyk, Xiaohui Alt, Rainer Tao, Unland, Proceedings of the International Conference on Web Intelligence. the International Conference on Web IntelligenceLeipzig, GermanyACMAmit P. Sheth, Axel Ngonga, Yin Wang, Elizabeth Chang, Dominik Slezak, Bogdan Franczyk, Rainer Alt, Xiaohui Tao, and Rainer Unland, editors. 2017. Proceedings of the International Conference on Web Intelligence, Leipzig, Germany, August 23-26, 2017. ACM.
Faceted product search powered by the semantic web. Damir Vandic, Jan-Willem Van Dam, Flavius Frasincar, 10.1016/j.dss.2012.02.010Decision Support Systems. 533Damir Vandic, Jan-Willem van Dam, and Flavius Frasincar. 2012. Faceted product search powered by the semantic web. Decision Support Systems, 53(3):425-437.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems. Long Beach, CA, USAAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 6000-6010.
Opentag: Open attribute value extraction from product profiles. Guineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, Feifei Li, 10.1145/3219819.3219839Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningLondon, UKGuineng Zheng, Subhabrata Mukherjee, Xin Luna Dong, and Feifei Li. 2018. Opentag: Open attribute value extraction from product profiles. In Proceed- ings of the 24th ACM SIGKDD International Con- ference on Knowledge Discovery & Data Mining, KDD 2018, London, UK, August 19-23, 2018, pages 1049-1058. |
19,011,769 | Collective Opinion Target Extraction in Chinese Microblogs | Microblog messages pose severe challenges for current sentiment analysis techniques due to some inherent characteristics such as the length limit and informal writing style. In this paper, we study the problem of extracting opinion targets of Chinese microblog messages. Such fine-grained word-level task has not been well investigated in microblogs yet. We propose an unsupervised label propagation algorithm to address the problem. The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. Topics in microblogs are identified by hashtags or using clustering algorithms. Experimental results on Chinese microblogs show the effectiveness of our framework and algorithms. | [
12211329,
12979818,
6684426,
16719115,
15652752,
7105713
] | Collective Opinion Target Extraction in Chinese Microblogs
Association for Computational Linguistics, 18-21 October 2013
Xinjie Zhou zhouxinjie@pku.edu.cn
Institute of Computer Science and Technology
The MOE Key Laboratory of Computational Linguistics, Peking University
No. 5 Yiheyuan Road, Beijing, China
Xiaojun Wan wanxiaojun@pku.edu.cn
Institute of Computer Science and Technology
The MOE Key Laboratory of Computational Linguistics, Peking University
No. 5 Yiheyuan Road, Beijing, China
Jianguo Xiao xiaojianguo@pku.edu.cn
Institute of Computer Science and Technology
The MOE Key Laboratory of Computational Linguistics, Peking University
No. 5 Yiheyuan Road, Beijing, China
Collective Opinion Target Extraction in Chinese Microblogs
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing
The 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA. Association for Computational Linguistics, 18-21 October 2013.
Microblog messages pose severe challenges for current sentiment analysis techniques due to some inherent characteristics such as the length limit and informal writing style. In this paper, we study the problem of extracting opinion targets of Chinese microblog messages. Such fine-grained word-level task has not been well investigated in microblogs yet. We propose an unsupervised label propagation algorithm to address the problem. The opinion targets of all messages in a topic are collectively extracted based on the assumption that similar messages may focus on similar opinion targets. Topics in microblogs are identified by hashtags or using clustering algorithms. Experimental results on Chinese microblogs show the effectiveness of our framework and algorithms.
Introduction
Microblogging services such as Twitter, Sina Weibo and Tencent Weibo have swept across the globe in recent years. Users of microblogs range from celebrities to ordinary people, who usually express their emotions or attitudes towards a broad range of topics. It is reported that there are more than 340 million tweets per day on Twitter and more than 200 million on Sina Weibo. A tweet means a post on Twitter. Since we mainly focus on Chinese microblogs instead of Twitter in this paper, we will refer to a post as a message. Each message is limited to 140 Chinese characters and usually contains several sentences.
Currently, research on microblog sentiment analysis has been conducted on polarity classification (Barbosa and Feng, 2010; Jiang et al., 2011; Speriosu et al., 2011) and has been proved to be useful in many applications, such as opinion polling (Tang et al., 2012), election prediction (Tumasjan et al., 2010) and even stock market prediction (Bollen et al., 2011). However, classifying microblog texts at the sentence level is often insufficient for applications because it does not identify the opinion targets. In this paper, we will study the task of opinion target extraction for Chinese microblog messages.
Opinion target extraction aims to find the object towards which an opinion is expressed. For example, in the sentence "The sound quality is good!", "sound quality" is the opinion target. This task is mostly studied in customer review texts, in which opinion targets are often referred to as features or aspects (Liu, 2012). Most opinion target extraction approaches rely on dependency parsing (Zhuang et al., 2006; Jakob and Gurevych, 2010; Qiu et al., 2011) and are regarded as a domain-dependent task (Li et al., 2012a). However, such approaches are not suitable for microblogs because natural language processing tools perform poorly on microblog texts due to their inherent characteristics. Studies show that one of the state-of-the-art part-of-speech taggers, OpenNLP, only achieves an accuracy of 74% on tweets. Syntactic analysis tools that generate dependency relations may perform even worse. Besides, microblog messages may express opinions in different ways and do not always contain opinion words, which lowers the performance of methods utilizing opinion words to find opinion targets.
In this study, we propose an unsupervised method to collectively extract the opinion targets from opinionated sentences in the same topic.
Topics are directly identified by hashtags. We first present a dynamic programming based segmentation algorithm for Chinese hashtag segmentation. By leveraging the contents in a topic, our segmentation algorithm can successfully identify out-of-vocabulary words and achieve promising results. Afterwards, all the noun phrases in each sentence and the hashtag segments are extracted as opinion target candidates. We propose an unsupervised label propagation algorithm to collectively rank the candidates of all sentences based on the assumption that similar sentences in a topic may share the same opinion targets. Finally, for each sentence, the candidate which gets the highest score after unsupervised label propagation is selected as the opinion target.
Our contributions in this study are summarized as follows: 1) our method considers not only the explicit opinion targets within the sentence but also the implicit opinion targets in the hashtag or mentioned in the previous sentence. 2) We develop an efficient algorithm to segment Chinese hashtags. It can successfully identify out-of-vocabulary words by leveraging contextual information and helps to improve the segmentation performance of the messages in the topic. 3) We develop an unsupervised label propagation algorithm for collective opinion target extraction. Label propagation (Zhu and Ghahramani, 2002) aims to spread label distributions from a small training set throughout the graph. Our unsupervised algorithm, in contrast, leverages the connection between two adjacent unlabeled nodes to find the correct labels for both of them. The proposed unsupervised method does not need any training corpus, which would cost much human labor, especially for fine-grained annotation. 4) To the best of our knowledge, the task of opinion target extraction in microblogs has not been well studied yet. It is more challenging than microblog sentiment classification and opinion target extraction in review texts.
Characteristics of Chinese Microblogs
Most previous microblog sentiment analysis research focuses on Twitter and especially on English. However, the analysis of Chinese microblogs differs from that of Twitter in several respects: 1) Chinese word segmentation is a necessary step for Chinese sentiment analysis, but existing segmentation tools perform poorly on microblogs because microblog texts are much different from regular texts. 2) Wang et al. (2011) find that hashtags in English tweets are used to highlight sentiment information such as "#love", "#sucks" or serve as user-annotated coarse topics such as "#news", "#sports". But in Chinese microblogs, most of the hashtags are used to indicate fine-grained topics such as #NBA总决赛第七场# (#NBAFinalG7#). Besides, hashtags in Twitter always appear within a sentence such as "I love #BarackObama!" while hashtags in Chinese microblogs are always isolated and are surrounded by two # symbols such as "#巴拉克奥巴马# 我爱他!" ("#BarackObama# I love him!").
It is noteworthy that topics aggregated by the same hashtag play an important role in Chinese microblog websites. These websites often provide an individual webpage to list hot topics and invite people to participate in the discussion, where each topic consists of tens of thousands of messages with the same hashtag. The hot topics have a wide coverage of timely events and entities. Analyzing the opinion targets of these topics can help to get a deeper overview of the public attitudes towards the entities involved in the hot topics.
Motivation
As described above, #hashtags# in Chinese microblogs often indicate fine-grained topics. In this study, we aim to collectively extract the opinion targets of messages with the same hashtag, i.e., in the same topic. The opinion target of a sentence can be divided into two types: an explicit target appears in the sentence itself, such as in "I love Obama", while an implicit target does not appear in the sentence; for example, the first sentence in Table 1 directly comments on the target in the hashtag "#Property publicity of government officials#". Such implicit opinion targets are not considered in previous works and are more difficult to extract than explicit targets. However, we believe that contextual information will help to locate both kinds of opinion targets because similar sentences in a topic may share the same opinion target, which provides the possibility for collective extraction.
Based on the above observation, we can assume that similar sentences in a topic may have the same opinion targets. Such assumption can help to locate both explicit and implicit opinion targets. Following this idea, we firstly extract all the noun phrases in each sentence as opinion target candidates after applying Chinese word segmentation and part-of-speech tagging. Afterwards, an unsupervised label propagation algorithm is proposed to rank these candidates for all sentences in the topic.
In our methods, hashtags are used to find gold-standard topics. For messages without hashtags, an alternative way is to generate pseudo topics by clustering microblog messages and then apply the proposed algorithm to each pseudo topic. A detailed discussion of this general circumstance is given in Section 5.7.
Methodology
Context-Aware Hashtag Segmentation
In our approach, the Chinese word segmentations of hashtags and topic contents are treated separately. Existing Chinese word segmentation tools work poorly on microblog texts. The segmentation errors especially on opinion target words will directly influence the results of part-of-speech tagging and candidate extraction. However, some of the opinion target words in a topic are often included in the hashtag. By finding the correct segments of a hashtag and adding them to the user dictionary of the Chinese word segmentation tool, we can remarkably improve the overall segmentation performance.
The following example can help to understand the idea better. In the topic #90 后打老人# (means "A young man hits an old man"), "90 后" (literally "90 later", meaning a young man born in the 90s) is an important word because it is the opinion target of many sentences. However, existing Chinese word segmentation tools will regard it as two separate words "90" and "后" ("later"). Then in the part-of-speech tagging stage, "90" will be tagged as a number and "后" will be tagged as a localizer. As we only extract noun phrases as opinion target candidates, the wrong segmentation of "90 后" makes it impossible to find the right opinion target. Such an error may occur many times in sentences that mention the word "90 后" and express opinions on it. In our method, the message texts of the topic are utilized to identify such out-of-vocabulary words based on their frequency in the topic. For example, the high frequency of "90 后" is a strong indication that it should be regarded as a single word. After segmenting the hashtag correctly into "90 后/打/老人", we can add the hashtag segments to the user dictionary of the segmentation tool to further segment the message texts of the topic.
The basic idea of our hashtag segmentation algorithm is to regard strings that appear frequently in a topic as words. Formally, given a hashtag h that contains n Chinese characters c_1c_2...c_n, we want to segment it into several words w_1w_2...w_m, where each word is formed by one or more characters.
Firstly, we define the stickiness score for a Chinese string c_1c_2...c_n based on the Symmetrical Conditional Probability (SCP) (Silva and Lopes, 1999):

SCP(c_1c_2 \ldots c_n) = \frac{\Pr(c_1c_2 \ldots c_n)^2}{\frac{1}{n-1}\sum_{i=1}^{n-1} \Pr(c_1 \ldots c_i)\Pr(c_{i+1} \ldots c_n)}  (1)

with SCP(c_1) = \Pr(c_1) for a string with only one character. \Pr(c_1c_2 \ldots c_n) is the occurrence frequency of the string in the topic.
Following (Li et al., 2012b), we smooth the SCP value by taking its logarithm. Besides, the length of the string is taken into consideration:

\overline{SCP}(c_1c_2 \ldots c_n) = n \log SCP(c_1c_2 \ldots c_n)  (2)

where n is the number of characters in the string. Then the stickiness score is defined by the sigmoid function as follows:

Stickiness(c_1c_2 \ldots c_n) = \frac{2}{1 + e^{-\overline{SCP}(c_1c_2 \ldots c_n)}}  (3)

For the hashtag h = c_1c_2...c_n, we want to segment it into m words w_1w_2...w_m which maximize the following equation:

\max \sum_{i=1}^{m} Stickiness(w_i)  (4)
The optimization of Equation (4) can be solved efficiently by dynamic programming, which iteratively segments a string into two substrings. Different from (Li et al., 2012b), which calculates the SCP value of each string based on Microsoft Web N-Gram, our hashtag segmentation algorithm only uses the topic content and does not need any additional corpus.
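As an illustration of Equations (1)-(4), the following Python sketch implements the stickiness score and the dynamic program. It is a minimal sketch rather than the authors' implementation, and it assumes a caller-supplied, hypothetical function freq(s) that returns Pr(s), the occurrence frequency of string s in the topic content.

```python
import math
from functools import lru_cache

def scp(s, freq):
    """Symmetrical Conditional Probability of string s (Eq. 1).
    freq(x) is assumed to return Pr(x), the occurrence frequency
    of string x in the topic content."""
    n = len(s)
    if n == 1:
        return freq(s)
    denom = sum(freq(s[:i]) * freq(s[i:]) for i in range(1, n)) / (n - 1)
    return freq(s) ** 2 / denom if denom > 0 else 0.0

def stickiness(s, freq):
    """Length-weighted, log-smoothed SCP squashed by a sigmoid (Eqs. 2-3)."""
    val = scp(s, freq)
    if val <= 0:
        return 0.0
    smoothed = len(s) * math.log(val)          # Eq. 2
    return 2.0 / (1.0 + math.exp(-smoothed))   # Eq. 3

def segment(hashtag, freq):
    """Dynamic program maximizing total stickiness of the segments (Eq. 4)."""
    @lru_cache(maxsize=None)
    def best(s):
        # Either keep s as a single word, or split it into two substrings
        # and recurse on the right part.
        score, segs = stickiness(s, freq), (s,)
        for i in range(1, len(s)):
            left = stickiness(s[:i], freq)
            right_score, right_segs = best(s[i:])
            if left + right_score > score:
                score, segs = left + right_score, (s[:i],) + right_segs
        return score, segs
    return list(best(hashtag)[1])
```

Memoizing on suffixes keeps the recursion polynomial in the hashtag length, which is small anyway (hashtags rarely exceed a dozen characters).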
Candidate Extraction
After segmenting the hashtag, all the hashtag segments with length greater than one are added to the user dictionary of the Chinese word segmentation tool ICTCLAS (http://www.ictclas.org/) to further segment the message texts of the topic. It also assigns a part-of-speech tag to each word after segmentation. The noun phrases in each sentence are extracted by the following regular expression:

((noun|adj)(noun|adj|的)*)?noun

That means a noun phrase can only include nouns, adjectives and the Chinese word "的" ("of"). It should begin with a noun or adjective and end with a noun. For example, in the following sentence, "中国/n 的/u 教育/n 制度/n 有/v 问题/n 。/w" ("Chinese education system has problems."), "中国的教育制度" ("Chinese education system") and "问题" ("problem") are extracted as noun phrases.
The length of a noun phrase is limited to between two and seven Chinese characters. For each sentence, all phrases that match the regular expression and meet the length restriction are extracted as explicit opinion target candidates. The hashtag segments are regarded as implicit candidates for all sentences. Besides, some opinionated sentences in microblogs do not contain any noun phrase, such as "无聊至极!" ("So boring!"). These sentences may express an opinion on an object that has been mentioned before. Therefore, the explicit candidates of the previous sentence in the same message are also taken as the implicit candidates for such sentences.
We do not use any syntactic parsing tool to extract noun phrases because parsing results on microblogs are not reliable. A performance comparison of our rule-based method and a state-of-the-art syntactic parser will be shown in Section 5.
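To make the extraction rule concrete, here is a minimal Python sketch (not the paper's code) that applies the pattern to (word, POS) pairs. The coarse tag symbols and the simplified treatment of 的 are assumptions for illustration, since the real ICTCLAS tagset is finer-grained.

```python
import re

# Simplified POS symbols: N = noun, A = adjective, D = 的, O = other.
def _sym(word, pos):
    if word == '的':
        return 'D'
    if pos.startswith('n'):
        return 'N'
    if pos.startswith('a'):
        return 'A'
    return 'O'

def extract_candidates(tagged):
    """Extract noun-phrase candidates from a list of (word, pos) pairs:
    begin with a noun/adjective, allow noun/adjective/的 inside, end with
    a noun, and keep phrases of two to seven Chinese characters."""
    symbols = ''.join(_sym(w, p) for w, p in tagged)
    candidates = []
    for m in re.finditer(r'(?:[NA][NAD]*)?N', symbols):
        phrase = ''.join(w for w, _ in tagged[m.start():m.end()])
        if 2 <= len(phrase) <= 7:
            candidates.append(phrase)
    return candidates

print(extract_candidates([('中国', 'n'), ('的', 'u'), ('教育', 'n'),
                          ('制度', 'n'), ('有', 'v'), ('问题', 'n'), ('。', 'w')]))
# expected: ['中国的教育制度', '问题']
```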
Unsupervised Label Propagation for Candidate Ranking
We simply assume that each opinionated sentence has one opinion target, which is consistent with the statistical result of our dataset that over 93% of sentences have only one opinion target and each sentence has an average of 1.09 targets. Therefore, the most confident candidate of each sentence will be selected as the opinion target. In this section, we introduce an unsupervised graph-based label propagation algorithm to collectively rank the candidates of all sentences in a topic. Label propagation (Zhu and Ghahramani, 2002; Talukdar and Crammer, 2009) is a semi-supervised algorithm which spreads label distributions from a small set of nodes seeded with some initial label information throughout the graph. The basic idea is to use information from the labeled nodes to label the adjacent nodes in the graph. Our idea, however, is to use the connection between different nodes to find the correct labels for all of them. Our unsupervised label propagation algorithm is summarized in Algorithm 1. Sentences are regarded as nodes and candidates of each sentence are regarded as labels. The label vector for each node is initialized based on the results of the candidate extraction step, which means no manually-labeled instances are needed in our model. In each iteration, the label vector of one node is propagated to the adjacent nodes. Both the sentence (node) similarity and the candidate (label) similarity are considered during propagation. Finally, we select the candidate with the highest score in the label vector as the opinion target for each sentence. The details of Algorithm 1 are presented as follows.

[Algorithm 1: Unsupervised label propagation for candidate ranking. Core update per node v: D_v = \sum_{u \in V, u \neq v} W_{uv} \hat{Y}_u S F_v; Y_v = p_{inj} \hat{Y}_v + p_{cont} D_v; repeated over all nodes until convergence.]
Formally, an undirected graph G = (V, E, W) is built for each topic. A node v \in V represents a sentence in the topic and an edge e = (a, b) \in E indicates that the labels of the two vertices should be similar. W is the normalized weight matrix to reflect the strength of this similarity. The similarity between two nodes W_{ab} is simply calculated by using the cosine measure (Salton et al., 1975) of the two sentences:

W_{ab} = \cos(T_a, T_b) = \frac{T_a^T T_b}{\|T_a\| \|T_b\|}  (5)

where T_a and T_b are the term vectors of sentences a and b represented by the standard vector space model and weighted by term frequency. After calculating the similarity matrix W, we get the weight matrix \overline{W} by normalizing each row of W such that \sum_b \overline{W}_{ab} = 1.

For each sentence (node) v, a candidate set C_v is extracted in the previous step. The candidate set CT for the whole topic is the union of all C_v:

CT = \bigcup_{v \in V} C_v  (6)

The total number of candidates in the topic is denoted by M = |CT|. We calculate the candidate similarity matrix S \in R^{M \times M} based on the Jaccard index:

S_{ij} = \frac{|A(CT_i) \cap A(CT_j)|}{|A(CT_i) \cup A(CT_j)|}, \quad 1 \le i, j \le M  (7)
where A(CT_i) and A(CT_j) are the Chinese character sets of the i-th and j-th candidates in CT, respectively.
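A one-line Python sketch of this character-set Jaccard similarity (illustrative only):

```python
def jaccard(cand_i, cand_j):
    """S_ij of Eq. 7: Jaccard index over the candidates' character sets."""
    a, b = set(cand_i), set(cand_j)
    return len(a & b) / len(a | b) if a | b else 0.0
```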
Candidates are regarded as labels in our model and without loss of generality we assume that the possible labels for the whole topic are L = {1…M} and each label in L corresponds to a unique candidate in CT. For each node v \in V, a label vector Y_v \in R^{1 \times M} is initialized as

Y_v[k] = \begin{cases} w & L_k \in C_v \\ 0 & L_k \notin C_v \end{cases}, \quad 1 \le k \le M  (8)

where w is the initial weight of the candidate. We set w = w_e if L_k is an explicit candidate (extracted noun phrase) of v and w = w_i if L_k is an implicit candidate (hashtag segment or inherited from the previous sentence) of v. If L_k is not a candidate of the current sentence, then the corresponding value in the label vector is 0. These values which are initialized as zero should always remain zero during the propagation algorithm because the corresponding label does not belong to the candidate set C_v of node v. To reset the values at these positions, a diagonal matrix F_v \in R^{M \times M} is created for each node v:

(F_v)_{kk} = \begin{cases} 1 & Y_v[k] \neq 0 \\ 0 & Y_v[k] = 0 \end{cases}, \quad 1 \le k \le M  (9)

where the subscript kk denotes the k-th position in the diagonal of matrix F_v. We can right-multiply Y_v by F_v to clear the values of the invalid candidates. Figure 1 shows an example of creating the filtering matrix for a label vector.
The propagation process is formalized via two possible actions: inject and continue, with predefined probabilities p_{inj} and p_{cont}. Their sum is one: p_{inj} + p_{cont} = 1. In each iteration, every node is influenced by its adjacent nodes. The propagation influence for each node v is

D_v = \sum_{u \in V, u \neq v} W_{uv} \hat{Y}_u S F_v  (10)

where \hat{Y}_u is the label vector of node u at the previous iteration. By multiplying by the candidate similarity matrix S, we aim to propagate the score of the i-th candidate of node u not only to the i-th candidate of node v, but also to all the other candidates. W_{uv} measures the strength of such propagation. The filtering matrix F_v is used to clear the values of the invalid candidates as described above.
Then the label vector of node v is updated as follows:

Y_v = p_{inj} \hat{Y}_v + p_{cont} D_v  (11)
When the positions of the largest values in all label vectors remain unchanged for ten iterations, the algorithm is regarded as having converged.
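A compact NumPy sketch of the update in Equations (8)-(11) is given below. It assumes the row-normalized sentence similarity matrix W and the candidate similarity matrix S are precomputed, and it uses a simplified convergence test (one stable iteration of the top labels instead of ten). It is an illustration, not the authors' implementation.

```python
import numpy as np

def propagate(Y0, W, S, p_inj=0.5, p_cont=0.5, max_iter=100):
    """Unsupervised label propagation over sentence nodes.

    Y0 : (|V|, M) array, initial label vectors (Eq. 8), one row per sentence.
    W  : (|V|, |V|) row-normalized sentence similarity matrix (Eq. 5).
    S  : (M, M) candidate similarity matrix (Eq. 7).
    """
    V, M = Y0.shape
    F = (Y0 != 0).astype(float)       # row v holds the diagonal of F_v (Eq. 9)
    Y = Y0.astype(float).copy()
    prev_best = np.argmax(Y, axis=1)
    for _ in range(max_iter):
        Y_hat = Y.copy()
        spread = Y_hat @ S            # spread scores across similar candidates
        for v in range(V):
            weights = W[:, v].copy()
            weights[v] = 0.0          # a node is not influenced by itself
            D_v = (weights @ spread) * F[v]           # Eq. 10
            Y[v] = p_inj * Y_hat[v] + p_cont * D_v    # Eq. 11
        best = np.argmax(Y, axis=1)
        if np.array_equal(best, prev_best):           # simplified convergence
            break
        prev_best = best
    return np.argmax(Y, axis=1)       # index of the top candidate per sentence
```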
Experiments
Dataset
We use the dataset from the 2012 Chinese Microblog Sentiment Analysis Evaluation (CMSAE) held by the China Computer Federation (CCF); the dataset can be publicly accessed at http://tcci.ccf.org.cn/conference/2012/pages/page04_eva.html. There are three tasks in the evaluation: subjectivity classification, polarity classification and opinion target extraction. The dataset contains 20 topics collected from Tencent Weibo, a popular Chinese microblogging website. All the messages in a topic contain the same hashtag. The dataset has a total of 17,518 messages and 31,675 sentences. In each topic, 100 messages are manually annotated with subjectivity, polarity and opinion targets. A total of 2,361 opinion targets are annotated for 2,152 opinionated sentences.
Evaluation Metric
Precision, recall and F-measure are used in the evaluation. Since expression boundaries are hard to define exactly in annotation guidelines (Wiebe et al., 2005), both the strict evaluation metric and the soft evaluation metric are used in CMSAE.
Strict Evaluation: A proposed opinion target is regarded as correct only if it covers the same span as the annotation result. Note that, in CMSAE, an opinion target should be proposed along with its polarity. The correctness of the polarity is also necessary.
Soft Evaluation: The soft evaluation metric presented in (Johansson and Moschitti, 2010) is adopted by CMSAE. The span coverage c between each pair of the proposed target span s and the gold standard span s' is calculated as follows,
c(s, s') = \frac{|s \cap s'|}{|s'|}  (12)

In Equation 12, the operator |·| counts Chinese characters, and the intersection ∩ gives the set of characters that two spans have in common.
Using the span coverage, the span set coverage C of a set of spans S with respect to another set S' is

C(S, S') = \sum_{s \in S} \sum_{s' \in S'} c(s, s')  (13)
The soft precision P and recall R of a proposed set of spans Ŝ with respect to a gold standard set S are defined as follows:

Precision(Ŝ, S) = \frac{C(S, Ŝ)}{|Ŝ|}, \quad Recall(Ŝ, S) = \frac{C(Ŝ, S)}{|S|}  (14)

Note that the operator |·| counts spans in Equation 14. The soft F-measure is the harmonic mean of soft precision and recall.
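The soft metric of Equations (12)-(14) can be sketched in Python as follows. Representing each span by its set of characters is a simplifying assumption (tracking character positions would be more faithful for spans with repeated characters).

```python
def span_coverage(s, gold):
    """c(s, s'): how much of span s' is covered by span s (Eq. 12)."""
    common = set(s) & set(gold)      # characters the two spans share
    return len(common) / len(gold) if gold else 0.0

def span_set_coverage(spans, other_spans):
    """C(S, S') = sum of c(s, s') over all span pairs (Eq. 13)."""
    return sum(span_coverage(s, sp) for s in spans for sp in other_spans)

def soft_prf(proposed, gold):
    """Soft precision, recall and F-measure of Eq. 14."""
    p = span_set_coverage(gold, proposed) / len(proposed) if proposed else 0.0
    r = span_set_coverage(proposed, gold) / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(soft_prf(proposed=['教育制度'], gold=['中国的教育制度']))
# the proposed span is fully inside the gold span, so precision is 1.0
# while recall is 4/7
```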
Comparison Methods
Our proposed approach is first compared with the CMSAE teams.

CMSAE Teams: Sixteen teams participated in the opinion target extraction task of CMSAE. The methods of the top 3 teams are used as baselines here. They are denoted as Team-1, Team-2 and Team-3 respectively. The average result of all sixteen teams is also included and is denoted as Team-Avg. We will briefly introduce the best team's method. The most important component of their model is a topic-dependent opinion target lexicon called an object sheet. If a word or phrase in the object sheet appears in a sentence or a hashtag, it is extracted as an opinion target. The object sheet is manually built for each topic, which means their method cannot be applied to new topics.
The following models are also used for comparison.
AssocMi: We implement the unsupervised method for opinion target extraction based on (Hu and Liu, 2004), which relies on association mining and a sentiment lexicon to extract frequent and infrequent product features.
CRF: The CRF-based method used in (Jakob and Gurevych, 2010) is also used for comparison. We implement both the single-domain and cross-domain models. Both models are evaluated using 5-fold cross-validation. More specifically, the single-domain model, denoted as CRF-S, trains different models for different topics. In each cross-validation round, 80 percent of each topic is used for training and the other 20 percent is used for testing. The cross-domain model, denoted as CRF-C, uses 16 topics for training and the remaining 4 topics for testing in each round.
Comparison Results
CMSAE requires all the teams to perform the subjectivity and polarity classification task in advance.
The opinion targets are extracted only for opinionated sentences and should be proposed along with their polarity. To make a fair comparison, we directly use the subjectivity and polarity classification results of Team-1. Then our unsupervised label propagation (ULP) method is used to extract the opinion targets for the proposed opinionated sentences. The parameters of our method are simply set as p_{inj} = p_{cont} = 0.5, w_e = 1 and w_i = 0.5. Table 2 lists the comparison results with the CMSAE teams. The average F-measure of all teams is 0.12 and 0.20 in strict and soft evaluation, respectively. It shows that opinion target extraction is a quite hard problem in Chinese microblogs. Our method performs better than all the teams. It increases by 10% and 13% in the two kinds of F-measure compared to the best team. Besides, we do not need any prior information about the topics, while Team-1 has to manually build an opinion target lexicon for each topic.
To compare with the other opinion target extraction methods, we only use gold-standard opinionated sentences for evaluation and do not classify the polarity of the opinion targets. Table 3 shows the experimental results of the four models. Our approach achieves the best result among them. AssocMi performs worst in strict evaluation but gets better results than the two CRF-based models in soft evaluation. The two CRF-based models achieve high precision but low recall. We can also observe that CRF-S is much more effective than CRF-C. It achieves high results because it has already seen the opinion targets in the training set. However, it is impossible to build such a single-domain model in practical applications because labeled instances are not available for new topics. Our proposed method does not require any training data and gets an increase of 17% over CRF-S and 70% over CRF-C in strict evaluation. In terms of soft evaluation, we achieve an increase of 41% and 107% over the two CRF models.
Parameter Sensitivity Study
In this section, we study the parameter sensitivity. There are two major parameters in our algorithm: the initial weight w for both explicit and implicit candidates in Equation 8 and the injection probability p_{inj} in Equation 11.
The initial weights of explicit and implicit candidates are set differently because the explicit candidates are more likely to be the opinion targets. These two kinds of initial weights are denoted as w_e and w_i for explicit and implicit candidates, respectively. To study the impact of the initial weights, we fix w_e at 1 and tune w_i, because we only care about their relative contribution. The injection probability is fixed at 0.5. Figure 2(a) displays the opinion target extraction performance when w_i varies from 0 to 1.5. Due to limited space, we only list the strict F-measure of opinion target extraction evaluated on opinionated sentences (same experimental setup as Table 3).

In particular, when w_i is equal to 0, only explicit candidates are considered. When w_i becomes larger than 1, the implicit candidates become more important than explicit candidates. From the curve in Figure 2(a), we can observe that the implicit candidates help to improve the performance significantly when w_i varies from 0 to 0.1. The performance reaches its peak when w_i = 0.7 and declines rapidly when w_i gets larger than 1.

To study the impact of the injection probability p_{inj}, we fix the initial weights for explicit and implicit candidates at 1 and 0.5, respectively. Figure 2(b) shows the results of opinion target extraction with respect to different values of the injection probability. We can observe that the performance keeps steady except for the two extreme values 0 and 1. From the above two figures, we can conclude that our proposed method performs well and robustly over a wide range of parameter values.

[Figure 2. Influence of the parameters]
Analysis of Candidate Extraction
Candidate extraction is an important step in our proposed method. If the correct opinion target is not extracted as a candidate, the ranking step will be in vain. As described in Section 3, we develop a hashtag segmentation algorithm and use a rule based method to extract noun phrases from each sentence. We do not use any parsing tool because we believe the performance of these tools is not good enough when applied on microblogs. A quantitative comparison is shown in this section.
We use one of the state-of-the-art syntactic analysis tools, Berkeley Parser (Petrov et al., 2006), for comparison here. Noun phrases are directly extracted from the parsing results. Our method HS+Rule leverages the hashtag segments to enhance the segmentation result and extracts explicit candidates using a regular expression. To demonstrate the effectiveness of our hashtag segmentation algorithm, the second comparison baseline Rule directly uses ICTCLAS to segment the whole topic content and labels each word with its part-of-speech tag. The explicit candidates are extracted using the same regular expression.
The performance on candidate extraction is compared in Table 4. The second column shows the number of all extracted candidates for all the opinionated sentences by different methods. The third column shows the number of correct opinion targets among them. We can find that the two rule-based models both outperform Berkeley Parser, and our HS+Rule method finds 14% more correct opinion targets than Rule. It proves the effectiveness of our hashtag segmentation algorithm. The total number of candidates extracted by HS+Rule is also lower than that of the other two methods. Therefore, the performance of label propagation will be improved when there are fewer candidates to rank. It can be demonstrated by the F-measure of opinion target extraction in the fourth and fifth columns. The experiments are conducted on opinionated sentences only, as above. By using HS+Rule to extract candidates, our label propagation algorithm gets the highest F-measure in both evaluation metrics.
Performance on Pseudo Topics by Message Clustering
In our collective extraction algorithm, topics are directly identified by hashtags. For messages without hashtags, we can first employ clustering algorithms to obtain pseudo topics (clusters) and then exploit the topic-oriented algorithm for collective opinion target extraction. To test the performance of the proposed method in such circumstances, we use the popular clustering algorithm Affinity Propagation (Frey and Dueck, 2007) to generate topics. The experimental results are shown in Table 5. APCluster means that the messages are clustered after removing all the hashtags. APCluster+HS means that all the hashtags are retained as normal texts for calculating message similarity; therefore, the clustering performance can be largely improved. The standard cosine similarity is used to measure the distance between microblog messages for Affinity Propagation in the above two methods. The last method, denoted as GoldCluster, directly uses hashtags to identify the gold-standard topics, which shows the upper bound of the performance. After clustering microblogs, the opinion targets of messages in each cluster are collectively extracted by the proposed unsupervised label propagation algorithm. The experiments are conducted on opinionated sentences only. From the results, we can see that clustering microblogs without hashtags is a quite difficult job, which only gets an F-measure of 0.27. However, the corresponding opinion target extraction performance is still promising, outperforming the AssocMi and CRF-C methods in Table 3. With the help of hashtags, the clustering performance of APCluster+HS is largely improved and the opinion target extraction performance is also increased.
It outperforms all the baseline methods in Table 3. The above results reveal that our proposed unsupervised label propagation algorithm works well in pseudo topics and the performance can be increased with better clustering results. Therefore, we can try to incorporate other social network information to improve the message clustering performance, which will be studied in our future work.
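For the pseudo-topic setting, a hedged sketch using scikit-learn's AffinityPropagation is shown below. The character n-gram term vectors are an assumption made to avoid depending on a Chinese segmenter; they are not the representation used in the paper.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pseudo_topics(messages):
    """Cluster microblog messages into pseudo topics with Affinity
    Propagation, using cosine similarity between term-frequency vectors."""
    # Character n-grams avoid the need for Chinese word segmentation here.
    X = CountVectorizer(analyzer='char', ngram_range=(1, 2)).fit_transform(messages)
    sim = cosine_similarity(X)    # AP accepts a precomputed similarity matrix
    labels = AffinityPropagation(affinity='precomputed',
                                 random_state=0).fit_predict(sim)
    clusters = {}
    for msg, lab in zip(messages, labels):
        clusters.setdefault(lab, []).append(msg)
    return list(clusters.values())
```

Each returned cluster can then be fed to the label propagation algorithm exactly as a hashtag-defined topic would be.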
Related Work
Sentiment analysis, a.k.a. opinion mining, is the field of studying and analyzing people's opinions, sentiments, evaluations, appraisals, attitudes, and emotions (Liu, 2012). Most previous sentiment analysis research focuses on customer reviews (Pang et al., 2002; Hu and Liu, 2004), and some of it focuses on news (Kim and Hovy, 2006) and blogs (Dray et al., 2009). However, sentiment analysis on microblogs has recently attracted much attention and has been proved to be very useful in many applications.
Classification of opinion polarity is the most common task studied in microblogs. Go et al. (2009) follow the supervised machine learning approach of Pang et al. (2002) to classify the polarity of each tweet by distant supervision. The training dataset of their method is not manually labeled but automatically collected using emoticons. Barbosa and Feng (2010) use similar pseudo training data collected from three online websites which provide Twitter sentiment analysis services. Speriosu et al. (2011) explore the possibility of exploiting the Twitter follower graph to improve polarity classification.
Opinion target extraction is a fine-grained word-level task of sentiment analysis. Currently, this task has not been well studied in microblogs yet. It is mostly performed on product reviews, where opinion targets are usually described as product features or aspects. The pioneering research on this task was conducted by Hu and Liu (2004), who propose a method which extracts frequent nouns and noun phrases as the opinion targets. Jakob and Gurevych (2010) model the problem as a sequence labeling task based on Conditional Random Fields (CRF). Qiu et al. (2011) propose a double propagation method to extract opinion words and opinion targets simultaneously. Liu et al. (2012) use the word translation model in a monolingual scenario to mine the associations between opinion targets and opinion words.
Conclusion and Future Work
In this paper, we study the problem of opinion target extraction in Chinese microblogs which has not been well investigated yet. We propose an unsupervised label propagation algorithm to collectively rank the opinion target candidates of all sentences in a topic. We also propose a dynamic programming based algorithm for segmenting Chinese hashtags. Experimental results show the effectiveness of our method.
In future work, we will try to collect and annotate data for microblogs in other languages to test the robustness of our method. The repost and reply messages can also be integrated into our graph model to help improve the results.
[Figure 1. Example of filtering matrix]
Table 3. Comparison results with baseline methods (only gold-standard opinionated sentences are used)

Method   | Strict                         | Soft
         | Precision  Recall  F-measure   | Precision  Recall  F-measure
AssocMi  | 0.22       0.20    0.21        | 0.47       0.43    0.45
CRF-C    | 0.59       0.15    0.24        | 0.70       0.18    0.28
CRF-S    | 0.61       0.27    0.35        | 0.73       0.31    0.41
ULP      | 0.43       0.39    0.41        | 0.61       0.55    0.58

Table 2. Comparison results with CMSAE teams (with subjectivity and polarity classification in advance)

Method   | Strict                         | Soft
         | Precision  Recall  F-measure   | Precision  Recall  F-measure
Team-Avg | 0.17       0.09    0.12        | 0.29       0.15    0.20
Team-3   | 0.26       0.16    0.20        | 0.40       0.25    0.31
Team-2   | 0.31       0.18    0.23        | 0.40       0.22    0.29
Team-1   | 0.30       0.27    0.29        | 0.39       0.36    0.37
ULP      | 0.37       0.27    0.32        | 0.48       0.37    0.42
Acknowledgments
The work was supported by NSFC (61170166), Beijing Nova Program (2008B03) and National High-Tech R&D Program (2012AA011101).
Luciano Barbosa and Junlan Feng. 2010. Robust sentiment detection on Twitter from biased and noisy data. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters. Association for Computational Linguistics.
Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1-8.
Gérard Dray, Michel Plantié, Ali Harb, Pascal Poncelet, Mathieu Roche, and François Trousset. 2009. Opinion mining from blogs. International Journal of Computer Information Systems and Industrial Management Applications.
Brendan J. Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science, 315(5814):972-976.
Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford, pages 1-12.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168-177. ACM.
Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent Twitter sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 151-160.
Richard Johansson and Alessandro Moschitti. 2010. Syntactic and semantic structure for opinion expression detection. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics.
Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders and topics expressed in online news media text. In Proceedings of the ACL Workshop on Sentiment and Subjectivity in Text, pages 1-8.
Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xiaoyan Zhu. 2012a. Cross-domain co-extraction of sentiment and topic lexicons. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 410-419, Jeju, Republic of Korea.
Chenliang Li, Jianshu Weng, Qi He, Yuxia Yao, Anwitaman Datta, Aixin Sun, and Bu-Sung Lee. 2012b. TwiNER: Named entity recognition in targeted Twitter stream. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 721-730. ACM.
Xiaohua Liu, Kuan Li, Ming Zhou, and Zhongyang Xiong. 2011. Collective semantic role labeling for tweets with clustering. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, volume 3, pages 1832-1837. AAAI Press.
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1-167.
Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, volume 10, pages 79-86. Association for Computational Linguistics.
Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440.
Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9-27.
G. Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620.
J. F. da Silva and G. P. Lopes. 1999. A local maxima method and a fair dispersion normalization for extracting multi-word units from corpora. In Proceedings of the 6th Meeting on Mathematics of Language.
Michael Speriosu, Nikita Sudan, Sid Upadhyay, and Jason Baldridge. 2011. Twitter polarity classification with label propagation over lexical links and the follower graph. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 53-63. Association for Computational Linguistics.
Partha Talukdar and Koby Crammer. 2009. New regularized algorithms for transductive learning. In Machine Learning and Knowledge Discovery in Databases, pages 442-457.
Jie Tang, Yuan Zhang, Jimeng Sun, Jinhai Rao, Wenjing Yu, Yiran Chen, and A. C. M. Fong. 2012. Quantitative study of individual emotional states in social networks. IEEE Transactions on Affective Computing, 3(2):132-144.
Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sandner, and Isabell M. Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about political sentiment. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media, pages 178-185.
X. Zhu and Z. Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical report, CMU CALD.
Li Zhuang, Feng Jing, and Xiaoyan Zhu. 2006. Movie review mining and summarization. In Proceedings of the ACM 15th Conference on Information and Knowledge Management, pages 43-50, Arlington, Virginia, USA.
491,439 | AUEB: Two Stage Sentiment Analysis of Social Network Messages | This paper describes the system submitted for the Sentiment Analysis in Twitter Task of SEMEVAL 2014 and specifically the Message Polarity Classification subtask. We used a 2-stage pipeline approach employing a linear SVM classifier at each stage and several features including morphological features, POS tags based features and lexicon based features. | [
388,
13886408,
12979818,
13845267,
62442869,
15720214
] | AUEB: Two Stage Sentiment Analysis of Social Network Messages
SemEval 2014. August 23-24, 2014
Rafael Michael Karampatsis
Department of Informatics
Athens University of Economics and Business
Patission 76, GR-104 34, Athens, Greece
John Pavlopoulos
Department of Informatics
Athens University of Economics and Business
Patission 76, GR-104 34, Athens, Greece
Prodromos Malakasiotis
Department of Informatics
Athens University of Economics and Business
Patission 76, GR-104 34, Athens, Greece
AUEB: Two Stage Sentiment Analysis of Social Network Messages
Proceedings of the 8th International Workshop on Semantic Evaluation
the 8th International Workshop on Semantic Evaluation, Dublin, Ireland. SemEval 2014, August 23-24, 2014.
This paper describes the system submitted for the Sentiment Analysis in Twitter Task of SEMEVAL 2014 and specifically the Message Polarity Classification subtask. We used a 2-stage pipeline approach employing a linear SVM classifier at each stage and several features including morphological features, POS tags based features and lexicon based features.
Introduction
Twitter has recently gained significant popularity among social network services, and many users use it to express feelings or opinions about a variety of subjects. Analysing this kind of content can yield useful information for fields such as personalized marketing or social profiling. However, the task is not trivial, because the language used on Twitter is often informal and poses new challenges to text analysis.
In this paper we focus on sentiment analysis, the field of study that analyzes people's sentiments and opinions from written language (Liu, 2012). Given some text (e.g., a tweet), sentiment analysis systems return a sentiment label, which most often is positive, negative, or neutral. This classification can be performed directly or in two stages; in the first stage the system examines whether the text carries sentiment, and in the second stage the system decides the sentiment's polarity (i.e., positive or negative).1 This decomposition is based on the assumption that subjectivity detection and sentiment polarity detection are different problems.

1 For instance, a 2-stage approach is better suited to systems that focus on subjectivity detection, e.g., aspect-based sentiment analysis systems which extract aspect terms only from evaluative texts.
We choose to follow the 2-stage approach because it allows us to focus on each of the two problems separately (e.g., features, tuning, etc.). In the following we describe the system with which we participated in the Message Polarity Classification subtask of Sentiment Analysis in Twitter (Task 9) of SEMEVAL 2014 (Rosenthal et al., 2014). Specifically, Section 2 describes the data provided by the organizers of the task, Sections 3 and 4 present our system and its performance respectively, and Section 5 concludes and provides hints for future work.
Data
First, we describe the data used for this year's task. For system tuning, the organizers released the training and development data of SEMEVAL 2013 Task 2 (Wilson et al., 2013). Both of these sets are allowed to be used for training. The organizers also provided the test data of the same task, to be used for development only. As argued in (Malakasiotis et al., 2013), these data suffer from class imbalance. Concerning the test data, they contained 8987 messages, broken down into the following 5 datasets:
-LJ14: 2000 sentences from LIVEJOURNAL.
-SMS13: SMS test data from last year.
-TW13: Twitter test data from last year.
-TW14: 2000 new tweets.
-TWSARC14: 100 tweets containing sarcasm.
The details of the test data were made available to the participants only after the end of the task. Recall that SMS13 and TW13 were also provided as development data. In this way the organizers were able to check (i) the progress of the systems since last year's task, and (ii) the generalization capability of the participating systems.
System Overview
The main objective of our system is to detect whether a message M expresses positive, negative or no sentiment. To achieve this we follow a 2-stage approach. During the first stage we detect whether M expresses sentiment ("subjective") or not; this process is called subjectivity detection. In the second stage we classify the "subjective" messages of the first stage as "positive" or "negative". Both stages utilize a Support Vector Machine (SVM) (Vapnik, 1998) classifier with a linear kernel.2 Similar approaches have also been proposed in (Pang and Lee, 2004; Wilson et al., 2005; Barbosa and Feng, 2010; Malakasiotis et al., 2013). Finally, we note that the 2-stage approach, in datasets such as the one used here (Malakasiotis et al., 2013), alleviates the class imbalance problem.
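A minimal sketch of this pipeline is shown below, assuming scikit-learn's LinearSVC (a wrapper of the LIBLINEAR library used in the paper) and a hypothetical extract_features helper standing in for the feature extraction of Section 3.3:

# Sketch of the 2-stage pipeline (subjectivity detection, then polarity).
# extract_features is a hypothetical helper returning a feature vector.
from sklearn.svm import LinearSVC

def train_two_stage(messages, labels):
    X = [extract_features(m) for m in messages]
    # Stage 1: subjective (positive or negative) vs. neutral.
    y1 = ["subjective" if l in ("positive", "negative") else "neutral"
          for l in labels]
    stage1 = LinearSVC().fit(X, y1)
    # Stage 2: polarity, trained on the subjective messages only.
    pairs = [(x, l) for x, l in zip(X, labels)
             if l in ("positive", "negative")]
    X2, y2 = zip(*pairs)
    stage2 = LinearSVC().fit(list(X2), list(y2))
    return stage1, stage2

def predict_two_stage(stage1, stage2, message):
    x = [extract_features(message)]
    if stage1.predict(x)[0] == "neutral":
        return "neutral"
    return stage2.predict(x)[0]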
Data preprocessing
An essential part of our system is data preprocessing. First, each message M is passed through a Twitter-specific tokenizer and part-of-speech (POS) tagger (Owoputi et al., 2013) to obtain the tokens and the corresponding POS tags, which are necessary for some sets of features.3 We then use a dictionary to replace any slang with the actual text.4 We also normalize the text of each message by combining a trie data structure (De La Briandais, 1959) with an English dictionary.5 In more detail, we replace every token of M not in the dictionary with the most similar word of the dictionary. Finally, we obtain POS tags of all the new tokens.
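The sketch below illustrates the normalization step under simplifying assumptions: instead of the trie used in the paper, it scans a tiny placeholder word list with difflib's string similarity to find the "most similar" dictionary word:

# Sketch of token normalization: out-of-dictionary tokens are replaced
# by the most similar dictionary word. DICTIONARY is a tiny placeholder
# for the OPENOFFICE dictionary mentioned in footnote 5.
import difflib

DICTIONARY = {"good", "bad", "happy", "sad"}

def normalize_token(token):
    if token in DICTIONARY:
        return token
    matches = difflib.get_close_matches(token, list(DICTIONARY),
                                        n=1, cutoff=0.8)
    return matches[0] if matches else token  # keep token if nothing close

def normalize_message(tokens):
    return [normalize_token(t) for t in tokens]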
Sentiment lexicons
Another key attribute of our system is the use of sentiment lexicons. We have used the following:
-HL (Hu and Liu, 2004).
-SENTIWORDNET (Baccianella et al., 2010).
-SENTIWORDNET lexicon with POS tags (Baccianella et al., 2010).
-AFINN (Nielsen, 2011).
-MPQA (Wilson et al., 2005).
-NRC Emotion lexicon (Mohammad and Turney, 2013).
-NRC S140 lexicon.
-NRC Hashtag lexicon.
-The three lexicons created from the training data in (Malakasiotis et al., 2013).

2 We used the LIBLINEAR distribution (Fan et al., 2008).
3 Tokens could be words, emoticons, hashtags, etc. No lemmatization or stemming has been applied.
4 See http://www.noslang.com/dictionary/.
5 We used the OPENOFFICE dictionary.
Note that concerning the MPQA lexicon we applied preprocessing similar to Malakasiotis et al. (2013) to obtain the following sub-lexicons:

S+: Contains strong subjective expressions with positive prior polarity.
S−: Contains strong subjective expressions with negative prior polarity.
S±: Contains strong subjective expressions with either positive or negative prior polarity.
S0: Contains strong subjective expressions with neutral prior polarity.
W+: Contains weak subjective expressions with positive prior polarity.
W−: Contains weak subjective expressions with negative prior polarity.
W±: Contains weak subjective expressions with either positive or negative prior polarity.
W0: Contains weak subjective expressions with neutral prior polarity.
Feature engineering
Our system employs several types of features based on morphological attributes of the messages, POS tags, and the lexicons of Section 3.2.6

6 All the features are normalized to [−1, 1].
Morphological features
-The existence of elongated tokens (e.g., "baaad").
-The number of elongated tokens.
-The existence of date references.
-The existence of time references.
-The number of tokens that contain only upper case letters.
-The number of tokens that contain both upper and lower case letters.
-The number of tokens that start with an upper case letter.
-The number of exclamation marks.
-The number of question marks.
-The sum of exclamation and question marks.
-The number of tokens containing only exclamation marks.
-The number of tokens containing only question marks.
-The number of tokens containing only exclamation or question marks.
-The number of tokens containing only ellipsis (...).
-The existence of a subjective (i.e., positive or negative) emoticon at the message's end.
-The existence of an ellipsis and a link at the message's end.
-The existence of an exclamation mark at the message's end.
-The existence of a question mark at the message's end.
-The existence of a question or an exclamation mark at the message's end.
-The existence of slang.

POS based features

-The number of nouns.
-The number of proper nouns.
-The number of urls.
-The average, maximum and minimum F1 scores of the message's POS bigrams for the subjective and the neutral classes.10
-The average, maximum and minimum F1 scores of the message's POS bigrams for the positive and the negative classes.11

For a bigram b and a class c, F1 is calculated as:

F1(b, c) = 2 · Pre(b, c) · Rec(b, c) / (Pre(b, c) + Rec(b, c))    (1)

where:

Pre(b, c) = (# of messages of c containing b) / (# of messages containing b)    (2)

Rec(b, c) = (# of messages of c containing b) / (# of messages of c)    (3)
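To make equations (1)-(3) concrete, here is a small sketch with illustrative data structures (a message is represented by its set of POS bigrams and its class label):

# Sketch of the per-class bigram scores of equations (1)-(3).
# messages: list of (set_of_pos_bigrams, class_label) pairs (illustrative).
def bigram_f1(b, c, messages):
    containing_b = sum(1 for bigrams, _ in messages if b in bigrams)
    of_c = sum(1 for _, label in messages if label == c)
    of_c_with_b = sum(1 for bigrams, label in messages
                      if label == c and b in bigrams)
    if containing_b == 0 or of_c == 0:
        return 0.0
    pre = of_c_with_b / containing_b   # equation (2)
    rec = of_c_with_b / of_c           # equation (3)
    if pre + rec == 0:
        return 0.0
    return 2 * pre * rec / (pre + rec)  # equation (1)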
Sentiment lexicon based features
For each lexicon we use seven different features based on the scores provided by the lexicon for each word present in the message:12

-Sum of scores.
-Maximum of scores.
-Minimum of scores.
-Average of scores.
-The count of words with scores.
-The score of the last word of the message that appears in the lexicon.
-The score of the last word of the message.
We also created features based on the Pre and F1 scores of MPQA and the lexicons generated from the training data, in a manner similar to that described in (Malakasiotis et al., 2013), with the difference that the features are stage-dependent. Thus, for subjectivity detection we use the subjective and neutral classes, and for polarity detection we use the positive and negative classes to compute the scores.
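A sketch of the seven per-lexicon features, assuming a lexicon given as a word-to-score dictionary (the handling of missing words follows footnote 12):

# Sketch of the seven lexicon-based features. Words absent from the
# lexicon score 0 and are skipped in the sum/max/min/average/count.
def lexicon_features(tokens, lexicon):
    scores = [lexicon[t] for t in tokens if t in lexicon]
    last_in_lexicon = next((lexicon[t] for t in reversed(tokens)
                            if t in lexicon), 0.0)
    last_word = lexicon.get(tokens[-1], 0.0) if tokens else 0.0
    if not scores:
        return [0.0, 0.0, 0.0, 0.0, 0, last_in_lexicon, last_word]
    return [sum(scores), max(scores), min(scores),
            sum(scores) / len(scores), len(scores),
            last_in_lexicon, last_word]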
Miscellaneous features
Negation. Negation not only is a good subjectivity indicator but it also may change the polarity of a message. We therefore add 7 more features: one indicating the existence of negation, and the remaining six indicating the existence of negation that precedes words from the lexicons S±, S+, S−, W±, W+ and W−.13 Each feature is used in the appropriate stage.14 We have not implemented this type of feature for other lexicons, but it might be a good addition to the system.
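The sketch below implements these flags under the footnote-13 assumption that a negation "precedes" a word if it occurs within the 5 preceding tokens; NEGATIONS and the sub-lexicons are illustrative placeholders:

# Sketch of the negation features: a global negation flag plus one flag
# per sub-lexicon for a negation within the 5 tokens preceding a hit.
NEGATIONS = {"not", "no", "never", "n't"}

def negation_features(tokens, sublexicons):
    neg_positions = [i for i, t in enumerate(tokens) if t in NEGATIONS]
    feats = {"has_negation": int(bool(neg_positions))}
    for name, lex in sublexicons.items():  # e.g. {"S+": {...}, "S-": {...}}
        hit = any(t in lex and any(0 < j - i <= 5 for i in neg_positions)
                  for j, t in enumerate(tokens))
        feats["neg_before_" + name] = int(hit)
    return feats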
Carnegie Mellon University's Twitter clusters. Owoputi et al. (2013) released a dataset of 938 clusters containing words coming from tweets. Words of the same cluster share similar attributes. We try to exploit this observation by adding 938 features, each of which indicates whether a token of the message appears in the corresponding cluster.
Feature Selection
To allow our model to better scale to unseen data we performed feature selection. More specifically, we first merged the training and development data of SEMEVAL 2013 Task 2. Then, we ranked the features with respect to their information gain (Quinlan, 1986) on this dataset. To obtain the best set of features we started with a set containing the top 50 features and kept adding batches of 50 features until all of them had been added. At each step we evaluated the corresponding feature set on the TW13 and SMS13 datasets and chose the feature set with the best performance. This resulted in a system which used the top 900 features for Stage 1 and the top 1150 features for Stage 2.

13 We use a list of words with negation. We assume that a token precedes a word if it is in a distance of at most 5 tokens.
14 The features concerning S± and W± are used in subjectivity detection and the remaining four in polarity detection.
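A sketch of this greedy selection loop, where ranking by information gain is assumed to have happened already and evaluate is a hypothetical helper returning the development score (e.g., average F1(±)) of a feature subset:

# Sketch of the batch-wise feature selection: grow the ranked feature
# list in steps of 50 and keep the prefix that scores best on the
# development datasets (TW13 and SMS13).
def select_features(ranked_features, dev_data, step=50):
    best_k, best_score = step, float("-inf")
    for k in range(step, len(ranked_features) + 1, step):
        score = evaluate(ranked_features[:k], dev_data)  # hypothetical
        if score > best_score:
            best_k, best_score = k, score
    return ranked_features[:best_k]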
Experimental Results
The official measure of the task is the average F1 score of the positive and negative classes (F1(±)). Table 1 illustrates the F1(±) score per evaluation dataset achieved by our system, along with the median and best F1(±). In the same table, AVGall corresponds to the average F1(±) across the five datasets, while AVG14 corresponds to the average F1(±) across LJ14, TW14 and TWSARC14. We observe that in all cases our results are above the median. Table 2 illustrates the ranking of our system according to F1(±). Our system ranked 6th according to AVGall and 5th according to AVG14 among the 50 participating systems. Note that our best results were achieved on the new test sets (LJ14, TW14, TWSARC14), meaning that our system has a good generalization ability.
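The sketch below computes this measure with scikit-learn, assuming gold and predicted labels are strings among "positive", "negative" and "neutral":

# Sketch of the official measure F1(+/-): the average of the F1 scores
# of the positive and negative classes (neutral predictions still count
# as errors, but neutral F1 is not averaged in).
from sklearn.metrics import f1_score

def f1_pos_neg(y_true, y_pred):
    scores = f1_score(y_true, y_pred,
                      labels=["positive", "negative"], average=None)
    return scores.mean()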
Conclusion and future work
In this paper we presented our approach for the Message Polarity Classification subtask of the Sentiment Analysis in Twitter task of SEMEVAL 2014. We proposed a 2-stage pipeline approach, which first detects sentiment and then decides about its polarity. The results indicate that our system handles the class imbalance problem well and has a good generalization ability. A possible explanation is that we do not use bag-of-words features, which often suffer from over-fitting. Nevertheless, there is still some room for improvement. A promising direction would be to improve the first stage (subjectivity detection), either by adding more data or by adding more features, mostly because the performance of stage 1 greatly affects that of stage 2. Finally, the addition of more data for the negative class in stage 2 might be a good improvement, because it would further reduce the class imbalance of the training data for this stage.
Table 1: F1(±) scores per dataset.

Test Set    AUEB    Median  Best
LJ14        70.75   65.48   74.84
SMS13       64.32   57.53   70.28
TW13        63.92   62.88   72.12
TW14        66.38   63.03   70.96
TWSARC14    56.16   45.77   58.16
AVGall      64.31   56.56   68.78
AVG14       64.43   57.97   67.62
Test Set    Ranking
LJ14        9/50
SMS13       8/50
TW13        21/50
TW14        14/50
TWSARC14    4/50
AVGall      6/50
AVG14       5/50

Table 2: Rankings of our system.
7 This feature is used only for subjectivity detection.
8 This feature is used only for polarity detection.
9 This feature is used only for polarity detection.
10 This feature is used only for subjectivity detection.
11 This feature is used only for polarity detection.
12 If a word does not appear in the lexicon, it is assigned a score of 0 and is not considered in the calculation of the average, maximum, minimum and count scores. Also, we have removed from SENTIWORDNET any instances having positive and negative scores that sum to zero. Moreover, the MPQA lexicon does not provide scores, so for each word in the lexicon we assume a score equal to 1.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta, May.
Luciano Barbosa and Junlan Feng. 2010. Robust sentiment detection on Twitter from biased and noisy data. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 36-44, Beijing, China.
Rene De La Briandais. 1959. File searching using variable length keys. In Papers Presented at the March 3-5, 1959, Western Joint Computer Conference, IRE-AIEE-ACM '59 (Western), pages 295-298, New York, NY, USA.
Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA.
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1-167.
Prodromos Malakasiotis, Rafael Michael Karampatsis, Konstantina Makrynioti, and John Pavlopoulos. 2013. nlp.cs.aueb.gr: Two stage sentiment analysis. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 562-567, Atlanta, Georgia, June.
Saif Mohammad and Peter Turney. 2013. Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3):436-465.
Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), Atlanta, Georgia, USA, June.
Finn Årup Nielsen. 2011. A new ANEW: Evaluation of a word list for sentiment analysis in microblogs. In Matthew Rowe, Milan Stankovic, Aba-Sah Dadzie, and Mariann Hardey, editors, Proceedings of the ESWC2011 Workshop on 'Making Sense of Microposts': Big things come in small packages, volume 718 of CEUR Workshop Proceedings, pages 93-98, May.
Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL.
Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04, Barcelona, Spain.
Ross Quinlan. 1986. Induction of decision trees. Machine Learning, 1(1):81-106, March.
Sara Rosenthal, Preslav Nakov, Alan Ritter, and Veselin Stoyanov. 2014. SemEval-2014 Task 9: Sentiment Analysis in Twitter. In Preslav Nakov and Torsten Zesch, editors, Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval '14, Dublin, Ireland.
Vladimir Vapnik. 1998. Statistical Learning Theory.
Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 347-354.
Theresa Wilson, Zornitsa Kozareva, Preslav Nakov, Sara Rosenthal, Veselin Stoyanov, and Alan Ritter. 2013. SemEval-2013 Task 2: Sentiment analysis in Twitter. In Proceedings of the International Workshop on Semantic Evaluation, SemEval '13, June. |
9,169,716 | A Human Judgment Corpus and a Metric for Arabic MT Evaluation | We present a human judgments dataset and an adapted metric for evaluation of Arabic machine translation. Our mediumscale dataset is the first of its kind for Arabic with high annotation quality. We use the dataset to adapt the BLEU score for Arabic. Our score (AL-BLEU) provides partial credits for stem and morphological matchings of hypothesis and reference words. We evaluate BLEU, METEOR and AL-BLEU on our human judgments corpus and show that AL-BLEU has the highest correlation with human judgments. We are releasing the dataset and software to the research community. | [
8882144,
813729,
15897060,
2247531,
3112492,
3046638,
21136496,
7647892,
1553433,
2055676
] | A Human Judgment Corpus and a Metric for Arabic MT Evaluation
Association for Computational Linguistics. Copyright Association for Computational Linguistics. October 25-29, 2014.
Houda Bouamor hbouamor@cmu.edu
Carnegie Mellon University
Qatar
Hanan Alshikhabobakr
Carnegie Mellon University
Qatar
Behrang Mohit behrang@cmu.edu
Carnegie Mellon University
Qatar
Kemal Oflazer
Carnegie Mellon University
Qatar
A Human Judgment Corpus and a Metric for Arabic MT Evaluation
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar. Association for Computational Linguistics. October 25-29, 2014.
We present a human judgments dataset and an adapted metric for evaluation of Arabic machine translation. Our mediumscale dataset is the first of its kind for Arabic with high annotation quality. We use the dataset to adapt the BLEU score for Arabic. Our score (AL-BLEU) provides partial credits for stem and morphological matchings of hypothesis and reference words. We evaluate BLEU, METEOR and AL-BLEU on our human judgments corpus and show that AL-BLEU has the highest correlation with human judgments. We are releasing the dataset and software to the research community.
Introduction
Evaluation of Machine Translation (MT) continues to be a challenging research problem. There is an ongoing effort to find simple and scalable metrics with rich linguistic analysis. A wide range of metrics have been proposed and evaluated, mostly for European target languages (Callison-Burch et al., 2011; Macháček and Bojar, 2013). These metrics are usually evaluated based on their correlation with human judgments on a set of MT output. While there has been growing interest in building systems for translating into Arabic, the evaluation of Arabic MT is still an under-studied problem. Standard MT metrics such as BLEU (Papineni et al., 2002) or TER (Snover et al., 2006) have been widely used for evaluating Arabic MT (El Kholy and Habash, 2012). These metrics use strict word and phrase matching between the MT output and reference translations. For morphologically rich target languages such as Arabic, such criteria are too simplistic and inadequate. In this paper, we present: (a) the first human judgment dataset for Arabic MT, and (b) the Arabic Language BLEU (AL-BLEU), an extension of the BLEU score for Arabic MT evaluation.
Our annotated dataset is composed of the output of six MT systems on texts from a diverse set of topics. A group of ten native Arabic speakers annotated this corpus with high levels of inter- and intra-annotator agreement. Our AL-BLEU metric uses a rich set of morphological, syntactic and lexical features to extend the evaluation beyond exact matching. We conduct different experiments on the newly built dataset and demonstrate that AL-BLEU shows a stronger average correlation with human judgments than the BLEU and METEOR scores. Our dataset and our AL-BLEU metric provide useful testbeds for further research on Arabic MT and its evaluation.1
Related Work
Several studies on MT evaluation have pointed out the inadequacy of the standard n-gram based evaluation metrics for various languages (Callison-Burch et al., 2006). For morphologically complex languages and those without word delimiters, several studies have attempted to improve upon them and suggest more reliable metrics that correlate better with human judgments (Denoual and Lepage, 2005;Homola et al., 2009).
A common approach to the problem of morphologically complex words is to integrate some linguistic knowledge in the metric. METEOR (Denkowski and Lavie, 2011) and TER-Plus (Snover et al., 2010) incorporate limited linguistic resources. Popović and Ney (2009) showed that n-gram based evaluation metrics calculated on POS sequences correlate well with human judgments, and recently designed and evaluated MPF, a BLEU-style metric based on morphemes and POS tags (Popović, 2011). In the same direction, Chen and Kuhn (2011) proposed AMBER, a modified version of BLEU incorporating recall, extra penalties, and light linguistic knowledge about English morphology. Liu et al. (2010) propose TESLA-M, a variant of a metric based on n-gram matching that utilizes light-weight linguistic analysis including lemmatization, POS tagging, and WordNet synonym relations. This metric was then extended to TESLA-B to model phrase synonyms by exploiting bilingual phrase tables (Dahlmeier et al., 2011). Tantug et al. (2008) presented BLEU+, a tool that implements various extensions to BLEU computation to allow for a better evaluation of the translation performance for Turkish.
To the best of our knowledge, the only human judgment dataset for Arabic MT is the small corpus which was used to tune the parameters of the METEOR metric for Arabic (Denkowski and Lavie, 2011). Due to the shortage of Arabic human judgment datasets, studies on the performance of evaluation metrics have been constrained and limited. A relevant effort in this area is the upper-bound estimation of BLEU and METEOR scores for Arabic MT output (El Kholy and Habash, 2011). As part of its extensive functionality, the AMEANA system provides the upper-bound estimate by an exhaustive matching of morphological and lexical features between the hypothesis and the reference translations. Our use of morphological and lexical features overlaps with the AMEANA framework. However, we extend our partial matching to a supervised tuning framework for estimating the value of partial credits. Moreover, our human judgment dataset allows us to validate our framework with large-scale gold-standard data.
Human judgment dataset
We describe here our procedure for compiling a diverse Arabic MT dataset and annotating it with human judgments.
Data and systems
We annotate a corpus composed of three datasets:
(1) the standard English-Arabic NIST 2005 corpus, commonly used for MT evaluations and composed of news stories. We use the first English translation as the source and the single corresponding Arabic sentence as the reference. (2) the MEDAR corpus (Maegaard et al., 2010) that consists of texts related to the climate change with four Arabic reference translations. We only use the first reference in this study. (3) a small dataset of Wikipedia articles (WIKI) to extend our corpus and metric evaluation to topics beyond the commonly-used news topics. This sub-corpus consists of our in-house Arabic translations of seven English Wikipedia articles. The articles are: Earl Francis Lloyd, Western Europe, Citizenship, Marcus Garvey, Middle Age translation, Acadian, NBA. The English articles which do not exist in the Arabic Wikipedia were manually translated by a bilingual linguist. Table 1 gives an overview of these sub-corpora characteristics.
Table 1: Statistics on the datasets.

                 NIST   MEDAR   WIKI
# of Documents    100       4      7
# of Sentences   1056     509    327
We use six state-of-the-art English-to-Arabic MT systems. These include four research-oriented phrase-based systems with various morphological and syntactic features and different Arabic tokenization schemes, and two commercial off-the-shelf systems.
Annotation of human judgments
In order to conduct a manual evaluation of the six MT systems, we formulated it as a ranking problem. We adapt the framework used in the WMT 2011 shared task for evaluating MT metrics on European language pairs (Callison-Burch et al., 2011) to Arabic MT. We gather human ranking judgments by asking ten annotators (each a native speaker of Arabic with English as a second language) to assess the quality of the English-Arabic systems by ranking sentences relative to each other, from the best to the worst (ties are allowed).
We use the Appraise toolkit (Federmann, 2012) designed for manual MT evaluation. The tool displays to the annotator the source sentence and the translations produced by the various MT systems. The annotators received initial training on the tool and the task with ten sentences. They were presented with a brief guideline indicating the purpose of the task and the main criteria of MT output evaluation.
Each annotator was assigned 22 ranking tasks. Each task included ten screens, and each screen involved ranking translations of ten sentences. In total, we collected 22,000 rankings for 1892 sentences (22 tasks × 10 screens × 10 judges). In each annotation screen, the annotator was shown the source-language (English) sentences, as well as five translations to be ranked. We did not provide annotators with the reference, to avoid any bias in the annotation process. Each source sentence was presented with its direct context. Rather than attempting to get a complete ordering over the systems, we instead relied on random selection and a reasonably large sample size to make the comparisons fair (Callison-Burch et al., 2011).
An example of a source sentence and its five translations to be ranked is given in Table 2.
Annotation quality and analysis
In order to ensure the validity of any evaluation setup, a reasonable level of inter- and intra-annotator agreement in ranking should exist. To measure these agreements, we deliberately reassigned 10% of the tasks to second annotators. Moreover, we ensured that 10% of the screens were redisplayed to the same annotator within the same task. This procedure allowed us to collect a reliable quality control measure for our dataset.

Table 3: Inter- and intra-annotator agreement scores for our annotation compared to the average scores for English to five European languages and also English-Czech (Callison-Burch et al., 2011).
We measured head-to-head pairwise agreement among annotators using Cohen's kappa (κ) (Cohen, 1968), defined as follows:
κ = (P(A) − P(E)) / (1 − P(E))
where P(A) is the proportion of times the annotators agree and P(E) is the proportion of agreement by chance. Table 3 gives the average values obtained for inter-annotator and intra-annotator agreement and compares our results to similar annotation efforts in WMT-13 on different European languages. Here we compare against the average agreement for English to five languages and also from English to one morphologically rich language (Czech).4 Based on the κ interpretation of Landis and Koch (1977), the κ_inter value (57%), and a comparison of our agreement scores with the WMT-13 annotations, we believe that we have reached a reliable and consistent annotation quality.
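A small sketch of the agreement computation, assuming two annotators' labels over the same items and chance agreement estimated from the marginal label distributions:

# Sketch of Cohen's kappa for two annotators (labels are illustrative).
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    p_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in counts_a)
    return (p_a - p_e) / (1 - p_e)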
AL-BLEU
Despite its well-known shortcomings (Callison-Burch et al., 2006), BLEU continues to be the de facto MT evaluation metric. BLEU uses an exact n-gram matching criterion that is too strict for a morphologically rich language like Arabic. The system outputs in Table 2 are examples of how BLEU heavily penalizes Arabic. Based on BLEU, the best hypothesis is from Sys5, which has three unigram and one bigram exact matches with the reference. However, the sentence is ranked 4th by annotators. In contrast, the output of Sys3 (ranked 1st by annotators) has only one exact match, but several partial matches when morphological and lexical information are taken into consideration.
We propose the Arabic Language BLEU (AL-BLEU) metric, which extends BLEU to deal with Arabic's rich morphology. We extend the matching to the morphological, syntactic and lexical levels with an optimized partial credit. AL-BLEU starts with the exact matching of hypothesis tokens against the reference tokens. Furthermore, it considers the following: (a) morphological and syntactic feature matching, and (b) stem matching. Based on Arabic linguistic intuition, we check the matching of a subset of 5 morphological features: (i) POS tag, (ii) gender, (iii) number, (iv) person, (v) definiteness. We use the MADA package (Habash et al., 2009) to collect the stem and the morphological features of the hypothesis and reference translations. Figure 1 summarizes the function with which we consider partial matching m(t_h, t_r) of a hypothesis token t_h and its associated reference token t_r. Starting with the BLEU criterion, we first check if the hypothesis token is the same as the reference one and provide the full credit for it. If the exact matching fails, we provide partial credit for matching at the stem and morphological level. The value of the partial credit is the sum of the stem weight (w_s) and the morphological feature weights (w_fi). Each weight is included in the partial score if such a matching exists (e.g., a stem match). In order to avoid over-crediting, we limit the range of the weights with a set of constraints. Moreover, we use a development set to optimize the weights towards improvement of correlation with human judgments, using a hill-climbing algorithm (Russell and Norvig, 2009). Following the BLEU-style exact matching and scoring of different n-grams, AL-BLEU updates the n-gram scores with the partial credits from non-exact matches. We use a minimum partial credit for n-grams whose tokens have different matching scores. The contribution of a partially-matched n-gram is not 1 (as counted in BLEU), but the minimum value that the individual tokens within the n-gram are credited. For example, if a bigram is composed of a token with an exact match and a token with a stem match, this bigram receives a credit equal to a unigram with the stem matching (a value less than 1). While partial credits are added for various n-grams, the final computation of AL-BLEU is similar to the original BLEU, based on the geometric mean of the different matched n-grams. We follow BLEU in using a very small smoothing value to avoid zero n-gram counts and a zero score.
m(t_h, t_r) = 1, if t_h = t_r; otherwise m(t_h, t_r) = w_s + Σ_{i=1}^{5} w_{f_i}
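A sketch of this partial token credit, assuming tokens come with MADA-style analyses (surface form, stem, and the five morphological features) and placeholder weight values (the paper tunes the weights by hill climbing, under constraints that keep the partial credit below 1):

# Sketch of the AL-BLEU token credit m(t_h, t_r): full credit for an
# exact match; otherwise stem weight plus matching-feature weights.
FEATURES = ["pos", "gender", "number", "person", "definiteness"]

def partial_match(hyp_tok, ref_tok, w_s=0.3, w_f=None):
    w_f = w_f or {f: 0.1 for f in FEATURES}  # placeholder weights
    if hyp_tok["surface"] == ref_tok["surface"]:
        return 1.0
    credit = w_s if hyp_tok["stem"] == ref_tok["stem"] else 0.0
    credit += sum(w_f[f] for f in FEATURES if hyp_tok[f] == ref_tok[f])
    return credit  # stays below 1.0 under the chosen weight constraints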
Experiments and results
An automatic evaluation metric is said to be successful if it is shown to have high agreement with human-performed evaluations (Soricut and Brill, 2004). We use Kendall's tau (τ) (Kendall, 1938), a coefficient that measures the correlation between the system rankings and the human judgments at the sentence level. Kendall's τ is calculated as follows:

τ = (# of concordant pairs − # of discordant pairs) / total pairs

where a concordant pair is two translations of the same sentence for which the ranks obtained from the manual ranking task and from the corresponding metric scores agree (they disagree in a discordant pair). The possible values of τ range from −1 (all pairs are discordant) to 1 (all pairs are concordant). Thus, an automatic evaluation metric with a higher τ value makes predictions that are more similar to the human judgments than one with a lower τ. We calculate the τ score for each sentence and average the scores to reach the corpus-level correlation. We conducted a set of experiments to compare the correlation of AL-BLEU against state-of-the-art MT evaluation metrics. For this we use a subset of 900 sentences extracted from the dataset described in Section 3.1. As mentioned above, the stem and morphological features in AL-BLEU are each parameterized by weights which are used to calculate the partial credits. We optimize the value of each weight towards correlation with human judgments by hill climbing with 100 random restarts, using a development set of 600 sentences. The 300 remaining sentences (100 from each corpus) are kept for testing. The development and test sets are composed of equal portions of sentences from the three sub-corpora (NIST, MEDAR, WIKI). As baselines, we measured the correlation of BLEU and METEOR with the human judgments collected for each sentence. We did not observe a strong correlation with the Arabic-tuned METEOR, so we conducted our experiments on the standard METEOR, which was a stronger baseline than its Arabic version. In order to avoid zero n-gram counts and artificially low BLEU scores, we use a smoothed version of BLEU: following Liu and Gildea (2005), we add a small value to both the matched n-grams and the total number of n-grams (an epsilon value of 10^−3). In order to reach an optimal ordering of partial matches, we conducted a set of experiments comparing different orders between the morphological and lexical matchings and settled on the final order presented in Figure 1. Table 4 shows a comparison of the average correlation with human judgments for BLEU, METEOR and AL-BLEU. AL-BLEU shows a strong improvement over BLEU and a competitive improvement over METEOR, both on the test and development sets. The example in Table 2 shows a sample case of such improvement: the sentence ranked highest by the annotator has only two exact matches with the reference translation (which results in a low BLEU score), while the stem and morphological matching of AL-BLEU gives a score and ranking much closer to human judgments.
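The sentence-level computation can be sketched as follows, assuming human ranks (lower is better) and metric scores (higher is better) for the candidate translations of one sentence:

# Sketch of sentence-level Kendall's tau between a human ranking and a
# metric's scores; ties contribute neither concordant nor discordant.
from itertools import combinations

def kendall_tau(human_ranks, metric_scores):
    concordant = discordant = 0
    pairs = list(combinations(range(len(human_ranks)), 2))
    for i, j in pairs:
        h = human_ranks[i] - human_ranks[j]
        m = metric_scores[j] - metric_scores[i]  # sign flipped: a lower
        # rank and a higher score both mean "better", so agreement
        # between the two orderings gives h * m > 0.
        if h * m > 0:
            concordant += 1
        elif h * m < 0:
            discordant += 1
    return (concordant - discordant) / len(pairs) if pairs else 0.0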
Conclusion
We presented AL-BLEU, our adaptation of BLEU for the evaluation of machine translation into Arabic. The metric uses morphological, syntactic and lexical matching to go beyond exact token matching. We also presented our annotated corpus of human ranking judgments for the evaluation of Arabic MT. The size and diversity of the topics in the corpus, along with its relatively high annotation quality (measured by IAA scores), make it a useful resource for future research on Arabic MT. Moreover, the strong performance of our AL-BLEU metric is a positive indicator for future exploration of richer linguistic information in the evaluation of Arabic MT.
Figure 1: Formulation of our partial matching.

Figure 2: An MT example with exact matchings (blue), stem and morphological matching (green), stem-only matching (red) and morphological-only matching (pink).
Table 2: Example of ranked MT outputs in our gold-standard dataset. The first two rows specify the English input and the Arabic reference, respectively. The third row of the table lists the different MT systems as ranked by annotators, using BLEU scores (column 4) and AL-BLEU (column 6). The different translation candidates are given here along with their associated Buckwalter transliteration.3 This example shows clearly that AL-BLEU correlates better with human decisions.
Table 4: Comparison of the average Kendall's τ correlation.
1 The dataset and the software are available at: http://nlp.qatar.cmu.edu/resources/AL-BLEU
4 We compare against the agreement scores for annotations performed by WMT researchers, which are higher than the WMT annotations on Mechanical Turk.
Acknowledgements

We thank Michael Denkowski, Ahmed El Kholy, Francisco Guzman, Nizar Habash, Alon Lavie, Austin Matthews, and Preslav Nakov for their comments and help in the creation of our dataset. We also thank our team of annotators from CMU-Qatar. This publication was made possible by grants YSREP-1-018-1-004 and NPRP-09-1140-1-177 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the Role of BLEU in Machine Translation Research. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy.
Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 Workshop on Statistical Machine Translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, Edinburgh, Scotland.
Boxing Chen and Roland Kuhn. 2011. AMBER: A Modified BLEU, Enhanced Ranking Metric. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 71-77, Edinburgh, Scotland.
Jacob Cohen. 1968. Weighted Kappa: Nominal Scale Agreement Provision for Scaled Disagreement or Partial Credit. Psychological Bulletin, 70(4):213.
Daniel Dahlmeier, Chang Liu, and Hwee Tou Ng. 2011. TESLA at WMT 2011: Translation Evaluation and Tunable Metric. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 78-84, Edinburgh, Scotland, July. Association for Computational Linguistics.
Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic Metric for Reliable Optimization and Evaluation of Machine Translation Systems. In Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation, Edinburgh, UK.
Etienne Denoual and Yves Lepage. 2005. BLEU in Characters: Towards Automatic MT Evaluation in Languages Without Word Delimiters. In Proceedings of the Second International Joint Conference on Natural Language Processing, Jeju Island, Republic of Korea.
Ahmed El Kholy and Nizar Habash. 2011. Automatic Error Analysis for Morphologically Rich Languages. In Proceedings of the MT Summit XIII, pages 225-232, Xiamen, China.
Ahmed El Kholy and Nizar Habash. 2012. Orthographic and Morphological Processing for English-Arabic Statistical Machine Translation. Machine Translation, 26(1):25-45.
Christian Federmann. 2012. Appraise: an Open-Source Toolkit for Manual Evaluation of MT Output. The Prague Bulletin of Mathematical Linguistics, 98(1):25-35.
N. Habash, O. Rambow, and R. Roth. 2009. MADA+TOKAN: A Toolkit for Arabic Tokenization, Diacritization, Morphological Disambiguation, POS Tagging, Stemming and Lemmatization. In Proceedings of the Second International Conference on Arabic Language Resources and Tools (MEDAR), Cairo, Egypt.
Petr Homola, Vladislav Kuboň, and Pavel Pecina. 2009. A Simple Automatic MT Evaluation Metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 33-36, Athens, Greece, March. Association for Computational Linguistics.
Maurice G. Kendall. 1938. A New Measure of Rank Correlation. Biometrika.
J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics, 33(1):159-174.
Ding Liu and Daniel Gildea. 2005. Syntactic Features for Evaluation of Machine Translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 25-32.
Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. TESLA: Translation Evaluation of Sentences with Linear-Programming-Based Analysis. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and Metrics (MATR), pages 354-359.
Matouš Macháček and Ondřej Bojar. 2013. Results of the WMT13 Metrics Shared Task. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 45-51, Sofia, Bulgaria.
Bente Maegaard, Mohamed Attia, Khalid Choukri, Olivier Hamon, Steven Krauwer, and Mustafa Yaseen. 2010. Cooperation for Arabic Language Resources and Tools-The MEDAR Project. In Proceedings of LREC, Valetta, Malta.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania.
Maja Popović and Hermann Ney. 2009. Syntax-oriented Evaluation Measures for Machine Translation Output. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 29-32, Athens, Greece.
Maja Popović. 2011. Morphemes and POS Tags for n-gram Based Evaluation Metrics. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 104-107, Edinburgh, Scotland.
Stuart Russell and Peter Norvig. 2009. Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs.
Matthew Snover, Bonnie J. Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of AMTA, Boston, USA.
Matthew Snover, Nitin Madnani, Bonnie J. Dorr, and Richard Schwartz. 2010. TER-Plus: Paraphrase, Semantic, and Alignment Enhancements to Translation Edit Rate. Machine Translation, 23(2-3).
Radu Soricut and Eric Brill. 2004. A Unified Framework For Automatic Evaluation Using 4-Gram Co-occurrence Statistics. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 613-620, Barcelona, Spain, July.
Cüneyd Tantug, Kemal Oflazer, and Ilknur Durgar El-Kahlout. 2008. BLEU+: a Tool for Fine-Grained BLEU Computation. In Proceedings of the 6th edition of the Language Resources and Evaluation Conference, Marrakech, Morocco. |
251,466,086 | Medical Crossing: a Cross-lingual Evaluation of Clinical Entity Linking | Medical data annotation requires highly qualified expertise. Despite the efforts devoted to medical entity linking in different languages, available data is very sparse in terms of both data volume and languages. In this work, we establish benchmarks for cross-lingual medical entity linking using clinical reports, clinical guidelines, and medical research papers. We present a test set filtering procedure designed to analyze the "hard cases" of entity linking approaching zero-shot cross-lingual transfer learning, evaluate state-of-the-art models, and draw several interesting conclusions based on our evaluation results. | [
2326624,
208117506,
52967399,
44094153,
189999659,
196185195,
235254228,
227231160,
218470427,
196197662
] | Medical Crossing: a Cross-lingual Evaluation of Clinical Entity Linking
June 2022
Anton Alekseev anton.m.alexeyev@gmail.com
Steklov Mathematical Institute at St. Petersburg
St. Petersburg, Russia
SPbSU
St. Petersburg, Russia
Zulfat Miftahutdinov zulfatmi@gmail.com
Kazan Federal University
Kazan, Russia
Elena Tutubalina tutubalinaev@gmail.com
HSE University
Moscow, Russia
Sber AI
Moscow, Russia
Artem Shelmanov artemshelmanov@gmail.com
AIRI
Moscow, Russia
Vladimir Ivanov
Innopolis University
Innopolis, Russia
Vladimir Kokh
Sber AI Lab
Moscow, Russia
Alexander Nesterov
Sber AI Lab
Moscow, Russia
Manvel Avetisian
Sber AI Lab
Moscow, Russia
Andrey Chertok achertok@sberbank.ru
Sber AI
Moscow, Russia
AIRI
Moscow, Russia
Sergey Nikolenko sergey@logic.pdmi.ras.ru
Steklov Mathematical Institute at St. Petersburg
St. Petersburg, Russia
Medical Crossing: a Cross-lingual Evaluation of Clinical Entity Linking
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0. Keywords: medical entity linking, embeddings, linking evaluation, cross-lingual methods, zero-shot learning.
Medical data annotation requires highly qualified expertise. Despite the efforts devoted to medical entity linking in different languages, available data is very sparse in terms of both data volume and languages. In this work, we establish benchmarks for cross-lingual medical entity linking using clinical reports, clinical guidelines, and medical research papers. We present a test set filtering procedure designed to analyze the "hard cases" of entity linking approaching zero-shot cross-lingual transfer learning, evaluate state-of-the-art models, and draw several interesting conclusions based on our evaluation results.
Introduction
Entity linking is the task of establishing correspondences between free-form text mentions and a formalized list of concepts (Shen et al., 2014; Sevgili et al., 2020). In this work, we consider medical entity linking: the task where entity mentions are mapped against a large set of medical concept names and their concept unique identifiers (CUIs). The biomedical domain is characterized by extensive dictionaries of concepts such as the Unified Medical Language System (UMLS) (Bodenreider, 2004), Medical Subject Headings (MeSH) (Coletti and Bleich, 2001), Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) (Spackman et al., 1997), or the Medical Dictionary for Regulatory Activities (MedDRA) (Brown et al., 1999), and by a high variation of mentions. Early models for biomedical entity linking commonly used classification-type losses (Rios and Kavuluru, 2018; Miftahutdinov and Tutubalina, 2019; Lou et al., 2020) that work well on narrow benchmarks but often lead to significant performance degradation on other domains and structurally different texts. Modern approaches usually employ similarity between embeddings (distributed representations) of words and concepts. From classical tf-idf and word2vec embeddings (Aronson, 2001; Ghiasvand and Kate, 2014; Van Mulligen et al., 2016; Leaman and Lu, 2016; Dermouche et al., 2016), entity linking systems have evolved to leverage vector representations constructed by deep neural models that take advantage of self-attention (Vaswani et al., 2017) and a BERT-like ranking architecture (Zhu et al., 2019; Sung et al., 2020; Tutubalina et al., 2020). We especially note the Biomedical Named Encoder (BNE) (Phan et al., 2019) and BioSyn (Sung et al., 2020), a Transformer model based on BioBERT.
Along with the progress of embedding techniques and neural architectures, the reported performance of state-of-the-art entity linking models has been steadily increasing over the past years. However, their evaluation in many works remains limited. Oftentimes, models are evaluated in the single-terminology setting, on the same kind of data they were trained on, and in a very narrow domain devoted to a specific type of texts and/or a specific set of diseases (e.g. oncological) simply divided into training and test parts. Moreover, standard train/test splits often contain data leaks where the same terminology and even mentions of the same kind leak from the training set into the test set, significantly improving the scores and restricting fair evaluation of transfer capabilities to other domains. Tutubalina et al. (2020) show that this effect leads to a significant positive bias in reported quality metrics and that such data leaks do exist in biomedical datasets of English scientific abstracts widely used for entity linking evaluation.
Lately, entity linking has started to shift towards the zero-shot setting, where the test set contains only novel concepts that have not been seen in the training data (Logeswaran et al., 2019;Basaldella et al., 2020;Mohan et al., 2021;Sevgili et al., 2020). This setting is harder and can be considered more "fair" since it mitigates many trivial linking cases. In this work, unlike commonly used single-terminology evaluation, where all concept names and CUIs from a target dictionary are seen during training, we consider a cross-terminology setting -a sophisticated version of zero-shot: test sets contain novel concepts from a target terminology, while another terminology is used during training. A recent systematic literature survey (Kersloot et al., 2020) reviews the current state of the development and evaluation of NLP algorithms for mapping medical text fragments onto ontology concepts. The authors study 77 works, and only 17 (22%) of them perform the evaluation on non-English datasets, including Italian (Combi et al., 2018), Portuguese (Duarte et al., 2018), Japanese (Usui et al., 2018), and Korean (Kang et al., 2008). Although those datasets do not always contain entity linking annotations, there is an imbalance of English/non-English data. Moreover, prior art with cross-terminology evaluation has been restricted to the single-language setting. In this work, we make another step towards the fair evaluation of medical entity linking models across languages. We unite these two directions, providing both cross-terminology and cross-lingual evaluation on real-life biomedical and clinical texts. We test the transfer capabilities of recently proposed models for medical entity linking across languages, taking care to avoid leaks from training to test parts of the datasets used. We seek to answer the following research questions: RQ1: Do test sets of current benchmarks in English, Spanish, French, German, and Dutch lead to an overestimation of performance?
RQ2: What is the fair evaluation strategy?
RQ3: What is the potential of a model trained on a corpus in English to generalize to zero-shot clinical entity linking in other languages?
RQ4: What types of word representations can be used for cross-lingual clinical entity linking (state-of-the-art contextual word representations, sparse representations)?
We show that filtering the test sets to avoid leaks proves to be crucial for a fair evaluation and provides new interesting and sometimes unexpected conclusions: sparse baselines consistently outperform BERT-based models, domain knowledge is very important for the quality, and fine-tuning on medical datasets can significantly improve the results, an effect that is not noticeable in common benchmarks without filtering.
Data
We construct a full-scale multilingual evaluation benchmark from several real-life clinical and biomedical datasets. Table 1 summarizes basic statistics of these datasets: the number of concepts, the number and average length of entity mentions, and the percentage of mentions with numerals. Examples of dataset instances are presented in Table 2.
CodiEsp
The CodiEsp dataset was presented at Clinical Case Coding in Spanish Shared Task at the CLEF 2020 evaluation forum (Miranda-Escalada et al., 2020b). It contains structured information (clinical records) with entities mapped against the ICD-10 vocabulary (CodeBooks, 2016); we use the CodiEsp Diagnosis (CodiEsp-D) subset and the dictionary provided in CodiEsp.
Cantemist
Cantemist (CANcer TExt MIning Shared Task on IberLEF 2020 (Miranda-Escalada et al., 2020a)) is a manually annotated text corpus of tumor morphology mentions in Spanish mapped to the latest Spanish version of the oncological ontology, which is a part of ICD-O (World Health Organization, 2013); we use the dictionary from (López-Úbeda et al., 2020).
MCN
MCN (Medical Concept Normalization) (Luo et al., 2019) is a large-scale manually annotated corpus in English for clinical concept normalization produced from a corpus released for the 4th i2b2/VA shared task (Uzuner et al., 2011) with a dictionary of concepts from SNOMED-CT extracted from the UMLS 2020 AA release.
Mantra
Mantra GSC (Kors et al., 2015) is a collection of biomedical text units such as drug labels and patent claims manually cross-labeled by several annotators in five different languages: English, French, German, Spanish, and Dutch. The Mantra terminology is a subset of UMLS with concepts from MeSH, SNOMED-CT, and MedDRA extracted from the UMLS 2020 AA release; we use DISO entities (UMLS semantic group "Disorders" (Bodenreider and McCray, 2003)).
Other Datasets
Other available clinical datasets do not suit our needs. The German clinical guidelines dataset (Borchert et al., 2020) does not have concept-level annotations. English, Spanish, and Portuguese texts in Multi-NEL (Ruas et al., 2020) are synthetic. The Portuguese clinical notes dataset (Peters et al., 2020), the Japanese dataset of patient complaints (Usui et al., 2018), the Korean clinical dataset (Kang et al., 2008), and the Italian drug reaction corpus (Combi et al., 2018) are not publicly available yet. The dataset of death certificates in Portuguese does not contain annotated entities and is not publicly available (Duarte et al., 2018). An important recent work presented the XL-BEL cross-lingual biomedical entity linking task (Liu et al., 2021), which allows testing domain transfer across languages. However, XL-BEL does not allow for cross-terminology transfer evaluation and basically represents WikiMed (Vashishth et al., 2020) aligned across ten different languages via Wikipedia, so the critique above fully applies to XL-BEL as well. We note an important difference between datasets such as WikiMed (Vashishth et al., 2020) and medical texts such as clinical health records or scientific abstracts. The usage of medical terms is very different between Wikipedia and other texts, so entity linking results may not transfer well. In this work, we use a disease-centric approach to data collection, with a broad collection of datasets with real medical texts.
Filtering Strategies
We present a novel test set filtering strategy to avoid train/test leaks and provide a fair and more challenging comparison in the cross-terminology setting. We construct a reference set of terms from concept names in an entity dictionary (thesaurus) and filter out from the test set all instances in which mention surface forms match any term in the reference set (filtering by a dictionary).
We also perform the evaluation in a less challenging setting suggested by Tutubalina et al. (2020) where the reference set for filtering is constructed from the entity mentions in the training dataset (filtering by a training set). For a reference set of terms/entities, we provide the following evaluation types:
• Full: compute metrics on the test set as provided in the dataset itself;
• Filtered: remove from the test set all entities that are already present in the reference set (exact match, e.g., we remove all instances of "depression" from the test set if it is already present in the reference set);
• Filtered 0.2 : remove from the test set all entities where the character-based Levenshtein distance to the nearest neighbor in the reference set is under 0.2 (e.g., we remove "depressed" if "depression" occurs in the reference set). This complicates the task even further, since a model cannot rely on word similarity and has to use more sophisticated contextual features. The bigger the threshold, the harder the evaluation setting. Table 1 shows how many concepts and entity mentions remain in the test sets of each of the datasets after the corresponding filtering method is applied. Note that filtering significantly reduces the number of entity mentions in test sets across all datasets, and the difference is especially striking for training set filtering. This indicates a large number of train set leaks that we discussed in Section 1.
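The filtering procedure itself is simple to state in code. The following is a minimal sketch of our reading of the three evaluation types above; the function names are ours, and the normalization of the Levenshtein distance by the length of the longer string is an assumption, since the paper specifies only a character-based distance with a 0.2 threshold.

def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    # Assumption: distance normalized by the longer string, so it lies in [0, 1].
    return levenshtein(a, b) / max(len(a), len(b), 1)

def filter_test_set(test_mentions, reference_terms, threshold=None):
    # threshold=None gives "Filtered" (exact match);
    # threshold=0.2 gives "Filtered 0.2".
    reference = {t.lower() for t in reference_terms}
    kept = []
    for mention in test_mentions:
        m = mention.lower()
        if threshold is None:
            if m in reference:
                continue
        elif min(normalized_distance(m, r) for r in reference) < threshold:
            continue
        kept.append(mention)
    return kept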
Models for Medical Entity Linking
For entity linking, we use a ranking model based on embeddings of a mention and a possible concept. Each entity mention and each concept name is first passed through a model that produces its embeddings, and then through an average pooling layer that yields a fixed-size vector. The inference task is then reduced to finding the concept name representation closest to the entity mention representation in a common embedding space, where the Euclidean distance can be used as the metric. The nearest concept names are chosen as the top-k concepts for each entity.
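As a concrete illustration, this ranking can be sketched as follows. This is a minimal version, assuming a Hugging Face encoder with mask-aware average pooling over the last hidden states; the checkpoint name is a placeholder and the helper names are ours, not the paper's exact implementation.

import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder checkpoint; the paper compares several encoders (listed below).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(texts):
    # Average-pool token embeddings into one fixed-size vector per text.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, H)

def link(mentions, concept_names, concept_ids, k=5):
    # Return the top-k nearest concept IDs (Euclidean distance) per mention.
    m_vecs, c_vecs = embed(mentions), embed(concept_names)
    dists = torch.cdist(m_vecs, c_vecs)                   # (mentions, concepts)
    topk = dists.topk(k, largest=False).indices
    return [[concept_ids[j] for j in row] for row in topk.tolist()]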
Entity and Concept Representations
We compare the following mention/entity vector representations:
• Tf-idf : standard sparse tf-idf representations constructed from character-level unigrams and bigrams (a minimal sketch of this baseline follows the list);

Table 2: Data samples from test sets (with fragments of original source texts where available). Each contains a mention (e.g. "sinusitis") and a concept ID (e.g. "C0037199"). Note that identifiers come from different sets. "Names" are taken from: "valid codes.txt" (a list of codes and names provided by the competition organizers) for Cantemist, "codiesp codes" (a list of valid CIE10 codes provided by the CLEF2020 eHealth track organizers as a dictionary for the corresponding task) for CodiEsp, SNOMEDCT US part of UMLS for MCN and Mantra-En, the rest are taken from MedDRA in German, Spanish, French, and Dutch, respectively.

Dataset        Lang  Concept names                                 ID        Mention                 Context
CodiEsp-D      es                                                  R59.0     Adenopatías inguinales
MCN            en    "Gastritis", "Gastric catarrh", etc.          C0017152  gastritis               ...was negative for gastritis, stricture or ulcer...
                     "Empirical therapy (procedure)"               C1299597  empiric treatment       ...was started on empiric treatment...
Mantra (DISO)  de    "Arthralgie", "Gelenkschmerz", etc.           C0003862  arthralgien             ...Übelkeit, Arthralgien, niedrigem Blutdruck...
                     "Lumbalgie", "Unterer Rueckenschmerz", etc.   C0024031  kreuzschmerzen          ...und mittelstarken Kreuzschmerzen kommen...
               en    "Nausea (disorder)", "Feeling queasy", etc.   C0027497  nausea                  ...reactions, nausea, arthralgia, low blood pressure...
                     "Arthralgia", "Pain in joint", etc.           C0003862  arthralgia              ...reactions, nausea, arthralgia, low blood pressure...
               es    "Inflamación pulmonar", "Neumonía", etc.      C0032285  neumonía                ...Neumonía *, infección de vías respiratorias...
                     "Infección de los senos", "Sinusitis", etc.   C0037199  sinusitis               ...respiratorias altas, sinusitis, candidiasis oral...
               fr    "Anoréxique", "Anorexie", etc.                C0003123  anorexie                ...incluent fièvre, anorexie (perte d' appétit)...
                     "Irritabilité", "Humeur irritable", etc.      C0022107  irritabilité            ...vomissements, diarrhée, irritabilité, somnolence...
               nl    "blaasneoplasma", "neoplasma blaas", etc.     C0005695  blaastumoren            ...classificatie van blaastumoren en de behandeling...
                     "weefsel infiltratie"                         C0332448  infiltrerende           ...de oppervlakkig infiltrerende tumoren...
• BERT: multilingual BERT embeddings (Devlin et al., 2019) with no fine-tuning; this is a cross-lingual baseline that has not been trained on biomedical texts;
• BETO: Spanish BERT embeddings (Cañete et al., 2020);
• BioBERT-esp: BioBERT embeddings fine-tuned over Spanish clinical data (Villena, 2021) (we test BioBERT-esp and BETO on Spanish datasets);
• SapBERT: a BERT-based metric learning framework that generates hard triplets based on the UMLS for large-scale pre-training (Liu et al., 2021a) and also allows for a cross-lingual variant (Liu et al., 2021b) trained on XL-BEL (Liu et al., 2021).
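For reference, the sparse baseline above can be reproduced in a few lines with scikit-learn. This is a minimal sketch using character-level unigram and bigram tf-idf with cosine-similarity ranking; the exact preprocessing and similarity function of the original implementation may differ.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_link(mentions, concept_names, concept_ids, k=5):
    # Character-level unigrams and bigrams, as in the Tf-idf baseline above.
    vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 2))
    concept_vecs = vectorizer.fit_transform(concept_names)
    mention_vecs = vectorizer.transform(mentions)
    sims = cosine_similarity(mention_vecs, concept_vecs)
    top = sims.argsort(axis=1)[:, ::-1][:, :k]  # indices of the k most similar
    return [[concept_ids[j] for j in row] for row in top]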
Fine-tuning
To fine-tune SapBERT models, we use synonym marginalization and iterative candidate retrieval as suggested in a recent state-of-the-art model BioSyn (Sung et al., 2020). We compare the following versions:
• SapBERT+target with fine-tuning on the target train set;
• SapBERT+mcn with fine-tuning on the MCN English train set;
• SapBERT+mcn-fz4 and SapBERT+mcn-fz10 on the MCN English training set with freezing the first four and ten layers, respectively.
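Freezing the lower layers, as in the fz4/fz10 variants above, amounts to disabling gradients for those parameters before fine-tuning. A minimal sketch for a Hugging Face RoBERTa-style encoder; the attribute paths assume the base model class, and whether the embedding layer is frozen along with the bottom blocks is our assumption, since the paper states only the number of frozen layers.

from transformers import AutoModel

model = AutoModel.from_pretrained("xlm-roberta-base")  # placeholder checkpoint

def freeze_bottom_layers(model, n_layers):
    # Freeze the embeddings (assumption) and the first n_layers transformer blocks.
    for param in model.embeddings.parameters():
        param.requires_grad = False
    for layer in model.encoder.layer[:n_layers]:
        for param in layer.parameters():
            param.requires_grad = False

freeze_bottom_layers(model, 4)   # analogue of SapBERT+mcn-fz4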
Experiments
Experimental Setup
For monolingual evaluation, we leverage the train/test splits provided with each corpus. As shown in Table 1, only CANTEMIST, CodiEsp, and MCN have a train/test split in our study: Mantra subsets are too small for fine-tuning. For cross-lingual evaluation, we train models on the MCN English train set with a source dictionary and evaluate on the test sets of the other corpora (i.e., the target). Specifically, ranking models retrieve the nearest concept name in a target dictionary for a given mention representation at inference time. We note that cross-lingual evaluation provides a challenging setup for the standard supervised models, especially for linking of mentions in another language not encountered during training. We evaluate the models in the information retrieval scenario, where the goal is to find the top-k concepts for every entity mention in a dictionary of concept names and their identifiers. Following previous works on entity linking (Suominen et al., 2013;Pradhan et al., 2014;Wright et al., 2019;Phan et al., 2019;Sung et al., 2020;Tutubalina et al., 2020), we use the top-k accuracy as the evaluation metric: Acc@k = 1 if the correct UMLS concept unique identifier is retrieved at rank ≤ k, otherwise Acc@k = 0 (a minimal sketch of this computation follows the model list below). For evaluation of methods that perform ranking without fine-tuning, we leverage the publicly available implementation from (Tutubalina et al., 2020) 1 and the following pre-trained models available in the Hugging Face (Wolf et al., 2020) repository:
• BERT-multilingual (Devlin et al., 2019): bert-base-multilingual-cased;
• BETO (Cañete et al., 2020): dccuchile/bert-base-spanish-wwm-uncased;
• BioBERT-esp (Villena, 2021): fvillena/bio-bert-base-spanish-wwm-uncased.
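The metric itself is straightforward to compute. A minimal sketch (the function name is ours), assuming each prediction is a ranked list of candidate CUIs such as those produced by the ranking models above:

def acc_at_k(gold_cuis, ranked_predictions, k):
    # Fraction of mentions whose gold CUI appears among the top-k candidates.
    hits = sum(gold in preds[:k]
               for gold, preds in zip(gold_cuis, ranked_predictions))
    return hits / len(gold_cuis)

# Example: acc_at_k(["C0037199"], [["C0032285", "C0037199"]], k=5) == 1.0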
The implementation of the core SapBERT is based on the publicly available repository (Sung et al., 2020) 2 . The modifications are taken from the public BioSyn repository 3 . We fine-tune various SapBERT models (Liu et al., 2021b) starting from the pre-trained checkpoint SapBERT-UMLS-2020AB-all-lang-from-XLMR, which was constructed by the authors from cross-lingual RoBERTa (Conneau et al., 2019), xlm-roberta-base. The pre-training hyperparameters for SapBERT can be found in the original work. We performed the fine-tuning with the following hyperparameters: the number of top candidates k is 20, the mini-batch size is 16, the learning rate is 1e-5, and the dense ratio for candidate retrieval is 0.5.

1 https://github.com/insilicomedicine/Fair-Evaluation-BERT
2 https://github.com/cambridgeltl/sapbert

Results

Table 3 shows the Acc@1 and Acc@5 metrics for datasets with the training set used as the reference set for filtering, while Table 4 shows the same metrics with the entity dictionary used as the reference set for filtering. Table 3 does not contain the Mantra dataset because it is too small to reasonably use for fine-tuning. The results of our evaluation suggest several important and interesting conclusions. First, Tables 3 and 4 show a significant difference between evaluation strategies: on full test sets, there is virtually no difference between SapBERT variations, but on filtered datasets, fine-tuning on MCN or the target dataset brings a significant increase in accuracy. For weaker baselines, the filtering effect can be drastic. For example, note how the BERT-based model in Table 4 dropped from 48% top-1 accuracy to 12.5% and 6.2% on the MCN dataset after dictionary-based filtering. This indicates that the most successful matches of these models come from training set leaks and very simple cases of entity linking (surface forms). A fair comparison requires filtering procedures such as the ones we suggest in this paper. Another result is that fine-tuning on additional medical data is generally beneficial; e.g., we have found that SapBERT fine-tuned on English clinical notes outperforms basic SapBERT consistently across all datasets in our study. However, a separate experimental evaluation is required to find the best parameters for this process: which layers to freeze during fine-tuning, how many epochs of training to conduct, etc. Interestingly, fine-tuning SapBERT improves results after only one epoch (we show these in the tables), and then the quality begins to drop, probably signifying overfitting. We also note that fine-tuning on the target dataset instead of English MCN, as expected, helps to substantially improve the quality. Finally, the weaker baselines also provide new insights. The sparse tf-idf baseline consistently outperforms BERT-based ranking. Many recent works forgo sparse baselines entirely, but our results suggest that this may be premature. Both multilingual and Spanish BERT consistently perform much worse than all competitors, showing that biomedical domain knowledge is crucial for solving this task.
Conclusion
We have presented the first cross-lingual benchmark for clinical entity linking in English, Spanish, French, German, and Dutch. We perform an extensive evaluation of BERT-based models with state-of-the-art biomedical representations in two setups: with official train/test splits and with filtered test sets. Our filtering strategy keeps only entity mentions that are dissimilar to entries from the reference set. As the reference set, we adopt either a training set or a target entity dictionary. Our evaluation shows a great divergence in performance between official and proposed test sets for all languages and models, answering RQ1 positively and supporting the claim that fair evaluation requires the proposed dataset filtering (the answer to RQ2). Our experiments with SapBERT show that cross-lingual training on the English MCN corpus substantially helps to improve the performance on clinical datasets in other languages, which answers RQ3. Finally, answering RQ4, we note that general-purpose models without domain knowledge and fine-tuning are almost useless for the considered task, falling behind even the simplistic tf-idf baseline. Our fair evaluation shows that clinical entity linking requires pretraining at least on related biomedical corpora. The constructed benchmark for cross-lingual clinical entity linking is available at https://github.com/AIRI-Institute/medical_crossing. Our study opens up new avenues for further work. First, we plan to extend this evaluation to more languages, more corpora, and other types of entities (not only diseases but also, e.g., medical procedures or drugs). Second, SapBERT receives a significant performance boost from using synonymy relations, but in fact the concepts form a tree-like hierarchy, and taking it into account may improve the results further. Third, since our method of evaluation moves towards zero-shot territory, we plan to apply other recently developed approaches in zero-shot learning to the entity linking problem.
Acknowledgements
We would like to thank the anonymous reviewers for the comments and suggestions, which helped us improve the manuscript. The work is supported by the Russian Science Foundation [grant number 18-11-00284].
Table 1: Statistics of the datasets in English (en), Spanish (es), French (fr), German (de), and Dutch (nl).

                      # in     Avg. len  % with    Split          Filtering by       Filtering by
Dataset      Lang     full     in chars  numerals  Train   Test   train set          dictionary
                      corpus                                      Filt.   Filt 0.2   Filt.   Filt 0.2

Entity mentions
CANTEMIST    es       10031    18.73     6.92      6396    3635   998     711        3268    3040
CodiEsp-D    es       10874    15.84     1.05      7209    3665   1386    1167       3449    3347
MCN          en       13609    12.36     1.54      6684    6925   3204    2819       3386    2304
Mantra       de       201      17.62     0.50      -       201    -       -          107     62
             en       452      16.42     1.11      -       452    -       -          126     66
             es       166      19.67     2.41      -       166    -       -          65      38
             fr       222      17.64     0.45      -       222    -       -          99      50
             nl       127      16.06     0.00      -       127    -       -          65      44

Concepts
CANTEMIST    es       657      -         -         493     386    332     279        364     321
CodiEsp-D    es       2206     -         -         1767    1143   841     750        1142    1050
MCN          en       3792     -         -         2331    2579   2000    1834       1631    1195
Mantra       de       169      -         -         -       169    -       -          97      53
             en       373      -         -         -       373    -       -          119     61
             es       147      -         -         -       147    -       -          69      35
             fr       185      -         -         -       185    -       -          83      39
             nl       117      -         -         -       117    -       -          62      42
Table 3: Results of the evaluation with filtering by a training set.

Table 4: Results of the evaluation with filtering by a dictionary.

                                            Full              Filtered          Filtered 0.2
Dataset           Model                     Acc@1    Acc@5    Acc@1    Acc@5    Acc@1    Acc@5
CodiEsp           Tf-idf                    20.55%   39.24%   15.63%   35.49%   15.45%   35.28%
Diagnostico       BERT                      10.45%   15.58%   4.90%    10.35%   4.75%    10.18%
                  SapBERT                   47.83%   63.66%   44.62%   61.44%   44.55%   61.14%
                  SapBERT+mcn               48.27%   64.07%   45.09%   61.87%   44.19%   60.98%
                  SapBERT+mcn-fz4           48.32%   63.68%   45.14%   61.47%   44.25%   60.56%
                  SapBERT+mcn-fz10          49.14%   64.31%   46.01%   62.13%   38.54%   50.95%
MCN               Tf-idf                    59.00%   65.91%   33.82%   45.87%   24.61%   36.55%
                  BERT                      48.61%   52.16%   12.55%   19.46%   6.21%    10.98%
                  SapBERT                   66.28%   74.55%   47.50%   59.08%   38.54%   50.80%
                  SapBERT+target            69.36%   80.90%   54.99%   67.13%   46.14%   58.16%
CANTEMIST         Tf-idf                    27.02%   47.92%   18.85%   42.07%   16.57%   28.01%
                  BERT                      25.50%   34.69%   17.17%   27.36%   16.48%   26.55%
                  SapBERT                   57.47%   65.23%   52.72%   61.32%   51.12%   59.64%
                  SapBERT+mcn               61.29%   67.02%   56.98%   63.31%   55.86%   61.61%
                  SapBERT+mcn-fz4           61.6%    66.36%   57.31%   62.88%   56.22%   61.05%
                  SapBERT+mcn-fz10          57.47%   65.45%   52.72%   61.57%   51.12%   59.64%
Mantra (German)   Tf-idf                    73.63%   79.10%   50.47%   60.75%   29.03%   45.16%
                  BERT                      59.20%   63.68%   23.36%   31.78%   8.07%    16.13%
                  SapBERT                   87.56%   95.52%   76.64%   91.59%   64.52%   88.71%
                  SapBERT+mcn               88.06%   95.52%   80.30%   89.39%   67.74%   87.10%
                  SapBERT+mcn-fz4           89.55%   95.02%   80.37%   90.65%   72.58%   87.10%
                  SapBERT+mcn-fz10          88.06%   95.52%   77.57%   91.59%   66.13%   88.71%
Mantra (English)  Tf-idf                    86.06%   92.04%   51.59%   73.02%   43.94%   62.12%
                  BERT                      78.54%   84.29%   24.60%   45.24%   16.67%   37.88%
                  SapBERT                   93.81%   96.90%   79.37%   90.48%   75.76%   90.91%
                  SapBERT+mcn               94.03%   96.90%   80.16%   90.48%   80.30%   89.39%
                  SapBERT+mcn-fz4           94.25%   97.12%   80.95%   91.27%   80.16%   90.48%
                  SapBERT+mcn-fz10          94.25%   96.90%   80.95%   90.48%   80.30%   90.91%
Mantra (Spanish)  Tf-idf                    71.69%   80.72%   45.45%   62.34%   26.32%   44.74%
                  BERT                      62.65%   69.28%   25.97%   38.96%   10.53%   15.79%
                  SapBERT                   83.73%   90.36%   71.43%   83.12%   47.37%   68.42%
                  SapBERT+mcn               84.34%   90.96%   72.73%   84.42%   50.00%   71.05%
                  SapBERT+mcn-fz4           85.54%   92.17%   75.32%   87.01%   52.63%   76.32%
                  SapBERT+mcn-fz10          84.34%   92.77%   72.73%   87.01%   47.37%   76.32%
Mantra (French)   Tf-idf                    77.03%   80.63%   50.51%   57.58%   30.00%   38.00%
                  BERT                      65.32%   71.62%   24.24%   37.37%   2.00%    12.00%
                  SapBERT                   82.43%   93.24%   62.63%   84.85%   46.00%   76.00%
                  SapBERT+mcn               83.33%   95.50%   64.65%   89.90%   54.00%   84.00%
                  SapBERT+mcn-fz4           84.23%   94.14%   66.67%   86.87%   54.00%   80.00%
                  SapBERT+mcn-fz10          82.88%   93.69%   63.64%   85.86%   48.00%   78.00%
Mantra (Dutch)    Tf-idf                    73.23%   77.95%   53.85%   61.54%   43.18%   50.00%
                  BERT                      55.12%   58.27%   18.46%   24.62%   13.64%   20.45%
                  SapBERT                   84.25%   87.40%   73.85%   80.00%   63.64%   72.73%
                  SapBERT+mcn               85.83%   87.40%   78.46%   80.00%   70.45%   72.73%
                  SapBERT+mcn-fz4           85.83%   87.40%   78.46%   80.00%   70.45%   72.73%
                  SapBERT+mcn-fz10          84.25%   87.40%   75.38%   80.00%   65.91%   72.73%
3 https://github.com/dmis-lab/BioSyn
Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. A R Aronson, Proceedings of the AMIA Symposium. the AMIA Symposium17Aronson, A. R. (2001). Effective mapping of biomed- ical text to the UMLS Metathesaurus: the MetaMap program. In Proceedings of the AMIA Symposium, page 17. American Medical Informatics Association.
Cometa: A corpus for medical entity linking in the social media. M Basaldella, F Liu, E Shareghi, N Collier, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Basaldella, M., Liu, F., Shareghi, E., and Collier, N. (2020). Cometa: A corpus for medical entity linking in the social media. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 3122-3137.
Spanish pre-trained bert model and evaluation data. J Cañete, G Chaperon, R Fuentes, J.-H Ho, H Kang, J Pérez, PML4DC at ICLR 2020Cañete, J., Chaperon, G., Fuentes, R., Ho, J.-H., Kang, H., and Pérez, J. (2020). Spanish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.
From narrative descriptions to meddra: automagically encoding adverse drug reactions. C Combi, M Zorzi, G Pozzani, U Moretti, Arzenton , E , Journal of Biomedical Informatics. 84Combi, C., Zorzi, M., Pozzani, G., Moretti, U., and Arzenton, E. (2018). From narrative descriptions to meddra: automagically encoding adverse drug reac- tions. Journal of Biomedical Informatics, 84:184- 199.
Unsupervised cross-lingual representation learning at scale. A Conneau, K Khandelwal, N Goyal, V Chaudhary, G Wenzek, F Guzmán, E Grave, M Ott, L Zettlemoyer, V Stoyanov. Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., and Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116.
ECSTRA-INSERM@ CLEF eHealth2016-task 2: ICD10 code extraction from death certificates. M Dermouche, V Looten, R Flicoteaux, S Chevret, J Velcin, N Taright, Dermouche, M., Looten, V., Flicoteaux, R., Chevret, S., Velcin, J., and Taright, N. (2016). ECSTRA- INSERM@ CLEF eHealth2016-task 2: ICD10 code extraction from death certificates. CLEF.
Bert: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesDevlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). Bert: Pre-training of deep bidirectional transformers for language understanding. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 4171-4186.
Deep neural models for icd-10 coding of death certificates and autopsy reports in free-text. F Duarte, B Martins, C S Pinto, M J Silva, Journal of biomedical informatics. 80Duarte, F., Martins, B., Pinto, C. S., and Silva, M. J. (2018). Deep neural models for icd-10 coding of death certificates and autopsy reports in free-text. Journal of biomedical informatics, 80:64-77.
Uwm: Disorder mention extraction from clinical text using crfs and normalization using learned edit distance patterns. O Ghiasvand, R J Kate, SemEval@ COLING. Ghiasvand, O. and Kate, R. J. (2014). Uwm: Disorder mention extraction from clinical text using crfs and normalization using learned edit distance patterns. In SemEval@ COLING, pages 828-832.
Two-phase chief complaint mapping to the umls metathesaurus in korean electronic medical records. B.-Y Kang, D.-W Kim, H.-G Kim, IEEE Transactions on Information Technology in Biomedicine. 131Kang, B.-Y., Kim, D.-W., and Kim, H.-G. (2008). Two-phase chief complaint mapping to the umls metathesaurus in korean electronic medical records. IEEE Transactions on Information Technology in Biomedicine, 13(1):78-86.
Natural language processing algorithms for mapping clinical text fragments onto ontology concepts: a systematic review and recommendations for future studies. M G Kersloot, F J Van Putten, A Abu-Hanna, R Cornet, D L Arts, Journal of biomedical semantics. 111Kersloot, M. G., van Putten, F. J., Abu-Hanna, A., Cor- net, R., and Arts, D. L. (2020). Natural language processing algorithms for mapping clinical text frag- ments onto ontology concepts: a systematic review and recommendations for future studies. Journal of biomedical semantics, 11(1):1-21.
Taggerone: joint named entity recognition and normalization with semi-markov models. R Leaman, Z Lu, Bioinformatics. 3218Leaman, R. and Lu, Z. (2016). Taggerone: joint named entity recognition and normalization with semi-markov models. Bioinformatics, 32(18):2839- 2846.
Biobert: a pre-trained biomedical language representation model for biomedical text mining. J Lee, W Yoon, S Kim, D Kim, S Kim, C H So, J Kang, Bioinformatics. 364Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., and Kang, J. (2020). Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
Self-alignment pretraining for biomedical entity representations. F Liu, E Shareghi, Z Meng, M Basaldella, N Collier, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLiu, F., Shareghi, E., Meng, Z., Basaldella, M., and Collier, N. (2021a). Self-alignment pretraining for biomedical entity representations. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 4228-4238, June.
Learning domain-specialised representations for cross-lingual biomedical entity linking. F Liu, I Vulić, A Korhonen, N Collier, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingShort Papers2Liu, F., Vulić, I., Korhonen, A., and Collier, N. (2021b). Learning domain-specialised representa- tions for cross-lingual biomedical entity linking. In Proceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 565- 574.
Zero-shot entity linking by reading entity descriptions. L Logeswaran, M.-W Chang, K Lee, K Toutanova, J Devlin, H Lee, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsLogeswaran, L., Chang, M.-W., Lee, K., Toutanova, K., Devlin, J., and Lee, H. (2019). Zero-shot entity linking by reading entity descriptions. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449-3460.
Investigating of disease name normalization using neural network and pre-training. Y Lou, T Qian, F Li, J Zhou, D Ji, M Cheng, IEEE Access. 8Lou, Y., Qian, T., Li, F., Zhou, J., Ji, D., and Cheng, M. (2020). Investigating of disease name normalization using neural network and pre-training. IEEE Access, 8:85729-85739.
Deep neural models for medical concept normalization in user-generated texts. Z Miftahutdinov, E Tutubalina, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. the 57th Annual Meeting of the Association for Computational Linguistics: Student Research WorkshopMiftahutdinov, Z. and Tutubalina, E. (2019). Deep neural models for medical concept normalization in user-generated texts. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics: Student Research Workshop, pages 393-399.
Low resource recognition and linking of biomedical concepts from a large ontology. S Mohan, R Angell, N Monath, A Mccallum, arXiv:2101.10587arXiv preprintMohan, S., Angell, R., Monath, N., and McCallum, A. (2021). Low resource recognition and linking of biomedical concepts from a large ontology. arXiv preprint arXiv:2101.10587.
Robust representation learning of biomedical names. M C Phan, A Sun, Y Tay, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsPhan, M. C., Sun, A., and Tay, Y. (2019). Robust rep- resentation learning of biomedical names. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3275- 3285.
Semeval-2014 task 7: Analysis of clinical text. S Pradhan, N Elhadad, W W Chapman, S Manandhar, G Savova, SemEval@ COLING. Pradhan, S., Elhadad, N., Chapman, W. W., Manand- har, S., and Savova, G. (2014). Semeval-2014 task 7: Analysis of clinical text. In SemEval@ COLING, pages 54-62.
EMR coding with semi-parametric multi-head matching networks. A Rios, R Kavuluru. Rios, A. and Kavuluru, R. (2018). EMR coding with semi-parametric multi-head matching networks. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, volume 2018, page 2081. NIH Public Access.
O Sevgili, A Shelmanov, M Arkhipov, A Panchenko, C Biemann, arXiv:2006.00575Neural entity linking: A survey of models based on deep learning. arXiv preprintSevgili, O., Shelmanov, A., Arkhipov, M., Panchenko, A., and Biemann, C. (2020). Neural entity linking: A survey of models based on deep learning. arXiv preprint arXiv:2006.00575.
Entity linking with a knowledge base: Issues, techniques, and solutions. W Shen, J Wang, J Han, IEEE Transactions on Knowledge and Data Engineering. 272Shen, W., Wang, J., and Han, J. (2014). Entity linking with a knowledge base: Issues, techniques, and so- lutions. IEEE Transactions on Knowledge and Data Engineering, 27(2):443-460.
Biomedical entity representations with synonym marginalization. M Sung, H Jeon, J Lee, J Kang, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsSung, M., Jeon, H., Lee, J., and Kang, J. (2020). Biomedical entity representations with synonym marginalization. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 3641-3650.
Overview of the share/clef ehealth evaluation lab. H Suominen, S Salanterä, S Velupillai, W W Chapman, G Savova, N Elhadad, S Pradhan, B R South, D L Mowery, G J Jones, International Conference of the Cross-Language Evaluation Forum for European Languages. SpringerSuominen, H., Salanterä, S., Velupillai, S., Chap- man, W. W., Savova, G., Elhadad, N., Pradhan, S., South, B. R., Mowery, D. L., Jones, G. J., et al. (2013). Overview of the share/clef ehealth eval- uation lab 2013. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 212-231. Springer.
Fair evaluation in concept normalization: a large-scale comparative analysis for bert-based models. E Tutubalina, A Kadurin, Z Miftahutdinov, Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsTutubalina, E., Kadurin, A., and Miftahutdinov, Z. (2020). Fair evaluation in concept normalization: a large-scale comparative analysis for bert-based mod- els. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 6710- 6716.
Extraction and standardization of patient complaints from electronic medication histories for pharmacovigilance: Natural language processing analysis in japanese. M Usui, E Aramaki, T Iwao, S Wakamiya, T Sakamoto, M Mochizuki, JMIR medical informatics. 6311021Usui, M., Aramaki, E., Iwao, T., Wakamiya, S., Sakamoto, T., and Mochizuki, M. (2018). Ex- traction and standardization of patient complaints from electronic medication histories for pharma- covigilance: Natural language processing analysis in japanese. JMIR medical informatics, 6(3):e11021.
Erasmus MC at CLEF eHealth 2016: Concept recognition and coding in French texts. E Van Mulligen, Z Afzal, S A Akhondi, D Vo, J A Kors, Van Mulligen, E., Afzal, Z., Akhondi, S. A., Vo, D., and Kors, J. A. (2016). Erasmus MC at CLEF eHealth 2016: Concept recognition and coding in French texts. CLEF.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Ł Kaiser, I Polosukhin, Advances in neural information processing systems. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems, pages 5998- 6008.
Spanish BioBERT embeddings. F Villena. Villena, F. (2021). Spanish BioBERT embeddings.
Transformers: State-of-the-art natural language processing. T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, J Davison, S Shleifer, P Von Platen, C Ma, Y Jernite, J Plu, C Xu, T L Scao, S Gugger, M Drame, Q Lhoest, A M Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnlineAssociation for Computational LinguisticsWolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtow- icz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Scao, T. L., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. M. (2020). Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing: System Demonstrations, pages 38-45, Online, October. Association for Computational Linguistics.
Normco: Deep disease normalization for biomedical knowledge base construction. D Wright, Y Katsis, R Mehta, C.-N Hsu, Automated Knowledge Base Construction. Wright, D., Katsis, Y., Mehta, R., and Hsu, C.-N. (2019). Normco: Deep disease normalization for biomedical knowledge base construction. In Auto- mated Knowledge Base Construction.
Latte: Latent type modeling for biomedical entity linking. M Zhu, B Celikkaya, P Bhatia, C K Reddy, arXiv preprint arXiv:1911.09787. Zhu, M., Celikkaya, B., Bhatia, P., and Reddy, C. K. (2019). Latte: Latent type modeling for biomedical entity linking. arXiv preprint arXiv:1911.09787.
Language Resource References
Exploring semantic groups through visual approaches. Olivier Bodenreider, Alexa T Mccray, ElsevierBodenreider, Olivier and McCray, Alexa T. (2003). Exploring semantic groups through visual ap- proaches. Elsevier.
The unified medical language system (UMLS): integrating biomedical terminology. Olivier Bodenreider, Oxford University PressBodenreider, Olivier. (2004). The unified medical lan- guage system (UMLS): integrating biomedical ter- minology. Oxford University Press.
GGPONC: A Corpus of German Medical Text with Rich Metadata Based on Clinical Practice Guidelines. Florian Borchert, Christina Lohr, Luise Modersohn, Thomas Langer, Markus Follmann, Jan Sachs, Philipp, Udo Hahn, Matthieu-P Schapranow, Borchert, Florian and Lohr, Christina and Modersohn, Luise and Langer, Thomas and Follmann, Markus and Sachs, Jan Philipp and Hahn, Udo and Schapra- now, Matthieu-P. (2020). GGPONC: A Corpus of German Medical Text with Rich Metadata Based on Clinical Practice Guidelines.
The medical dictionary for regulatory activities (MedDRA). Elliot G Brown, Louise Wood, Sue Wood, SpringerBrown, Elliot G and Wood, Louise and Wood, Sue. (1999). The medical dictionary for regulatory activ- ities (MedDRA). Springer.
ICD-10-CM Complete Code Set 2016. Medical Codebooks, Medical Code BooksCodeBooks, Medical. (2016). ICD-10-CM Complete Code Set 2016. Medical Code Books.
Medical subject headings used to search the biomedical literature. Margaret H Coletti, Howard L Bleich, BMJ Group BMA House. Coletti, Margaret H and Bleich, Howard L. (2001). Medical subject headings used to search the biomed- ical literature. BMJ Group BMA House, Tavistock Square, London, WC1H 9JR.
A multilingual goldstandard corpus for biomedical concept recognition: the Mantra GSC. Jan A. Kors and S. Clematide and Saber Ahmad Akhondi and Erik M. van Mulligen and Dietrich Rebholz-Schuhmann.Jan A. Kors and S. Clematide and Saber Ahmad Akhondi and Erik M. van Mulligen and Dietrich Rebholz-Schuhmann. (2015). A multilingual gold- standard corpus for biomedical concept recognition: the Mantra GSC.
Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking. Fangyu Liu, Ivan Vulic, Anna Korhonen, Nigel Collier, Fangyu Liu and Ivan Vulic and Anna Korhonen and Nigel Collier. (2021). Learning Domain- Specialised Representations for Cross-Lingual Biomedical Entity Linking.
Extracting Neoplasms Morphology Mentions in Spanish Clinical Cases through Word Embeddings. Pilar López-Úbeda, Manuel Díaz-Galiano, Carlos, María Martín-Valdivia, Teresa, Luis Alfonso López, Ureña, López-Úbeda, Pilar and Díaz-Galiano, Manuel Carlos and Martín-Valdivia, María Teresa and López, Luis Alfonso Ureña. (2020). Extracting Neoplasms Mor- phology Mentions in Spanish Clinical Cases through Word Embeddings.
MCN: A comprehensive corpus for medical concept normalization. Yen-Fu Luo, Weiyi Sun, Anna Rumshisky, Yen-Fu Luo and Weiyi Sun and Anna Rumshisky. (2019). MCN: A comprehensive corpus for medical concept normalization.
Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results. Antonio Miranda-Escalada, Eulàlia Farré, Martin Krallinger. Miranda-Escalada, Antonio and Farré, Eulàlia and Krallinger, Martin. (2020a). Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.
Overview of automatic clinical coding: annotations, guidelines, and solutions for non-english clinical cases at codiesp track of CLEF. Antonio Miranda-Escalada, Aitor Gonzalez-Agirre, Jordi Armengol-Estapé, Martin Krallinger, 2020Miranda-Escalada, Antonio and Gonzalez-Agirre, Aitor and Armengol-Estapé, Jordi and Krallinger, Martin. (2020b). Overview of automatic clinical coding: annotations, guidelines, and solutions for non-english clinical cases at codiesp track of CLEF eHealth 2020.
SemClinBr-a multi institutional and multi specialty semantically annotated corpus for Portuguese clinical NLP tasks. Ana Carolina Peters, Adalniza Moura Pucca da Silva, Caroline P Gebeluca, Yohan Bonescki Gumiel, Lilian Mie Mukai Cintho, Deborah Ribeiro Carvalho, Sadid A Hasan, Claudia Maria Cabral Moro, and others. Peters, Ana Carolina and da Silva, Adalniza Moura Pucca and Gebeluca, Caroline P and Gumiel, Yohan Bonescki and Cintho, Lilian Mie Mukai and Carvalho, Deborah Ribeiro and Hasan, Sadid A and Moro, Claudia Maria Cabral and others. (2020). SemClinBr-a multi institutional and multi specialty semantically annotated corpus for Portuguese clinical NLP tasks.
Towards a Multilingual Corpus for Named Entity Linking Evaluation in the Clinical Domain. Pedro Ruas, André Lamúrias, Francisco M Couto, CEUR-WS.orgPedro Ruas and André Lamúrias and Francisco M. Couto. (2020). Towards a Multilingual Corpus for Named Entity Linking Evaluation in the Clinical Do- main. CEUR-WS.org.
SNOMED RT: a reference terminology for health care. Kent A Spackman, Keith E Campbell, Roger A Côté, Spackman, Kent A and Campbell, Keith E and Côté, Roger A. (1997). SNOMED RT: a reference termi- nology for health care.
2010 i2B2/VA challenge on concepts, assertions, and relations in clinical text. Ozlem Uzuner, South, Brett, Shuying Shen, Scott Duvall, Uzuner, Ozlem and South, Brett and Shen, Shuying and DuVall, Scott. (2011). 2010 i2B2/VA challenge on concepts, assertions, and relations in clinical text.
MedType: Improving Medical Entity Linking with Semantic Type Prediction. Shikhar Vashishth, Rishabh Joshi, Denis Newman-Griffis, Ritam Dutt, Carolyn Rose, Vashishth, Shikhar and Joshi, Rishabh and Newman- Griffis, Denis and Dutt, Ritam and Rose, Carolyn. (2020). MedType: Improving Medical Entity Link- ing with Semantic Type Prediction.
International classification of diseases for oncology (ICD-O). World Health Organization. 3rd edition. 1st revisionWorld Health Organization. (2013). International classification of diseases for oncology (ICD-O). 3rd edition, 1st revision. |
465,873 | Modeling language evolution with codes that utilize context and phonetic features | We present methods for investigating processes of evolution in a language family by modeling relationships among the observed languages. The models aim to find regularities-regular correspondences in lexical data. We present an algorithm which codes the data using phonetic features of sounds, and learns longrange contextual rules that condition recurrent sound correspondences between languages. This gives us a measure of model quality: better models find more regularity in the data. We also present a procedure for imputing unseen data, which provides another method of model comparison. Our experiments demonstrate improvements in performance compared to prior work. | [
10512621,
16108530,
2038068,
1591992,
405662
] | Modeling language evolution with codes that utilize context and phonetic features
Association for Computational Linguistics. Copyright Association for Computational Linguistics. August 7-12, 2016.
Javad Nouri
Department of Computer Science
University of Helsinki
Finland
Roman Yangarber
Department of Computer Science
University of Helsinki
Finland
Modeling language evolution with codes that utilize context and phonetic features
Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL)
the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), Berlin, Germany. Association for Computational Linguistics, August 7-12, 2016.
We present methods for investigating processes of evolution in a language family by modeling relationships among the observed languages. The models aim to find regularities-regular correspondences in lexical data. We present an algorithm which codes the data using phonetic features of sounds, and learns longrange contextual rules that condition recurrent sound correspondences between languages. This gives us a measure of model quality: better models find more regularity in the data. We also present a procedure for imputing unseen data, which provides another method of model comparison. Our experiments demonstrate improvements in performance compared to prior work.
Introduction
We present work on modeling evolution within language families, by discovering regularity in data from observed languages.
The study of evolution of language families covers several problems, including: a. discovering cognates-"genetically related" words, i.e., words that derive from a common ancestor word in an ancestral proto-language; b. determining genetic relationships among languages in the given language family based on observed data; c. discovering patterns of sound correspondence across languages; and d. reconstruction of forms in protolanguages. In this paper, we treat a. (sets of cognates) as given, and focus on problems b. and c. 1 Given a corpus of cognate sets, 2 we first aim to find as much regularity as possible in the data at the sound (or symbol) level. 3 An important goal is that our methods be data-driven-we aim to use all data available, and to learn the patterns of regular correspondence directly from the data. We allow only the data to determine which rules underlie it-correspondences that are inherently encoded in the corpus itself-rather than relying on externally supplied (and possibly biased) rules or "priors." We try to refrain from a priori assumptions or "universal" principles-e.g., no preference to align consonants with consonants, to align a symbol with itself, etc. We claim that alignment may not be the best way to address the problem of regularity. Finding alignments is indeed finding a kind of regularity, but not all regularity is expressed as alignment.
The paper is organized as follows. In section 2 we review the data used in our experiments and recent approaches to modeling language evolution. We formalize the problem and present our models in section 3. The models treat sounds as vectors of phonetic features, and utilize the context of the sounds to discover patterns of regular correspondence. Once we have obtained the regularity, the question arises how we can evaluate it effectively. In section 4, we present a procedure for imputation-prediction of unseen data-to evaluate the strength of the learned rules of correspondence, by how well they predict words in one language given corresponding words in another language. We further evaluate the models by using them for building phylogenies-family trees, and comparing them to gold standards, in section 4.2. We conclude with a discussion in section 5.
We have experimented with several language families: Uralic, Turkic and Indo-European; the paper focuses on results from the Uralic family.
We use large-scale digital etymological resources/dictionaries. For Uralic, the StarLing database (Starostin, 2005) contains 2586 Uralic cognate sets, based on (Rédei, 1991). The etymological dictionary Suomen Sanojen Alkuperä (SSA), "The Origin of Finnish Words" (Itkonen and Kulonen, 2000), has over 5000 cognate sets.
Related work and motivation
One traditional arrangement of the Uralic languages is shown in Figure 1; several alternative arrangements appear in the literature.
The last 15 years have seen a surge in computational modeling of language relationships, change and evolution. We provide a detailed discussion of related prior work in (Nouri et al., 2016).
In earlier work, e.g., (Wettig et al., 2011), we presented two perspectives on the problem of finding regularity. It can be seen as a problem of aligning the data. From an information-theoretic perspective, finding regularity is a problem of compression: the more regularity we find in data, the more we can compress it. In (Wettig et al., 2011), we presented baseline models, which focus on alignment of symbols, in a 1-1 fashion. We showed that aligning more than one symbol at a time-e.g., 2-2-gives better performance. Alignment is a natural way to think of comparing languages. E.g., in Figure 2, obtained by the 1-1 model, we can observe 4 that most of the time Finnish k corresponds to Estonian k (we write Fin. k ∼ Est. k). However, models that focus on alignments have certain shortcomings. For example, substantial probability mass is assigned to Fin. k ∼ Est. g, yet the model cannot explain why. Fin. k ∼ Est. g in certain environments-in non-first syllables, between vowels or after a voiced consonant-but the model cannot capture this regularity, because it has no notion of context. In fact, the regularity is much deeper: not only Fin. k, but all Finnish voiceless stops become voiced in Estonian in this environment: p ∼ b, t ∼ d. This type of regularity cannot be captured by the baseline model because it treats symbols as atoms, and does not know about their shared phonetic features.
We claim that alignment may not always be the best way to think about the problem of finding regularity. Figure 2 shows a prominent "diagonal": 4 many sounds correspond-they "align with themselves." However, as languages diverge further, this correspondence becomes blurry; e.g., when we try to align Finnish and Hungarian, the probability distribution of aligned symbols has much higher entropy, Figure 3. The reason is that the regularity lies on a much deeper level: predicting which sound occurs in a given position in a word requires knowledge of a wider context, in both Finnish and Hungarian. Hence we will prefer to think in terms of coding, rather than alignment.

[Figure 2: 1-1 alignment for Finnish and Estonian]

[Figure 3: symbol alignment for Finnish and Hungarian]

4 The size of the circle is proportional to the probability of aligning the corresponding symbols on the X and Y axes. The dot coordinates "." correspond to deletions/insertions.

Methods in (Kondrak, 2002) learn one-to-one sound correspondences between words in pairs of languages. Kondrak (2003) and Wettig et al. (2011) find more complex-many-to-many-correspondences. These methods focus on alignment, and model context of the sound changes in a limited way, while it is known that most evolutionary changes are conditioned on the context of the evolving sound. Bouchard-Côté et al. (2007) use MCMC-based methods to model context, and operate on more than a pair of languages. 5

Our models, similarly to other work, operate at the phonetic level only, leaving semantic judgements to the creators of the database. Some prior work attempts to approach semantics by computational means as well, e.g., (Kondrak, 2004;Kessler, 2001). We begin with a set of etymological data for a language family as given, and treat each cognate set as a fundamental unit of input. We use the principle of recurrent sound correspondence, as in much of the literature. Alignment can be evaluated by measuring relationships among entire languages within the family. Construction of phylogenies is studied, e.g., in (Nakhleh et al., 2005;Ringe et al., 2002;Barbançon et al., 2009).
Our work is related to the generative "Berkeley" models, (Bouchard-Côté et al., 2007), (Hall and Klein, 2011), in the following respects.
Context: in (Wettig et al., 2011) we capture some context by coding pairs of symbols, as in (Kondrak, 2003). Berkeley models handle context by conditioning the symbol being generated upon the immediately preceding and following symbols. Our method uses broader context by building decision trees, so that non-relevant context information does not grow model complexity.
Phonetic features: in (Wettig et al., 2011) we treated sounds/symbols as atomic-not analyzed in terms of their phonetic makeup. Berkeley models use "natural classes" to define the context of a sound change, but not to generate the symbols themselves; (Bouchard-Côté et al., 2009) encode as a prior which sounds are "similar" to each other. We code symbols in terms of phonetic features. Our models are based on the information-theoretic Minimum Description Length principle (MDL), e.g., (Grünwald, 2007), unlike the Berkeley models. MDL brings some theoretical benefits, since models chosen in this way are guided by data with no free parameters or hand-picked "priors." The data analyst chooses the model class and structure, and the coding scheme, i.e., a decodable way to encode model and data. This determines the learning strategy-we optimize the cost function, which is the code length determined by these choices.
Objective function: we use NML-the normalized maximum likelihood, not reported previously in this setting. It is preferable for theoretical and practical reasons, e.g., to prequential coding used in (Wettig et al., 2011), as explained in section 3.1.
Models that utilize more than the immediate adjacent environment of a sound to build a complete alignment of a language family have not been reported previously, to the best of our knowledge.
Coding pairs of words
We begin with baseline algorithms for pairwise coding: in (Wettig et al., 2011;Wettig et al., 2012) we code pairs of words from two related languages in our corpus of cognates. For each word pair, the task of alignment is finding which symbols correspond best; the task of coding is achieving more compression. The simplest form of symbol alignment is a pair (σ : τ ) ∈ Σ × T , a single symbol σ from the source alphabet Σ with a symbol τ from the target alphabet T .
To model insertions and deletions, we augment both alphabets with a special "empty" symbol-denoted by a dot-and write the augmented alphabets as Σ. and T.. We can then align word pairs, such as hiiri-löNk@r (meaning "mouse" in Finnish and Khanty) in many different ways; putting Finnish (source level, above) and Khanty (target level, below), for example:
h i . . i r i        h . . i i r i
| | | | | | |        | | | | | | |
lö N k @ r . .       lö N k @ r . .       ...
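For concreteness, such an alignment can be represented as a sequence of symbol pairs over the augmented alphabets. The following small sketch is our own illustrative encoding, not the paper's implementation; "." is the empty symbol, and the alignment corresponds to the first diagram above.

EMPTY = "."  # the "empty" symbol used for insertions and deletions

# One possible alignment of Finnish "hiiri" with Khanty "löNk@r":
alignment = [("h", "lö"), ("i", "N"), (EMPTY, "k"), (EMPTY, "@"),
             ("i", "r"), ("r", EMPTY), ("i", EMPTY)]

def read_off(alignment, level):
    # Recover the source (level=0) or target (level=1) word,
    # dropping the empty symbols.
    return "".join(pair[level] for pair in alignment if pair[level] != EMPTY)

assert read_off(alignment, 0) == "hiiri"
assert read_off(alignment, 1) == "löNk@r"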
A final note about alignments: we find no satisfactory way to evaluate alignments. Which of the above alignments is "better"? It may be satisfying to prefer the left one, observing that Fin. h corresponds well to Khn. l (since they both go back to Proto-Uralic š); Fin. r ∼ Khn. r, etc. However, if a model achieves better compression by preferring the alignment on the right, then it is difficult to argue that that alignment is "not correct."
Context model with phonetic features
Our coding method is based on MDL. The most refined form of MDL, NML (Normalized Maximum Likelihood, (Rissanen, 1996)), cannot be efficiently computed for our model. Therefore, we resort to a classic two-part coding scheme. The first part of the two-part code is responsible for splitting the data into subsets corresponding to certain contexts. However, given the contexts, we can use NML to encode these subsets. 6 We begin with a raw set of observed data: word pairs in two languages. We search for a way to code the data, by capturing regular correspondences. The goodness of the code is defined formally below. MDL says that the more regularity we can find in the data, the fewer bits we will need to encode (or compress) it. More regularity means lower entropy in the distribution that describes the data, and lower entropy lets us construct a more economical code.
Features: Rather than coding symbols (sounds) as atomic, we code them in terms of their phonetic features. For example, Figure 4 shows how a model might code Finnish jalka and Estonian jalg (meaning "leg"). We code the symbols in a fixed order: top to bottom, left to right. Each symbol is coded as a vector of its phonetic features, e.g., k = [ξ χ φ ψ]. For each symbol, first we code a special Type feature, with values: K (consonant), V (vowel), dot (insertion / deletion), or # (word boundary). 7 Consonants and vowels have different sets of features; each feature has 2-8 values, listed in Figure 5A. Features are coded in a fixed order.

Contexts: While coding each feature of the symbol, the model is allowed to query a fixed and finite set of candidate contexts. The idea is that the model can query its "history": information that has already been coded previously. When coding k, e.g., the model may query features of the blue a (β, γ, etc.), as well as features of the red a, etc. When coding g, the model may query those, and in addition also the features of k (χ, φ, etc.). Formally, a context is a triplet (L, P, F): L is the level, source (σ) or target (τ); P is one of the positions that the model may query, relative to the position currently being coded; for example, we may query the positions shown in Figure 5B. F is one of the possible features found at that position. Thus, we have in total about 2 levels × 8 positions × 5 features ≈ 80 candidate contexts that can be queried, as explained in detail below.
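As a rough illustration (our own sketch; the position and feature names follow Figure 5, and we gloss over the fact that vowels and consonants have different feature sets), the candidate context space can be enumerated as triplets:

```python
from itertools import product

LEVELS = ["sigma", "tau"]                                     # source and target
POSITIONS = ["I", "-P", "-S", "-K", "-V", "+S", "+K", "+V"]   # as in Figure 5B
FEATURES = ["Type", "M", "P", "X", "S"]                       # e.g., consonant features

# Every candidate context is a triplet (L, P, F).
CANDIDATE_CONTEXTS = list(product(LEVELS, POSITIONS, FEATURES))
print(len(CANDIDATE_CONTEXTS))                                # 2 * 8 * 5 = 80
```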
Two-part code
We code the complete (i.e., aligned) data using a two-part code, following MDL. We first code which model instance we select from our model class, and then code the data given the model. Our model class is defined as follows: a set of decision trees (forest)-one tree per feature per level (separately for source and for target). A model instance will define a particular structure for each tree.
Cost of coding the structure: The forest thus consists of 18 decision trees, one for each feature on the source and the target level: the type feature, 4 vowel and 4 consonant features, times 2 levels. Each node in a tree will either be a leaf, or will be split by querying one of the candidate contexts defined above. The cost of a tree is one bit for every node n_i, to encode whether n_i is internal (was split) or a leaf, plus approximately log 80 bits for each internal node, to encode which particular context was chosen to split it. We explain how the model chooses the best candidate context on which to split a node in Section 3.3.
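A minimal sketch of this structure cost, under the assumption of roughly 80 candidate contexts:

```python
import math

N_CANDIDATES = 80  # approximate number of candidate contexts (see above)

def tree_structure_cost(n_nodes: int, n_internal: int) -> float:
    """One bit per node (leaf vs. internal), plus ~log2(80) bits per split."""
    return n_nodes + n_internal * math.log2(N_CANDIDATES)
```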
Each feature and level define a tree; e.g., the "voiced" (X) feature of the source symbols corresponds to the σ-X tree. A node N in this tree holds a distribution over the values of feature X of only those symbol instances in the complete data that have reached node N, by following the context queries from the root downward. The tree structure tells us precisely which path to follow, completely determined by the context. When coding a symbol α based on another symbol found in the context C of α, for example, C = (τ, -K, M), at level τ, position -K, and one of the features M, the next edge down the tree is determined by that feature's value; and so on, down to a leaf. 8 Cost of the data given the model: this is computed by taking into account only the distributions at the leaves. The code will assign a cost (code-length) to every possible alignment of the data. The total code-length is the objective function that the learning algorithm will optimize.
Coding scheme: we use Normalized Maximum Likelihood (NML), and prequential coding as in (Wettig et al., 2011). We code the distribution at each leaf node separately; the sum of the costs of all leaves gives the total cost of the complete data-the value of the objective function.
Suppose n instances reach a leaf node N of the tree for feature F on level λ, and F has k values: e.g., n consonants satisfying N's context constraints in the σ-X tree, with k = 2 values {−, +}. Suppose also that the values are distributed so that n_i instances have value i, with i ∈ {1, ..., k}. Then this requires an NML code-length of:

L_{NML}(\lambda; F; N) = -\log P_{NML}(\lambda; F; N) = -\log \frac{\prod_i (n_i / n)^{n_i}}{C(n, k)}    (1)

Here \prod_i (n_i / n)^{n_i} is the maximum likelihood of the multinomial data at node N, and the term

C(n, k) = \sum_{n_1 + \ldots + n_k = n} \frac{n!}{n_1! \cdots n_k!} \prod_i \left( \frac{n_i}{n} \right)^{n_i}    (2)

is a normalizing constant that makes P_{NML} a probability distribution. In the MDL literature (Grünwald, 2007), the term \log C(n, k) is called the parametric complexity or the (minimax) regret of the model, in this case, the multinomial model. The NML distribution is the unique solution to the minimax problem posed in (Shtarkov, 1987),

\min_P \max_{x^n} \log \frac{P(x^n \mid \hat{\Theta}(x^n))}{P(x^n)}    (3)

where \hat{\Theta}(x^n) = \arg\max_{\Theta} P_{\Theta}(x^n) gives the maximum likelihood parameters for the data x^n. Thus, P_{NML} minimizes the worst-case regret, i.e., the number of excess bits in the code as compared to the best model in the model class, with hindsight. Details on the computation of this code length are given in (Kontkanen and Myllymäki, 2007). Learning the model from the observed data now means aligning word pairs and building decision trees so as to minimize the two-part code length: the sum of the model's code length, encoding the structure of the trees, and the code length of the data given the model, encoding the aligned word pairs using these trees.
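For concreteness, here is a small sketch (ours, not the authors' published code) of the leaf cost of eq. (1), computing the normalizer C(n, k) of eq. (2) with the linear-time recurrence of (Kontkanen and Myllymäki, 2007):

```python
from math import comb, log2

def multinomial_regret(n, k):
    """C(n, k) of eq. (2), via the recurrence of Kontkanen and Myllymäki
    (2007): C(n, K) = C(n, K-1) + n / (K-2) * C(n, K-2)."""
    if n == 0 or k == 1:
        return 1.0
    c_prev = 1.0                                   # C(n, 1)
    c = sum(comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
            for h in range(n + 1))                 # C(n, 2); note 0**0 == 1
    for j in range(3, k + 1):
        c_prev, c = c, c + n * c_prev / (j - 2)
    return c

def nml_code_length(counts):
    """L_NML of eq. (1), in bits, for a leaf with the given value counts."""
    n = sum(counts)
    log_ml = sum(c * log2(c / n) for c in counts if c > 0)  # log max. likelihood
    return -log_ml + log2(multinomial_regret(n, len(counts)))
```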
Summary of the algorithm: We start with an initial random alignment for each pair of words in the corpus. We then alternate between two steps: A. re-build the decision trees for all features on source and target levels, and B. re-align all word pairs in the corpus, using dynamic programming. Both of these operations monotonically decrease the two-part cost function and thus compress the data. We continue until we reach convergence.
Simulated annealing with a slow cooling schedule is used to avoid getting trapped in local optima.
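The following toy sketch shows the shape of this alternation for the simplest, context-free 1-1 variant; the decision trees, NML leaf costs, and annealing schedule of the full model are omitted, the smoothed event costs are our own simplification, and all names are ours:

```python
import math
from collections import Counter

DOT = "."

def viterbi_align(s, t, cost):
    """Cheapest alignment of word s to word t, as a list of symbol pairs."""
    n, m = len(s), len(t)
    D = [[(math.inf, None)] * (m + 1) for _ in range(n + 1)]
    D[0][0] = (0.0, None)
    for i in range(n + 1):
        for j in range(m + 1):
            d = D[i][j][0]
            moves = []
            if i < n and j < m:
                moves.append((i + 1, j + 1, s[i], t[j]))  # symbol pair
            if i < n:
                moves.append((i + 1, j, s[i], DOT))       # deletion
            if j < m:
                moves.append((i, j + 1, DOT, t[j]))       # insertion
            for ni, nj, a, b in moves:
                if d + cost(a, b) < D[ni][nj][0]:
                    D[ni][nj] = (d + cost(a, b), (i, j, a, b))
    path, (i, j) = [], (n, m)                             # trace back best path
    while D[i][j][1] is not None:
        pi, pj, a, b = D[i][j][1]
        path.append((a, b))
        i, j = pi, pj
    return list(reversed(path)), D[n][m][0]

def train(word_pairs, n_iter=20):
    """Alternate (A) re-estimating symbol-pair costs and (B) re-aligning."""
    counts = Counter()
    def cost(a, b):                    # crude smoothed -log event probability
        total = sum(counts.values()) + 1.0
        return -math.log((counts[(a, b)] + 0.5) / total)
    for _ in range(n_iter):
        new_counts = Counter()
        for s, t in word_pairs:
            path, _ = viterbi_align(s, t, cost)
            new_counts.update(path)
        counts = new_counts
    return counts
```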
Building decision trees
Given a complete alignment of the data, we need to build a decision tree, for each feature on both levels, yielding the lowest two-part cost. The term "decision tree" is meant in a probabilistic sense: at each node we store a distribution over the respective feature values, for all instances that reach this node. The distribution at a given leaf is then used to code an instance when it reaches the leaf. We code the features in a fixed, pre-set order, and the source level (σ-level) before the target (τ-level).
We now describe in detail the process of building a tree, using as an example the tree for the σ-level feature X. (We will need to do the same for all other features, on both levels, as well.) First, we collect all instances of consonants on the σ-level, gather the counts for feature X, and build an initial count vector; suppose it is:
value of X →      +       −
               1001    1002
This vector is stored at the root of the tree; the cost of this node is computed using NML, eq. 1. Note that this vector / distribution has rather high entropy.
Next, we try to split this node, by finding such a context that if we query the values of the feature in that context, it will help us reduce the entropy in this count vector. We check in turn all possible candidate contexts (L, P, F ), and choose the best one. Each candidate refers to some symbol found on σ-level or τ -level, at some relative position P , and to one of that symbol's features F . We will condition the split on the possible values of F . For each candidate, we try to split on its feature's values, and collect the resulting alignment counts.
Suppose one such candidate is (σ, -V, H), i.e., (σ-level, previous vowel, Horizontal feature), and suppose that the H-feature has two values: front / back. Suppose also that the vector at the root node (recall, this tree is for the X-feature) would then split into two vectors, for example:
value of X →        +       −
X | H=front      1000       1
X | H=back          1    1001
This would likely be a very good split, since it reduces the entropy of the distribution in each row to near zero. The criterion that guides the choice of the best candidate context to use for splitting a node is the sum of the code lengths of the resulting split vectors, and the code length is proportional to the entropy.
We go through all candidates exhaustively, 9 and greedily choose the one that yields the greatest reduction in entropy and drop in cost. We proceed recursively down the tree, trying to split nodes, and stop when the total tree cost stops decreasing.
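A schematic version of this greedy split search (reusing nml_code_length from the earlier NML sketch; an "instance" is simplified here to a pair of the feature value and a mapping from candidate contexts to their observed values, and the tree-structure cost term is omitted):

```python
from collections import Counter, defaultdict

def best_split(instances, candidates):
    """Pick the candidate context whose split yields the lowest total cost."""
    def leaf_cost(insts):
        counts = Counter(value for value, _ in insts)
        return nml_code_length(list(counts.values()))
    best_cost, best_cand, best_children = leaf_cost(instances), None, None
    for cand in candidates:
        children = defaultdict(list)
        for value, ctx in instances:
            children[ctx[cand]].append((value, ctx))   # split on cand's value
        cost = sum(leaf_cost(group) for group in children.values())
        if cost < best_cost:                           # greatest drop in cost
            best_cost, best_cand, best_children = cost, cand, dict(children)
    return best_cost, best_cand, best_children         # cand None: keep as leaf
```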
This completes the tree for feature X on level σ. We build all remaining trees similarly, for all features on both levels, based on the current alignment of the complete data.
Variations of context-based models
The context models enable us to discover more regularities in the data by querying the context of sounds. However, building decision trees repeatedly in the process of searching for the optimal alignments is very time-consuming. We have explored several variations of context-based models in an attempt to make the search converge more quickly, without sacrificing quality.
Zero-depth context model
In this variant of the model, during the simulated annealing phase (i.e., when there is some randomness in the search algorithm), the trees are not expanded to their full depth. Instead, for source-level trees, only the root node is calculated, and the target-level trees are allowed to query only the "itself" position on the source level. Once the simulated annealing reaches the greedy phase, the trees are grown in the same way as they would have been normally, without any restrictions.
This model results in reasonable alignments, relatively low costs, and lower running time.
Infinite-depth context model
This is another restrictive variation of the context model, which is more permissive than the zero-depth model. In this variation, during the simulated annealing phase of the algorithm, the candidates that can be queried to expand the root nodes of the trees are limited to the already encoded features of the "itself" position.
Evaluation
We discuss two views on evaluation-strict evaluations vs. intuitive evaluations.
Comparing context models to each other
From a strictly information-theoretic point of view, a sufficient condition to claim that model M_1 is better than M_2 is that M_1 assigns a higher probability (equivalently, lower cost) to the observed data. Figure 7A shows the absolute costs, in bits, for all language pairs, for the baseline 1-1 model and six context models. The six context models are the "normal", zero-depth, and infinite-depth models, and for each, the objective function uses either NML or prequential coding.
Here is how we interpret the points in these scatter plots. Each box in the triangular plot compares one model, M_x, whose scores are plotted on the X-axis, against another model, M_y (on the Y-axis). For example, the leftmost column compares the baseline 1-1 model as M_x against each of the six context models in turn; etc. In every plot box, each of the 10 × 9 points is a comparison of the two models M_x and M_y on one language pair (L_1, L_2). Therefore, for each point (L_1, L_2), the X-coordinate gives the score of model M_x, and the Y-coordinate gives the score of the other model, M_y. If the point (L_1, L_2) is below the diagonal, M_x has a higher cost on (L_1, L_2) than M_y. The further away the point is from the "break-even" diagonal line x = y, the greater the advantage of one model over the other.
The left column of figure 7A shows that all context models always produce much lower cost compared to the basic context-free 1-1 model defined in (Wettig et al., 2011).
The remaining five columns compare the context models among themselves. Here we see that no model variant is a clear winner. Since the variants do not show a clear preference for the "best" context model among this set, we will use all of them, to vote as an ensemble.

Figure 6: Comparison of compression power (compressed size in bits vs. number of word pairs, average word length 5.5 bytes) for Gzip, Bzip2, the 1-1 and 2-2 models, and the context model.
In Figure 6, we compare the context model against standard data compressors, Gzip and Bzip2, as well as the baseline models of (Wettig et al., 2011), tested on 3,200 Finnish-Estonian word pairs from SSA. Gzip and Bzip2 compress data by finding regularities, namely frequent sub-strings.
These comparisons confirm that the context model finds more regularity in the data than the off-the-shelf data compressors-which have no knowledge that the words in the data are genetically related-as well as the 1-1 and 2-2 models.
Imputation
Strictly, the improvement in the compression cost is adequate proof that the presented model outperforms the baselines. For a more intuitive evaluation of the improvement in model quality, we can compare models by using them to impute unseen data. This is done as follows. For a given model M and a language pair (L_1, L_2), e.g., (Finnish, Estonian), we hold out one word pair and train the model on all remaining word pairs. Then we show the model the held-out Finnish word and let it impute, i.e., guess, the corresponding Estonian word. Imputation can be done for all models with a dynamic programming algorithm, similar to the Viterbi-like search used during model training. Formally, given the held-out Finnish string, the imputation procedure selects, from all possible Estonian strings, the most probable Estonian string, given the model. We then compute an edit distance (e.g., the Levenshtein edit distance) between the imputed Estonian string and the correct withheld word.
We repeat this procedure for all word pairs in the (L_1, L_2) data set, sum the edit distances, and normalize by the total size of the correct L_2 data, giving the Normalized Edit Distance NED(L_2 | L_1, M) from L_1 to L_2, under M.
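A small sketch of this evaluation measure (names are ours), using the Levenshtein distance:

```python
def levenshtein(a, b):
    """Standard edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def ned(imputed, correct):
    """Sum of edit distances, normalized by the size of the correct data."""
    total = sum(levenshtein(x, y) for x, y in zip(imputed, correct))
    return total / sum(len(y) for y in correct)
```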
NED indicates how much regularity the model has learned about the language pair (L_1, L_2). Finally, we used NED to compare models across all language pairs. The context models always have lower cost than the baseline, and lower NED in ≈88% of the language pairs. This is an encouraging indication that optimizing the code length is a good approach: the models do not optimize NED directly, and yet the cost correlates with NED, a simple and intuitive measure of model quality.
A similar kind of imputation was used in (Bouchard-Côté et al., 2007) for cross-validation.
Voting for phylogenies
Each context model assigns its own MDL cost to every language pair. These raw MDL costs are not directly comparable, since different language pairs have different amounts of data: different numbers of shared cognate words. We can make these costs comparable by normalizing them, using NCD, the Normalized Compression Distance (Cilibrasi and Vitanyi, 2005), as in (Wettig et al., 2011). Then, each model produces its own pairwise distance matrix for all language pairs, where the distance is NCD. A pairwise distance matrix can be used to construct a phylogeny for the language family. NED, introduced above, provides yet another distance measure between any pair of languages, similarly to NCD. Thus, the NED scores can also be used to make inferences about how far the languages are from each other, and used as input to algorithms for creating phylogenetic trees. For example, applying the NeighborJoin algorithm (Saitou and Nei, 1987) to the pairwise NED matrix produced by the normal context model yields the phylogeny in Figure 7B.
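For reference, NCD itself is a one-line normalization of compression costs; in the sketch below (ours), the three arguments stand for the model's compressed sizes C(x), C(y), and C(x, y) of the two single-language data sets and of the combined data set:

```python
def ncd(c_x, c_y, c_xy):
    """Normalized Compression Distance from compressed sizes, in bits."""
    return (c_xy - min(c_x, c_y)) / max(c_x, c_y)
```

The resulting pairwise NCD (or NED) matrix is then the input to a tree-building algorithm such as NeighborJoin.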
To compute how far a given phylogeny is from a gold-standard tree, we can use a distance measure for unrooted, leaf-labeled (URLL) trees. One such URLL distance measure is given in (Robinson and Foulds, 1981). The URLL distance between this tree and the gold standard in Figure 1 is 0.12. 10 However, the MDL costs do not allow us to prefer any one of the context models over the others.
Model                          Brit.   Ant.   Volga
normal-nml-avg.NCD             0.14    0      0.14
normal-nml-avg.NED             0.14    0      0.14
normal-nml-min.NCD             0.14    0      0.14
normal-nml-min.NED             0.28    0.14   0.28
normal-prequential-avg.NCD     0.14    0      0.14
normal-prequential-avg.NED     0.14    0.28   0.42
normal-prequential-min.NCD     0.14    0      0.14
normal-prequential-min.NED     0.14    0.28   0.42
∞-nml-avg.NCD                  0.28    0.14   0.28
∞-nml-avg.NED                  0.42    0.28   0.42
∞-nml-min.NCD                  0.28    0.14   0.28
∞-nml-min.NED                  0.28    0.14   0.28
∞-prequential-avg.NCD          0.14    0      0.14
∞-prequential-avg.NED          0.28    0.14   0.28
∞-prequential-min.NCD          0.14    0.28   0.42
∞-prequential-min.NED          0.28    0.14   0.28
zero-nml-avg.NCD               0.42    0.42   0.57
zero-nml-avg.NED               0       0.14   0.28
zero-nml-min.NCD               0.14    0      0.14
zero-nml-min.NED               0.28    0.28   0.42
zero-prequential-avg.NCD       0.14    0      0.14
zero-prequential-avg.NED       0.28    0.14   0.28
zero-prequential-min.NCD       0.14    0      0.14
zero-prequential-min.NED       0.28    0.28   0.42
Total vote                     5.14    3.28   6.71

Table 1: Context models voting for the Britannica, Anttila, and Volga gold standards.

Therefore, we use all models as an ensemble.

Gold-standard trees: Different linguists advocate different, conflicting theories about the structure of the Uralic family tree, and Finno-Ugric in particular. Figure 1 shows one such phylogeny, which we call "Britannica." Another phylogeny, isomorphic to the tree in Figure 7B, we call "Anttila." A third tree in the literature pairs Mari and Mordvin together into a "Volgaic" branch of Finno-Ugric.
In Table 1, we compare trees generated by the context models to these three gold-standard trees, using the URLL distance defined above.
The context models induce phylogenetic trees as follows. Each model can use prequential coding or NML. Each model yields one NCD matrix and one NED matrix. Finally, for any pair of languages L_1 and L_2, the model in general produces different distances for (L_1, L_2) vs. (L_2, L_1), depending on which language is the source and which is the target (since some languages preserve more information than others); the two directed distances can be combined by taking their average or their minimum (the "avg" and "min" variants in Table 1). Therefore, each of the three context models produces 8 trees, 24 in total. The distance from each tree to the three gold-standard phylogenies is given in Table 1.
The measures show which gold-standard tree is favored by all models taken together. The models strongly prefer "Anttila"-which happens to be the phylogeny favored by a majority of Uralic scholars at present, (Anttila, 1989).
Discussion and future work
We have presented an approach to modeling evolutionary processes within a language family by coding data from all languages pair-wise. To our knowledge, these models represent the first attempt to capture longer-range context in evolutionary modeling, where prior work allowed only a small neighboring context to condition the correspondences. We present a feature-based, context-aware MDL coding scheme, and compare it against our earlier models, in terms of compression cost and imputation power. Language distances induced by compression cost and by imputation for all pairs of languages enable us to build complete phylogenies. The model takes a set of lexical data as input, and makes no further assumptions. In this regard, it is as objective as possible given the data. 11 Finally, we note that our experiments with the context models confirm that the notion of alignment is secondary in modeling evolution. In the old approach, we aligned symbols jointly, and hoped to find symbol pairs that align to each other frequently. In the new approach, we code symbols separately, one by one, on the source and target levels: A. we code the symbols one feature at a time, and B. while coding each feature, we allow the model to use information from any feature of any symbol that has been coded previously.
These models do better, with no alignment. The objectivity of models given the data opens new possibilities for comparing entire data sets. For example, we can begin to compare the Finnish/Estonian data in StarLing vs. other datasets-and the comparison will be impartial, relying solely on the given data. The models also enable us to quantify the uncertainty of individual entries in the corpus of etymological data. For example, for a given entry x in language L 1 , we can compute the probability that x would be imputed by any of the models, trained on all the remaining data from L 1 plus any other set of languages in the family. This can be applied in particular to entries marked as dubious by the database creators.
Figure 1: Uralic language family (adapted from Encyclopedia Britannica)
Figure 3: 1-1 alignment for Finnish and Hungarian
Figure 4: Fin. jalka (source) ∼ Est. jalg (target)
Figure 5: (A, left) Phonetic features and (B, right) phonetic contexts / environments.
Figure 7: (A, left) Comparison of costs of context models and the baseline 1-1; (B, upper right) Finno-Ugric tree induced by imputation and normalized edit distances, via NeighborJoin
Figure 5 contents:

Contexts:
I    itself, possibly dot
-P   previous position, possibly dot
-S   previous non-dot symbol
-K   previous consonant
-V   previous vowel
+S   previous or self non-dot symbol
+K   previous or self consonant
+V   previous or self vowel
...  (other contexts possible)

Consonant articulation:
M  Manner      plosive, fricative, glide, ...
P  Place       labial, dental, ..., velar, uvular
X  Voiced      −, +
S  Secondary   −, affricate, aspirate, ...

Vowel articulation:
V  Vertical    high - mid - low
H  Horizontal  front - center - back
R  Rounding    −, +
L  Length      1-5
Extending the methods to problem d. is future work.
2 The members of a cognate set are posited (by linguists) to derive from a common, shared origin: a word-form in the (typically unobserved) ancestral proto-language.
NB: we use sounds and symbols interchangeably, as we assume that input data is rendered in a phonetic transcription.
The running time did not scale well when the number of languages was above three.
Theoretical benefits of NML over other coding schemes include freedom from priors, invariance to reparametrization, and other optimality properties, which are outside the scope of this paper,(Rissanen, 1996).
Type feature and word end (#) not shown in figure.
Model code to construct trees from data, and examples of decision trees learned by the model are made publicly available on the Project Web site: etymon.cs.helsinki.fi/.
We augment the set of possible feature values at every node with two additional special branches: = means that the symbol at the queried position is of the wrong type and hence does not have the queried feature; # means the query ran past the beginning of the word.
This URLL distance of 0.12 is also quite small. We computed the expected URLL distance from a random tree with this leaf set over a sample of 1000 randomly generated trees-which is over 0.8. The number of leaf-labeled trees with n nodes is (2n − 3)!! (see, e.g.,(Ford, 2010)).
The data set itself, of course, may be highly subjective. Refining the data set is in itself an important challenge, as presented in problem a. in the Introduction, to be addressed in future work.
Acknowledgments
This research was supported in part by the Uralink Project and the FinUgRevita Project of the Academy of Finland, and by the National Centre of Excellence "ALGODAN: Algorithmic Data Analysis" of the Academy of Finland. We thank Teemu Roos for his assistance. We are grateful to the anonymous reviewers for their comments and suggestions.
Raimo Anttila. 1989. Historical and comparative linguistics. John Benjamins.
François G. Barbançon, Tandy Warnow, Don Ringe, Steven N. Evans, and Luay Nakhleh. 2009. An experimental study comparing linguistic phylogenetic reconstruction methods. In Proceedings of the Conference on Languages and Genes, UC Santa Barbara. Cambridge University Press.
Alexandre Bouchard-Côté, Percy Liang, Thomas Griffiths, and Dan Klein. 2007. A probabilistic approach to diachronic phonology. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), pages 887-896, Prague, Czech Republic.
Alexandre Bouchard-Côté, Thomas L. Griffiths, and Dan Klein. 2009. Improved reconstruction of protolanguage word forms. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL 2009).
Rudi Cilibrasi and Paul M.B. Vitanyi. 2005. Clustering by compression. IEEE Transactions on Information Theory, 51(4):1523-1545.
Daniel J. Ford. 2010. Encodings of cladograms and labeled trees. Electronic Journal of Combinatorics, 17:1556-1558.
Peter Grünwald. 2007. The Minimum Description Length Principle. MIT Press.
David Hall and Dan Klein. 2011. Large-scale cognate recovery. In Empirical Methods in Natural Language Processing (EMNLP).
Erkki Itkonen and Ulla-Maija Kulonen. 2000. Suomen Sanojen Alkuperä (The Origin of Finnish Words). Suomalaisen Kirjallisuuden Seura, Helsinki, Finland.
Brett Kessler. 2001. The Significance of Word Lists: Statistical Tests for Investigating Historical Connections Between Languages. The University of Chicago Press, Stanford, CA.
Grzegorz Kondrak. 2002. Determining recurrent sound correspondences by inducing translation models. In Proceedings of COLING 2002: 19th International Conference on Computational Linguistics, pages 488-494, Taipei.
Grzegorz Kondrak. 2003. Identifying complex sound correspondences in bilingual wordlists. In A. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing (CICLing-2003), pages 432-443, Mexico City. Springer-Verlag Lecture Notes in Computer Science, No. 2588.
Grzegorz Kondrak. 2004. Combining evidence in cognate identification. In Proceedings of the Seventeenth Canadian Conference on Artificial Intelligence (Canadian AI 2004), pages 44-59, London, Ontario. Lecture Notes in Computer Science 3060, Springer-Verlag.
Petri Kontkanen and Petri Myllymäki. 2007. A linear-time algorithm for computing the multinomial stochastic complexity. Information Processing Letters, 103(6):227-233.
Luay Nakhleh, Don Ringe, and Tandy Warnow. 2005. Perfect phylogenetic networks: A new methodology for reconstructing the evolutionary history of natural languages. Language (Journal of the Linguistic Society of America), 81(2):382-420.
Javad Nouri, Jukka Sirén, Jukka Corander, and Roman Yangarber. 2016. From alignment of etymological data to phylogenetic inference via population genetics. In Proceedings of CogACLL: the 7th Workshop on Cognitive Aspects of Computational Language Learning, at ACL-2016, Berlin, Germany. Association for Computational Linguistics.
Károly Rédei. 1991. Uralisches etymologisches Wörterbuch. Harrassowitz, Wiesbaden.
Don Ringe, Tandy Warnow, and A. Taylor. 2002. Indo-European and computational cladistics. Transactions of the Philological Society, 100(1):59-129.
Jorma Rissanen. 1996. Fisher information and stochastic complexity. IEEE Transactions on Information Theory, 42(1):40-47.
D.F. Robinson and L.R. Foulds. 1981. Comparison of phylogenetic trees. Mathematical Biosciences, 53(1-2):131-147.
Naruya Saitou and Masatoshi Nei. 1987. The Neighbor-Joining method: a new method for reconstructing phylogenetic trees. Molecular Biology and Evolution, 4(4):406-425.
Yuri M. Shtarkov. 1987. Universal sequential coding of single messages. Problems of Information Transmission, 23:3-17.
Sergei A. Starostin. 2005. Tower of Babel: StarLing etymological databases. http://newstar.rinet.ru/.
Hannes Wettig, Suvi Hiltunen, and Roman Yangarber. 2011. MDL-based models for alignment of etymological data. In Proceedings of RANLP: the 8th Conference on Recent Advances in Natural Language Processing, Hissar, Bulgaria.
Hannes Wettig, Kirill Reshetnikov, and Roman Yangarber. 2012. Using context and phonetic features in models of etymological sound change. In Proc. EACL Workshop on Visualization of Linguistic Patterns and Uncovering Language History from Multilingual Resources, pages 37-44, Avignon, France. |
14,386,564 | A STATISTICAL APPROACH TO MACHINE TRANSLATION | In this paper, we present a statistical approach to machine translation. We describe the application of our approach to translation from French to English and give preliminary results. | [
18361921,
17402234
] | A STATISTICAL APPROACH TO MACHINE TRANSLATION
Peter F. Brown
John Cocke
Stephen A. Della Pietra
Vincent J. Della Pietra
Fredrick Jelinek
John D. Lafferty
Robert L. Mercer
Paul S. Roossin

IBM Thomas J. Watson Research Center, Yorktown Heights, NY
A STATISTICAL APPROACH TO MACHINE TRANSLATION
In this paper, we present a statistical approach to machine translation. We describe the application of our approach to translation from French to English and give preliminary results.
INTRODUCTION
The field of machine translation is almost as old as the modern digital computer. In 1949 Warren Weaver suggested that the problem be attacked with statistical methods and ideas from information theory, an area which he, Claude Shannon, and others were developing at the time (Weaver 1949). Although researchers quickly abandoned this approach, advancing numerous theoretical objections, we believe that the true obstacles lay in the relative impotence of the available computers and the dearth of machinereadable text from which to gather the statistics vital to such an attack. Today, computers are five orders of magnitude faster than they were in 1950 and have hundreds of millions of bytes of storage. Large, machine-readable corpora are readily available. Statistical methods have proven their value in automatic speech recognition (Bahl et al. 1983) and have recently been applied to lexicography (Sinclair 1985) and to natural language processing (Baker 1979;Ferguson 1980;Garside et al. 1987;Sampson 1986;Sharman et al. 1988). We feel that it is time to give them a chance in machine translation.
The job of a translator is to render in one language the meaning expressed by a passage of text in another language. This task is not always straightforward. For example, the translation of a word may depend on words quite far from it. Some English translators of Proust's seven volume work A la Recherche du Temps Perdu have striven to make the first word of the first volume the same as the last word of the last volume because the French original begins and ends with the same word (Bernstein 1988). Thus, in its most highly developed form, translation involves a careful study of the original text and may even encompass a detailed analysis of the author's life and circumstances. We, of course, do not hope to reach these pinnacles of the translator's art.
In this paper, we consider only the translation of individual sentences. Usually, there are many acceptable translations of a particular sentence, the choice among them being largely a matter of taste. We take the view that every sentence in one language is a possible translation of any sentence in the other. We assign to every pair of sentences (S, T) a probability, Pr(T|S), to be interpreted as the probability that a translator will produce T in the target language when presented with S in the source language. We expect Pr(T|S) to be very small for pairs like (Le matin je me brosse les dents | President Lincoln was a good lawyer) and relatively large for pairs like (Le président Lincoln était un bon avocat | President Lincoln was a good lawyer). We view the problem of machine translation then as follows. Given a sentence T in the target language, we seek the sentence S from which the translator produced T. We know that our chance of error is minimized by choosing that sentence S that is most probable given T. Thus, we wish to choose S so as to maximize Pr(S|T). Using Bayes' theorem, we can write

Pr(S|T) = Pr(S) Pr(T|S) / Pr(T)
The denominator on the right of this equation does not depend on S, and so it suffices to choose the S that maximizes the product Pr(S)Pr(T|S). Call the first factor in this product the language model probability of S and the second factor the translation probability of T given S. Although the interaction of these two factors can be quite profound, it may help the reader to think of the translation probability as suggesting words from the source language that might have produced the words that we observe in the target sentence and to think of the language model probability as suggesting an order in which to place these source words.

Thus, as illustrated in Figure 1, a statistical translation system requires a method for computing language model probabilities, a method for computing translation probabilities, and, finally, a method for searching among possible source sentences S for the one that gives the greatest value for Pr(S)Pr(T|S).
In the remainder of this paper we describe a simple version of such a system that we have implemented.

Figure 1: A Statistical Machine Translation System. A Source Language Model and a Translation Model furnish a probability distribution over source-target sentence pairs (S, T): the joint probability Pr(S, T) of the pair (S, T) is the product of the probability Pr(S) computed by the language model and the conditional probability Pr(T|S) computed by the translation model. The parameters of these models are estimated automatically from a large database of source-target sentence pairs using a statistical algorithm which optimizes, in an appropriate sense, the fit between the models and the data. A Decoder performs the actual translation: given a sentence T in the target language, the decoder chooses a viable translation by selecting that sentence in the source language for which the probability Pr(S|T) is maximum, Ŝ = argmax_S Pr(S|T) = argmax_S Pr(S, T).
In the next section we describe our language model for Pr(S), and in Section 3 we describe our translation model for Pr(T|S). In Section 4 we describe our search procedure. In Section 5 we explain how we estimate the parameters of our models from a large database of translated text. In Section 6 we describe the results of two experiments we performed using these models. Finally, in Section 7 we conclude with a discussion of some improvements that we intend to implement.

THE LANGUAGE MODEL

Given a word string s_1 s_2 ... s_n, we can write Pr(S) = Pr(s_1) Pr(s_2|s_1) ... Pr(s_n|s_1 ... s_{n-1}). Thus, we can recast the language modeling problem as one of computing the probability of a single word given all of the words that precede it in a sentence. At any point in the sentence, we must know the probability of an object word, s_i, given a history, s_1 s_2 ... s_{i-1}. Because there are so many histories, we cannot simply treat each of these probabilities as a separate parameter. One way to reduce the number of parameters is to place each of the histories into an equivalence class in some way and then to allow the probability of an object word to depend on the history only through the equivalence class into which that history falls. In an n-gram model, two histories are equivalent if they agree in their final n-1 words. Thus, in a bigram model, two histories are equivalent if they end in the same word, and in a trigram model, two histories are equivalent if they end in the same two words. While n-gram models are linguistically simple-minded, they have proven quite valuable in speech recognition and have the redeeming feature that they are easy to make and to use.

We can see the power of a trigram model by applying it to something that we call bag translation from English into English. In bag translation we take a sentence, cut it up into words, place the words in a bag, and then try to recover the sentence given the bag. We use the n-gram model to rank different arrangements of the words in the bag. Thus, we treat an arrangement S as better than another arrangement S' if Pr(S) is greater than Pr(S'). We tried this scheme on a random sample of sentences. From a collection of 100 sentences, we considered the 38 sentences with fewer than 11 words each. We had to restrict the length of the sentences because the number of possible rearrangements grows exponentially with sentence length. We used a trigram language model that had been constructed for a speech recognition system. We were able to recover 24 (63%) of the sentences exactly. Sometimes, the sentence that we found to be most probable was not an exact reproduction of the original, but conveyed the same meaning. In other cases, of course, the most probable sentence according to our model was just garbage. If we count as correct all of the sentences that retained the meaning of the original, then 32 (84%) of the 38 were correct. Some examples of the original sentences and the sentences recovered from the bags are shown in Figure 2. We have no doubt that if we had been able to handle longer sentences, the results would have been worse and that the probability of error grows rapidly with sentence length.
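A toy sketch of bag translation scoring (our own illustration; trigram_prob stands in for probabilities estimated from a large corpus, and the exhaustive enumeration is exactly why the experiment was limited to short sentences):

```python
import math
from itertools import permutations

def sentence_log_prob(words, trigram_prob):
    """log Pr(S) under a trigram model, with start-of-sentence padding."""
    padded = ["<s>", "<s>"] + list(words)
    return sum(math.log(trigram_prob(padded[i - 2], padded[i - 1], padded[i]))
               for i in range(2, len(padded)))

def best_arrangement(bag, trigram_prob):
    """Rank every arrangement of the bag by Pr(S); exponential in len(bag)."""
    return max(permutations(bag),
               key=lambda s: sentence_log_prob(s, trigram_prob))
```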
THE TRANSLATION MODEL
For simple sentences, it is reasonable to think of the French translation of an English sentence as being generated from the English sentence word by word. Thus, in the sentence pair (Jean aime Marie | John loves Mary) we feel that John produces Jean, loves produces aime, and Mary produces Marie.

Figure 2: Bag translation examples.
Exact reconstruction (24 of 38):
Please give me your response as soon as possible. ⇒ Please give me your response as soon as possible.
Reconstruction preserving meaning (8 of 38):
Now let me mention some of the disadvantages. ⇒ Let me mention some of the disadvantages now.
Garbage reconstruction (6 of 38):
In our organization research has two missions. ⇒ In our missions research organization has two.

We say that a word is aligned with the word that it produces. Thus John is aligned with Jean in the pair that we just discussed. Of course, not all pairs of sentences are as simple as this example. In the pair (Jean n'aime personne | John loves nobody), we can again align John with Jean and loves with aime, but now, nobody aligns with both n' and personne. Sometimes, words in the English sentence of the pair align with nothing in the French sentence, and similarly, occasionally words in the French member of the pair do not appear to go with any of the words in the English sentence. We refer to a picture such as that shown in Figure 3 as an alignment. An alignment indicates the origin in the English sentence of each of the words in the French sentence. We call the number of French words that an English word produces in a given alignment its fertility in that alignment.
If we look at a number of pairs, we find that words near the beginning of the English sentence tend to align with words near the beginning of the French sentence and that words near the end of the English sentence tend to align with words near the end of the French sentence. But this is not always the case. Sometimes, a French word will appear quite far from the English word that produced it. We call this effect distortion. Distortions will, for example, allow adjectives to precede the nouns that they modify in English but to follow them in French.
It is convenient to introduce the following notation for alignments. We write the French sentence followed by the English sentence and enclose the pair in parentheses. We separate the two by a vertical bar. Following each of the English words, we give a parenthesized list of the positions of the words in the French sentence with which it is aligned. If an English word is aligned with no French words, then we omit the list. Thus (Jean aime Marie | John(1) loves(2) Mary(3)) is the simple alignment with which we began this discussion. In the alignment (Le chien est battu par Jean | John(6) does beat(3,4) the(1) dog(2)), John produces Jean, does produces nothing, beat produces est battu, the produces Le, dog produces chien, and par is not produced by any of the English words.
Rather than describe our translation model formally, we present it by working an example. To compute the probability of the alignment (Le chien est battu par Jean | John(6) does beat(3,4) the(1) dog(2)), begin by multiplying the probability that John has fertility 1 by Pr(Jean|John).
Figure 3: An alignment of (Les propositions ne seront pas mises en application | The proposal will not be implemented).

Then multiply by the probability that does has fertility 0. Next, multiply by the probability that beat has fertility 2 times Pr(est|beat)Pr(battu|beat), and so on. The word par is produced from a special English word which is denoted by (null). The result is
Pr(fertility = 1 | John) × Pr(Jean | John)
× Pr(fertility = 0 | does)
× Pr(fertility = 2 | beat) × Pr(est | beat) × Pr(battu | beat)
× Pr(fertility = 1 | the) × Pr(Le | the)
× Pr(fertility = 1 | dog) × Pr(chien | dog)
× Pr(fertility = 1 | (null)) × Pr(par | (null)).
Finally, factor in the distortion probabilities. Our model for distortions is, at present, very simple. We assume that the position of the target word depends only on the length of the target sentence and the position of the source word. Therefore, a distortion probability has the form Pr(i | j, l) where i is a target position, j a source position, and l the target length.
In summary, the parameters of our translation model are a set of fertility probabilities Pr(n | e) for each English word e and for each fertility n from 0 to some moderate limit, in our case 25; a set of translation probabilities Pr(f | e), one for each element f of the French vocabulary and each member e of the English vocabulary; and a set of distortion probabilities Pr(i | j, l) for each target position i, source position j, and target length l. We limit i, j, and l to the range 1 to 25.
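A small sketch (ours, and simplified; in particular, the handling of the null word's position is glossed over) of the alignment probability just described, given the three estimated parameter tables:

```python
import math

def alignment_log_prob(target, alignment, fert, trans, dist):
    """target: the French words; alignment: one (e, positions) pair per English
    word e (including (null)), listing the French positions that e produces.
    fert[(n, e)] = Pr(n | e); trans[(f, e)] = Pr(f | e); dist[(i, j, l)] = Pr(i | j, l)."""
    l, logp = len(target), 0.0
    for j, (e, positions) in enumerate(alignment, 1):
        logp += math.log(fert[(len(positions), e)])       # fertility of e
        for i in positions:
            logp += math.log(trans[(target[i - 1], e)])   # translation f given e
            logp += math.log(dist[(i, j, l)])             # distortion of position
    return logp
```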
SEARCHING
In searching for the sentence S that maximizes Pr(S)Pr(T|S), we face the difficulty that there are simply too many sentences to try. Instead, we must carry out a suboptimal search. We do so using a variant of the stack search that has worked so well in speech recognition (Bahl et al. 1983). In a stack search, we maintain a list of partial alignment hypotheses. Initially, this list contains only one entry corresponding to the hypothesis that the target sentence arose in some way from a sequence of source words that we do not know. In the alignment notation introduced earlier, this entry might be (Jean aime Marie | *) where the asterisk is a place holder for an unknown sequence of source words. The search proceeds by iterations, each of which extends some of the most promising entries on the list. An entry is extended by adding one or more additional words to its hypothesis. For example, we might extend the initial entry above to one or more of the following entries:

(Jean aime Marie | John(1) *)
(Jean aime Marie | * loves(2) *)
(Jean aime Marie | * Mary(3))

The search ends when there is a complete alignment on the list that is significantly more promising than any of the incomplete alignments.
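Schematically, the stack search looks as follows (a generic best-first skeleton; the extensions, score, and is_complete functions are hypothetical stand-ins for the model-driven hypothesis extension and the log of Pr(S)Pr(T|S), and margin implements "significantly more promising"):

```python
import heapq
from itertools import count

def stack_search(initial, extensions, score, is_complete, margin=1.0):
    """Best-first search over partial alignment hypotheses."""
    tie = count()                                  # break heap ties deterministically
    heap = [(-score(initial), next(tie), initial)]
    best = None
    while heap:
        neg, _, hyp = heapq.heappop(heap)
        if is_complete(hyp):
            if best is None or -neg > best[0]:
                best = (-neg, hyp)
            if not heap or -neg >= -heap[0][0] + margin:
                return hyp                         # beats all open hypotheses
            continue
        for ext in extensions(hyp):
            heapq.heappush(heap, (-score(ext), next(tie), ext))
    return best[1] if best else None
```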
Sometimes, the sentence S' that is found in this way is not the same as the sentence S that a translator might have been working on. When S' itself is not an acceptable translation, then there is clearly a problem. If
Pr(S')Pr(T|S') is greater than Pr(S)Pr(T|S), then the problem lies in our modeling of the language or of the translation process. If, however, Pr(S')Pr(T|S') is less than Pr(S)Pr(T|S), then our search has failed to find the most likely sentence. We call this latter type of failure a search error. In the case of a search error, we can be sure that our search procedure has failed to find the most probable source sentence, but we cannot be sure that were we to correct the search we would also correct the error. We might simply find an even more probable sentence that nonetheless is incorrect. Thus, while a search error is a clear indictment of the search procedure, it is not an acquittal of either the language model or the translation model.
PARAMETER ESTIMATION
Both the language model and the translation model have many parameters that must be specified. To estimate these parameters accurately, we need a large quantity of data. For the parameters of the language model, we need only English text, which is available in computer-readable form from many sources; but for the parameters of the translation model, we need pairs of sentences that are translations of one another.
By law, the proceedings of the Canadian parliament are kept in both French and English. As members rise to address a question before the house or otherwise express themselves, their remarks are jotted down in whichever of the two languages is used. After the meeting adjourns, a collection of translators begins working to produce a complete set of the proceedings in both French and English. These proceedings are called Hansards, in remembrance of the publisher of the proceedings of the British parliament in the early 1800s. All of these proceedings are available in computer-readable form, and we have been able to obtain about 100 million words of English text and the corresponding French text from the Canadian government. Although the translations are not made sentence by sentence, we have been able to extract about three million pairs of sentences by using a statistical algorithm based on sentence length. Approximately 99% of these pairs are made up of sentences that are actually translations of one another. It is this collection of sentence pairs, or more properly various subsets of this collection, from which we have estimated the parameters of the language and translation models.
In the experiments we describe later, we use a bigram language model. Thus, we have one parameter for every pair of words in the source language. We estimate these parameters from the counts of word pairs in a large sample of text from the English part of our Hansard data using a method described by Jelinek and Mercer (1980).
In Section 3 we discussed alignments of sentence pairs. If we had a collection of aligned pairs of sentences, then we could estimate the parameters of the translation model by counting, just as we do for the language model. However, we do not have alignments but only the unaligned pairs of sentences. This is exactly analogous to the situation in speech recognition where one has the script of a sentence and the time waveform corresponding to an utterance of it, but no indication of just what in the time waveform corresponds to what in the script. In speech recognition, this problem is attacked with the EM algorithm (Baum 1972; Dempster et al. 1977). We have adapted this algorithm to our problem in translation. In brief, it works like this: given some initial estimate of the parameters, we can compute the probability of any particular alignment. We can then re-estimate the parameters by weighing each possible alignment according to its probability as determined by the initial guess of the parameters. Repeated iterations of this process lead to parameters that assign ever greater probability to the set of sentence pairs that we actually observe. This algorithm leads to a local maximum of the probability of the observed pairs as a function of the parameters of the model. There may be many such local maxima. The particular one at which we arrive will, in general, depend on the initial choice of parameters.
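The flavor of one such iteration, shown for translation probabilities only (this is essentially the simplest form of the model, with fertilities and distortions ignored; names are ours):

```python
from collections import defaultdict

def em_step(sentence_pairs, t):
    """One EM iteration over t(f | e); start from a uniform t and repeat."""
    counts, totals = defaultdict(float), defaultdict(float)
    for english, french in sentence_pairs:          # english includes (null)
        for f in french:
            norm = sum(t[(f, e)] for e in english)  # Pr that f came from some e
            for e in english:
                w = t[(f, e)] / norm                # posterior weight of e -> f
                counts[(f, e)] += w
                totals[e] += w
    return {(f, e): counts[(f, e)] / totals[e] for (f, e) in counts}
```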
TWO PILOT EXPERIMENTS
In our first experiment, we test our ability to estimate parameters for the translation model. We chose as our English vocabulary the 9,000 most common words in the English part of the Hansard data, and as our French vocabulary the 9,000 most common French words. For the purposes of this experiment, we replaced all other words with either the unknown English word or the unknown French word, as appropriate. We applied the iterative algorithm discussed above in order to estimate some 81 million parameters from 40,000 pairs of sentences comprising a total of about 800,000 words in each language. The algorithm requires an initial guess of the parameters. We assumed that each of the 9,000 French words was equally probable as a translation of any of the 9,000 English words; we assumed that each of the fertilities from 0 to 25 was equally probable for each of the 9,000 English words; and finally, we assumed that each target position was equally probable given each source position and target length. Thus, our initial choices contained very little information about either French or English.
Figure 4 shows the translation and fertility probabilities we estimated for the English word the. We see that, according to the model, the translates most frequently into the French articles le and la. This is not surprising, of course, but we emphasize that it is determined completely automatically by the estimation process. In some sense, this correspondence is inherent in the sentence pairs themselves. Figure 5 shows these probabilities for the English word not. As expected, the French word pas appears as a highly probable translation. Also, the fertility probabilities indicate that not translates most often into two French words, a situation consistent with the fact that negative French sentences contain the auxiliary word ne in addition to a primary negative word such as pas or rien.

Figure 4: Probabilities for "the."
For both of these words, we could easily have discovered the same information from a dictionary. In Figure 6, we see the trained parameters for the English word hear. As we would expect, various forms of the French word entendre appear as possible translations, but the most probable translation is the French word bravo. When we look at the fertilities here, we see that the probability is about equally divided between fertility 0 and fertility 1. The reason for this is that the English-speaking members of parliament express their approval by shouting Hear, hear!, while the French-speaking ones say Bravo! The translation model has learned that usually two hears produce one bravo by having one of them produce the bravo and the other produce nothing.
A given pair of sentences has many possible alignments, since each target word can be aligned with any source word. A translation model will assign significant probability only to some of the possible alignments, and we can gain further insight about the model by examining the alignments that it considers most probable. We show one such alignment in Figure 3. Observe that, quite reasonably, not is aligned with ne and pas, while implemented is aligned with the phrase mises en application. We can also see here a deficiency of the model since intuitively we feel that will and be act in concert to produce seront while the model aligns will with seront but aligns be with nothing.
In our second experiment, we used the statistical approach to translate from French to English. To have a manageable task, we limited the English vocabulary to the 1,000 most frequently used words in the English part of the Hansard corpus. We chose the French vocabulary to be the 1,700 most frequently used French words in translations of sentences that were completely covered by the 1,000-word English vocabulary. We estimated the 17 million parameters of the translation model from 117,000 pairs of sentences that were completely covered by both our French and English vocabularies. We estimated the parameters of the bigram language model from 570,000 sentences from the English part of the Hansard data. These sentences contain about 12 million words altogether and are not restricted to sentences completely covered by our vocabulary.
We used our search procedure to decode 73 new French sentences from elsewhere in the Hansard data. We assigned each of the resulting sentences a category according to the following criteria. If the decoded sentence was exactly the same as the actual Hansard translation, we assigned the sentence to the exact category. If it conveyed the same meaning as the Hansard translation but in slightly different words, we assigned it to the alternate category. If the decoded sentence was a legitimate translation of the French sentence but did not convey the same meaning as the Hansard translation, we assigned it to the different category. If it made sense as an English sentence but could not be interpreted as a translation of the French sentence, we assigned it to the wrong category. Finally, if the decoded sentence was grammatically deficient, we assigned it to the ungrammatical category. An example from each category is shown in Figure 7, and our decoding results are summarized in Figure 8.
Only 5% of the sentences fell into the exact category. However, we feel that a decoded sentence that is in any of the first three categories (exact, alternate, or different) represents a reasonable translation. By this criterion, the system performed successfully 48% of the time.
As an alternate measure of the system's performance, one of us corrected each of the sentences in the last three categories (different, wrong, and ungrammatical) to either the exact or the alternate category. Counting one stroke for each letter that must be deleted and one stroke for each letter that must be inserted, 776 strokes were needed to repair all of the decoded sentences. This compares with the 1,916 strokes required to generate all of the Hansard translations from scratch. Thus, to the extent that translation time can be equated with key strokes, the system reduces the work by about 60%.
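The stroke count used here is an edit distance that charges one stroke per deleted letter and one per inserted letter, which equals len(a) + len(b) - 2 * LCS(a, b); the following is a small sketch of that calculation (the function name and example strings are our own).

```python
def stroke_count(decoded: str, target: str) -> int:
    """Strokes to repair `decoded` into `target`: one per deleted
    letter plus one per inserted letter, i.e. m + n - 2 * LCS."""
    m, n = len(decoded), len(target)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if decoded[i] == target[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return m + n - 2 * lcs[m][n]

print(stroke_count("I have received this request in effect.",
                   "Such a request was made."))
```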
PLANS
There are many ways in which the simple models described in this paper can be improved. We expect some improvement from estimating the parameters on more data. For the experiments described above, we estimated the parameters of the models from only a small fraction of the data we have available: for the translation model, we used only about one percent of our data, and for the language model, only about ten percent. We have serious problems in sentences in which the translation of certain source words depends on the translation of other source words. For example, the translation model produces aller from to go by producing aller from go and nothing from to. Intuitively we feel that to go functions as a unit to produce aller. While our model allows many target words to come from the same source word, it does not allow several source words to work together to produce a single target word. In the future, we hope to address the problem of identifying groups of words in the source language that function as a unit in translation. This may take the form of a probabilistic division of the source sentence into groups of words.
At present, we assume in our translation model that words are placed into the target sentence independently of one another. Clearly, a more realistic assumption must account for the fact that words form phrases in the target sentence that are translations of phrases in the source sentence and that the target words in these phrases will tend to stay together even if the phrase itself is moved around. We are working on a model in which the positions of the target words produced by a particular source word depend on the identity of the source word and on the positions of the target words produced by the previous source word.
We are preparing a trigram language model that we hope will substantially improve the performance of the system. A useful information-theoretic measure of the complexity of a language with respect to a model is the perplexity as defined by Bahl et al. (1983). With the bigram model that we are currently using, the source text for our 1,000-word translation task has a perplexity of about 78. With the trigram model that we are preparing, the perplexity of the source text is about 9. In addition to showing the strength of a trigram model relative to a bigram model, this also indicates that the 1,000-word task is very simple.
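For reference, the perplexity of a source text w_1 ... w_N under a bigram model can be written as follows (this is the standard formulation; the notation is ours, not the paper's):

```latex
\mathrm{PP}(w_1 \ldots w_N) \;=\; \Big( \prod_{i=1}^{N} P(w_i \mid w_{i-1}) \Big)^{-1/N}
```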
We treat words as unanalyzed wholes, recognizing no connection, for example, between va, vais, and vont, or between tall, taller, and tallest. As a result, we cannot improve our statistical characterization of va, say, by observation of sentences involving vont. We are working on morphologies for French and English so that we can profit from statistical regularities that our current word-based approach must overlook.
Finally, we treat the sentence as a structureless sequence of words. Sharman et al. (1988) discuss a method for deriving a probabilistic phrase structure grammar automatically from a sample of parsed sentences. We hope to apply their method to construct grammars for both French and English and to base future translation models on the grammatical constructs thus defined.
Figure 2. Bag model examples: (Jean aime Marie | John(1)*), (Jean aime Marie | *loves(2)*), (Jean aime Marie | *Mary(3)), (Jean aime Marie | Jeans(1)*).

Figure 3. An alignment of a French-English sentence pair (discussed above).

Figure 7. Translation examples (French source / Hansard translation / decoded translation):
J'ai reçu cette demande en effet. / Such a request was made. / I have received this request in effect.
Permettez que je donne un exemple à la Chambre. / Let me give the House one example. / Let me give an example in the House.
Vous avez besoin de toute l'aide disponible. / You need all the help you can get. / You need of the whole benefits available.
Ces amendements sont certainement nécessaires. / These amendments are certainly necessary. / These amendments are certainly necessary.
C'est pourtant très simple. / Yet it is very simple. / It is still very simple.
Bahl, L. R.; Jelinek, F.; and Mercer, R. L. 1983. A Maximum Likelihood Approach to Continuous Speech Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-5(2): 179-190.
Baker, J. K. 1979. Stochastic Modeling for Automatic Speech Understanding. In: Reddy, R. A. (ed.), Speech Recognition. Academic Press, New York, NY.
Baum, L. E. 1972. An Inequality and Associated Maximization Technique in Statistical Estimation of Probabilistic Functions of a Markov Process. Inequalities 3: 1-8.
Bernstein, R. 1988. Howard's Way. The New York Times Magazine 138(47639): 40-44, 74, 92.
Dempster, A. P.; Laird, N. M.; and Rubin, D. B. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society 39(B): 1-38.
Ferguson, J. D. 1980. Hidden Markov Analysis: An Introduction. In: Ferguson, J. D. (ed.), Hidden Markov Models for Speech. IDA-CRD, Princeton, NJ.
Garside, R. G.; Leech, G. N.; and Sampson, G. R. 1987. The Computational Analysis of English: A Corpus-Based Approach. Longman, NY.
Jelinek, F. and Mercer, R. L. 1980. Interpolated Estimation of Markov Source Parameters from Sparse Data. In: Proceedings of the Workshop on Pattern Recognition in Practice. North-Holland, Amsterdam, The Netherlands.
Sampson, G. R. 1986. A Stochastic Approach to Parsing. In: Proceedings of the 11th International Conference on Computational Linguistics, 151-155.
Sharman, R. A.; Jelinek, F.; and Mercer, R. L. 1988. Generating a Grammar for Statistical Training. In: Proceedings of the IBM Conference on Natural Language Processing, Thornwood, NY.
Sinclair, J. M. 1985. Lexicographic Evidence. In: Ilson, R. (ed.), Dictionaries, Lexicography and Language Learning. Pergamon Press, New York, NY.
Weaver, W. 1955. Translation (1949). In: Machine Translation of Languages. MIT Press, Cambridge, MA. |
5,598,307 | Segmentation and Translation of Japanese Multi-word Loanwords | The Japanese language has absorbed large numbers of loanwords from many languages, in particular English. As well as using single loanwords, compound nouns, multiword expressions (MWEs), etc. constructed from loanwords can be found in use in very large quantities. In this paper we describe a system which has been developed to segment Japanese loanword MWEs and construct likely English translations. The system, which leverages the availability of large bilingual dictionaries of loanwords and English n-gram corpora, achieves high levels of accuracy in discriminating between single loanwords and MWEs, and in segmenting MWEs. It also generates useful translations of MWEs, and has the potential to be a major aid to lexicographers in this area. | [
9426034,
7418935,
6645623
] | Segmentation and Translation of Japanese Multi-word Loanwords
James Breen jimbreen@gmail.com
The University of Melbourne
Nanyang Technological University
Singapore
Timothy Baldwin
The University of Melbourne
Nanyang Technological University
Singapore
Francis Bond bond@ieee.org
The University of Melbourne
Nanyang Technological University
Singapore
Segmentation and Translation of Japanese Multi-word Loanwords
The Japanese language has absorbed large numbers of loanwords from many languages, in particular English. As well as using single loanwords, compound nouns, multiword expressions (MWEs), etc. constructed from loanwords can be found in use in very large quantities. In this paper we describe a system which has been developed to segment Japanese loanword MWEs and construct likely English translations. The system, which leverages the availability of large bilingual dictionaries of loanwords and English n-gram corpora, achieves high levels of accuracy in discriminating between single loanwords and MWEs, and in segmenting MWEs. It also generates useful translations of MWEs, and has the potential to be a major aid to lexicographers in this area.
Introduction
The work described in this paper is part of a broader project to identify unrecorded lexemes, including neologisms, in Japanese corpora. Since such lexemes include the range of lexical units capable of inclusion in Japanese monolingual and bilingual dictionaries, it is important to be able to identify and extract a range of such units, including compound nouns, collocations and other multiword expressions (MWEs: Sag et al. (2002), Baldwin and Kim (2009)).
Unlike some languages, where there is official opposition to the incorporation of foreign words, Japanese has assimilated a large number of such words, to the extent that they constitute a sizeable proportion of the lexicon. For example, over 10% of the entries and sub-entries in the major Kenkyūsha New Japanese-English Dictionary (5th ed.) (Toshiro et al., 2003) are wholly or partly made up of loanwords. In addition there are several published dictionaries consisting solely of such loanwords. Estimates of the number of loanwords and particularly MWEs incorporating loanwords in Japanese range into the hundreds of thousands. While a considerable number of loanwords have been taken from Portuguese, Dutch, French, etc., the overwhelming majority are from English.
Loanwords are taken into Japanese by adapting the source language pronunciation to conform to the relatively restricted set of syllabic phonemes used in Japanese. Thus "blog" becomes burogu, and "elastic" becomes erasutikku. When written, the syllables of the loanword are transcribed in the katakana syllabic script (ブログ, エラスティック), which in modern Japanese is primarily used for this purpose. This use of a specific script means possible loanwords are generally readily identifiable in text and can be extracted without complex morphological analysis.
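Because loanwords occupy a dedicated script block, candidate extraction can be as simple as matching runs of katakana code points; a minimal sketch follows (the regex range and example sentence are our own illustration, and the range deliberately includes the middle dot and length mark used inside loanword MWEs):

```python
import re

# Katakana block U+30A1-U+30FA, plus the middle dot (U+30FB) and
# the chōon length mark (U+30FC) that occur inside loanwords.
KATAKANA_RUN = re.compile(r"[\u30A1-\u30FC]+")

text = "今日はブログでエラスティックな素材を紹介します。"
print(KATAKANA_RUN.findall(text))  # ['ブログ', 'エラスティック']
```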
The focus of this study is on multiword loanwords. This is because there are now large collections of basic Japanese loanwords along with their translations, and it appears that many new loanwords are formed by adopting or assembling MWEs using known loanwords. As evidence of this, we can cite the numbers of katakana sequences in the Google Japanese n-gram corpus (Kudo and Kazawa, 2007). Of the 2.6 million 1-grams in that corpus, approximately 1.6 million are in katakana or other characters used in loanwords. 1 Inspection of those 1-grams indicates that once the words that are in available dictionaries are removed, the majority of the more common members are MWEs which had not been segmented during the generation of the corpus. Moreover the n-gram corpus also contains 2.6 million 2-grams and 900,000 3-grams written in katakana. Even after allowing for the multiple-counting between the 1-, 2- and 3-grams, and the imperfections in the segmentation of the katakana sequences, it is clear that the vast numbers of multiword loanwords in use are a fruitful area for investigation with a view to extraction and translation.
In the work presented in this paper we describe a system which has been developed to segment Japanese loanword MWEs and construct likely English translations, with the ultimate aim of being part of a toolkit to aid the lexicographer. The system builds on the availability of large collections of translated loanwords and a large English n-gram corpus, and in testing is performing with high levels of precision and recall.
Prior Work
There has not been a large amount of work published on the automatic and semi-automatic extraction and translation of Japanese loanwords. Much that has been reported has been in areas such as back-transliteration (Matsuo et al., 1996; Knight and Graehl, 1998; Bilac and Tanaka, 2004), or on extraction from parallel bilingual corpora (Brill et al., 2001). More recently work has been carried out exploring combinations of dictionaries and corpora (Nakazawa et al., 2005), although this lead does not seem to have been followed further.
Both Bilac and Tanaka (2004) and Nakazawa et al. (2005) address the issue of segmentation of MWEs. This is discussed in Section 3.1 below.
Role and Nature of Katakana Words in Japanese
As mentioned above, loan words in Japanese are currently written in the katakana script. This is an orthographical convention that has been applied relatively strictly since the late 1940s, when major script reforms were carried out. Prior to then loanwords were also written using the hiragana syllabary and on occasions kanji (Chinese characters). The katakana script is not used exclusively for loanwords. Other usage includes:
a. transcription of foreign person and place names and other named entities. Many Japanese companies use names which are transcribed in katakana. Chinese (and Korean) place names and person names, although they are usually available in kanji are often written in katakana transliterations; b. the scientific names of plants, animals, etc. c. onomatopoeic words and expressions, although these are often also written in hiragana; d. occasionally for emphasis and in some contexts for slang words, in a similar fashion to the use of italics in English. The proportion of katakana words that were not loanwords was measured by Brill et al. (2001) at about 13%. (The impact and handling of these is discussed briefly at the end of Section 4.)
When considering the extraction of Japanese loan words from text, there are a number of issues which need to be addressed.
Segmentation
As mentioned above, many loanwords appear in the form of MWEs, and their correct analysis and handling often requires separation into their composite words. In Japanese there is a convention that loanword MWEs have a "middle-dot" punctuation character (・) inserted between the components, however while this convention is usually followed in dictionaries, it is rarely applied elsewhere. Web search engines typically ignore this character when indexing, and a search for a very common MWE: トマトソース tomatosōsu "tomato sauce", reveals that it almost always appears as an undifferentiated string. Moreover, the situation is confused by the common use of the ・ character to separate items in lists, in a manner similar to a semi-colon in English. In practical terms, systems dealing with loanword MWEs must be prepared to do their own segmentation.
One approach to segmentation is to utilize a Japanese morphological analysis system. These have traditionally been weak in the area of segmentation of loanwords, and tend to default to treating long katakana strings as 1-grams. In testing a list of loanwords and MWEs using the ChaSen system, Bilac and Tanaka (2004) report a precision and recall of approximately 0.65 on the segmentation, with a tendency to undersegment being the main problem. Nakazawa et al. (2005) report a similar tendency with the JUMAN morphological analyzer (Kurohashi and Nagao, 1998). The problem was most likely due to the relatively poor representation of loanwords in the morpheme lexicons used by these systems. For example the IPADIC lexicon used at that time only had about 20,000 words in katakana, and many of those were proper nouns.
In this study, we use the MeCab morphological analyzer (Kudo et al., 2004) with the recently-developed UniDic lexicon (Den et al., 2007), as discussed below.
As they were largely dealing with nonlexicalized words, Bilac and Tanaka (2004) used a dynamic programming model trained on a relatively small (13,000) list of katakana words, and reported a high precision in their segmentation. Nakazawa et al. (2005) used a larger lexicon in combination with the JUMAN analyzer and reported a similar high precision.
Non-English Words
A number of loanwords are taken from languages other than English.
The JMdict dictionary (Breen, 2004) has approximately 44,000 loanwords, of which 4% are marked as coming from other languages. Inspection of a sample of the 22,000 entries in the Gakken A Dictionary of Katakana Words (Kabasawa and Satō, 2003) indicates a similar proportion. (In both dictionaries loanwords from languages other than English are marked with their source language.) This relatively small number is known to cause some problems with generating translations through transliterations based on English, but the overall impact is not very significant.
Pseudo-English Constructions
A number of katakana MWEs are constructions of two or more English words forming a term which does not occur in English. An example is バージョンアップ bājoNappu "version up", meaning upgrading software, etc. These constructions are known in Japanese as 和製英語 wasei eigo "Japanese-made English". Inspection of the JMdict and Gakken dictionaries indicate they make up approximately 2% of katakana terms, and while a nuisance, are not considered to be a significant problem.
Orthographical Variants
Written Japanese has a relatively high incidence of multiple surface forms of words, and this particularly applies to loan words. Many result from different interpretations of the pronunciation of the source language term, e.g. the word for "diamond" is both ダイヤモンド daiyamoNdo and ダイアモンド daiamoNdo, with the two occurring in approximately equal proportions. (The JMdict dictionary records 10 variants for the word "vibraphone", and 9 each for "whiskey" and "vodka".) In some cases two different words have been formed from the one source word, e.g. the English word "truck" was borrowed twice to form トラック torakku meaning "truck, lorry" and トロッコ torokku meaning "trolley, rail car". Having reasonably complete coverage of alternative surface forms is important in the present project.
Approach to Segmentation and MWE Translation
As our goal is the extraction and translation of loanword MWEs, we need to address the twin tasks of segmentation of the MWEs into their constituent source-language components, and generation of appropriate translations for the MWEs as a whole. While the back-transliteration approaches in previous studies have been quite successful, and have an important role in handling single-word loanwords, we decided to experiment with an alternative approach which builds on the large lexicon and n-gram corpus resources which are now available. This approach, which we have labelled "CLST" (Corpus-based Loanword Segmentation and Translation), builds upon a direction suggested in Nakazawa et al. (2005) in that it uses a large English n-gram corpus both to validate alternative segmentations and to select candidate translations.

The three key resources used in CLST are:
a. a dictionary of katakana words which has been assembled from: i. the entries with katakana headwords or readings in the JMdict dictionary; ii. the entries with katakana headwords in the Kenkyūsha New Japanese-English Dictionary; iii. the katakana entries in the Eijiro dictionary database; 2 iv. the katakana entries in a number of technical glossaries covering biomedical topics, engineering, finance, law, etc.; v. the named-entities in katakana from the JMnedict named-entity database. 3 This dictionary, which contains both base words and MWEs, includes short English translations which, where appropriate, have been split into identifiable senses. It contains a total of 270,000 entries;
b. a collection of 160,000 katakana words drawn from the headwords of the dictionary above. It has been formed by splitting the known MWEs into their components where this can be carried out reliably;
c. the Google English n-gram corpus. 4 This contains 1-grams to 5-grams collected from the Web in 2006, along with frequency counts. In the present project we use a subset of the corpus consisting only of case-folded alphabetic tokens.

The process of segmenting an MWE and deriving a translation is as follows:
a. using the katakana words in (b) above, generate all possible segmentations of the MWE. A recursive algorithm is used for this. Table 1 shows the segments derived for the MWE ソーシャルブックマークサービス sōsharubukkumākusābisu "social bookmark service".
b. for each possible segmentation of an MWE, assemble one or more possible glosses as follows:
i. take each element in the segmented MWE, extract the first gloss in the dictionary and assemble a composite potential translation by simply concatenating the glosses. Where there are multiple senses, extract the first gloss from each and assemble all possible combinations. (The first gloss is being used as lexicographers typically place the most relevant and succinct translation first, and this has been observed to be often the most useful when building composite glosses.) As examples, for ソーシャル・ブックマーク・サービス the element サービス has two senses "service" and "goods or services without charge", so the possible glosses were "social bookmark service" and "social bookmark goods or services without charge". For ソーシャル・ブック・マーク・サービス the element マーク has senses of "mark", "paying attention", "markup" and "Mach", so the potential glosses were "social book mark service", "social book markup service", "social book Mach service", etc. A total of 48 potential translations were assembled for this MWE.
ii. where the senses are tagged as being affixes, also create combinations where the gloss is attached to the preceding or following gloss as appropriate.
iii. if the entire MWE is in the dictionary, extract its gloss as well.
It may seem unusual that a single sense is being sought for an MWE with polysemous elements. This comes about because in Japanese polysemous loanwords are almost always due to them being derived from multiple source words. For example ランプ raNpu has three senses reflecting that it results from the borrowing of three distinct English words: "lamp", "ramp" and "rump". On the other hand, MWEs containing ランプ, such as ハロゲンランプ harogeNraNpu "halogen lamp" or オンランプ oNraNpu "on-ramp", almost invariably are associated with one sense or another.
c. attempt to match the potential translations with the English n-grams, and where a match does exist, extract the frequency data. For the example above, only "social bookmark service", which resulted from the ソーシャル・ブックマーク・サービス segmentation, was matched successfully;
d. where match(es) result, choose the one with the highest frequency as both the most likely segmentation of the MWE and the candidate translation.

ソーシャル・ブックマーク・サービス
ソーシャル・ブックマーク・サー・ビス
ソーシャル・ブック・マーク・サービス
ソーシャル・ブック・マーク・サー・ビス
ソー・シャル・ブックマーク・サービス
ソー・シャル・ブックマーク・サー・ビス
ソー・シャル・ブック・マーク・サービス
ソー・シャル・ブック・マーク・サー・ビス

Table 1: Segmentation Example

The approach described above assumes that the term being analyzed is an MWE, when in fact it may well be a single word. In the case of as-yet unrecorded words we would expect that either no segmentation is accepted or that any possible segmentations have relatively low frequencies associated with the potential translations, and hence can be flagged for closer inspection. As some of the testing described below involves deciding whether a term is or is not an MWE, we have enabled the system to handle single terms as well by checking the unsegmented term against the dictionary and extracting n-gram frequency counts for the glosses. This enables the detection and rejection of possible spurious segmentations. As an example of this, the word ボールト bōruto "vault" occurs in one of the test files described in the following section. A possible segmentation (ボー・ルト) was generated with potential translations of "bow root" and "baud root".
The first of these occurs in the English 2-grams with a frequency of 63, however "vault" itself has a very high frequency in the 1-grams so the segmentation would be rejected.
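A minimal sketch of the recursive enumeration in step (a), using a toy lexicon in place of the 160,000-word collection described above (all names here are our own illustration):

```python
def segmentations(s, lexicon, prefix_max=10):
    """Enumerate all ways of splitting `s` into known lexicon words."""
    if not s:
        yield []
        return
    for i in range(1, min(len(s), prefix_max) + 1):
        head = s[:i]
        if head in lexicon:
            for rest in segmentations(s[i:], lexicon, prefix_max):
                yield [head] + rest

lexicon = {"ソーシャル", "ソー", "シャル", "ブックマーク",
           "ブック", "マーク", "サービス", "サー", "ビス"}
for seg in segmentations("ソーシャルブックマークサービス", lexicon):
    print("・".join(seg))   # prints the eight segmentations of Table 1
```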
As pointed out above, a number of katakana words are not loanwords. For the most part these would not be handled by the CLST segmentation/translation process as they would not be reduced to a set of known segments, and would be typically reported as failures. The transliteration approaches in earlier studies also have problems with these words. Some of the non-loanwords, such as scientific names of plants, animals, etc. or words written in katakana for emphasis, can be detected and filtered prior to attempted processing simply by comparing the katakana form with the equivalent hiragana form found in dictionaries. Some of the occurrences of Chinese and Japanese names in text can be detected at extraction time, as such names are often written in forms such as "...金鍾泌(キムジョンピル)...". 5
Evaluation
Evaluation of the CLST system was carried out in two stages: testing the segmentation using data used in previous studies to ensure it was discriminating between single loanwords and MWEs, and testing against a collection of MWEs to evaluate the quality of the translations proposed.
Segmentation
The initial tests of CLST were of the segmentation function and the identification of single words/MWEs.
We were fortunate to be able to use the same data used by Bilac and Tanaka (2004), which consisted of 150 out-of-lexicon katakana terms from the EDR corpus (EDR, 1995) and 78 from the NTCIR-2 test collection (Kando et al., 2001). The terms were hand-marked as to whether they were single words or MWEs. Unfortunately we detected some problems with this marking, for example シェークスピア shēkusupia "Shakespeare" had been segmented (shake + spear) whereas ホールバーニング hōrubāniNgu "hole burning" had been left as a single word. We considered it inappropriate to use this data without amending these terms. As a consequence of this we are not able to make a direct comparison with the results reported in Bilac and Tanaka (2004). Using the corrected data we analyzed the two datasets and report the results in Table 2. We include the results from analyzing the data using MeCab/UniDic as well for comparison. The precision and recall achieved was higher than that reported in Bilac and Tanaka (2004). As in Bilac and Tanaka (2004), we calculate the scores as follows: N is the number of terms in the set, c is the number of terms correctly segmented or identified as 1-grams, e is the number of terms incorrectly segmented or identified, and n = c + e. Recall is calculated as c/N, precision as c/n, and the F-measure as (2 × precision × recall) / (precision + recall). As can be seen, our CLST approach has achieved a high degree of accuracy in identifying 1-grams and segmenting the MWEs. Although it was not part of the test, it also proposed the correct translations for almost all the MWEs. The less-than-perfect recall is entirely due to the few cases where either no segmentation was proposed, or where the proposed segmentation could not be validated with the English n-grams.
The performance of MeCab/UniDic is interesting, as it also has achieved a high level of accuracy. This is despite the UniDic lexicon only having approximately 55,000 katakana words, and the fact that it is operating outside the textual context for which it has been trained. Its main shortcoming is that it tends to over-segment, which is a contrast to the performance of ChaSen/IPADIC reported in Bilac and Tanaka (2004) where undersegmentation was the problem.
Translation
The second set of tests of CLST was directed at developing translations for MWEs. The initial translation tests were carried out on two sets of data, each containing 100 MWEs. The sets of data were obtained as follows:
a. the 100 highest-frequency MWEs were selected from the Google Japanese 2-grams. The list of potential MWEs had to be manually edited as the 2-grams contain a large number of oversegmented words, e.g. アイコン aikoN "icon" was split: アイコ+ン, and オークション ōkushoN "auction" was split オーク+ション; b. the katakana sequences were extracted from a large collection of articles from 1999 in the Mainichi Shimbun (a Japanese daily newspaper), and the 100 highest-frequency MWEs extracted. After the data sets were processed by CLST the results were examined to determine if the segmentations had been carried out correctly, and to assess the quality of the proposed translations. The translations were graded into three groups: (1) acceptable as a dictionary gloss, (2) understandable, but in need of improvement, and (3) wrong or inadequate. An example of a translation graded as 2 is マイナスイオン mainasuioN "minus ion", where "negative ion" would be better, and one graded as 3 is フリーマーケット furīmāketto "free market", where the correct translation is "flea market". For the most part the translations receiving a grading of 2 were the same as would have been produced by a back-transliteration system, and in many cases they were the wasei eigo constructions described above.
Some example segmentations, possible translations and gradings are in Table 3. The assessments of the segmentation and the gradings of the translations are given in Table 4. The precision, recall and F-measures have been calculated on the basis that a grade of 2 or better for a translation is a satisfactory outcome.
A brief analysis was conducted on samples of 25 MWEs from each test set to ascertain whether they were already in dictionaries, or the degree to which they were suitable for inclusion in a dictionary. The dictionaries used for this evaluation were the commercial Kenkyusha Online Dictionary Service (http://kod.kenkyusha.co.jp/service/), which has eighteen Japanese, Japanese-English and English-Japanese dictionaries in its search tool, and the free WWWJDIC online dictionary (http://www.edrdg.org/cgi-bin/wwwjdic/wwwjdic?1C), which has the JMdict and JMnedict dictionaries, as well as numerous glossaries.
Of the 50 MWEs sampled: a. 34 (68%) were in dictionaries; b. 11 (22%) were considered suitable for inclusion in a dictionary. In some cases the generated translation was not considered appropriate without some modification, i.e. it had been categorized as "2"; c. 3 (6%) were proper names (e.g. hotels, software packages); d. 2 (4%) were not considered suitable for inclusion in a dictionary as they were simple collocations such as メニューエリア menyūeria "menu area".
As the tests described above were carried out on sets of frequently-occurring MWEs, it was considered appropriate that some further testing be carried out on less common loanword MWEs. Therefore an additional set of 100 lower-frequency MWEs which did not occur in the dictionaries mentioned above were extracted from the Mainichi Shimbun articles and were processed by the CLST system. Of these 100 MWEs: a. 1 was not successfully segmented; b. 83 of the derived translations were classified as "1" and 16 as "2"; c. 8 were proper names.
The suitability of these MWEs for possible inclusion in a bilingual dictionary was also evaluated.
In fact the overwhelming majority of the MWEs were relatively straightforward collocations, e.g. マラソンランナー marasoNraNnā "marathon runner" and ロックコンサート rokkukoNsāto "rock concert", and were deemed to be not really appropriate as dictionary entries. Five terms were assessed as being dictionary candidates. Several of these, e.g. ゴールドプラン gōrudopuraN "gold plan" and エースストライカー ēsusutoraikā "ace striker" were category 2 translations, and their possible inclusion in a dictionary would largely be because their meanings are not readily apparent from the component words, and an expanded gloss would be required.
Some points which emerge from the analysis of the results of the tests described above are: a. to some extent, the Google n-gram test data had a bias towards the types of constructions favoured by Japanese webpage designers, e.g. ショッピングトップ shoppiNgutoppu "shopping top", which possibly inflated the proportion of translations being scored with a 2; b. some of the problems leading to a failure to segment the MWEs were due to the way the English n-gram files were constructed. Words with apostrophes were split, so that "men's" was recorded as a bigram: "men+'s". This situation is not currently handled in CLST, which led to some of the segmentation failures, e.g. with メンズアイテム meNzuaitemu "men's item";
Conclusion and Future Work
In this paper we have described the CLST (Corpus-based Loanword Segmentation and Translation) system which has been developed to segment Japanese loanword MWEs and construct likely English translations. The system, which leverages the availability of large bilingual dictionaries of loanwords and English n-gram corpora, is achieving high levels of accuracy in discriminating between single loanwords and MWEs, and in segmenting MWEs. It is also generating useful translations of MWEs, and has the potential to be a major aid both to lexicography in this area, and to translation. The apparent success of an approach based on a combination of large corpora and relatively simple heuristics is consistent with the conclusions reached in a number of earlier investigations (Banko and Brill, 2001; Lapata and Keller, 2004). Although the CLST system is performing at a high level, there are a number of areas where refinement and experimentation on possible enhancements can be carried out. They include:
a. instead of using the "first-gloss" heuristic, experiment with using all available glosses. This would be at the price of increased processing time, but may improve the performance of the segmentation and translation; b. align the searching of the n-gram corpus to cater for the manner in which words with apostrophes, etc. are segmented. At present this is not handled correctly; c. tune the presentation of the glosses in the dictionaries so that they will match better with the contents of the n-gram corpus. At present the dictionary used is simply a concatenation of several sources, and does not take into account such things as the n-gram corpus having hyphenated words segmented; d. extend the system by incorporating a back-transliteration module such as that reported in Bilac and Tanaka (2004). This would cater for single loanwords and thus provide more complete coverage.
Table 2: Results from Segmentation Tests
MWE | Segmentation | Possible Translation | Frequency | Grade
ログインヘルプ | ログイン・ヘルプ | login help | 541097 | 1
ログインヘルプ | ログ・イン・ヘルプ | log in help | 169972 | -
キーワードランキング | キーワード・ランキング | keyword ranking | 39818 | 1
キーワードランキング | キー・ワード・ランキング | key word ranking | 74 | -
キャリアアップ | キャリア・アップ | career up | 13043 | 2
キャリアアップ | キャリア・アップ | carrier up | 2552 | -
キャリアアップ | キャリア・アップ | career close up | 195 | -
キャリアアップ | キャリア・アップ | career being over | 188 | -
キャリアアップ | キャリア・アップ | carrier increasing | 54 | -

Table 3: Sample Segmentations and Translations
Data Set | Failed Segmentations | Grade 1 | Grade 2 | Grade 3 | Precision | Recall | F
Google | 9 | 66 | 24 | 1 | 98.90 | 90.00 | 94.24
Mainichi (Set 1) | 3 | 77 | 19 | 1 | 98.97 | 96.00 | 97.46
Mainichi (Set 2) | 1 | 83 | 16 | 0 | 100.00 | 99.00 | 99.50

Table 4: Results from Translation Tests
In addition to katakana, loanwords use the ー (chōoN) character for indicating lengthened vowels, and on rare occasions the ヽ and ヾ syllable repetition characters.
2 http://www.eijiro.jp/e/index.htm
3 http://www.csse.monash.edu.au/~jwb/enamdict_doc.html
4 http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2006T13
Kim Jong-Pil, a former South Korean politician.
Asahara, Masayuki and Matsumoto, Yuji. 2003. IPADIC version 2.7.0 User's Manual (in Japanese). NAIST, Information Science Division.
Baldwin, Timothy and Kim, Su Nam. 2009. Multiword expressions. In: Indurkhya, Nitin and Damerau, Fred J. (eds.), Handbook of Natural Language Processing, 2nd edition. CRC Press, Boca Raton, USA.
Banko, Michele and Brill, Eric. 2001. Scaling to very very large corpora for natural language disambiguation. In: Proceedings of the 39th Annual Meeting of the ACL and 10th Conference of the EACL (ACL-EACL 2001), Toulouse, France.
Bilac, Slaven and Tanaka, Hozumi. 2004. A Hybrid Back-transliteration System for Japanese. In: Proceedings of the 20th International Conference on Computational Linguistics (COLING '04), Geneva, Switzerland.
Breen, James. 2004. JMdict: a Japanese-Multilingual Dictionary. In: Proceedings of the COLING-2004 Workshop on Multilingual Resources, pages 65-72, Geneva, Switzerland.
Brill, Eric; Kacmarcik, Gary; and Brockett, Chris. 2001. Automatically Harvesting Katakana-English Term Pairs from Search Engine Query Logs. In: Proceedings of the Sixth Natural Language Processing Pacific Rim Symposium, pages 393-399, Tokyo, Japan.
Den, Yasuharu; Ogiso, Toshinobu; Ogura, Hideki; Yamada, Atsushi; Minematsu, Nobuaki; Uchimoto, Kiyotaka; and Koiso, Hanae. 2007. The development of an electronic dictionary for morphological analysis and its application to Japanese corpus linguistics (in Japanese). Japanese Linguistics 22: 101-123.
EDR. 1995. EDR Electronic Dictionary Technical Guide (in Japanese). Japan Electronic Dictionary Research Institute, Ltd.
Kabasawa, Yōichi and Satō, Morio (eds.). 2003. A Dictionary of Katakana Words. Gakken.
Kando, Noriko; Kuriyama, Kazuko; and Yoshioka, Masaharu. 2001. Overview of Japanese and English Information Retrieval Tasks (JEIR) at the Second NTCIR Workshop. In: Proceedings of the Second NTCIR Workshop, Jeju, Korea.
Knight, Kevin and Graehl, Jonathan. 1998. Machine Transliteration. Computational Linguistics 24(4): 599-612.
Kudo, Taku and Kazawa, Hideto. 2007. Japanese Web N-gram Corpus version 1. http://www.ldc.upenn.edu/Catalog/docs/LDC2009T08/.
Kudo, Taku; Yamamoto, Kaoru; and Matsumoto, Yuji. 2004. Applying conditional random fields to Japanese morphological analysis. In: Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), pages 230-237, Barcelona, Spain.
Kurohashi, Sadao and Nagao, Makoto. 1998. Nihongo keitai-kaiseki sisutemu JUMAN [Japanese morphological analysis system JUMAN] version 3.5 (in Japanese). Technical report, Kyoto University.
Lapata, Mirella and Keller, Frank. 2004. The web as a baseline: Evaluating the performance of unsupervised web-based models for a range of NLP tasks. In: Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/NAACL-2004), pages 121-128, Boston, USA.
Matsumoto, Yuji; Kitauchi, Akira; Yamashita, Tatsuo; Hirano, Yoshitaka; Matsuda, Hiroshi; Takaoka, Kazuma; and Asahara, Masayuki. 2003. Japanese Morphological Analysis System ChaSen Version 2.3.3 Manual. Technical report, NAIST.
Matsuo, Yoshihiro; Hatayama, Mamiko; and Ikehara, Satoru. 1996. Translation of 'katakana' words using an English dictionary and grammar (in Japanese). In: Proceedings of the Information Processing Society of Japan, volume 53, pages 65-66.
Nakazawa, Toshiaki; Kawahara, Daisuke; and Kurohashi, Sadao. 2005. Automatic Acquisition of Basic Katakana Lexicon from a Given Corpus. In: Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 682-693, Jeju, Korea.
Sag, Ivan A.; Baldwin, Timothy; Bond, Francis; Copestake, Ann; and Flickinger, Dan. 2002. Multiword expressions: A pain in the neck for NLP. In: Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2002), pages 1-15, Mexico City, Mexico.
Toshiro, Watanabe; Skrzypczak, Edmund; and Snowdon, Paul (eds.). 2003. Kenkyūsha New Japanese-English Dictionary, 5th Edition. Kenkyūsha. |
6,844,025 | Mining Tables from Large Scale HTML Texts | Table is a very common presentation scheme, but few papers touch on table extraction in text data mining. This paper focuses on mining tables from large-scale HTML texts. Table filtering, recognition, interpretation, and presentation are discussed. Heuristic rules and cell similarities are employed to identify tables. The F-measure of table recognition is 86.50%. We also propose an algorithm to capture attribute-value relationships among table cells. Finally, more structured data is extracted and presented. | [
1432114,
16206198
] | Mining Tables from Large Scale HTML Texts
Hsin-Hsi Chen hh_chen@csie.ntu.edu.tw
Department of Computer Science and
Information Engineering, National Taiwan University, Taipei
TAIWAN, R.O.C
Shih-Chung Tsai
Department of Computer Science and
Information Engineering, National Taiwan University, Taipei
TAIWAN, R.O.C
Jin-He Tsai
Department of Computer Science and
Information Engineering, National Taiwan University, Taipei
TAIWAN, R.O.C
Mining Tables from Large Scale HTML Texts
Table is a very common presentation scheme, but few papers touch on table extraction in text data mining. This paper focuses on mining tables from large-scale HTML texts. Table filtering, recognition, interpretation, and presentation are discussed. Heuristic rules and cell similarities are employed to identify tables. The F-measure of table recognition is 86.50%. We also propose an algorithm to capture attribute-value relationships among table cells. Finally, more structured data is extracted and presented.
Introduction
Tables, which are simple and easy to use, are a very common presentation scheme for writers to describe schedules, organize statistical data, summarize experimental results, and so on, in texts of different domains. Because tables provide rich information, table acquisition is useful for many applications such as document understanding, question-and-answering, text retrieval, etc. However, most previous approaches to text data mining focus on text parts, and only a few touch on tabular ones (Appelt and Israel, 1997; Gaizauskas and Wilks, 1998; Hurst, 1999a). Of the papers on table extraction (Douglas, Hurst and Quinn, 1995; Douglas and Hurst, 1996; Hurst and Douglas, 1997; Ng, Lim and Koo, 1999), plain texts are their targets.
In plain text, writers often use special symbols, e.g., tabs, blanks, dashes, etc., to make tables. The following shows an example. It depicts book titles, authors, and prices. When detecting if there is a table in free text, we should disambiguate the uses of the special symbols. That is, the special symbol may be a separator or content of cells. Previous papers employ grammars (Green and Krishnamoorthy, 1995), string-based cohesion measures (Hurst and Douglas, 1997), and learning methods (Ng, Lim and Koo, 1999) to deal with table recognition.
Because of the simplicity of table construction methods in free text, the expressive capability is limited.
Comparatively, markup languages like HTML provide very flexible constructs for writers to design tables. The flexibility also shows that table extraction in HTML texts is harder than that in plain text. Because HTML texts are huge on the web, and they are important sources of knowledge, it is indispensable to deal with table mining on HTML texts. Hurst (1999b) is the first attempt to collect a corpus from HTML files, LaTeX files and a small number of ASCII files for table extraction. This paper focuses on HTML texts. We will discuss not only how to recognize tables from HTML texts, but also how to identify the roles of each cell (attribute and/or value), and how to utilize the extracted tables.
Tables in HTML
An HTML table begins with an optional caption followed by one or more rows. Each row is formed by one or more cells, which are classified into header and data cells. Cells can be merged across rows and columns. The following tags are used:
(1) <table ...> </table>
(2) <tr ...> </tr>
(3) <td ...> </td>
(4) <th ...> </th>
(5) <caption ...> </caption>
A cell may play the role of attribute and/or value. Several cells may be concatenated to denote an attribute. For example, "Adult-Price-Single Room-Economic Class" means the adult price for economic class and single room.
The relationships may be read in column wise or in row wise depending on the interpretation. For example, the relationship for "Tour Code: DP9LAX01AB" is in row wise.
The prices for "Economic Class" are in column wise.
Another point that should be mentioned is: table designers usually employ COLSPAN (ROWSPAN) to specify how many columns (rows) a table cell should span. In this example, the COLSPAN of cell "Tour Code" is 3. That means "Tour Code" spans 3 columns. Similarly, the ROWSPAN of cell "Adult" is 3. This cell spans 3 rows.
COLSPAN and ROWSPAN provide flexibility for users to design any kinds of tables, but they make automatic table interpretation more challenging.
Flow of Table Mining
The flow of table mining is shown in Figure 1. It is composed of five modules. The hypertext processing module analyses the HTML text, and extracts the table tags. The table filtering module filters out impossible cases by heuristic rules. The remaining candidates are sent to the table recognition module for further analyses. The table interpretation module differentiates the roles of cells in the tables.
The final module tackles how to present and employ the mining results.
The first two modules are discussed in the following paragraph, and the last three modules will be dealt with in the following sections in detail.
Figure 1. Flow of Table Mining
As specified above, table wrappers do not always introduce tables. Two filtering rules are employed to disambiguate their functions:
(1) A table must contain at least two cells to represent attribute and value. In other words, the structure with only one cell is filtered out.
(2) If the content enclosed by table wrappers contains too many hyperlinks, forms and figures, then it is not regarded as a table. (A sketch of these two heuristics is given below.)
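A minimal sketch of these two filtering heuristics using BeautifulSoup; the 0.5 ratio is an assumed threshold, since the paper does not state its exact cut-off for "too many" hyperlinks, forms and figures:

```python
from bs4 import BeautifulSoup

LINK_RATIO_MAX = 0.5  # assumed threshold; not specified in the paper

def is_table_candidate(table) -> bool:
    cells = table.find_all(["td", "th"])
    if len(cells) < 2:                      # rule 1: need at least 2 cells
        return False
    busy = table.find_all(["a", "form", "img"])
    return len(busy) <= LINK_RATIO_MAX * len(cells)  # rule 2

html = "<table><tr><td>Tour Code</td><td>DP9LAX01AB</td></tr></table>"
soup = BeautifulSoup(html, "html.parser")
print([is_table_candidate(t) for t in soup.find_all("table")])
```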
To evaluate the performance of table mining, we prepare the test data selected from airline information in the travelling category of the Chinese Yahoo web site (http://www.yahoo.com.tw). Table 2 shows the statistics of our test data. Table 3 shows the results after we employ the filtering rules on the test data. The 5th row shows how many non-table candidates are filtered out by the proposed rules, and the 6th row shows the number of wrong filters. On the average, the correct rate is 98.93%.
In total, 423 of the 2,300 non-tables remain.
Table Recognition
After simple analyses specified in the previous section, there are still 423 non-tables passing the filtering criteria. Now we consider the content of the cells. A cell is much shorter than a sentence in plain text. In our study, the length of 43,591 cells (of 61,770 cells) is smaller than 10 characters. (A Chinese character is represented by two bytes; that is, a cell contains 5 Chinese characters on the average.) Because of the space limitation in a table, writers often use shorthand notations to describe their intention.
For example, they may use a Chinese character ("到", dao4) to represent a two-character word "到達" (dao4da2, arrive), and a character ("離", li2) to denote the Chinese word "離開" (li2kai1, leave). They even employ special symbols like ▲ and ▼ to represent "increase" and "decrease". Thus it is hard to determine if a fragment of HTML text is a table depending on a cell only. The context among cells is important.
Value cells under the same attribute names demonstrate similar concepts. We employ the following metrics to measure the cell similarity.
(1) String similarity: We measure how many characters are common in neighboring cells. If the number is above a threshold, we call the two cells similar.
(2) Named entity similarity:
The metric considers the semantics of cells. We adopt some named entity expressions defined in MUC (1998). Table 4 shows that string similarity cannot capture the similar concept between neighboring cells very well. The F-measure is 55.50%. Table 5 tries to incorporate more semantic features, i.e., categories of named entity. Unfortunately, the result does not meet our expectation. The performance only increases a little. The major reason is that the keywords (pm/am, $, %, etc.) for date/time expressions and monetary and percentage expressions are usually omitted in table description. Table 6 shows that the F-measure achieves 86.50% when the number category is used. Compared with Tables 4 and 5, the performance is improved drastically.
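A minimal sketch of the character-overlap test in metric (1); the Dice-style formulation and the 0.5 threshold are our own assumptions, as the paper leaves the exact measure unspecified:

```python
def string_similar(cell_a: str, cell_b: str, threshold: float = 0.5) -> bool:
    """Character-overlap (Dice-style) similarity between two cell strings."""
    a, b = set(cell_a), set(cell_b)
    if not a or not b:
        return False
    overlap = 2 * len(a & b) / (len(a) + len(b))
    return overlap >= threshold

print(string_similar("35,450", "32,500"))     # True: digits and comma shared
print(string_similar("Tour Code", "35,450"))  # False: no characters shared
```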
Table Interpretation
As specified in Section 1, the attribute-value relationship may be interpreted in column wise or in row wise. If the table tags in question do not contain COLSPAN (ROWSPAN), the problem is easier. The first row and/or the first column consist of the attribute cells, and the others are value cells. Cell similarity guides us how to read a table. We define row (or column) similarity in terms of cell similarity as follows.
Two rows (or columns) are similar if most of the corresponding cells between these two rows (or columns) are similar.
A basic table interpretation algorithm is shown below. Assume there are n rows and m columns. Let c_ij denote the cell in the i-th row and j-th column.
(1) If there is only one row or column, then the problem is trivial. We just read it in row wise or column wise.
(2) Otherwise, we start the similarity checking from the right-bottom position, i.e., c_nm. That is, the n-th row and the m-th column are regarded as the base for comparisons.
(3) For each row i (1 ≤ i < n), compute the similarity of the two rows i and n.
(4) Count how many pairs of rows are similar.
(5) If the count is larger than (n-2)/2, and the similarity of row 1 and row n is smaller than the similarity of the other row pairs, then we say this table can be read in column wise. In other words, the first row contains attribute cells.
(6) The interpretation from row wise is done in the similar way. We start checking from the m-th column, compare it with each column j (1 ≤ j < m), and count how many pairs of columns are similar.
(7) If neither "row-wise" nor "column-wise" can be assigned, then the default is set to "row wise".
Table 6 is an example. The first column contains attribute cells. The other cells are statistics of an experimental result. We read it in row wise. If COLSPAN (ROWSPAN) is used, the table interpretation is more difficult. Here, we extend the above algorithm to deal with table interpretation with COLSPAN (ROWSPAN).
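A minimal sketch of steps (2)-(7) for a span-free table, assuming a cell-level similar(a, b) predicate such as the one sketched earlier; the boolean treatment of step (5)'s second condition is our own simplification:

```python
def rows_similar(r1, r2, similar):
    """Step (3)/(6): two lines are similar if most cell pairs are similar."""
    hits = sum(similar(a, b) for a, b in zip(r1, r2))
    return hits > len(r1) / 2

def reading_direction(cells, similar):
    """cells: n x m list of cell strings; returns the reading direction."""
    n, m = len(cells), len(cells[0])
    # Compare every row against base row n; if the interior rows match it
    # but row 1 does not, the first row holds the attribute cells.
    sims = [rows_similar(cells[i], cells[n - 1], similar) for i in range(n - 1)]
    if sum(sims[1:]) > (n - 2) / 2 and not sims[0]:
        return "column wise"   # first row contains attribute cells
    cols = [list(c) for c in zip(*cells)]
    simc = [rows_similar(cols[j], cols[m - 1], similar) for j in range(m - 1)]
    if sum(simc[1:]) > (m - 2) / 2 and not simc[0]:
        return "row wise"      # first column contains attribute cells
    return "row wise"          # step (7): default
```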
At first, we drop COLSPAN and ROWSPAN by duplicating several copies of cells in their proper positions. For example, COLSPAN=3 for "Tour Code" in Table 1, thus we duplicate "Tour Code" at columns 2 and 3. Table 7 shows the final reformulation of the example in Table 1. Then we employ the above algorithm with slight modification to find the reading direction.
The modification is that spanning cells are boundaries for similarity checking. Take Table 7 as an example.
We start the similarity checking from the right-bottom cell, i.e., 360, and consider each row and column within boundaries. The cell "1999.04.01-2000.03.31" is a spanning cell, so that the 2nd row is a boundary. "Price" is a spanning cell, thus the 2nd column is a boundary. In this case, we can interpret the table tags in both row wise and column wise. At this time, "row-wise" is selected.
In the final cycle, the starting positions are (2, 5) and (9, 2). The boundaries are the 0th row and 0th column. Those two sub-tables are read in row wise.
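The following Python sketch puts the pieces together: it first expands COLSPAN/ROWSPAN cells into a plain grid, then applies steps (2)-(5) to decide the reading direction. The cell tuple format, the majority cutoff inside row similarity, and the elision of the symmetric column check are all our assumptions, not details fixed by the paper.

def expand_spans(rows):
    # Duplicate COLSPAN/ROWSPAN cells into their covered positions,
    # as in the reformulation of Table 1 into Table 7.
    # Each input cell is (text, colspan, rowspan) -- a hypothetical format.
    grid = {}
    for i, row in enumerate(rows):
        j = 0
        for text, cs, rs in row:
            while (i, j) in grid:        # slot filled by an earlier ROWSPAN
                j += 1
            for di in range(rs):
                for dj in range(cs):
                    grid[(i + di, j + dj)] = text
            j += cs
    n = 1 + max(i for i, _ in grid)
    m = 1 + max(j for _, j in grid)
    return [[grid.get((i, j), "") for j in range(m)] for i in range(n)]

def reading_direction(grid, similar):
    # Steps (2)-(5): compare every row with the last row; if most row
    # pairs are similar and the first row is the least similar one, the
    # first row holds attribute cells and the table is read column wise.
    # `similar` is a cell-level predicate such as string_similar above.
    n, m = len(grid), len(grid[0])
    def row_sim(a, b):
        return sum(similar(x, y) for x, y in zip(grid[a], grid[b])) / m
    sims = [row_sim(i, n - 1) for i in range(n - 1)]
    count = sum(s > 0.5 for s in sims)   # "most corresponding cells are similar"
    if count > (n - 2) / 2 and sims and sims[0] == min(sims):
        return "column-wise"
    # the symmetric check over columns (steps (6)-(7)) is elided
    return "row-wise"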
Presentation of Table Extraction

The results of table interpretation are a sequence of attribute-value pairs. Consider the tour example. Table 8 shows the extracted pairs.
We can find the following two phenomena:
(1) A cell may be a value of more than one attribute.
(2) A cell may act as an attribute in one case, and a value in another case. We can concatenate two attributes together by using phenomenon (1). For example, "35,450" is a value of "Single Room" and "Economic Class", thus "Single Room-Economic Class" is formed. Besides that, we can find attribute hierarchy by using phenomenon (2). For example, "Single Room" is a value of "Price", and "Price" is a value of "Adult", so that we can create a hierarchy "Adult-Price-Single Room".
Merging the results from these two phenomena, we can create the interpretations that we listed in Section 1. For example, from the two facts that "35,450" is a value of "Single Room-Economic Class" and that "Adult-Price-Single Room" is a hierarchical attribute, we can infer that 35,450 is a value of "Adult-Price-Single Room-Economic Class".
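A toy rendering of how the two phenomena combine might look as follows; for brevity it assumes one parent per attribute and unique leaf values, which real tables would not guarantee.

def build_hierarchy(pairs):
    # Phenomenon (2): a cell that is a value in one pair and an
    # attribute in another forms a chain such as Adult-Price-Single Room.
    attrs = {a for a, _ in pairs}
    parent = {v: a for a, v in pairs if v in attrs}
    def chain(attr):
        path = [attr]
        while path[-1] in parent and parent[path[-1]] not in path:
            path.append(parent[path[-1]])
        return "-".join(reversed(path))
    return chain

def composite_attributes(pairs):
    # Phenomenon (1): attributes sharing the same leaf value are
    # concatenated, giving e.g. Adult-Price-Single Room-Economic Class.
    attrs = {a for a, _ in pairs}
    chain = build_hierarchy(pairs)
    by_value = {}
    for a, v in pairs:
        if v not in attrs:               # leaf values only
            by_value.setdefault(v, set()).add(chain(a))
    return {"-".join(sorted(parts)): v for v, parts in by_value.items()}

pairs = [("Adult", "Price"), ("Price", "Single Room"),
         ("Single Room", "35,450"), ("Economic Class", "35,450")]
print(composite_attributes(pairs))
# {'Adult-Price-Single Room-Economic Class': '35,450'}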
In this way, we can transform unstructured data into a more structured representation for further applications. Consider an application in question and answering.
Given a query like "how much is the price of a double room for an adult", the keywords are "price", "double room", and "adult". After consulting the database learned from HTML texts, two values, 32,500 and 1,430, with attributes economic class and extension, are reported. With this table mining technology, knowledge that can be employed is beyond the text level.
There are still other spaces to improve performance. The cues from the context of tables and the traversal paths of HTML pages may also be useful.
In the text surrounding tables, writers usually explain the meaning of tables, for example, which row (or column) denotes what kind of meanings. From the description, we can know which cell may be an attribute, and along the same row (column) we can find their value cells. Besides that, the text can also show the semantics of the cells. For example, the table cell may be a monetary expression that denotes the price of a tour package. In this way, even if the money marker is not present in the table cell, we can still know it is a monetary expression.
Note that HTML texts can be chained through hyperlinks like "previous" and "next". The context can be expanded further. Their effects on table mining will be studied in the future.
Besides the possible extensions, another research line that can be considered is to set up a corpus for evaluation of attribute-value relationship.
Because the role of a cell (attribute or value) is relative to other cells, to develop answering keys is indispensable for table interpretation.
Adult-Price-Double Room-Economic Class 32,500
Adult-Price-Extra Bed-Economic Class 30,550
Child-Price-Occupation-Economic Class 25,800
Child-Price-Extra Bed-Economic Class 23,850
Child-Price-No Occupation-Economic Class 22,900
Adult-Price-Single Room-
title                                  | author          | price
Cross-Language Information Retrieval   | G. Grefenstette | $115
Natural Language Information Retrieval | T. Strzalkowski | $144
Statistical Language Learning          | E. Charniak     | $30
Table 1. An Example for a Tour Package

Tour Code       |   |               | DP9LAX01AB            |
Valid           |   |               | 1999.04.01-2000.03.31 |
Class/Extension |   |               | Economic Class        | Extension
Adult           | P | Single Room   | 35,450                | 2,510
                | R | Double Room   | 32,500                | 1,430
                | I | Extra Bed     | 30,550                | 720
Child           | C | Occupation    | 25,800                | 1,430
                | E | Extra Bed     | 23,850                | 720
                |   | No Occupation | 22,900                | 360
They denote main wrapper, table row, table data, table header, and caption for a table. Table 1 shows an example that lists the prices for a tour. The interpretation of this table in terms of attribute-value relationships is shown as follows:

Attribute                              | Value
Tour Code                              | DP9LAX01AB
Valid                                  | 1999.04.01-2000.03.31
Adult-Price-Single Room-Economic Class | 35,450
...
A table wrapper (<table> ... </table>) is a useful cue for table recognition. The HTML text for the above example is shown as follows (this example is selected from http://www.china-airlines.com/cdl~ks/los7.-4.htm). The table tags are enclosed by a table wrapper. However, a table does not always exist when a table wrapper appears in HTML text. This is because writers often employ table tags to represent form or menu, which allows users to input queries or make selections.

<table border>
<tr>
<td COLSPAN="3">Tour Code</td>
<td COLSPAN="2">DP9LAX01AB</td>
</tr>
<tr>
<td COLSPAN="3">Valid</td>
<td COLSPAN="2">1999.04.01-2000.03.31</td>
</tr>
<tr>
<td COLSPAN="3">Class/Extension</td>
<td>Economic Class</td>
<td>Extension</td>
</tr>
<tr>
<td ROWSPAN="3">Adult</td>
<td ROWSPAN="6"><p>P</p>
<p>R</p>
<p>I</p>
<p>C</p>
<p>E</p></td>
<td>Single Room</td>
<td>35,450</td>
<td>2,510</td>
</tr>
<tr>
<td>Double Room</td>
<td>32,500</td>
<td>1,430</td>
</tr>
<tr>
<td>Extra Bed</td>
<td>30,550</td>
<td>720</td>
</tr>
<tr>
<td ROWSPAN="3">Child</td>
<td>Occupation</td>
<td>25,800</td>
<td>1,430</td>
</tr>
<tr>
<td>Extra Bed</td>
<td>23,850</td>
<td>720</td>
</tr>
<tr>
<td>No Occupation</td>
<td>22,900</td>
<td>360</td>
</tr>
</table>
(Figure: the flow of table mining: hypertext processing, table filtering, table recognition, table interpretation, and presentation of results.)
Table 2. Statistics of Test Data

                China   Eva     Mandarin Singapore Fareast  Sum
                Airline Airline Airline  Airline   Airline
Pages           694     366     142      110       60       1372
Table wrappers  2075    568     184      163       228      3218 (2.35)
Tables          751     98      23       40        6        918 (0.67)

These four rows list the names of airlines, total number of web pages, total number of table wrappers, and total number of tables, respectively. On the average, there are 2.35 table wrappers and 0.67 tables for each web page. The statistics shows that table tags are used quite often in HTML text, and only 28.53% are actual tables.

Table 3. Performance of Filtering Rules

                     China   Eva     Mandarin Singapore Fareast  Sum
                     Airline Airline Airline  Airline   Airline
# of wrappers        2075    568     184      163       228      3218
Number of Tables     751     98      23       40        6        918
Number of Non-Tables 1324    470     161      123       222      2300
Total Filter         973     455     158      78        213      1877
Wrong Filter         15      0       0        3         2        20
Filter Correct Rate  98.46%  100%    100%     96.15%    99.06%   98.93%
(3) Number category similarity
Number characters (0-9) appear very often. If the total number of number characters in a cell exceeds a threshold, we say the cell belongs to the number category. The neighboring cells in the number category are similar.
We count how many neighboring cells are similar. If the percentage is above a threshold, the table tags are interpreted as a table.
The data after table filtering (Section 2) is used to evaluate the strategies in table recognition. Tables 4-6 show the experimental results when the three metrics are applied incrementally. Precision rate (P), recall rate (R), and F-measure (F) defined below are adopted to measure the performance.

P = NumberOfCorrectTablesSystemGenerated / TotalNumberOfTablesSystemGenerated
R = NumberOfCorrectTablesSystemGenerated / TotalNumberOfCorrectTables
F = (P + R) / 2

Table 4 shows that string similarity cannot capture the similar concept between neighboring cells very well. The F-measure is 55.50%. Table 5 tries to incorporate more semantic features, i.e., categories of named entity. Unfortunately, the result does not meet our expectation. The performance only increases a little. The major reason is that the keywords (pm/am, $, %, etc.) for date/time expressions and monetary and percentage expressions are usually omitted in table description. Table 6 shows that the F-measure achieves 86.50% when the number category is used. Compared with Tables 4 and 5, the performance is improved drastically.
Table 4. String Similarity

                 China   Eva     Mandarin Singapore Fareast  Sum
                 Airline Airline Airline  Airline   Airline
Number of Tables 751     98      23       40        6        918
Tables Proposed  150     41      7        17        5        220
Correct          134     39      7        14        3        197
Precision Rate   89.33%  95.12%  100%     82.35%    60%      89.55%
Recall Rate      17.84%  39.80%  30.43%   35.00%    50%      21.46%
F-measure        53.57%  67.46%  65.22%   58.68%    55%      55.50%

Table 5. String or Named Entity Similarity

                 China   Eva     Mandarin Singapore Fareast  Sum
                 Airline Airline Airline  Airline   Airline
Number of Tables 751     98      23       40        6        918
Tables Proposed  151     42      7        17        5        222
Correct          135     40      7        14        3        199
Table 6. String, Named Entity, or Number Category Similarity

                 China   Eva     Mandarin Singapore Fareast  Sum
                 Airline Airline Airline  Airline   Airline
Number of Tables 751     98      23       40        6        918
Table 1 is a typical example. Five COLSPANs and two ROWSPANs are used to create a better layout. The attributes are formed hierarchically. The following is an example of hierarchy.

Adult --- Price --- Double Room
                --- Single Room
                --- Extra Bed
Table 7. Reformulation of Example in Table 1

Tour Code       | Tour Code       | Tour Code       | DP9LAX01AB            | DP9LAX01AB
Valid           | Valid           | Valid           | 1999.04.01-2000.03.31 | 1999.04.01-2000.03.31
Class/Extension | Class/Extension | Class/Extension | Economic Class        | Extension
Adult           | PRICE           | Single Room     | 35,450                | 2,510
Adult           | PRICE           | Double Room     | 32,500                | 1,430
Adult           | PRICE           | Extra Bed       | 30,550                | 720
Child           | PRICE           | Occupation      | 25,800                | 1,430
Child           | PRICE           | Extra Bed       | 23,850                | 720
Child           | PRICE           | No Occupation   | 22,900                | 360
Table 8. The Extracted Attribute-Value Pairs
1st cycle:
Attribute       | Value
Single Room     | 35,450
Single Room     | 2,510
Double Room     | 32,500
Double Room     | 1,430
...
No Occupation   | 22,900
No Occupation   | 360
Economic Class  | 35,450
Economic Class  | 32,500
...
Economic Class  | 22,900
Extension       | 2,510
Extension       | 1,430
...
Extension       | 360

2nd cycle:
Class/Extension | Economic Class
Class/Extension | Extension
Valid           | 1999.04.01-2000.03.31
Price           | Single Room
Price           | Double Room
...
PRICE           | No Occupation
Tour Code       | DP9LAX01AB

3rd cycle:
Valid           | 1999.04.01-2000.03.31
Adult           | Price
Child           | Price

Conclusion

In this paper, we propose a systematic way to mine tables from HTML texts. Table filtering, table recognition, table interpretation and application of table extraction are discussed. The cues from HTML tags and information in table cells are employed to recognize and interpret tables. The F-measure for table recognition is 86.50%.
Tutorial Notes on Building Information Extraction Systems. D Appelt, D Israel. Appelt, D. and Israel, D. (1997) "Tutorial Notes on Building Information Extraction Systems," Tutorial on Fifth Conference on Applied Natural Language Processing, 1997.
Named Entity Extraction for Information Retrieval. H H Chen, Y W Ding, S C Tsai. Chen, H.H.; Ding, Y.W.; and Tsai, S.C. (1998) "Named Entity Extraction for Information Retrieval," Computer Processing of Oriental Languages, Special Issue on Information Retrieval on Oriental Languages, Vol. 12, No. 1, 1998, pp. 75-85.
Using Natural Language Processing for Identifying and Interpreting Tables in Plain Text. S Douglas, M Hurst, D Quinn. Douglas, S.; Hurst, M. and Quinn, D. (1995) "Using Natural Language Processing for Identifying and Interpreting Tables in Plain Text," Proceedings of Fourth Annual Symposium on Document Analysis and Information Retrieval, 1995, pp. 535-545.
Layout and Language: Lists and Tables in Technical Documents. S Douglas, M Hurst. Douglas, S. and Hurst, M. (1996) "Layout and Language: Lists and Tables in Technical Documents," Proceedings of ACL SIGPARSE Workshop on Punctuation in Computational Linguistics, 1996, pp. 19-24.
Information Extraction: Beyond Document Retrieval. R Gaizauskas, Y Wilks. Gaizauskas, R. and Wilks, Y. (1998) "Information Extraction: Beyond Document Retrieval," Computational Linguistics and Chinese Language Processing, Vol. 3, No. 2, 1998, pp. 17-59.
Recognition of Tables Using Grammars. E Green, M Krishnamoorthy. Green, E. and Krishnamoorthy, M. (1995) "Recognition of Tables Using Grammars," Proceedings of the Fourth Annual Symposium on Document Analysis and Information Retrieval, 1995, pp. 261-278.
Layout and Language: Preliminary Experiments in Assigning Logical Structure to Table Cells. M Hurst, S Douglas. Hurst, M. and Douglas, S. (1997) "Layout and Language: Preliminary Experiments in Assigning Logical Structure to Table Cells," Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997, pp. 217-220.
Layout and Language: Beyond Simple Text for Information Interaction - Modeling the Table. M Hurst. Hurst, M. (1999a) "Layout and Language: Beyond Simple Text for Information Interaction - Modeling the Table," Proceedings of the 2nd International Conference on Multimodal Interfaces, Hong Kong, January 1999.
Layout and Language: A Corpus of Documents Containing Tables. M Hurst. Hurst, M. (1999b) "Layout and Language: A Corpus of Documents Containing Tables," Proceedings of AAAI Fall Symposium: Using Layout for the Generation, Understanding and Retrieval of Documents, 1999.
A Workbench for Acquisition of Ontological Knowledge from Natural Text. A Mikheev, S Finch. Mikheev, A. and Finch, S. (1995) "A Workbench for Acquisition of Ontological Knowledge from Natural Text," Proceedings of the 7th Conference of the European Chapter of the Association for Computational Linguistics, 1995, pp. 194-201.
Proceedings of 7th Message Understanding Conference. MUC (1998) Proceedings of 7th Message Understanding Conference, http://www.muc.saic.com/proceedings/proceedings_index.html.
Learning to Recognize Tables in Free Text. H T Ng, C Y Lim, J L T Koo. Ng, H.T.; Lim, C.Y. and Koo, J.L.T. (1999) "Learning to Recognize Tables in Free Text," Proceedings of the 37th Annual Meeting of ACL, 1999, pp. 443-450. |
10,102,597 | Unsupervised Segmentation Helps Supervised Learning of Character Tagging for Word Segmentation and Named Entity Recognition | This paper describes a novel character tagging approach to Chinese word segmentation and named entity recognition (NER) for our participation in Bakeoff-4. 1 It integrates unsupervised segmentation and conditional random fields (CRFs) learning successfully, using similar character tags and feature templates for both word segmentation and NER. It ranks at the top in all closed tests of word segmentation and gives promising results for all closed and open NER tasks in the Bakeoff. Tag set selection and unsupervised segmentation play a critical role in this success. | [
8467680,
1654945,
14161026,
18371469,
15095698,
2776693,
5275640
] | Unsupervised Segmentation Helps Supervised Learning of Character Tagging for Word Segmentation and Named Entity Recognition
Hai Zhao haizhao@cityu.edu.hk
Department of Chinese, Translation and Linguistics City
University of Hong Kong Tat Chee Ave
Kowloon, Hong Kong
Chunyu Kit ctckit@cityu.edu.hk
Department of Chinese, Translation and Linguistics City
University of Hong Kong Tat Chee Ave
Kowloon, Hong Kong
Unsupervised Segmentation Helps Supervised Learning of Character Tagging for Word Segmentation and Named Entity Recognition
This paper describes a novel character tagging approach to Chinese word segmentation and named entity recognition (NER) for our participation in Bakeoff-4. 1 It integrates unsupervised segmentation and conditional random fields (CRFs) learning successfully, using similar character tags and feature templates for both word segmentation and NER. It ranks at the top in all closed tests of word segmentation and gives promising results for all closed and open NER tasks in the Bakeoff. Tag set selection and unsupervised segmentation play a critical role in this success.
Introduction
A number of recent studies show that character sequence labeling is a simple but effective formulation of Chinese word segmentation and name entity recognition for machine learning (Xue, 2003;Low et al., 2005;Zhao et al., 2006a;Chen et al., 2006). Character tagging becomes a prevailing technique for this kind of labeling task for Chinese language processing, following the current trend of applying machine learning as a core technology in the field of natural language processing. In particular, when a full-fledged general-purpose sequence learning model such as CRFs is involved, the only work to do for a given application is to identify an ideal set of features and hyperparameters for the purpose of achieving the best learning model that we can with available training data. Our work in this aspect provides a solid foundation for applying an unsupervised segmentation criterion to enrich the supervised CRFs learning for further performance enhancement on both word segmentation and NER.
This paper is intended to present the research for our participation in Bakeoff-4, with a highlight on our strategy to select character tags and feature templates for CRFs learning. Particularly worth mentioning is the simplicity of our system in contrast to its success. The rest of the paper is organized as follows. The next section presents the technical details of the system and Section 3 its evaluation results. Section 4 looks into a few issues concerning character tag set, unsupervised segmentation, and available name entities (NEs) as features for open NER test. Section 5 concludes the paper.
System Description
Following our previous work (Zhao et al., 2006a; Zhao et al., 2006b), we continue to apply the order-1 linear chain CRFs (Lafferty et al., 2001) as our learning model for Bakeoff-4. Specifically, we use its implementation CRF++ by Taku Kudo 2, which is freely available for research purposes. We opt for a similar set of character tags and feature templates for both word segmentation and NER.
In addition, two key techniques that we have explored in our previous work are applied. One is to introduce more tags in the hope of utilizing more precise contextual information to achieve more precise labeling results. This also optimizes the active features for the CRFs training. The other is to integrate the unsupervised segmentation outputs into CRFs as features. It assumes no word boundary information in the training and test corpora for NER.
Tag Set
Our previous work shows that a 6-tag set enables the CRFs learning of character tagging to achieve a better segmentation performance than others (Zhao et al., 2006a;Zhao et al., 2006b). So we keep using this tag set for Bakeoff-4. Its six tags are B, B 2 , B 3 , M, E and S. Table 2 illustrates how characters in words of various lengths are tagged with this tag set. For NER, we need to tell apart three types of NEs, namely, person, location and organization names. Correspondingly, the six tags are also adapted for characters in these NEs but distinguished by the suffixes -PER, -LOC and -ORG. For example, a character in a person name may be tagged with either B-PER, B 2 -PER, B 3 -PER, M-PER, E-PER, or S-PER. Plus an additional tag "O" for none NE characters, altogether we have 19 tags for NER. An example of NE tagging is illustrated in Table 1.
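The helper below (the function name is ours) makes the scheme of Table 2 concrete; the same routine yields the NE variants of the tags when a suffix is supplied.

def tag_word(length, suffix=""):
    # Tag the characters of one word with the 6-tag set {B, B2, B3, M, E, S}.
    # An optional suffix such as '-PER' produces the NE variants.
    if length == 1:
        tags = ["S"]
    else:
        tags = ["B", "B2", "B3"][:min(length - 1, 3)]
        tags += ["M"] * (length - 1 - len(tags)) + ["E"]
    return [t + suffix for t in tags]

print(tag_word(5))          # ['B', 'B2', 'B3', 'M', 'E']
print(tag_word(3, "-PER"))  # ['B-PER', 'B2-PER', 'E-PER']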
Feature Templates
We use not only a similar tag set but also the same set of feature templates for both the word segmentation and NER closed tests in Bakeoff-4. Six n-gram templates, namely, C −1 , C 0 , C 1 , C −1 C 0 , C 0 C 1 , C −1 C 1 , are selected as features, where C stands for a character and the subscripts -1, 0 and 1 for the previous, current and next character, respectively.
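To make the template set concrete, the sketch below instantiates the six n-gram features at a given character position; the '<S>'/'</S>' boundary padding symbols are our assumption.

def ngram_features(chars, i):
    # The six n-gram templates C-1, C0, C1, C-1C0, C0C1 and C-1C1
    # for position i of a character sequence.
    def c(k):
        p = i + k
        if p < 0:
            return "<S>"
        if p >= len(chars):
            return "</S>"
        return chars[p]
    return {"C-1": c(-1), "C0": c(0), "C1": c(1),
            "C-1C0": c(-1) + c(0), "C0C1": c(0) + c(1), "C-1C1": c(-1) + c(1)}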
In addition to these n-gram features, unsupervised segmentation outputs are also used as features, for the purpose of providing more word boundary information via global statistics derived from all unlabeled texts of the training and test corpora. The basic idea is to inform a supervised leaner of which substrings are recognized as word candidates by a given unsupervised segmentation criterion and how likely they are to be true words in terms of that criterion .
We adopt the accessor variety (AV) (Feng et al., 2004a; Feng et al., 2004b) as our unsupervised segmentation criterion. It formulates an idea similar to that of linguist Harris (1955; 1970) for segmenting utterances of an unfamiliar language into morphemes, to facilitate word extraction from Chinese raw texts. It is found more effective than other criteria in supporting CRFs learning of character tagging for word segmentation. The AV value of a substring s is defined as
AV(s) = min{L_av(s), R_av(s)},
where the left and right AV values L_av(s) and R_av(s) are defined, respectively, as the numbers of its distinct predecessor and successor characters.
In our work, AV values for word candidates are derived from an unlabeled corpus by substring counting, which can be efficiently carried out with the aid of the suffix array representation (Manber and Myers, 1993;Kit and Wilks, 1998). Heuristic rules are applied in Feng et al.'s work to remove insignificant substrings. We do not use any such rule.
Multiple feature templates are used to represent word candidates of various lengths identified by the AV criterion. For the sake of efficiency, all candidates longer than five characters are given up. To accommodate the word likelihood information, we need to extend the feature representation in (Zhao and Kit, 2007), where only the candidate substrings are used as features for word segmentation. Formally put, our new feature function for a word candidate s with a score AV(s) is defined as

f_n(s) = t, if 2^t ≤ AV(s) < 2^(t+1),
where t is an integer to logarithmize the score. This is to alleviate the sparse data problem by narrowing down the feature representation involved. Note that t is used as a feature value rather than a parameter for the CRFs training in our system. For an overlap character of several word candidates, we only choose the one with the greatest AV score to activate the above feature function for that character. It is in this way that the unsupervised segmentation outcomes are fit into the CRFs learning.
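A naive rendering of the AV computation and the logarithmized feature value might look as follows; the suffix-array speedup used in the paper is replaced here by a direct scan, and the boundary symbols are our assumption.

import math
from collections import defaultdict

def accessor_variety(text, max_len=5):
    # AV(s) = min{L_av(s), R_av(s)}: the smaller of the numbers of
    # distinct predecessor and successor characters of substring s.
    left, right = defaultdict(set), defaultdict(set)
    for i in range(len(text)):
        for n in range(1, max_len + 1):
            if i + n > len(text):
                break
            s = text[i:i + n]
            left[s].add(text[i - 1] if i > 0 else "<S>")
            right[s].add(text[i + n] if i + n < len(text) else "</S>")
    return {s: min(len(left[s]), len(right[s])) for s in left}

def av_feature_value(av_score):
    # f_n(s) = t with 2^t <= AV(s) < 2^(t+1): the logarithmized
    # feature value fed to the CRF.
    return int(math.log2(av_score)) if av_score > 0 else 0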
Features for Open NER
Three extra groups of feature template are used for the open NER beyond those for the closed. The first group includes three segmentation feature templates. One is the character type feature template T(C_-1)T(C_0)T(C_1), where T(C) is the type of character C. For this, five character types are defined, namely, number, foreign letter, punctuation, date and time, and others. The other two are generated respectively by two assistant segmenters (Zhao et al., 2006a): a maximal matching segmenter based on a dictionary from Peking University 3 and a CRFs segmenter using the 6-tag set and the six n-gram feature templates for training. The second group comes from the outputs of two assistant NE recognizers (ANERs), both trained with a corresponding 6-tag set and the same six n-gram feature templates. They share a similar feature representation as the assistant segmenter. Table 3 lists the training corpora for the assistant CRFs segmenter and the ANERs for various open NER tests.
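The dictionary-based assistant segmenter can be approximated by forward maximal matching, as in the sketch below; the matching direction and the maximum word length are our assumptions.

def maximal_match(sentence, lexicon, max_word_len=4):
    # Greedily take the longest lexicon word at each position,
    # falling back to a single character when nothing matches.
    words, i = [], 0
    while i < len(sentence):
        for n in range(min(max_word_len, len(sentence) - i), 0, -1):
            if n == 1 or sentence[i:i + n] in lexicon:
                words.append(sentence[i:i + n])
                i += n
                break
    return words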
The third group consists of feature templates generated from seven NE lists acquired from Chinese Wikipedia. 4 The categories and numbers of these NE items are summarized in Table 4.
Evaluation Results
The performance of both word segmentation and NER is measured in terms of the F-measure F = 2RP/(R + P ), where R and P are the recall and precision of segmentation or NER. We tested the techniques described above with the previous Bakeoffs' data 5 (Sproat and Emerson, 2003;Emerson, 2005;Levow, 2006). The evaluation results for the closed tests of word segmentation are reported in Table 5 and those for the NER on two corpora of Bakeoff-3 are in the upper part of Table 7. '+/-AV' indicates whether AV features are applied.
For Bakeoff-4, we participated in all five closed tracks of word segmentation, namely, CityU, CKIP, CTB, NCC, and SXU, and in all closed and open NER tracks of CityU and MSRA. 6 The evaluation .7429 a F-score for in-vocabulary (IV) words. b Henceforth the official evaluation results in Bakeoff-4 are marked with "*". results of word segmentation and NER for our system are presented in Tables 6 and 7, respectively. For the purpose of comparison, the word segmentation performance of our system on Bakeoff-4 data using the 2-and 4-tag sets and the best corresponding n-gram feature templates as in (Tsai et al., 2006;Low et al., 2005) are presented in Table 8. 7 This comparison reconfirms the conclusion in (Zhao et CityU data sets in any other situation than the Bakeoff. 7 The templates for the 2-tag set, adopted from (Tsai et al., 2006), include C −2 , C −1 , C 0 , C 1 , C −3 C −1 , C −2 C 0 , C −2 C −1 , C −1 C 0 , C −1 C 1 and C 0 C 1 . Those for the 4-tag set, adopted from (Xue, 2003) and (Low et al., 2005), include C −2 , C −1 , C 0 , C 1 , C 2 , C −2 C −1 , C −1 C 0 , C −1 C 1 , C 0 C 1 and C 1 C 2 . al., 2006b) about tag set selection for character tagging for word segmentation that the 6-tag set is more effective than others, each with its own best corresponding feature template set.
Discussion
Tag Set and Computational Cost
Using more labels in CRFs learning is expected to bring in performance enhancement. Inevitably, however, it also leads to a huge rise of computational cost for model training. We conducted a series of experiments to study the computational cost of CRFs training with different tag sets using Bakeoff-3 data.
The experimental results are given in Table 9, showing that the 6-tag set costs nearly twice as much time as the 4-tag set and about three times as much as the 2-tag set. Fortunately, its memory cost with the six n-gram feature templates remains very close to that of the 2- and 4-tag sets with the n-gram feature template sets from (Tsai et al., 2006; Xue, 2003). However, a 2-tag set is popular in use for word segmentation and NER for the reason that CRFs training is very computationally expensive and a large tag set would make the situation worse. Certainly, a possible way out of this problem is the computer hardware advancement, which is predicted by Moore's Law (Moore, 1965) to be improving at an exponential rate in general, including processing speed and memory capacity. Specifically, CPU can be made twice faster every other year or even 18 months. It is predictable that computational cost will not be a problem for CRFs training soon, and the advantages of using a larger tag set as in our approach will be shared by more others.
Unsupervised Segmentation Features
Our evaluation results show that the unsupervised segmentation features bring in performance improvement on both word segmentation and NER for all tracks except CTB segmentation, as highlighted in Table 6. We are unable to explain this yet, and can only attribute it to some unique text characteristics of the CTB segmented corpus. An unsupervised segmentation criterion provides a kind of global information over the whole text of a corpus. Its effectiveness is certainly sensitive to text characteristics. Quite a number of other unsupervised segmentation criteria are available for word discovery in unlabeled texts, e.g., boundary entropy (Tung and Lee, 1994; Chang and Su, 1997; Huang and Powers, 2003; Jin and Tanaka-Ishii, 2006) and description-length-gain (DLG) (Kit and Wilks, 1999). We found that among them AV could help the CRFs model to achieve a better performance than others, although the overall unsupervised segmentation by DLG was slightly better than that by AV. Combining any two of these criteria did not give any further performance improvement. This is why we have opted for AV for Bakeoff-4.
NE List Features for Open NER
We realize that the NE lists available to us are far from sufficient for coping with all NEs in Bakeoff-4. It is reasonable that using richer external NE lists gives a better NER performance in many cases. Surprisingly, however, the NE list features used in our NER do not lead to any significant performance improvement, according to the evaluation results in Table 7. This is certainly another issue for our further inspection.
Conclusion
Without doubt our achievements in Bakeoff-4 owes not only to the careful selection of character tag set and feature templates for exerting the strength of CRFs learning but also to the effectiveness of our unsupervised segmentation approach. It is for the sake of simplicity that similar sets of character tags and feature templates are applied to two distinctive labeling tasks, word segmentation and NER. Relying on little preprocessing and postprocessing, our system simply follows the plain training and test routines of machine learning practice with the CRFs model and achieves the best or nearly the best results for all tracks of Bakeoff-4 in which we participated. Simple is beautiful, as Albert Einstein said, "Everything should be made as simple as possible, but not one bit simpler." Our evaluation results also provide evidence that simple can be powerful too.
Table 1: An example of NE tagging for a character sequence

Characters | O W W ì q ¯ü Ñ
Tags       | B-ORG B2-ORG B3-ORG E-ORG O S-LOC O O O O

Table 2: Illustration of character tagging

Word length | Tag sequence for a word
1           | S
2           | B E
3           | B B2 E
4           | B B2 B3 E
5           | B B2 B3 M E
6           | B B2 B3 M · · · M E
Table 3: Training corpora for assistant learners

Track     | CityU NER              | MSRA NER
Ass. Seg. | CityU (Bakeoff-1 to 4) | MSRA (Bakeoff-2)
ANER-1    | CityU (Bakeoff-3)      | CityU (Bakeoff-3)
ANER-2    | MSRA (Bakeoff-3)       | CityU (Bakeoff-4)
Table 4: NE lists from Chinese Wikipedia

Category                        | Number
Place name suffix               | 85
Chinese place name              | 6,367
Foreign place name              | 1,626
Chinese family name             | 573
Most common Chinese family name | 109
Foreign name                    | 2,591
Chinese university              | 515
Table 5: Segmentation results for previous Bakeoffs

Bakeoff-1        AS     CityU  CTB    PKU
-AV  F           .9727  .9473  .8720  .9558
     R_OOV (a)   .7907  .7576  .7022  .7078
+AV  F           .9725  .9554  .9023  .9612
     R_OOV       .7597  .7616  .7502  .7208

Bakeoff-2        AS     CityU  MSRA   PKU
-AV  F           .9534  .9476  .9735  .9515
     R_OOV       .6812  .6920  .7496  .6720
+AV  F           .9570  .9610  .9758  .9540
     R_OOV       .6993  .7540  .7446  .6765

Bakeoff-3        AS     CityU  CTB    MSRA
-AV  F           .9538  .9691  .9322  .9608
     R_OOV       .6699  .7815  .7095  .6658
+AV  F           .9586  .9747  .9431  .9660
     R_OOV       .6935  .8005  .7608  .6620

(a) Recall of out-of-vocabulary (OOV) words.
Table 6: Evaluation results of word segmentation on Bakeoff-4 data sets

Feature       Data   F      P      R      F_IV(a) P_IV   R_IV   F_OOV  P_OOV  R_OOV
-AV (n-gram)  CityU  .9426  .9410  .9441  .9640   .9636  .9645  .7063  .6960  .7168
              CKIP   .9421  .9387  .9454  .9607   .9581  .9633  .7113  .7013  .7216
              CTB    .9634  .9641  .9627  .9738   .9761  .9715  .7924  .7719  .8141
              NCC    .9333  .9356  .9311  .9536   .9612  .9461  .5678  .5182  .6280
              SXU    .9552  .9559  .9544  .9721   .9767  .9675  .6640  .6223  .7116
+AV (*b)      CityU  .9510  .9493  .9526  .9667   .9626  .9708  .7698  .7912  .7495
              CKIP   .9470  .9440  .9501  .9623   .9577  .9669  .7524  .7649  .7404
              CTB    .9589  .9596  .9583  .9697   .9704  .9691  .7745  .7761  .7730
              NCC    .9405  .9407  .9402  .9573   .9583  .9562  .6080  .5984  .6179
              SXU    .9623  .9625  .9622  .9752   .9764  .9740  .7292  .7159  .7429

(a) F-score for in-vocabulary (IV) words.
(b) Henceforth the official evaluation results in Bakeoff-4 are marked with "*".
Table 7: NER evaluation results

Track       Setting   F_PER  F_LOC  F_ORG  F_NE
Bakeoff-3
CityU       -AV       .8849  .9219  .7905  .8807
            +AV       .9063  .9281  .7981  .8918
MSRA        -AV       .7851  .9072  .8242  .8525
            +AV       .8171  .9139  .8164  .8630
Bakeoff-4
CityU       -AV       .8222  .8682  .6801  .8092
            +AV*      .8362  .8677  .6852  .8152
            Open1*    .9125  .9216  .7862  .8869
            Open2     .9137  .9214  .7853  .8870
MSRA        -AV       .9221  .9193  .8367  .8968
            +AV*      .9319  .9219  .8414  .9020
            Open*     1.000  .9960  .9920  .9958
            Open1(a)  .9710  .9601  .9352  .9558
            Open2(b)  .9699  .9581  .9359  .9548

(a) For our official submission to Bakeoff-4, we also used an ANER trained on the MSRA NER training corpus of Bakeoff-3. This makes our official evaluation results extremely high but trivial, for a part of this corpus is used as the MSRA NER test corpus for Bakeoff-4. Presented here are the results without using this ANER.
(b) Open2 is the result of Open1 using no NE list feature.
Table 8: Segmentation F-scores by different tag sets

AV  Tags  CityU  CKIP   CTB    NCC    SXU
-   2     .9303  .9277  .9434  .9198  .9454
-   4     .9370  .9348  .9481  .9280  .9512
-   6     .9426  .9421  .9634  .9333  .9552
+   2     .9382  .9319  .9451  .9239  .9485
+   4     .9482  .9423  .9527  .9356  .9593
+   6     .9510  .9470  .9589  .9405  .9623
Table 9: Comparison of computational cost

Tags  Templates  AS    CityU  CTB  MSRA
Training time (Minutes)
2     Tsai       112   52     16   35
4     Xue        206   79     28   73
6     Zhao       402   146    47   117
Feature numbers (×10^6)
2     Tsai       13.2  7.3    3.1  5.5
4     Xue        16.1  9.0    3.9  6.8
6     Zhao       15.6  8.8    3.8  6.6
Memory cost (Giga bytes)
2     Tsai       5.4   2.4    0.9  1.8
4     Xue        6.6   2.8    1.1  2.2
6     Zhao       6.4   2.7    1.0  2.1
The Fourth International Chinese Language Processing Bakeoff & the First CIPS Chinese Language Processing Evaluation, at http://www.china-language.gov.cn/bakeoff08/bakeoff-08 basic.html.
http://crfpp.sourceforge.net/
It consists of about 108K words of one to four characters long, available at http://ccl.pku.edu.cn/doubtfire/Course/Chinese%20Information%20Processing/Source Code/Chapter 8/Lexicon full.zip.
http://zh.wikipedia.org/wiki/Ä
5 http://www.sighan.org
6 We declare that our team has never been exposed to the CityU data sets in any other situation than the Bakeoff.
AcknowledgementsThe research described in this paper was supported by the Research Grants Council of Hong Kong S.A.R., China, through the CERG grant 9040861 (CityU 1318/03H) and by City University of Hong Kong through the Strategic Research Grant 7002037. Dr. Hai Zhao was supported by a Postdoctoral Research Fellowship in the Department of Chinese, Translation and Linguistics, City University of Hong Kong.
An unsupervised iterative method for Chinese new lexicon extraction. Jing-Shin Chang, Keh-Yih Su, Computational Linguistics and Chinese Language Processing. 2Jing-Shin Chang and Keh-Yih Su. 1997. An unsuper- vised iterative method for Chinese new lexicon ex- traction. Computational Linguistics and Chinese Lan- guage Processing, 2(2):97-148.
Chinese named entity recognition with conditional random fields. Wenliang Chen, Yujie Zhang, Hitoshi Isahara, SIGHAN-5. Sydney, AustraliaWenliang Chen, Yujie Zhang, and Hitoshi Isahara. 2006. Chinese named entity recognition with conditional random fields. In SIGHAN-5, pages 118-121, Sydney, Australia, July 22-23.
The second international Chinese word segmentation bakeoff. Thomas Emerson , SIGHAN-4. Jeju Island, KoreaThomas Emerson. 2005. The second international Chi- nese word segmentation bakeoff. In SIGHAN-4, pages 123-133, Jeju Island, Korea, October 14-15.
Accessor variety criteria for Chinese word extraction. Haodi Feng, Kang Chen, Xiaotie Deng, Weimin Zheng, Computational Linguistics. 301Haodi Feng, Kang Chen, Xiaotie Deng, and Weimin Zheng. 2004a. Accessor variety criteria for Chi- nese word extraction. Computational Linguistics, 30(1):75-93.
Unsupervised segmentation of Chinese corpus using accessor variety. Haodi Feng, Kang Chen, Chunyu Kit, Xiaotie Deng, First International Joint Conference on Natural Language Processing (IJCNLP-04). K. Y. Su, J. Tsujii, J. H. Lee & O. Y. KwongSanya, Hainan Island, ChinaSpringer3248Natural Language Processing -IJCNLPHaodi Feng, Kang Chen, Chunyu Kit, and Xiaotie Deng. 2004b. Unsupervised segmentation of Chinese cor- pus using accessor variety. In First International Joint Conference on Natural Language Processing (IJCNLP-04), pages 255-261, Sanya, Hainan Island, China, March 22-24. Also in K. Y. Su, J. Tsujii, J. H. Lee & O. Y. Kwong (eds.), Natural Language Pro- cessing -IJCNLP 2004, LNAI 3248, pages 694-703. Springer.
From phoneme to morpheme. Harris Zellig Sabbetai, Language. 312Zellig Sabbetai Harris. 1955. From phoneme to mor- pheme. Language, 31(2):190-222.
Morpheme boundaries within words. Harris Zellig Sabbetai, Papers in Structural and Transformational Linguistics. Zellig Sabbetai Harris. 1970. Morpheme boundaries within words. In Papers in Structural and Transformational Linguistics, pages 68-77.
Chinese word segmentation based on contextual entropy. Jin Hu Huang, David Powers, Dong Hong Ji and Kim-Ten LuaCOLIPS PublicationSentosa, Singapore, OctoberJin Hu Huang and David Powers. 2003. Chinese word segmentation based on contextual entropy. In Dong Hong Ji and Kim-Ten Lua, editors, PACLIC - 17, pages 152-158, Sentosa, Singapore, October, 1-3. COLIPS Publication.
Unsupervised segmentation of Chinese text by use of branching entropy. Zhihui Jin, Kumiko Tanaka-Ishii, COLING/ACL-2006. Sidney, AustraliaZhihui Jin and Kumiko Tanaka-Ishii. 2006. Unsuper- vised segmentation of Chinese text by use of branch- ing entropy. In COLING/ACL-2006, pages 428-435, Sidney, Australia, July 17-21.
The virtual corpus approach to deriving n-gram statistics from large scale corpora. Chunyu Kit, Yorick Wilks, Proceedings of 1998 International Conference on Chinese Information Processing Conference. Changning Huang1998 International Conference on Chinese Information Processing ConferenceBeijingChunyu Kit and Yorick Wilks. 1998. The virtual corpus approach to deriving n-gram statistics from large scale corpora. In Changning Huang, editor, Proceedings of 1998 International Conference on Chinese Informa- tion Processing Conference, pages 223-229, Beijing, Nov. 18-20.
Unsupervised learning of word boundary with description length gain. Chunyu Kit, Yorick Wilks, CoNLL-99. M. Osborne and E. T. K. SangBergen, NorwayChunyu Kit and Yorick Wilks. 1999. Unsupervised learning of word boundary with description length gain. In M. Osborne and E. T. K. Sang, editors, CoNLL-99, pages 1-6, Bergen, Norway.
Improving Chinese word segmentation with description length gain. Chunyu Kit, Hai Zhao, 2007 International Conference on Artificial Intelligence (ICAI'07). Las VegasChunyu Kit and Hai Zhao. 2007. Improving Chi- nese word segmentation with description length gain. In 2007 International Conference on Artificial Intelli- gence (ICAI'07), Las Vegas, June 25-28.
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. John D Lafferty, Andrew Mccallum, Fernando C N Pereira, ICML'2001. San Francisco, CAJohn D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilis- tic models for segmenting and labeling sequence data. In ICML'2001, pages 282-289, San Francisco, CA.
The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. Gina-Anne Levow, SIGHAN-5. Sydney, AustraliaGina-Anne Levow. 2006. The third international Chi- nese language processing bakeoff: Word segmentation and named entity recognition. In SIGHAN-5, pages 108-117, Sydney, Australia, July 22-23.
A maximum entropy approach to Chinese word segmentation. Jin Kiat Low, Hwee Tou Ng, Wenyuan Guo, SIGHAN-4. Jeju Island, KoreaJin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word seg- mentation. In SIGHAN-4, pages 161-164, Jeju Island, Korea, October 14-15.
Suffix arrays: A new method for on-line string searches. Udi Manber, Gene Myers, SIAM Journal on Computing. 225Udi Manber and Gene Myers. 1993. Suffix arrays: A new method for on-line string searches. SIAM Journal on Computing, 22(5):935-948.
Cramming more components onto integrated circuits. Gordon E Moore, Electronics. 38Gordon E. Moore. 1965. Cramming more components onto integrated circuits. Electronics, 3(8), April 19.
The first international Chinese word segmentation bakeoff. Richard Sproat, Thomas Emerson, SIGHAN-2. Sapporo, JapanRichard Sproat and Thomas Emerson. 2003. The first international Chinese word segmentation bakeoff. In SIGHAN-2, pages 133-143, Sapporo, Japan.
On closed task of Chinese word segmentation: An improved CRF model coupled with character clustering and automatically generated template matching. Richard Tzong-Han Tsai, Hsieh-Chuan, Cheng-Lung Hung, Hong-Jie Sung, Wen-Lian Dai, Hsu, SIGHAN-5. Sydney, AustraliaRichard Tzong-Han Tsai, Hsieh-Chuan Hung, Cheng- Lung Sung, Hong-Jie Dai, and Wen-Lian Hsu. 2006. On closed task of Chinese word segmentation: An im- proved CRF model coupled with character clustering and automatically generated template matching. In SIGHAN-5, pages 108-117, Sydney, Australia, July 22-23.
Identification of unknown words from corpus. His-Jian Cheng-Huang Tung, Lee, Computational Proceedings of Chinese and Oriental Languages. 8Cheng-Huang Tung and His-Jian Lee. 1994. Iden- tification of unknown words from corpus. Compu- tational Proceedings of Chinese and Oriental Lan- guages, 8:131-145.
Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing. Nianwen Xue, 8Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29-48.
Word segmentation and named entity recognition for SIGHAN Bakeoff3. Suxiang Zhang, Ying Qin, Juan Wen, Xiaojie Wang, SIGHAN-5. Sydney, AustraliaSuxiang Zhang, Ying Qin, Juan Wen, and Xiaojie Wang. 2006. Word segmentation and named entity recog- nition for SIGHAN Bakeoff3. In SIGHAN-5, pages 158-161, Sydney, Australia, July 22-23.
Incorporating global information into supervised learning for Chinese word segmentation. Hai Zhao, Chunyu Kit, PACLING-2007. Melbourne, AustraliaHai Zhao and Chunyu Kit. 2007. Incorporating global information into supervised learning for Chinese word segmentation. In PACLING-2007, pages 66-74, Mel- bourne, Australia, September 19-21.
An improved Chinese word segmentation system with conditional random field. Hai Zhao, Chang-Ning Huang, Mu Li, SIGHAN-5. Sydney, AustraliaHai Zhao, Chang-Ning Huang, and Mu Li. 2006a. An improved Chinese word segmentation system with conditional random field. In SIGHAN-5, pages 162- 165, Sydney, Australia, July 22-23.
Effective tag set selection in Chinese word segmentation via conditional random field modeling. Hai Zhao, Chang-Ning Huang, Mu Li, Bao-Liang Lu, PACLIC-20. Wuhan, ChinaHai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006b. Effective tag set selection in Chinese word segmentation via conditional random field modeling. In PACLIC-20, pages 87-94, Wuhan, China, Novem- ber 1-3. |
1,234,375 | Text-level Discourse Dependency Parsing | Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing. | [
1157793,
5187426,
11919464,
6174034,
360083,
1421908,
12926517,
15893207,
3262717,
901375
] | Text-level Discourse Dependency Parsing
June 23-25
Sujian Li
Key Laboratory of Computational Linguistics
MOE
Peking University
China
Liang Wang
Key Laboratory of Computational Linguistics
MOE
Peking University
China
Ziqiang Cao ziqiangyeah@pku.edu.cn
Key Laboratory of Computational Linguistics
MOE
Peking University
China
Wenjie Li cswjli@comp.polyu.edu.hk
Department of Computing
The Hong Kong Polytechnic University
HongKong
Text-level Discourse Dependency Parsing
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics
the 52nd Annual Meeting of the Association for Computational LinguisticsBaltimore, Maryland, USAJune 23-25
Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.
Introduction
It is widely agreed that no units of the text can be understood in isolation, but in relation to their context. Researches in discourse parsing aim to acquire such relations in text, which is fundamental to many natural language processing applications such as question answering, automatic summarization and so on.
One important issue behind discourse parsing is the representation of discourse structure. Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), one of the most influential discourse theories, posits a hierarchical generative tree representation, as illustrated in Figure 1. The leaves of a tree correspond to contiguous text spans called Elementary Discourse Units (EDUs) 1 . The adjacent EDUs are combined into 1 EDU segmentation is a relatively trivial step in discourse parsing. Since our work focus here is not EDU segmentation but discourse parsing. We assume EDUs are already known. the larger text spans by rhetorical relations (e.g., Contrast and Elaboration) and the larger text spans continue to be combined until the whole text constitutes a parse tree. The text spans linked by rhetorical relations are annotated as either nucleus or satellite depending on how salient they are for interpretation. It is attractive and challenging to parse the whole text into one tree.
Since such a hierarchical discourse tree is analogous to a constituency based syntactic tree except that the constituents in the discourse trees are text spans, previous researches have explored different constituency based syntactic parsing techniques (eg. CKY and chart parsing) and various features (eg. length, position et al.) for discourse parsing (Soricut and Marcu, 2003;Joty et al., 2012;Reitter, 2003;LeThanh et al., 2004;Baldridge and Lascarides, 2005;Subba and Di Eugenio, 2009;Sagae, 2009;Hernault et al., 2010b;Feng and Hirst, 2012). However, the existing approaches suffer from at least one of the following three problems. First, it is difficult to design a set of production rules as in syntactic parsing, since there are no determinate generative rules for the interior text spans. Second, the different levels of discourse units (e.g. EDUs or larger text spans) occurring in the generative process are better represented with different features, and thus a uniform framework for discourse analysis is hard to develop. Third, to reduce the time complexity of the state-of-the-art constituency based parsing techniques, the approximate parsing approaches are prone to trap in local maximum.
In this paper, we propose to adopt the dependency structure in discourse representation to overcome the limitations mentioned above. Here is the basic idea: the discourse structure consists of EDUs which are linked by the binary, asymmetrical relations called dependency relations. A dependency relation holds between a subordinate EDU called the dependent, and another EDU on which it depends called the head, as illustrated in Figure 2. Each EDU has one head. So, the dependency structure can be seen as a set of headdependent links, which are labeled by functional relations. Now, we can analyze the relations between EDUs directly, without worrying about any interior text spans. Since dependency trees contain much fewer nodes and on average they are simpler than constituency based trees, the current dependency parsers can have a relatively low computational complexity. Moreover, concerning linearization, it is well known that dependency structures can deal with non-projective relations, while constituency-based models need the addition of complex mechanisms like transformations, movements and so on. In our work, we adopt the graph based dependency parsing techniques learned from large sets of annotated dependency trees. The Eisner (1996) algorithm and maximum spanning tree (MST) algorithm are used respectively to parse the optimal projective and non-projective dependency trees with the large-margin learning technique (Crammer and Singer, 2003). To the best of our knowledge, we are the first to apply the dependency structure and introduce the dependency parsing techniques into discourse analysis.
The rest of this paper is organized as follows. Section 2 formally defines discourse dependency structure and introduces how to build a discourse dependency treebank from the existing RST corpus. Section 3 presents the discourse parsing approach based on the Eisner and MST algorithms. Section 4 elaborates on the large-margin learning technique as well as the features we use. Section 5 discusses the experimental results. Section 6 introduces the related work and Section 7 concludes the paper.
Discourse Dependency Structure
Similar to the syntactic dependency structure defined by McDonald (2005a, 2005b), we insert an artificial EDU e_0 in the beginning for each document and label the dependency relation linking from e_0 as ROOT. This treatment will simplify both formal definitions and computational implementations. Normally, we assume that each EDU should have one and only one head except for e_0. A labeled directed arc is used to represent the dependency relation from one head to its dependent. Then, discourse dependency structure can be formalized as the labeled directed graph, where nodes correspond to EDUs and labeled arcs correspond to labeled dependency relations.
We assume that the text 2 T is composed of n+1 EDUs including the artificial e_0. That is, T = e_0 e_1 e_2 ... e_n. Let R = {r_1, r_2, ..., r_m} denote a finite set of functional relations that hold between two EDUs. Then a discourse dependency graph can be denoted by G = <V, A> where V denotes a set of nodes and A denotes a set of labeled directed arcs, such that for the text T = e_0 e_1 e_2 ... e_n and the label set R the following holds:
(1) V = {e_0, e_1, e_2, ..., e_n}
(2) A ⊆ V × R × V, where <e_i, r, e_j> ∈ A represents an arc from the head e_i to the dependent e_j labeled with the relation r.
(3) If <e_i, r, e_j> ∈ A then <e_k, r', e_j> ∉ A for all k ≠ i.
(4) If <e_i, r, e_j> ∈ A then <e_i, r', e_j> ∉ A for all r' ≠ r.
The third condition assures that each EDU has one and only one head and the fourth tells that only one kind of dependency relation holds between two EDUs. According to the definition, we illustrate all the 9 possible unlabeled dependency trees for a text containing three EDUs in Figure 2. The dependency trees 1' to 7' are projective while 8' and 9' are non-projective with crossing arcs.
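Conditions (1)-(4) are easy to check mechanically. The sketch below validates a candidate arc set under these constraints plus reachability from e_0; the triple encoding of arcs is our choice, not the paper's.

def is_valid_dependency_tree(n, arcs):
    # arcs: set of (head, relation, dependent) triples over nodes 0..n.
    heads = {}
    for h, _, d in arcs:
        if d == 0 or d in heads:     # e0 has no head; one head/relation per EDU
            return False
        heads[d] = h
    if set(heads) != set(range(1, n + 1)):
        return False
    # each non-root node has exactly one head, so reachability from e0
    # guarantees a tree
    children = {}
    for h, _, d in arcs:
        children.setdefault(h, []).append(d)
    seen, stack = {0}, [0]
    while stack:
        for d in children.get(stack.pop(), []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return len(seen) == n + 1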
Our Discourse Dependency Treebank
To automatically conduct discourse dependency parsing, constructing a discourse dependency treebank is fundamental. It is costly to manually construct such a treebank from scratch. Fortunately, RST Discourse Treebank (RST-DT) (Carlson et al., 2001) is an available resource to help with.
A RST tree constitutes a hierarchical structure for one document through rhetorical relations. A total of 110 fine-grained relations (e.g. Elaboration-part-whole and List) were used for tagging RST-DT. They can be categorized into 18 classes (e.g. Elaboration and Joint). All these relations can be hypotactic ("mononuclear") or paratactic ("multi-nuclear"). A hypotactic relation holds between a nucleus span and an adjacent satellite span, while a paratactic relation connects two or more equally important adjacent nucleus spans. For convenience of computation, we convert the n-ary (n>2) RST trees 3 to binary trees through adding a new node for the latter n-1 nodes and assume each relation is connected to only one nucleus 4 . This departure from the original theory is not such a major step as it may appear, since any nucleus is known to contribute to the essential meaning. Now, each RST tree can be seen as a headed constituency based binary tree where the nuclei are heads and the children of each node are linearly ordered. Given three EDUs 5 , Figure 1 shows the possible 8 headed constituency based trees where the superscript * denotes the heads (nuclei). We use dependency trees to simulate the headed constituency based trees.
Contrasting Figure 1 with Figure 2, we use dependency tree 1' to simulate binary trees 1 and 8, and dependency trees 2'-7' to simulate binary trees 2-7 correspondingly. The rhetorical relations in RST trees are kept as the functional relations which link the two EDUs in dependency trees. With this kind of conversion, we can get our discourse dependency treebank. It is worth noting that the non-projective trees like 8' and 9' do not exist in our dependency treebank, though they are eligible according to the definition of discourse dependency graph.
Discourse Dependency Parsing
System Overview
As stated above, T = e_0 e_1 ... e_n represents an input text (document) where e_i denotes the i-th EDU of T. We use V to denote all the EDU nodes and V × R × V^-0 (V^-0 = V - {e_0}) to denote all the possible discourse dependency arcs. The goal of discourse dependency parsing is to parse an optimal spanning tree from V × R × V^-0. Here we follow the arc-factored method and define the score of a dependency tree as the sum of the scores of all the arcs in the tree. Thus, the optimal dependency tree for T is a spanning tree with the highest score, obtained through the function DT(T, w):
DT(T, w) = argmax_{G_T = <V, A>, A ⊆ V × R × V^-0} score(T, G_T)
         = argmax_{G_T} Σ_{<e_i, r, e_j> ∈ A} λ(e_i, r, e_j)

where λ(e_i, r, e_j) = w · f(e_i, r, e_j) denotes the score of the arc <e_i, r, e_j>, which is calculated according to its feature representation f(e_i, r, e_j) and a weight vector w.
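Under the arc-factored model, scoring reduces to a sum of per-arc dot products. The sketch below assumes sparse feature dictionaries, which the paper does not prescribe.

def arc_score(w, f, head, rel, dep):
    # lambda(e_i, r, e_j) = w . f(e_i, r, e_j): dot product of the weight
    # vector with the arc's sparse feature vector (dict of name -> value).
    return sum(w.get(name, 0.0) * value
               for name, value in f(head, rel, dep).items())

def tree_score(w, f, arcs):
    # Arc-factored model: the tree score is the sum of its arc scores.
    return sum(arc_score(w, f, h, r, d) for h, r, d in arcs)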
Next, two basic problems need to be solved: how to find the dependency tree with the highest score for T given all the arc scores (i.e. a parsing problem), and how to learn and compute the scores of arcs according to a set of arc features (i.e. a learning problem).
The following of this section addresses the first problem. Given the text T, we first reduce the multi-digraph composed of all possible arcs to a digraph. The digraph keeps only one arc <e_i, r, e_j> between two nodes, namely the one whose relation satisfies

r = argmax_{r' ∈ R} λ(e_i, r', e_j).

Thus, we can proceed with a reduction from labeled parsing to unlabeled parsing. Next, two algorithms, i.e. the Eisner algorithm and the MST algorithm, are presented to parse the projective and non-projective unlabeled dependency trees respectively.
Eisner Algorithm
It is well known that projective dependency parsing can be handled with the Eisner algorithm (1996) which is based on the bottom-up dynamic programming techniques with the time complexity of O(n 3 ). The basic idea of the Eisner algorithm is to parse the left and right dependents of an EDU independently and combine them at a later stage. This reduces the overhead of indexing heads. Only two binary variables, i.e. c and d, are required to specify whether the heads occur leftmost or rightmost and whether an item is complete.
Eisner(T, λ)
Input: Text T = e_0 e_1 … e_n; arc scores λ(e_i, e_j)
1   Instantiate E[i, i, d, c] = 0.0 for all i, d, c
2   For m := 1 to n
3     For i := 0 to n
4       j = i + m
5       if j > n then break
6       # Create incomplete subgraphs (c = 0) by adding arcs
7       E[i, j, 0, 0] = max_{i ≤ q < j} (E[i, q, 1, 1] + E[q+1, j, 0, 1]) + λ(e_j, e_i)
8       E[i, j, 1, 0] = max_{i ≤ q < j} (E[i, q, 1, 1] + E[q+1, j, 0, 1]) + λ(e_i, e_j)
9       # Add corresponding left/right subgraphs (c = 1)
10      E[i, j, 0, 1] = max_{i ≤ q < j} (E[i, q, 0, 1] + E[q, j, 0, 0])
11      E[i, j, 1, 1] = max_{i < q ≤ j} (E[i, q, 1, 0] + E[q, j, 1, 1])
Figure 3: Eisner Algorithm

Figure 3 shows the pseudo-code of the Eisner algorithm. A dynamic programming table E[i, j, d, c] is used to represent the highest-scored subtree spanning e_i to e_j. d indicates whether e_i is the head (d = 1) or e_j is the head (d = 0). c indicates whether the subtree will not take any more dependents (c = 1) or still needs to be completed (c = 0). The algorithm begins by initializing all length-one subtrees to a score of 0.0. In the inner loop, the first two steps (Lines 7 and 8) construct new dependency arcs by taking the maximum over all the internal indices (i ≤ q < j) in the span and calculating the value of merging the two subtrees and adding one new arc. The last two steps (Lines 10 and 11) build an optimal left/right complete subtree in the span by attaching the corresponding left/right subtree to the arcs added previously. The algorithm considers all possible subtrees. We can then obtain the optimal dependency tree with the score E[0, n, 1, 1].
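For illustration, here is a compact Python rendering of the dynamic program in Figure 3. It is a sketch under the assumption that arc scores are given as a dense matrix; it returns only the best tree score, with the backpointers needed to recover the tree itself omitted for brevity. The function name is our own.

```python
# scores[i][j]: score of arc e_i -> e_j; index 0 is the artificial root e_0.

def eisner_best_score(scores):
    n = len(scores) - 1
    NEG = float("-inf")
    # E[i][j][d][c]: best subtree over span e_i..e_j; d = 1 if e_i is the head,
    # d = 0 if e_j is; c = 1 if complete, c = 0 if still awaiting its other half.
    E = [[[[NEG] * 2 for _ in range(2)] for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(n + 1):
        for d in range(2):
            for c in range(2):
                E[i][i][d][c] = 0.0
    for m in range(1, n + 1):                 # span length
        for i in range(n + 1 - m):
            j = i + m
            # incomplete items: merge two complete halves and add one new arc
            best = max(E[i][q][1][1] + E[q + 1][j][0][1] for q in range(i, j))
            E[i][j][0][0] = best + scores[j][i]   # new arc e_j -> e_i
            E[i][j][1][0] = best + scores[i][j]   # new arc e_i -> e_j
            # complete items: attach a complete subtree to a previously added arc
            E[i][j][0][1] = max(E[i][q][0][1] + E[q][j][0][0] for q in range(i, j))
            E[i][j][1][1] = max(E[i][q][1][0] + E[q][j][1][1] for q in range(i + 1, j + 1))
    return E[0][n][1][1]
```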
Maximum Spanning Tree Algorithm
As the bottom-up Eisner algorithm must maintain the nested structural constraint, it cannot parse non-projective dependency trees like 8' and 9' in Figure 2. However, non-projective dependencies do exist in real discourse. For example, an earlier part of a text may mainly talk about topic A while briefly mentioning topic B, and a later part may give a supplementary explanation of topic B. Such a case constitutes a non-projective tree, whose pictorial diagram is exhibited in Figure 4. Following the work of McDonald et al. (2005b), we formalize discourse dependency parsing as the search for a maximum spanning tree (MST) in a directed graph. Chu and Liu (1965) and Edmonds (1967) independently proposed virtually identical algorithms, now known as the Chu-Liu/Edmonds algorithm, for finding MSTs on directed graphs (McDonald et al., 2005b). Figure 5 shows the details of the Chu-Liu/Edmonds algorithm for discourse parsing. Each node in the graph greedily selects the incoming arc with the highest score. If a tree results, the algorithm ends. Otherwise, there must exist a cycle. The algorithm contracts the identified cycle into a single node and recalculates the scores of the arcs which go into and out of the cycle. Next, the algorithm recursively calls itself on the contracted graph. Finally, the arcs which go into or out of a cycle are recovered to connect with the original nodes in V. Like McDonald et al. (2005b), we adopt an efficient implementation of the Chu-Liu/Edmonds algorithm proposed by Tarjan (1977) with O(n^2) time complexity.
Chu-Liu-Edmonds(G, λ)
Input: Graph G = (V, A); arc scores λ(e_i, e_j)
1   A' = {⟨e_i, e_j⟩ | e_i = argmax_{e_i} λ(e_i, e_j); 1 ≤ j ≤ |V|}
2   G' = (V, A')
3   If G' has no cycles, then return G'
4   Find an arc set A_C that is a cycle in G'
5   ⟨G_C, ep⟩ = contract(G, A_C, λ)
6   G = (V, A) = Chu-Liu-Edmonds(G_C, λ)
7   For the arc ⟨e_i, e_C⟩ where ep(e_i, e_C) = e_j:
8     A = A ∪ A_C ∪ {⟨e_i, e_j⟩} − {⟨e_i, e_C⟩, ⟨a(e_j), e_j⟩}
9   For each arc ⟨e_C, e_i⟩ where ep(e_C, e_i) = e_j:
10    A = A ∪ {⟨e_j, e_i⟩} − {⟨e_C, e_i⟩}
11  V = V
12  Return G

Contract(G = (V, A), A_C, λ)
1   Let G_C be the subgraph of G excluding the nodes in C
2   Add a node e_C to G_C denoting the cycle C
3   For e_j ∈ V − C : ∃ e_i ∈ C, ⟨e_i, e_j⟩ ∈ A
4     Add arc ⟨e_C, e_j⟩ to G_C with ep(e_C, e_j) = argmax_{e_i ∈ C} λ(e_i, e_j)
5     λ(e_C, e_j) = λ(ep(e_C, e_j), e_j)
6   For e_i ∈ V − C : ∃ e_j ∈ C, ⟨e_i, e_j⟩ ∈ A
7     Add arc ⟨e_i, e_C⟩ to G_C with ep(e_i, e_C) = argmax_{e_j ∈ C} [λ(e_i, e_j) − λ(a(e_j), e_j)]
8     λ(e_i, e_C) = λ(e_i, ep(e_i, e_C)) − λ(a(ep(e_i, e_C)), ep(e_i, e_C)) + score(C)
9   Return ⟨G_C, ep⟩
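The following is a minimal recursive Python sketch of the Chu-Liu/Edmonds procedure in Figure 5 (the naive version, not Tarjan's efficient implementation). It assumes a dense score matrix with node 0 as the root; all function and variable names are our own.

```python
# scores[i][j]: score of arc e_i -> e_j; node 0 is the root and must never
# receive an arc (set scores[i][0] to float('-inf')).

def _find_cycle(head, n):
    color = [0] * n                       # 0 = unseen, 1 = on current path, 2 = done
    for start in range(1, n):
        if color[start]:
            continue
        path, v = [], start
        while v is not None and color[v] == 0:
            color[v] = 1
            path.append(v)
            v = head[v]                   # follow greedy head choices
        if v is not None and color[v] == 1:
            return path[path.index(v):]   # the nodes forming a cycle
        for u in path:
            color[u] = 2
    return None

def chu_liu_edmonds(scores):
    n = len(scores)
    head = [None] * n
    for j in range(1, n):                 # greedily pick the best incoming arc
        head[j] = max((i for i in range(n) if i != j), key=lambda i: scores[i][j])
    cycle = _find_cycle(head, n)
    if cycle is None:
        return head                       # already a tree
    cyc = set(cycle)
    cycle_score = sum(scores[head[v]][v] for v in cycle)
    rest = [v for v in range(n) if v not in cyc]
    new_id = {v: k for k, v in enumerate(rest)}
    c = len(rest)                         # id of the contracted node e_C
    NEG = float("-inf")
    s2 = [[NEG] * (c + 1) for _ in range(c + 1)]
    enter, leave = {}, {}                 # the 'ep' bookkeeping of Figure 5
    for u in rest:
        for v in rest:
            if u != v:
                s2[new_id[u]][new_id[v]] = scores[u][v]
        best_in = max(cyc, key=lambda v: scores[u][v] - scores[head[v]][v])
        s2[new_id[u]][c] = scores[u][best_in] - scores[head[best_in]][best_in] + cycle_score
        enter[u] = best_in
        best_out = max(cyc, key=lambda w: scores[w][u])
        s2[c][new_id[u]] = scores[best_out][u]
        leave[u] = best_out
    h2 = chu_liu_edmonds(s2)              # recurse on the contracted graph
    final = [None] * n
    for v in cycle:                       # keep the cycle arcs ...
        final[v] = head[v]
    for v in rest:
        hv = h2[new_id[v]]
        if hv is not None:
            final[v] = leave[v] if hv == c else rest[hv]
    u_in = rest[h2[c]]                    # ... except the one the entering arc replaces
    final[enter[u_in]] = u_in
    return final
```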
Learning
In Section 3, we assume that the arc scores are available. In fact, the score of each arc is calculated as a linear combination of feature weights.
Features
Following (Feng and Hirst, 2012; Lin et al., 2009; Hernault et al., 2010b), we explore the following 6 feature types, combined with relations, to represent each labeled arc ⟨e_i, r, e_j⟩.
(1) WORD: The first word, the last word, and the first bigram in each EDU, as well as the pair of the two first words and the pair of the two last words of the two EDUs, are extracted as features.
(2) POS: The first one and two POS tags in each EDU, and the pair of the first POS tags of the two EDUs, are extracted as features.
(3) Position: These features concern whether the two EDUs are included in the same sentence, and the positions where the two EDUs are located in a sentence, a paragraph, or the document.
(4) Length: The length of each EDU.
(5) Syntactic: POS tags of the dominating nodes, as defined in Soricut and Marcu (2003), are extracted as features. We use the syntactic trees from the Penn Treebank to find the dominating nodes.
(6) Semantic similarity: We compute the semantic relatedness between the two EDUs based on WordNet. Word pairs are extracted from (e_i, e_j) and their similarity is calculated. We thus obtain a weighted complete bipartite graph where words are nodes and similarities are weights. From this bipartite graph, we compute the maximum weighted matching and use the averaged weight of the matches as the similarity between e_i and e_j. In particular, we use the path_similarity, wup_similarity, res_similarity, jcn_similarity and lin_similarity measures provided by the nltk.wordnet.similarity package (Bird et al., 2009) for calculating word similarity.
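As an illustration of how such features can be conjoined with a candidate relation, the sketch below builds a few WORD, POS, Position and Length features as strings. The EDU fields (words, pos, sent_id) and all feature-name templates are hypothetical; they only mirror the feature types described above.

```python
# A rough sketch of feature extraction for one candidate arc <e_i, r, e_j>.
# Each EDU is assumed to carry .words, .pos and .sent_id fields (our assumption).

def extract_features(edus, i, j, r):
    ei, ej = edus[i], edus[j]
    f = []
    f.append("HW1=%s&%s" % (ei.words[0], r))            # WORD: first word of head EDU
    f.append("DW1=%s&%s" % (ej.words[0], r))            # WORD: first word of dependent EDU
    f.append("DWn=%s&%s" % (ej.words[-1], r))           # WORD: last word of dependent EDU
    f.append("WPAIR=%s_%s&%s" % (ei.words[0], ej.words[0], r))
    f.append("P1=%s&%s" % (ei.pos[0], r))               # POS: first POS tag of head EDU
    f.append("PPAIR=%s_%s&%s" % (ei.pos[0], ej.pos[0], r))
    f.append("SAMESENT=%s&%s" % (ei.sent_id == ej.sent_id, r))   # Position
    f.append("DIST=%d&%s" % (min(abs(i - j), 5), r))    # Position: bucketed EDU distance
    f.append("DLEN=%d&%s" % (min(len(ej.words), 10), r))         # Length: bucketed
    return f
```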
As for relations, we experiment with two sets of relation labels from RST-DT. One is composed of 19 coarse-grained relations, and the other of 111 fine-grained relations 6 .
MIRA-based Learning
The Margin Infused Relaxed Algorithm (MIRA) is an online algorithm for multiclass classification, extended by Taskar et al. (2003) to cope with structured classification. Figure 6 gives the pseudo-code of the MIRA algorithm (McDonald et al., 2005b). The algorithm updates the parameters w using a single training instance ⟨T_i, y_i⟩ in each iteration. On each update, MIRA attempts to keep the norm of the change to the weight vector as small as possible, subject to constraints that the correct dependency tree be scored above each incorrect dependency tree with a margin at least as large as the loss of that incorrect tree. We define the loss of a discourse dependency tree y'_i, denoted L(y_i, y'_i), as the number of EDUs that have incorrect heads. Since there are exponentially many possible incorrect dependency trees, and thus exponentially many margin constraints, we relax the optimization and keep only the single best dependency tree y'_i = DT(T_i, w^(j)) parsed under the current weight vector w^(j). In this algorithm, the successive updated values of w are accumulated and averaged to avoid overfitting.
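The core of this single-best MIRA update has a closed-form solution when only one margin constraint is kept, as described above. The sketch below shows that update over sparse tree feature vectors (the union of the arc features of a tree); the names and the dict-based representation are our own simplification.

```python
# Single-best MIRA update (a sketch): minimally change w so that the gold tree
# outscores the current best parse by a margin of at least its loss.

def mira_update(w, gold_feats, pred_feats, loss):
    diff = {}                               # f(y_i) - f(y'_i) as a sparse vector
    for f in gold_feats:
        diff[f] = diff.get(f, 0.0) + 1.0
    for f in pred_feats:
        diff[f] = diff.get(f, 0.0) - 1.0
    margin = sum(w.get(f, 0.0) * v for f, v in diff.items())
    norm2 = sum(v * v for v in diff.values())
    if norm2 == 0.0:                        # identical feature vectors: nothing to do
        return
    tau = max(0.0, (loss - margin) / norm2) # closed form for a single constraint
    for f, v in diff.items():
        w[f] = w.get(f, 0.0) + tau * v
```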
Experiments
Preparation
We test our methods experimentally using the discourse dependency treebank built as described in Section 2. The training part of the corpus is composed of 342 documents and contains 18,765 EDUs, while the test part consists of 38 documents and 2,346 EDUs. The number of EDUs per document ranges between 2 and 304. Two sets of relations are adopted. One is composed of 19 relations, and Table 1 shows the number of occurrences of each relation in the training and test corpus. The other is composed of 111 relations; due to space limitations, Table 2 only lists the 10 most frequent relations in the training corpus.
The following experiments are conducted: (1) to measure the parsing performance with different relation sets and different feature types; (2) to compare our parsing methods with the state-of-the-art discourse parsing methods. Based on the MIRA learning algorithm, the Eisner algorithm and the MST algorithm are used to parse the test documents respectively. Following the evaluation of syntactic dependency parsing, we use unlabeled accuracy, the ratio of EDUs whose heads are correctly identified, and labeled accuracy, the ratio of EDUs that have both correct heads and correct relations. Table 3 and Table 4 show the performance on the two relation sets. The numbers (1-6) represent the corresponding feature types described in Section 4.1.
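Both measures reduce to simple counts over the predicted (head, relation) pairs, as in the sketch below (a straightforward rendering, with our own function name).

```python
# Unlabeled/labeled accuracy over aligned gold and predicted (head, relation)
# pairs, one pair per non-root EDU.

def dependency_accuracy(gold, pred):
    total = len(gold)
    unlabeled = sum(g[0] == p[0] for g, p in zip(gold, pred)) / total
    labeled = sum(g == p for g, p in zip(gold, pred)) / total
    return unlabeled, labeled
```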
Table 1: Numbers of the 19 Coarse-grained Relations in the Training and Test Corpus
From Table 3 and Table 4, we can see that the addition of more feature types, except the 6th feature type (semantic similarity), promotes the performance of relation labeling, whether using the coarse-grained 19 relations or the fine-grained 111 relations. As expected, the first and second types of features (WORD and POS) are the ones which play an important role in building and labeling the discourse dependency trees. These two types of features attain similar performance on the two relation sets: with them alone, the Eisner algorithm achieves unlabeled accuracy around 0.36 and labeled accuracy around 0.26, while the MST algorithm achieves unlabeled accuracy around 0.20 and labeled accuracy around 0.14.
The third feature type (Position) is also very helpful to discourse parsing. With the addition of this feature type, both unlabeled accuracy and labeled accuracy exhibit a marked increase. In particular, when applying the MST algorithm to discourse parsing, unlabeled accuracy rises from around 0.20 to around 0.73. This result is consistent with Hernault et al.'s (2010b) work, whose experiments exhibited the usefulness of position-related features. The other two feature types, related to length and syntactic parsing, only promote the performance slightly.
As we employed the MIRA learning algorithm, it is possible to identify which specific features are useful, by looking at the weights learned for each feature on the training data. Table 5 lists the 10 features with the highest weights in absolute value for the parser which uses the coarse-grained relations, while Table 6 lists the top 10 features for the parser using the fine-grained relations. Each row denotes one feature: the left part before the symbol "&" is from one of the 6 feature types, and the right part denotes a specific relation. From Table 5 and Table 6, we can see that some features are reasonable. For example, the sixth feature in Table 5 represents that the dependency relation is preferred to be labeled Explanation when "because" is the first word of the dependent EDU. From these two tables, we also observe that most of the heavily weighted features are usually related to the highly distributed relations. When using the coarse-grained relations, the popular relations (e.g. Elaboration, Attribution and Joint) are always preferred for labeling. When using the fine-grained relations, the large relations, including List and Elaboration-object-attribute-e, are given precedence in labeling. This phenomenon is mainly caused by the sparseness of the training corpus and the imbalance of relations; to solve this problem, enlarging the training corpus is necessary. Unlike previous discourse parsing approaches, our methods naturally combine tree building and relation labeling into a uniform framework. This means that relations play a role in building the dependency tree structure. From Table 3 and Table 4, we can see that fine-grained relations are more helpful to building unlabeled discourse trees than the coarse-grained relations. The best unlabeled accuracy using 111 relations is 0.7506, better than the best performance (0.7447) using 19 relations. We can also see that the labeled accuracy using the fine-grained relations can reach 0.4309, only about 0.06 lower than the best labeled accuracy (0.4915) using the coarse-grained relations.
In addition, comparing the MST algorithm with the Eisner algorithm, Table 3 and Table 4 show that their performances do not differ significantly from each other. However, we believe that the MST algorithm has more potential in discourse dependency parsing, because our converted discourse dependency treebank contains only projective trees and thus somewhat prevents the MST algorithm from exhibiting its advantage of parsing non-projective trees. In fact, we observe that some non-projective dependencies produced by the MST algorithm are even more reasonable than their counterparts in the dependency treebank. Thus, it is important to build a manually labeled discourse dependency treebank, which will be our future work.
Comparison with Other Systems
The state-of-the-art discourse parsing methods normally produce constituency-based discourse trees. To comprehensively evaluate the performance of a labeled constituency tree, the blank tree structure ('S'), the tree structure with nuclearity indication ('N'), and the tree structure with rhetorical relation indication but no nuclearity indication ('R') are evaluated separately using the F measure (Marcu, 2000).
To compare our discourse parsers with others, we adopt MIRA and the Eisner algorithm to conduct discourse parsing with all 6 types of features, and then convert the produced projective dependency trees to constituency-based trees through their correspondence as stated in Section 2. Our parsers using the two relation sets are named Our-coarse and Our-fine respectively. The input EDUs of our parsers are from the standard segmentation of RST-DT. The other text-level discourse parsing methods are: (1) Percep-coarse: we replace MIRA with the averaged perceptron learning algorithm, and the other settings are the same as for Our-coarse; (2) HILDA-manual and HILDA-seg are from Hernault et al. (2010b)'s work, with input EDUs from RST-DT and from their own EDU segmenter respectively; (3) LeThanh indicates the results given by LeThanh et al. (2004), who built a multi-level rule-based parser and used 14 relations evaluated on 21 documents from RST-DT; (4) Marcu denotes the results given by Marcu (2000)'s decision-tree based parser, which used 15 relations evaluated on unspecified documents. Table 7 shows the performance comparison for all the parsers mentioned above. Human denotes the manual agreement between two human annotators. From this table, we can see that both our parsers perform better than all the other parsers as a whole, though our parsers are not developed directly for constituency-based trees. Our parsers do not exhibit an obvious advantage over HILDA-manual on labeling the blank tree structure, because our parsers and HILDA-manual all reach over 94% of the Human performance, and this performance level has somewhat hit a ceiling, leaving little room for further improvement. However, our parsers outperform the other parsers on both nuclearity and relation labeling. Our-coarse achieves 94.2% and 91.8% of the human F-scores on labeling nuclearity and relation respectively, while Our-fine achieves 95.2% and 87.6%. We can also see that the averaged perceptron learning algorithm, though simple, achieves a comparable performance, better than HILDA-manual. The parsers HILDA-seg, LeThanh and Marcu use their own automatic EDU segmenters and exhibit a relatively low performance. This means that EDU segmentation is important to a practical discourse parser and worth further investigation. To further compare the performance of relation labeling, we follow Hernault et al. (2010a) and use the Macro-averaged F-score (MAFS) to evaluate each relation. Due to space limitations, we do not list the F-scores for each relation. The Macro-averaged F-score is not influenced by the number of instances contained in each relation, while the Weight-averaged F-score (WAFS) weights the performance of each relation by the number of its instances. Table 8 compares our parser Our-coarse with the parsers HILDA-manual, Feng (Feng and Hirst, 2012) and Baseline. Feng (Feng and Hirst, 2012) can be seen as a strengthened version of HILDA which adopts more features and conducts feature selection. Baseline always picks the most frequent relation (i.e. Elaboration). From the results, we find that Our-coarse consistently provides superior performance for most relations over the other parsers, and therefore results in higher MAFS and WAFS.
Related Work
So far, the existing discourse parsing techniques are mainly based on two well-known treebanks. One is the Penn Discourse TreeBank (PDTB) (Prasad et al., 2007) and the other is RST-DT.
PDTB adopts the predicate-arguments representation by taking an implicit/explicit connective as a predication of two adjacent sentences (arguments). The discourse relation between each pair of sentences is then annotated independently to characterize its predication. A majority of studies regard discourse parsing as a classification task and mainly focus on exploiting various linguistic features and classifiers when using PDTB (Wellner et al., 2006; Pitler et al., 2009; Wang et al., 2010). However, the predicate-arguments annotation scheme itself has the limitation that one can only obtain local discourse relations without knowing the rich context.
In contrast, RST and its treebank enable people to derive a complete representation of the whole discourse. Researchers have begun to investigate how to construct a RST tree for a given text. Since the RST tree is similar to the constituency-based syntactic tree except that the constituent nodes are different, syntactic parsing techniques have been borrowed for discourse parsing (Soricut and Marcu, 2003; Baldridge and Lascarides, 2005; Sagae, 2009; Hernault et al., 2010b; Feng and Hirst, 2012). Soricut and Marcu (2003) use a standard bottom-up chart parsing algorithm to determine the discourse structure of sentences. Baldridge and Lascarides (2005) model the process of discourse parsing with probabilistic head-driven parsing techniques. Sagae (2009) applies a transition-based constituent parsing approach to construct a RST tree for a document. Hernault et al. (2010b) develop a greedy bottom-up tree building strategy for discourse parsing, in which the two adjacent text spans with the closest relations are combined in each iteration. Extending Hernault's work, Feng and Hirst (2012) further explore various features aiming to achieve better performance. However, as analyzed in Section 1, three limitations exist with the constituency-based discourse representation and parsing. We instead adopt the dependency structure, which can be derived from the existing RST-DT, to represent the discourse. To the best of our knowledge, this work is the first to apply dependency structure and dependency parsing techniques in discourse analysis.
Conclusions
In this paper, we present the benefits and feasibility of applying dependency structure in text-level discourse parsing. Through the correspondence between constituency-based trees and dependency trees, we build a discourse dependency treebank by converting the existing RST-DT. Based on the dependency structure, we are able to directly analyze the relations between the EDUs without worrying about the additional interior text spans, and to apply the existing state-of-the-art dependency parsing techniques, which have a relatively low time complexity. In our work, we use graph-based dependency parsing techniques learned from the annotated dependency trees. The Eisner algorithm and the MST algorithm are applied to parse the optimal projective and non-projective dependency trees respectively, based on the arc-factored model. To calculate the score for each arc, six types of features are explored to represent the arcs, and the feature weights are learned with the MIRA learning technique. Experimental results exhibit the effectiveness of the proposed approaches. In the future, we will focus on non-projective discourse dependency parsing and explore more effective features.
Figure 4: Pictorial Diagram of Non-projective Trees
Figure 5: Chu-Liu/Edmonds MST Algorithm
Thus, we need to determine the features for arc representation first. Following McDonald et al. (2005a; 2005b), we use the Margin Infused Relaxed Algorithm (MIRA) to learn the feature weights based on a training set of documents annotated with dependency structures, where y_i denotes the correct dependency tree for the text T_i.
MIRA(Training data {⟨T_i, y_i⟩}, i = 1 … K; iterations N)
1   w_0 = 0; v = 0; j = 0
2   For n := 1 to N
3     For i := 1 to K
4       w_{j+1} = arg min ||w' − w_j|| s.t. s(T_i, y_i) − s(T_i, y') ≥ L(y_i, y'), where y' = DT(T_i, w_j)
5       v = v + w_{j+1}
6       j = j + 1
7   w = v/(K*N)

Figure 6: MIRA-based Learning
Table 2: 10 Highest-Distributed Fine-grained Relations
5.2 Feature Influence on Two Relation Sets
So far, research on discourse parsing has avoided adopting overly fine-grained relations, and relation sets containing around 20 labels are widely used. In our experiments, we observe that adopting a fine-grained relation set can even be helpful to building the discourse trees. Here, we conduct experiments on two relation sets that contain 19 and 111 labels respectively. At the same time, different feature types are tested for their effects on discourse parsing.
Method  Features        Unlabeled Acc.  Labeled Acc.
Eisner  1+2             0.3602          0.2651
        1+2+3           0.7310          0.4855
        1+2+3+4         0.7370          0.4868
        1+2+3+4+5       0.7447          0.4957
        1+2+3+4+5+6     0.7455          0.4983
MST     1+2             0.1957          0.1479
        1+2+3           0.7246          0.4783
        1+2+3+4         0.7280          0.4795
        1+2+3+4+5       0.7340          0.4915
        1+2+3+4+5+6     0.7331          0.4851

Table 3: Performance Using Coarse-grained Relations.
Method  Feature types   Unlabeled Acc.  Labeled Acc.
Eisner  1+2             0.3743          0.2421
        1+2+3           0.7451          0.4079
        1+2+3+4         0.7472          0.4041
        1+2+3+4+5       0.7506          0.4254
        1+2+3+4+5+6     0.7485          0.4288
MST     1+2             0.2080          0.1300
        1+2+3           0.7366          0.4054
        1+2+3+4         0.7468          0.4071
        1+2+3+4+5       0.7494          0.4288
        1+2+3+4+5+6     0.7460          0.4309

Table 4: Performance Using Fine-grained Relations.
Table 5: Top 10 Feature Weights for Coarse-grained Relation Labeling (Eisner Algorithm)

Features                                                                        Weight
1   Last two words in dependent EDU are "appeals court" & List                      0.576
2   First two words in head EDU are "I 'd" & Attribution                            0.385
3   First two words in dependent EDU are "that the" & Elaboration-object-attribute-e  0.348
4   First POS in head EDU is "DT" & List                                            -0.323
5   Last word in dependent EDU is "in" & List                                       -0.286
6   First word in dependent EDU is "racked" & Elaboration-object-attribute-e        0.445
7   First two word pairs are <"In an", "But even"> & List                           -0.252
8   Dependent EDU has a dominating node tagged "CD" & Elaboration-object-attribute-e  -0.244
9   First two words in dependent EDU are "patents disputes" & Purpose               0.231
10  First word in dependent EDU is "to" & Purpose                                   0.230
Table 6: Top 10 Feature Weights for Fine-grained Relation Labeling (Eisner Algorithm)
Table 7: Full Parser Evaluation

                MAFS   WAFS   Acc
Our-coarse      0.454  0.643  66.84
Percep-coarse   0.438  0.633  65.37
Feng            0.440  0.607  65.30
HILDA-manual    0.428  0.604  64.18
Baseline        -      -      35.82

Table 8: Relation Labeling Performance
2 The two terms "text" and "document" are used interchangeably and represent the same meaning.
3 According to our statistics, there are in total 381 n-ary relations in RST-DT.
4 We set the first nucleus as the only nucleus.
5 We can easily get all possible headed binary trees for a more complex text containing more than three EDUs by extending the 8 possible situations for three EDUs.
6 The 19 relations include the original 18 relation classes in RST-DT plus one artificial ROOT relation. The 111 relations also include the ROOT relation.
Acknowledgments
We also thank the three anonymous reviewers for their helpful comments.
Jason Baldridge and Alex Lascarides. 2005. Probabilistic Head-driven Parsing for Discourse Structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 96-103.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python - Analyzing Text with the Natural Language Toolkit. O'Reilly.
Lynn Carlson, Daniel Marcu, and Mary E. Okurowski. 2001. Building a Discourse-tagged Corpus in the Framework of Rhetorical Structure Theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue - Volume 16, pages 1-10.
Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the Shortest Arborescence of a Directed Graph. Science Sinica, 14:1396-1400.
Koby Crammer and Yoram Singer. 2003. Ultraconservative Online Algorithms for Multiclass Problems. JMLR.
Jack Edmonds. 1967. Optimum Branchings. Journal of Research of the National Bureau of Standards, 71B:233-240.
Jason Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of COLING.
Vanessa Wei Feng and Graeme Hirst. 2012. Text-level Discourse Parsing with Rich Linguistic Features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 60-68, Jeju, Republic of Korea.
Hugo Hernault, Danushka Bollegala, and Mitsuru Ishizuka. 2010a. A Semi-supervised Approach to Improve Classification of Infrequent Discourse Relations Using Feature Vector Extension. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 399-409, Cambridge, MA. Association for Computational Linguistics.
Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010b. HILDA: A Discourse Parser Using Support Vector Machine Classification. Dialogue and Discourse, 1(3):1-33.
Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2012. A Novel Discriminative Framework for Sentence-level Discourse Analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Stroudsburg, PA, USA.
Huong LeThanh, Geetha Abeysinghe, and Christian Huyck. 2004. Generating Discourse Structures for Written Texts. In Proceedings of the 20th International Conference on Computational Linguistics, pages 329-335.
Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 343-351.
William Mann and Sandra Thompson. 1988. Rhetorical Structure Theory: Toward a Functional Theory of Text Organization. Text, 8(3):243-281.
Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press, Cambridge, MA, USA.
Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online Large-Margin Training of Dependency Parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005).
Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of HLT/EMNLP 2005.
Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic Sense Prediction for Implicit Discourse Relations in Text. In Proceedings of the 47th ACL, pages 683-691.
Rashmi Prasad, Eleni Miltsakaki, Nikhil Dinesh, Alan Lee, Aravind Joshi, Livio Robaldo, and Bonnie Webber. 2007. The Penn Discourse Treebank 2.0 Annotation Manual. The PDTB Research Group, December.
David Reitter. 2003. Simple Signals for Complex Rhetorics: On Rhetorical Analysis with Rich-feature Support Vector Models. LDV Forum, 18(1/2):38-52.
Kenji Sagae. 2009. Analysis of Discourse Structure with Syntactic Dependencies and Data-driven Shift-reduce Parsing. In Proceedings of the 11th International Conference on Parsing Technologies, pages 81-84.
Radu Soricut and Daniel Marcu. 2003. Sentence Level Discourse Parsing using Syntactic and Lexical Information. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, pages 149-156.
Rajen Subba and Barbara Di Eugenio. 2009. An Effective Discourse Parser that Uses Rich Linguistic Information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 566-574.
Robert Endre Tarjan. 1977. Finding Optimum Branchings. Networks, 7:25-35.
Ben Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin Markov Networks. In Proceedings of NIPS.
Bonnie Webber. 2004. D-LTAG: Extending Lexicalized TAG to Discourse. Cognitive Science, 28(5):751-779.
Wen Ting Wang, Jian Su, and Chew Lim Tan. 2010. Kernel Based Discourse Relation Recognition with Temporal Ordering Information. In Proceedings of ACL 2010, pages 710-719.
Ben Wellner, James Pustejovsky, Catherine Havasi, Anna Rumshisky, and Roser Sauri. 2006. Classification of Discourse Coherence Relations: An Exploratory Study Using Multiple Knowledge Sources. In Proceedings of the 7th SIGDIAL Workshop on Discourse and Dialogue, pages 117-125. |
13,072,792 | The C-Score -Proposing a Reading Comprehension Metrics as a Common Evaluation Measure for Text Simplification | This article addresses the lack of common approaches for text simplification evaluation, by presenting the first attempt for a common evaluation metrics. The article proposes reading comprehension evaluation as a method for evaluating the results of Text Simplification (TS). An experiment, as an example application of the evaluation method, as well as three formulae to quantify reading comprehension, are presented. The formulae produce a unique score, the C-score, which gives an estimation of the user's reading comprehension of a certain text. The score can be used to evaluate the performance of a text simplification engine on pairs of complex and simplified texts, or to compare the performances of different TS methods using the same texts. The approach can be particularly useful for modern crowdsourcing approaches, such as those employing Amazon's Mechanical Turk 1 or CrowdFlower 2 . The aim of this paper is thus to propose an evaluation approach and to motivate the TS community to start a relevant discussion, in order to come up with a common evaluation metrics for this task. | [
8463747,
2382276,
15700645,
16215847,
6412912,
5477884,
9945908,
2935285,
15636533,
8884060,
4896510
] | The C-Score -Proposing a Reading Comprehension Metrics as a Common Evaluation Measure for Text Simplification
Irina Temnikova irina.temnikova@gmail.com
Linguistic Modelling Department
Institute of Information and Communication Technologies
Bulgarian Academy of Sciences

Galina Maneva galina.maneva@gmail.com
Lab. of Particle and Astroparticle Physics
Institute of Nuclear Research and Nuclear Energy
Bulgarian Academy of Sciences
The C-Score -Proposing a Reading Comprehension Metrics as a Common Evaluation Measure for Text Simplification
Proceedings of the 2nd Workshop on Predicting and Improving Text Readability for Target Reader Populations, Sofia, Bulgaria, August 4-9, 2013. Association for Computational Linguistics.
This article addresses the lack of common approaches for text simplification evaluation, by presenting the first attempt for a common evaluation metrics. The article proposes reading comprehension evaluation as a method for evaluating the results of Text Simplification (TS). An experiment, as an example application of the evaluation method, as well as three formulae to quantify reading comprehension, are presented. The formulae produce a unique score, the C-score, which gives an estimation of the user's reading comprehension of a certain text. The score can be used to evaluate the performance of a text simplification engine on pairs of complex and simplified texts, or to compare the performances of different TS methods using the same texts. The approach can be particularly useful for modern crowdsourcing approaches, such as those employing Amazon's Mechanical Turk 1 or CrowdFlower 2 . The aim of this paper is thus to propose an evaluation approach and to motivate the TS community to start a relevant discussion, in order to come up with a common evaluation metrics for this task.
Context and Motivation
Currently, the area of Text Simplification (TS) is getting more and more attention. Starting as early as 1996, Chandrasekar et al. proposed an approach to TS as a pre-processing step before feeding a text to a parser. Next, the PSET project (Devlin, 1999; Canning, 2002) proposed two modules for simplifying text for aphasic readers. Text simplification approaches continued in 2003 with Siddharthan (2003) and Inui et al. (2003), and through 2005-2006 until the recent explosion of TS approaches in 2010-2012. Recently, several TS-related workshops took place: PITR 2012 (Williams et al., 2012), SLPAT 2012 (Alexandersson et al., 2012), and NLP4ITA 2012 3 and 2013. In line with the definition of text simplification as the "process for reducing text complexity at different levels" (Temnikova, 2012), TS approaches tackle a variety of text complexity aspects, ranging from lexical (Devlin, 1999; Inui et al., 2003; Elhadad, 2006; Gasperin et al., 2009; Yatskar et al., 2010; Coster and Kauchak, 2011; Bott et al., 2012; Specia et al., 2012; Rello et al., 2013; Drndarević et al., 2013) and syntactic (Chandrasekar et al., 1996; Canning, 2002; Siddharthan, 2003; Inui et al., 2003; Gasperin et al., 2009; Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011; Drndarević et al., 2013) to discourse/cohesion (Siddharthan, 2003). The problems tackled by the TS approaches differ according to their final aim: (1) being a pre-processing step for the input to text processing applications, or (2) addressing the reading difficulties of specific groups of readers. The first type of final application ranges from parser input (Chandrasekar et al., 1996), small-screen displays (Daelemans et al., 2004; Grefenstette, 1998), text summarization (Vanderwende et al., 2007), text extraction (Klebanov et al., 2004) and semantic role labeling (Vickrey and Koller, 2008) to Machine Translation (MT) (Ruffino, 1982; Streiff, 1985). The TS approaches addressing specific human reading needs, instead, address readers with low levels of literacy (Siddharthan, 2003; Gasperin et al., 2009; Elhadad, 2006; Williams and Reiter, 2008), language learners (Petersen and Ostendorf, 2007), and readers with specific cognitive and language disabilities. The TS approaches addressing this last type of readers target those suffering from aphasia (Devlin, 1999; Canning, 2002), deaf readers (Inui et al., 2003), dyslexics (Rello et al., 2013) and readers with general disabilities (Max, 2006; Drndarević et al., 2013).
Despite the large amount of current work in TS, there has been almost no attention to defining common text simplification evaluation approaches which would allow the comparison of different TS systems. Until the present moment, each approach has usually applied its own methods and materials, often taken from other Natural Language Processing (NLP) fields, making comparison difficult or impossible.
The aim of this paper is thus to propose an evaluation method and to foster the discussion of this topic in the text simplification community, as well as to motivate the TS community to come up with common evaluation metrics for this task.
Next, Section 2 describes the existing approaches to evaluating TS, as well as the few attempts at offering a common evaluation strategy. After that, the following sections present our evaluation approach: Section 3 describes its context, Section 4 presents the formulae, Section 5 offers the results, and finally Section 6 provides a discussion and the conclusions.
Evaluation Methods in Text Simplification
As mentioned in the previous section, until now different authors have adopted different combinations of metrics, without reaching a common approach which would allow the comparison of different systems. As the different TS evaluation methods are applied to a variety of different text units (words, sentences, texts), this makes the comparison between approaches even harder. As the aim of this article is to propose a text simplification evaluation metrics which takes into account text comprehensibility and reading comprehension, in this discussion we focus mostly on the approaches whose aim is to simplify texts for target readers, and on their evaluation strategies. The existing TS evaluation approaches focus either on the quality of the generated text/sentences, or on the effectiveness of text simplification on reading comprehension. The first group of approaches includes human judges' ratings of simplification, content preservation, and grammaticality, standard MT evaluation scores (BLEU and NIST), and a variety of other automatic metrics (perplexity, precision/recall/F-measure, and edit distance). The methods aiming to evaluate the impact of text simplification on reading comprehension use, instead, reading speed, reading errors, speech errors, comprehension questions, answer correctness, and users' feedback. Several approaches use a variety of readability formulae (the Flesch, Flesch-Kincaid, Coleman-Liau, and Lorge formulae for English, as well as readability formulae for other languages, such as Spanish). Due to the criticisms of readability formulae (DuBay, 2004), which often restrict themselves to a very superficial text level, they can be considered to stand on the borderline between the two previously described groups of TS evaluation approaches. As can be seen from the discussion below, different TS systems employ a combination of the listed evaluation approaches.
PSET, one of the first text simplification systems for target reader populations, seems to have applied different evaluation strategies to its different components, without evaluating the system as a whole. The lexical simplification component (Devlin, 1999), which replaced technical terms with more frequent synonyms, was evaluated via user feedback, comprehension questions and the Lorge readability formula (Lorge, 1948). The syntactic simplification system evaluated its single components and the system as a whole from different points of view, to different extents, and with different evaluation strategies. Namely, text comprehensibility was evaluated via the reading times and answer correctness of sixteen aphasic readers; the components replacing passive with active voice and splitting sentences were evaluated for content preservation and grammaticality via four human judges' ratings; and finally, the anaphora resolution component was evaluated using precision and recall. Siddharthan (2003) did not carry out evaluation with target readers; instead, three human judges rated the grammaticality and the meaning preservation of ninety-five sentences. Gasperin et al. (2009) used precision, recall and f-measure. Other approaches using human judges are those of Elhadad (2006), who also used precision and recall, and Yatskar et al. (2010), who employed three annotators comparing pairs of words and indicating whether they were the same, simpler, or more complex. Williams and Reiter (2008) ran two experiments, the larger one involving 230 subjects, and measured oral reading rate, oral reading errors, response correctness to comprehension questions and, finally, speech errors. Drndarevic et al. (2013) used 7 readability measures for Spanish to evaluate the degree of simplification, and twenty-five human annotators to evaluate on a Likert scale the grammaticality of the output and the preservation of the original meaning. The recent approaches considering TS as an MT task, such as Specia (2010), Zhu et al. (2010), Coster and Kauchak (2011) and Woodsend and Lapata (2011), apply standard MT evaluation techniques, such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002), and TERp (Snover et al., 2009). In addition, Woodsend and Lapata (2011) apply two readability measures (Flesch-Kincaid, Coleman-Liau) to evaluate the actual reduction in complexity, and human judges' ratings for simplification, meaning preservation, and grammaticality. Zhu et al. (2010) apply the Flesch readability score (Flesch, 1948) and n-gram language model perplexity, and Coster and Kauchak (2011) two additional automatic techniques (word-level F1 and simple string accuracy) taken from sentence compression evaluation (Clarke and Lapata, 2006).
As we consider that the aim of text simplification for human readers is to improve text comprehensibility, we argue that reading comprehension must be evaluated, and that evaluating just the quality of the produced sentences is not enough. Differently from the approaches that employ human judges, we consider it better to test real human comprehension with target reader populations, rather than to draw conclusions about the extent of the population's understanding on the basis of the opinion of a small number of human judges. In addition, we consider that measuring reading speed and rate, as well as reading and speech errors, requires much more complicated and expensive tools than having an online system measure the time to reply and recognize correct answers. Finally, we consider that cloze tests are an evaluation method that cannot really reflect the complexity of reading comprehension (for example, for measuring manipulations of the syntactic structure of sentences), and for this reason we select multiple-choice questions as the testing method, which we consider to best reflect the specific complexity of a text, to be more accessible than eye-tracking technologies, and to be more objective than users' feedback. The approach does not explicitly evaluate the fluency, grammaticality and content preservation of the simplified text, but it can be coupled with such additional evaluation.
The approach closest to ours is that of Rello et al. (2013), who evaluated reading comprehension with over ninety readers with and without dyslexia. Besides using eye-tracking (reading time and fixation durations), different reading devices, and users' ratings of how easy a text is to read, to understand and to remember, they also obtain a comprehension score based on multiple-choice questions (MCQ) with 3 answers (1 correct, 1 partially correct and 1 wrong). The difference with our approach is that we consider having only one correct answer (as suggested by Gronlund (1982)) a more objective evaluation, rather than having one partially correct answer, which would introduce subjectivity into the evaluation.
To support our motivation, some state-of-the-art approaches note the scarcity of evaluation with target readers (Williams and Reiter, 2008), note that there are no commonly accepted evaluation measures (Coster and Kauchak, 2011), attempt to address the need for developing reading comprehension evaluation methods (Siddharthan and Katsos, 2012), and propose common evaluation frameworks (Specia et al., 2012; De Belder and Moens, 2012). More concretely, Siddharthan and Katsos (2012) propose magnitude estimation of readability judgements and delayed sentence recall as reading comprehension evaluation methods. Specia et al. (2012) provide a lexical simplification evaluation framework in the context of SemEval-2012, where the evaluation is performed using a measure of inter-annotator agreement based on Cohen (1960). Similarly, De Belder and Moens (2012) propose a dataset for evaluating lexical simplification. No common evaluation framework has yet been developed for syntactic simplification.
As seen in this overview, despite the multitude of existing approaches, and the few approaches attempting to propose a common evaluation framework, there are no widely accepted evaluation metrics or methods which would allow the comparison of existing approaches. The next section presents our evaluation approach, which we offer as a candidate for a common evaluation metrics.
Proposed Evaluation Metrics
The Evaluation Experiment
The metrics proposed in this article was developed in the context of a previously conducted large-scale text simplification evaluation experiment (Temnikova, 2012). The experiment aimed to determine whether a manual, rule-based text simplification approach (namely a controlled language) can re-write existing texts into more understandable versions. It was necessary to evaluate the impact on reading comprehension, as the purpose of text simplification was to enhance, in the first place, the reading comprehension of emergency instructions. The controlled language used for simplification was the Controlled Language for Crisis Management (CLCM, more details in (Temnikova, 2012)), which was developed on the basis of existing psychological and psycholinguistic literature discussing human comprehension under stress, which ensures its psychological validity. The text units evaluated in this experiment were whole texts, more concretely pairs of original texts and their simplified versions. We argue that using whole texts for measuring reading comprehension is better than using single sentences, as texts provide more context for understanding. The experiment took place in the format of an online experiment, conducted via a specially developed web interface, and required users to read several texts and answer Multiple-Choice Questions (MCQ) testing their understanding of each of the texts. Due to the purpose of the text simplification (emergency situations simulation), users were required to read the texts in a limited time, so as to imitate a stressful situation with no time to think and re-read the text. This aspect will not be taken into account in the evaluation, as the purpose is to propose a general formula, applicable to a variety of different text simplification experiments. After reading the text in a limited time, the text was hidden from the readers, and they were presented with a screen asking if they were ready to proceed with the questions. Next, each question was displayed one by one, along with its answers, with the readers not having the option to go back to the text. In order to ensure the constant attention of the readers and to reduce the readers' tiredness factor, the texts were kept short (about 150-170 words each), and the number of texts to be read by each reader was kept to four. In addition, to ensure comparability, all the texts were selected in a way to be more or less of the same length. The experiment employed a collection of a total of eight texts, four of which were original, non-simplified ('complex') versions, and the other four their manually simplified versions. Each user had to read two complex and two simplified texts, none of which was a variant of the other. The interface automatically randomized the order of displaying the texts, to ensure that different users would get different combinations of texts in one of the following two sequences:
• Complex-Simplified-Complex-Simplified • Simplified-Complex-Simplified-Complex
This was done in order to minimize the impact of the order of displaying the texts on the text comprehension results. After reading each text, the readers were prompted to answer between four and five questions about it. The MCQ method was selected as it is considered the most objective and easily measurable way of assessing comprehension (Gronlund, 1982). The number of questions and answers was selected in a way not to tire the reader (four to five questions per text and four to five answers for each question), and the questions and answers themselves were designed following best MCQ practices (Gronlund, 1982). Some of the practices followed involved ensuring that there is only one correct answer per question, making all wrong answers (or 'distractors') grammatically and in length consistent with the correct answer, in order to avoid giving hints to the reader, and making all distractors plausible and equally attractive. Similarly to the texts, the questions and answers were also displayed in different orders to different readers, to avoid the order influencing the comprehension results. The correct answer was displayed in different positions, to avoid learning of its position, and was internally marked in a way that distinguishes it during evaluation from all the distractors, in whatever position it was displayed. The questions required understanding of key aspects of the texts (such as under which conditions what was supposed to be done, explanations, and the order in which actions needed to be taken), to avoid relying on pure memorization of the texts. The information evaluating the users' comprehension, collected during the experiment, was, on the one hand, the time for answering each question, and on the other hand, the number of correct answers given by all participants replying to the same question. Although we used a specially developed interface, this evaluation approach can be applied to any experiment employing an interface capable of measuring the time to answer and of distinguishing the correct answers from the incorrect ones.
The efficiency of the experiment design was thoroughly tested by running it through several rounds of pilot experiments and requiring participants' feedback.
We claim that the evaluation approach proposed in this paper can be applied to more simply organized experiments, as the randomization aspects are not reflected in the evaluation formulae.
The final experiment involved 103 participants, recruited via a request sent to several mailing lists. The participants were 55 percent female and 44 percent male, and ranged from undergraduate students to retired academicians (i.e., from nineteen to fifty-nine years of age). As the experiment allowed entering a lot of personal data, it was also known that the participants had a variety of professions (including NLP people, teachers, and lawyers), knew English at levels from beginner through intermediate to native, and spoke a large variety of native languages, covering many of the world's language families (Non-Indo-European and Indo-European included). Figure 1 shows the coarse-grained classification made at the time of the experiment, and the distribution of participants per native language. A subset of participants with specific native languages will be selected to give an example of applying the evaluation metrics to a real evaluation experiment.
In order to obtain results, we asked the participants to enter a rich selection of information, and we recorded the chosen answer (be it correct or not) and the time each participant took to give each answer. Table 1 shows the data we recorded for each single answer of every participant.
The data in Table 1 is as follows: Entry id identifies each given answer; Domain background (answer y - yes and n - no) indicates whether the participant has any previous knowledge of the experiment's domain (crisis management). As each text, question and complex/simplified texts pair are given reference numbers, Text number, Question number, and Texts pair number record those. As required by the evaluation method, each entry also records the Time to reply to each question (measured in milliseconds) and the Answer number. As said before, the correct answers are marked in a special way, allowing us to distinguish them at a later stage, when counting the number of correct answers.
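For concreteness, one answer record could be represented as below; the field names are our own and simply mirror the columns of Table 1.

```python
# One recorded answer, mirroring the fields described for Table 1 (a sketch).
from dataclasses import dataclass

@dataclass
class AnswerRecord:
    entry_id: int
    domain_background: str    # 'y' or 'n'
    text_number: int
    question_number: int
    texts_pair_number: int
    time_to_reply_ms: int
    answer_number: int
    is_correct: bool          # derived from the special marking of correct answers
```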
Definitions and Evaluation Hypotheses
In order to correctly evaluate the performance of the text simplification method on the basis of the above described experiment, the data obtained was thoughtfully analyzed. The two criteria selected to best describe the users' performance were time to reply and number of correct answers. The evaluation was done offline, after collecting the data from the participants. The evaluation analysis aimed to test the following two hypotheses:
If the text simplification approach has a positive impact on reading comprehension, then:
1. The percentage of correct answers given for the simplified text will be higher than the percentage of correct answers given for the complex text.
2. The time to recognize the correct answer and reply correctly to the questions about the simplified text will be significantly lower than the time to recognize the correct answer and reply correctly to the questions about the complex text.
The two hypotheses were previously tested by employing only the key variables (time to reply and number of correct answers). It has been shown that comprehension increases with the percentage of correct answers and decreases as the time to reply increases. On the basis of these facts, we define the C-Score (a text Comprehension Score) - an objective evaluation metric which gives a reading comprehension estimate for a text and allows comparing two texts, or two or more text simplification approaches. The C-Score is calculated text by text. In order to address a variety of situations, we propose three versions of the C-Score, which gradually cover all the variables which can affect comprehension in such an experiment. In the following sections we present their formulae and the variables involved, and discuss their results, advantages and shortcomings.
3.3 The C-Score Version One. The C-Score Simple.
Given a text comprehension experiment featuring n texts with m questions with r answers each, and the ability to measure the time to reply to questions and to recognize the correct answers, we define the C-Score Simple as given below:
$$C_{simple} = \frac{P_r}{t_{mean}} \qquad (1)$$
Here, Pr is the percentage of correct answers among all answers given to all the questions about this text, and t_mean is the average time to reply to all the questions about this text (whether answered correctly or incorrectly). The time is expressed in arbitrary seconds-based units, depending on the experiment. The logic behind this formula is simple: we consider that comprehension increases with the percentage of correctly answered questions, and diminishes as the mean time to answer the questions increases.
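The following minimal sketch illustrates how the C-Score Simple can be computed from a list of per-answer records such as those of Table 1; the record fields are the illustrative ones introduced above:

```python
# Sketch of C-Score Simple (Equation 1) for one text, assuming
# AnswerRecord-like rows with time_ms and is_correct fields.
def c_score_simple(records):
    n = len(records)
    pr = 100.0 * sum(1 for r in records if r.is_correct) / n  # % correct answers
    t_mean = sum(r.time_ms for r in records) / n / 1000.0     # mean time, seconds
    return pr / t_mean
```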
The C-Score Version Two. C-Score Complete.
The C-Score Complete takes into consideration a rich selection of variables reflecting the complexity of the questions and answers. In this C-Score version, we assume that the experiment designers will select short texts (e.g. 150 words) of similar length, with the aim of reducing the participants' tiredness, as we did in our experimental settings.
$$C_{complete} = \frac{P_r}{N_q} \sum_{q=1}^{N_q} \frac{Q_s(q)}{t_{mean}(q)} \qquad (2)$$
In this formula, Pr is the percentage of correct answers by all participants for this text, Nq is the number of questions for this text (4-5 in our experiment), and t_mean(q) is the average time to reply to question q. We introduce the concept of Question Size (Qs), which is calculated for each question and takes into account the number of answers to the question (Na), the question length in words (Lq), and the total length in words of its answers (La):
$$Q_s = N_a (L_q + L_a) \qquad (3)$$
We consider that the number of questions negatively influences the comprehension results, as the reader gets cognitively tired from processing more and more questions about different key aspects of the text. In addition, Gronlund (1982) suggests restricting the number of questions per text to four or five to achieve better learning. For this reason, we consider that comprehension decreases if the number of questions is higher. We also consider that answering a difficult question correctly and quickly shows better text comprehension than quickly giving a correct answer to a simply-worded question. For this reason we reward question difficulty, and we place it in the numerator of the fraction.
The C-Score Version Three. C-Score Textsize.
Finally, the last version of the C-Score handles the case when the texts used for comparison are of different lengths, and hence of different complexity (for example, when comparing the results of two different TS engines without having access to the same texts). For this reason, the third C-Score version considers the text length (called text size, Ts) of the texts used in the experiment. As a longer text will be more difficult to understand than a shorter one, the text length is placed in the numerator next to the percentage of correct answers.
$$C_{textsize} = \frac{P_r \, T_s}{N_q} \sum_{q=1}^{N_q} \frac{Q_s(q)}{t_{mean}(q)} \qquad (4)$$
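For illustration, a minimal sketch of Equations 2-4 follows; the per-question metadata (number of answers and lengths in words) is assumed to be available from the experiment design, and the record fields are the hypothetical ones used above:

```python
# Sketch of C-Score Complete (Eqs. 2-3) and C-Score Textsize (Eq. 4).
from collections import namedtuple

# Illustrative question metadata: answer count and lengths in words.
Question = namedtuple('Question', 'number n_answers len_question len_answers')

def question_size(q):
    return q.n_answers * (q.len_question + q.len_answers)   # Qs = Na(Lq + La)

def c_score_complete(records, questions, text_size=None):
    pr = 100.0 * sum(1 for r in records if r.is_correct) / len(records)
    total = 0.0
    for q in questions:
        q_recs = [r for r in records if r.question_number == q.number]
        t_mean = sum(r.time_ms for r in q_recs) / len(q_recs) / 1000.0
        total += question_size(q) / t_mean
    score = pr / len(questions) * total
    # multiplying by the text size Ts gives C-Score Textsize (Eq. 4)
    return score * text_size if text_size is not None else score
```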
C-Score Results
We have implemented and applied the formulae described above to the experimental data presented in Section 3.1. As we have only one text simplification approach, two user scenarios are presented. Please note that the text pairs are: Texts 1 and 2; Texts 3 and 4; Texts 5 and 6; and Texts 7 and 8. In each pair, the first text is complex and the second is its simplified version. The results for the first evaluation scenario are displayed in Table 2 for C-Score Simple, Table 3 for C-Score Complete and Table 4 for C-Score Textsize. The results of C-Score Complete have been multiplied by 100 for better readability. As a reminder, we consider that the higher the score, the better the text comprehension. From this point of view, if the text simplification approach was successful, each simplified text should have a higher C-Score than its complex original: Text 2 higher than Text 1, Text 4 higher than Text 3, Text 6 higher than Text 5, and Text 8 higher than Text 7.
In the second scenario, the participants' data was divided into participants under 45 years old (ninety-two participants) and participants over 45 years old (eleven participants). In this case only the C-Score Simple was applied. The results of this evaluation are shown in Table 5. As our aim here is to compare the reading abilities of people of different ages, and not the results of text simplification, only the complex texts are taken into account. The results show that the comprehension score of the participants under 45 years old is higher for all texts (despite the uneven distribution of participants), except in the case of the complex Text 5.
A similar phenomenon can be observed in Tables 2, 3 and 4, where in all text pairs except pair 3, i.e. Texts 5 and 6 (where the opposite can be observed), the simplified text has a higher comprehension score than its complex original. Our hypothesis about the different behavior of Texts 5 and 6 is that it is text-specific. This is confirmed by Table 5, which shows that, despite the big differences in reading comprehension between participants under 45 years old and participants over 45 years old, Text 5 has more or less the same comprehension score for both groups of readers. From this fact we can assume that this text is probably fairly easy, so this combination of text simplification rules does not simplify it, and instead, when applied, makes it less comprehensible or more awkward for the human readers.
Discussion and Conclusions
Table 5: C-Score Simple for one text.

This article has presented an extended discussion of the methods employed for evaluation in the text simplification domain. In order to address the lack of common or standard evaluation approaches, this article proposed three evaluation formulae, which measure the reading comprehension of produced texts. The formulae have been developed on the basis of an extensive reading comprehension experiment aiming to evaluate the impact of a text simplification approach (a controlled language) on emergency instructions. Two evaluation scenarios have been presented, the first of which was calculated with all three formulae, while the second used only the simplest one. In this way, the article aims to address both the lack of common TS evaluation metrics noted in Section 2 (Coster and Kauchak, 2011) and the scarcity of reading comprehension evaluation (Siddharthan and Katsos, 2012) with real users (Williams and Reiter, 2008), by proposing a tailored approach for this type of text simplification evaluation. With this article we aim to incite the Text Simplification community to open a discussion forum about common methods for evaluating text simplification, in order to provide objective evaluation metrics allowing the comparison of different approaches, and to ensure that simplification really achieves its aims. We also argue that taking into consideration the end-users and the text units used for evaluation is important. With our approach, we address only the evaluation of text simplification approaches aiming to improve reading comprehension, and experiments in which the time to reply to questions and the percentage of correct answers can be measured. A plausible scenario for applying our evaluation approach would be to use the Amazon Mechanical Turk for crowd-sourcing and then to evaluate the performance of a text simplification system on complex and simplified texts, to compare the performance of two or more approaches, or of two versions of the same system on the same pairs of texts. These formulae can also be employed in psycholinguistically-oriented experiments which aim to reach cognitive findings regarding specific target reader groups, such as dyslexic or autistic readers. Future work will involve the comparison of the evaluation metrics proposed above with any of the metrics already employed in related work, such as the recent and classic readability formulae, eye-tracking, reading rate, human judges' ratings, and others. We consider that content preservation and grammaticality do not need to be evaluated for this approach, as the simplified texts have been produced manually by linguists who were native speakers of English.
Figure 1: Coarse-grained distribution of participants per native languages.
Table 2: Experiment results for C-Score Simple.

Text number          C-Score Simple
Text 1 (Complex)     21.3
Text 2 (Simplified)  35.3
Text 3 (Complex)     24.8
Text 4 (Simplified)  34.9
Text 5 (Complex)     36.8
Text 6 (Simplified)  23.6
Text 7 (Complex)     40.5
Text 8 (Simplified)  51.5
Table 3: Experiment results for C-Score Complete.

Table 4: Experiment results for C-Score Textsize.

Text number          C-Score Textsize
Text 1 (Complex)     109.5
Text 2 (Simplified)  192.0
Text 3 (Complex)     107.7
Text 4 (Simplified)  131.3
Text 5 (Complex)     171.6
Text 6 (Simplified)  102.4
Text 7 (Complex)     176.1
Text 8 (Simplified)  263.3
1 http://aws.amazon.com/mturk/. Last accessed on May 3rd, 2013.
2 http://crowdflower.com/. Last accessed on June 14th, 2013.
3 http://www.taln.upf.edu/nlp4ita/. Last accessed on May 3rd, 2013.
Acknowledgments

The authors would like to thank Prof. Dr. Petar Temnikov for the ideas and advice about the research methodology, Dr. Anke Buttner for the psycholinguistic counseling about the experiment design, including the selection of questions, answers and texts and the psychological validity of the simplification method, and Dr. Constantin Orasan and Dr. Le An Ha for the implementation of the testing interface. The research of Irina Temnikova reported in this paper was partially supported by the project AComIn "Advanced Computing for Innovation", grant 316087, funded by the FP7 Capacity Programme (Research Potential of Convergence Regions). Finally, the authors would also like to thank the PITR 2013 reviewers for their useful feedback.
Jan Alexandersson, Peter Ljunglöf, Kathleen F. McCoy, Brian Roark, and Annalu Waller, editors. 2012. Proceedings of the Third Workshop on Speech and Language Processing for Assistive Technologies. Association for Computational Linguistics, Montréal, Canada, June.
Stefan Bott, Luz Rello, Biljana Drndarević, and Horacio Saggion. 2012. Can Spanish be simpler? LexSiS: Lexical simplification for Spanish. In Proceedings of the 24th International Conference on Computational Linguistics (Coling 2012), Mumbai, India, December.
Yvonne Canning. 2002. Syntactic Simplification of Text. Ph.D. thesis, University of Sunderland, UK.
Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, pages 1041-1044. Association for Computational Linguistics.
James Clarke and Mirella Lapata. 2006. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 377-384. Association for Computational Linguistics.
Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.
William Coster and David Kauchak. 2011. Learning to simplify sentences using Wikipedia. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 1-9. Association for Computational Linguistics.
Walter Daelemans, Anja Höthker, and Erik Tjong Kim Sang. 2004. Automatic sentence simplification for subtitling in Dutch and English. In Proceedings of the 4th International Conference on Language Resources and Evaluation, pages 1045-1048.
Jan De Belder and Marie-Francine Moens. 2012. A dataset for the evaluation of lexical simplification. In Computational Linguistics and Intelligent Text Processing, pages 426-437. Springer.
Siobhan Devlin. 1999. Automatic Language Simplification for Aphasic Readers. Ph.D. thesis, University of Sunderland, UK.
George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, pages 138-145. Morgan Kaufmann Publishers Inc.
Biljana Drndarević, Sanja Štajner, Stefan Bott, Susana Bautista, and Horacio Saggion. 2013. Automatic text simplification in Spanish: A comparative evaluation of complementing modules. In Computational Linguistics and Intelligent Text Processing, pages 488-500. Springer.
William H. DuBay. 2004. The principles of readability. Impact Information, pages 1-76.
Noémie Elhadad. 2006. User-sensitive text summarization: Application to the medical domain. Ph.D. thesis, Columbia University.
Rudolf Flesch. 1948. A new readability yardstick. The Journal of Applied Psychology, 32(3).
Caroline Gasperin, Erick Maziero, Lucia Specia, T. A. S. Pardo, and Sandra M. Aluisio. 2009. Natural language processing for social inclusion: a text simplification architecture for different literacy levels. In Proceedings of SEMISH-XXXVI Seminário Integrado de Software e Hardware, pages 387-401.
Gregory Grefenstette. 1998. Producing intelligent telegraphic text reduction to provide an audio scanning service for the blind. In Working Notes of the AAAI Spring Symposium on Intelligent Text Summarization, pages 111-118.
Norman Edward Gronlund. 1982. Constructing Achievement Tests. Prentice Hall.
Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text simplification for reading assistance: a project note. In Proceedings of the Second International Workshop on Paraphrasing - Volume 16, pages 9-16. Association for Computational Linguistics.
Beata Beigman Klebanov, Kevin Knight, and Daniel Marcu. 2004. Text simplification for information-seeking applications. In On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE, pages 735-747. Springer.
Irving Lorge. 1948. The Lorge and Flesch readability formulae: A correction. School and Society, 67:141-142.
Aurélien Max. 2006. Writing for language-impaired readers. In Computational Linguistics and Intelligent Text Processing, pages 567-570. Springer.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics.
Sarah E. Petersen and Mari Ostendorf. 2007. Text simplification for language learners: a corpus analysis. In Proceedings of the Workshop on Speech and Language Technology for Education.
Luz Rello, Ricardo Baeza-Yates, Stefan Bott, and Horacio Saggion. 2013. Simplify or help? Text simplification strategies for people with dyslexia. In Proceedings of W4A 2013.
J. Richard Ruffino. 1982. Coping with machine translation. Practical Experience of Machine Translation.
Advaith Siddharthan and Napoleon Katsos. 2012. Offline sentence processing measures for testing readability with users. In Proceedings of the First Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 17-24. Association for Computational Linguistics.
Advaith Siddharthan. 2003. Syntactic simplification and text cohesion. Ph.D. thesis, University of Cambridge, UK.
Matthew Snover, Nitin Madnani, Bonnie J. Dorr, and Richard Schwartz. 2009. Fluency, adequacy, or HTER?: Exploring different human judgments with a tunable MT metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259-268. Association for Computational Linguistics.
Lucia Specia, Sujay Kumar Jauhar, and Rada Mihalcea. 2012. SemEval-2012 Task 1: English lexical simplification. In Proceedings of the First Joint Conference on Lexical and Computational Semantics, pages 347-355. Association for Computational Linguistics.
Lucia Specia. 2010. Translating from complex to simplified sentences. In Computational Processing of the Portuguese Language, pages 30-39. Springer.
A. A. Streiff. 1985. New developments in TITUS 4. Lawson (1985), 185:192.
Irina Temnikova. 2012. Text Complexity and Text Simplification in the Crisis Management Domain. Ph.D. thesis, Wolverhampton, UK.
Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion. Information Processing & Management, 43(6):1606-1618.
David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-2008: HLT), pages 344-352.
Sandra Williams and Ehud Reiter. 2008. Generating basic skills reports for low-skilled readers. Natural Language Engineering, 14(4):495-525.
Sandra Williams, Advaith Siddharthan, and Ani Nenkova, editors. 2012. Proceedings of the First Workshop on Predicting and Improving Text Readability for Target Reader Populations. Association for Computational Linguistics, Montréal, Canada, June.
Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 409-420. Association for Computational Linguistics.
Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from Wikipedia. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 365-368. Association for Computational Linguistics.
Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1353-1361. Association for Computational Linguistics.
235,599,150 | [] | Improving English to Spanish Out-of-Domain Translations by Morphology Generalization and Generation
Lluís Formiga lluis.formiga@upc.edu
Adolfo Hernández adolfo.hernandez@upc.edu
José B. Mariño jose.marino@upc.edu
Enric Monte enric.monte@upc.edu
Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain
Improving English to Spanish Out-of-Domain Translations by Morphology Generalization and Generation
This paper presents a detailed study of a method for morphology generalization and generation to address out-of-domain translations in English-to-Spanish phrase-based MT. The paper studies whether the morphological richness of the target language causes poor-quality translations when translating out-of-domain. In detail, this approach first translates into Spanish simplified forms and then predicts the final inflected forms through a morphology generation step based on shallow and deep-projected linguistic information available from both the source and target-language sentences. The obtained results highlight the importance of generalization, and therefore generation, for dealing with out-of-domain data.
Introduction
The problems raised when translating into richer morphology languages are well known and are being continuously studied (Popovic and Ney, 2004;Koehn and Hoang, 2007;de Gispert and Mariño, 2008;Toutanova et al., 2008;Clifton and Sarkar, 2011;Bojar and Tamchyna, 2011).
When translating from English into Spanish, inflected words make the lexicon very large, causing a significant data sparsity problem. In addition, the system output is limited to the inflected phrases available in the parallel training corpus (Bojar and Tamchyna, 2011). Hence, phrase-based SMT systems cannot generate proper inflections unless they have learned them from the appropriate phrases.
That would require a parallel corpus containing all possible word inflections for all available phrases, which is an unfeasible task.
Different approaches to addressing morphology in SMT may be summarized in four, not mutually exclusive, categories: i) factored models (Koehn and Hoang, 2007), ii) enriched input models (Avramidis and Koehn, 2008; Ueffing and Ney, 2003), iii) segmented translation (Virpioja et al., 2007; de Gispert et al., 2009; Green and DeNero, 2012) and iv) morphology generation (Toutanova et al., 2008; de Gispert and Mariño, 2008; Bojar and Tamchyna, 2011).
Whereas segmented translation is intended for agglutinative languages, translation into Spanish has classically been addressed either by factored models (Koehn and Hoang, 2007), by an enriched input scheme (Ueffing and Ney, 2003), or by target language simplification plus morphology generation as an independent step (de Gispert and Mariño, 2008). This latter approach has also been used to translate into other morphologically rich languages such as Czech (Bojar and Tamchyna, 2011).
The problem of morphology sparsity becomes crucial when addressing out-of-domain translations. Under that scenario, there is a high presence of previously unseen inflected forms even though their lemmas could have been learned from the training material. A typical out-of-domain scenario involves weblog translations, which contain material based on chat, SMS or social network text, where the second person of verbs is frequently used. However, second person verb forms are scarcely populated in the typical training material (e.g. Europarl, News and United Nations). That is due to the following reasons: i) text from formal acts converts the second person (tú) subject into the formal usted form, which uses third person inflections, and ii) text from news is mainly written in a descriptive language, relegating the second person to textual citations of dialogs, which are a minority of the text.
Some recent domain-adaptation work (Haddow and Koehn, 2012) has dealt implicitly with this problem using the OpenSubtitles 1 bilingual corpus, which contains plenty of dialogs and therefore second person inflected Spanish forms. However, their study found drawbacks in the use of an additional corpus as training material: the improvement in the quality of the out-of-domain translations worsened the quality of the in-domain translations. On the other hand, the use of an additional corpus to train a specific inflected-forms generator has not yet been addressed.
This paper presents our findings on tackling the problem of inflecting out-of-domain verbs. We built an SMT system from English into simplified-morphology Spanish in order to inflect the verbs as an independent postprocessing step. This strategy has formerly been applied to translate from English into Spanish with an N-gram based decoder (de Gispert and Mariño, 2008), but without dealing with out-of-domain data, and neither with a factored-based system (Koehn and Hoang, 2007). We analyze the most suitable features (deep vs. shallow) to perform this task, and the impact of the aforementioned strategy when using different training material and different test sets. The main reason to focus the study only on the verbs is their strong impact on the translation quality (Ueffing and Ney, 2003; de Gispert and Mariño, 2008).
In section 2 we describe the architecture of the simplification plus generation strategy. In section 3 we detail the design of the generation system. In section 4 we detail the experiments performed, and we discuss them in section 5. Finally, in section 6 we explain the main conclusions and lines of future work.

1 www.opensubtitles.org
System architecture
The main idea of the presented strategy is to reduce the sparsity of the translation models and the perplexity of the language models by simplifying the morphology in the target language.
Spanish, as a Latin-derived language, has a complex grammar. Rodríguez and Carretero (1996) enumerated seven different problems of Spanish morphological inflection, which include verb conjugation, gender/number derivations and enclitic forms, among others. As mentioned, we focus on the surface forms related to Spanish verbs. Concretely, we center our study on predicting i) person and number (PN) for the Spanish verb forms and ii) number and gender (NG) of participles and adjectives derived from verbs, which are very common in passive forms. We implicitly deal with enclitic forms through a segmentation step based on the work by Farrús et al. (2011).
The idea is summarized in Figure 1. Spanish verb forms are replaced with their simplified form. Generalization is carried out through several steps, detailed in Table 1. The Spanish POS tags are given in Parole format 2, which includes information about the type, mode, tense, person, number and gender of the verb. First, we concatenate the POS tag to the lemma of the verb. For example, the inflected form puede is transformed into VMIP3S0[poder], which indicates that the lemma of the Main Verb poder is inflected to the Indicative Present Third Person Singular form. Next, we generalize the person, number and gender of the verb to the following variables: p for person, n for number and g for gender. Under this generalization, the simplified form keeps information about the verb type ('VM' → main verb), mode and tense ('IP' → indicative, present), while 'p' and 'n' represent any person and number once generalized (from the third person singular). It is important to highlight that we do not perform just a simple lemmatization, as we also keep the information about the type, mode and tense of the verb.
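As an illustration, a minimal sketch of this generalization step is given below; it assumes Parole-style verb tags where the fifth, sixth and seventh characters hold person, number and gender (the actual pipeline relies on Freeling's morphological analysis):

```python
# Hypothetical sketch of the generalization of Table 1 for a verb token,
# assuming a 7-character Parole verb tag such as VMIP3S0.
def generalize_verb(tag, lemma):
    chars = list(tag)
    if chars[4] != '0':
        chars[4] = 'p'   # generalize person
    if chars[5] != '0':
        chars[5] = 'n'   # generalize number
    if chars[6] != '0':
        chars[6] = 'g'   # generalize gender (participles)
    return ''.join(chars) + '[' + lemma + ']'

print(generalize_verb('VMIP3S0', 'poder'))  # -> VMIPpn0[poder]
```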
After simplifying the corpus, we can build the models following the standard procedures explained in section 4.1. Note that the tuning of the system is performed with the simplified reference of the development texts. At this point, the translation process may be evaluated independently if the test references are also simplified. This evaluation provides oracles for the generation step, that is, the maximum gain to be obtained with a perfect generation system. Finally, the morphology prediction system is designed independently, as explained in section 3. The generation system predicts the correct verb morphology for the given context in both the source and the target sentence. Once the morphology is predicted, the verb is inflected with a verb conjugator. A high-level sketch of the whole pipeline is given below.
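The following sketch only illustrates the control flow of the translate-then-generate pipeline; every helper is a trivial stub standing in for a real component (Moses decoding, the feature extraction and classifiers of section 3, and the Freeling-based conjugator), not an actual API:

```python
# Illustrative translate-then-generate pipeline with stub components.
def decode_simplified(src):               # stub: SMT into simplified Spanish
    return ['VMIPpn0[poder]'], {0: 0}

def is_generalized_verb(tok):             # stub: detect generalized verb tokens
    return '[' in tok

def extract_features(src, tgt, ali, i):   # stub: Section 3.1 features
    return {}

def predict_morphology(feats):            # stub: DDAG PN/NG classifiers
    return '3S'

def conjugate(tok, morph):                # stub: verb conjugator
    return 'puede'

def translate_and_inflect(src_sentence):
    simplified, alignment = decode_simplified(src_sentence)
    out = []
    for i, tok in enumerate(simplified):
        if is_generalized_verb(tok):
            feats = extract_features(src_sentence, simplified, alignment, i)
            out.append(conjugate(tok, predict_morphology(feats)))
        else:
            out.append(tok)
    return ' '.join(out)

print(translate_and_inflect('he can'))    # -> puede
```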
The presented strategy has two clear benefits: i) it makes clear the real impact of morphology generalization by providing an oracle for the studied scenarios, and ii) it decouples the morphology generation system from the actual SMT pipeline, making it feasible to train it with small or noisy out-of-domain corpora without a strong negative impact on the decoder pipeline (Haddow and Koehn, 2012).
However, any bilingual corpus used to train the generation system has to be correctly aligned in order to perform a correct extraction of the features. In that sense, it is useful to reuse the already trained SMT alignment models (e.g. GIZA), as they are built from larger collections of data.
Design of the Generation System
The generation system is addressed as a multiclass classification problem. We separate the prediction into two independent tasks: i) person and number and ii) number and gender. The reason for the separation is the fact that in Spanish there are no verb forms where person, number and gender have to be predicted at the same time. Thus, forms other than the participle involve decisions based only on person and number, while the participle forms involve only number and gender. Hence, we train two independent multiclass classifiers: i) a person and number classifier involving 6 output classes (1st, 2nd and 3rd person, either Singular or Plural) and ii) a number and gender classifier involving 4 output classes (Masculine and Feminine, either Singular or Plural). We provide the one-best decision of the decoder as the input to the generation system, along with its related tokenized source sentence and its alignment. It is important to highlight that the decoder has to be able to provide the source-translation alignment at the word level.
Relevant Features
A set of linguistic features is extracted for each generalized verb found in the target sentence. These features include simple shallow information around the verb and may include deep information such as projected dependency constituents or semantic role labels.
For the shallow feature extraction, the features are extracted with simple neighborhood functions that look at the words, POS tags and morphology in and around the verb on both the source and target side. These features are (a sketch of the extraction follows this list):

i) Context words and their POS for both the source and target verbs.
ii) The composed verb phrase and its POS (e.g. it has not already been saved). The verb phrase is detected through a WFST acceptor. We also consider mixed word/POS source verb sequences (e.g. PRP has not already been VB).
iii) Presence of a passive voice on the source side.
iv) Sequence of named entities (and their conjunctions) before the source and target verbs (e.g. John, Mary and Peter).
v) Reflexive pronoun after the source and target verbs.
vi) Pronoun before the source verb, or whether its POS indicates a 3S (VBZ) or non-3S (not VBZ) conjugation.
vii) Pronoun before the target verb (yo, tú, ...).
viii) Simplified form of the target verb, simplifying also its mode and tense.
ix) Abstract pattern of the verb, noting whether it is the auxiliary haber plus a participle, or simply a participle (mainly used as an adjective).
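The following sketch illustrates the extraction of a few of the shallow features above; the window size and feature names are assumptions, and the real extraction also covers verb-phrase detection with a WFST acceptor, passive voice, named entities and pronouns:

```python
# Illustrative extraction of some of the shallow features (i and vi above);
# window size and feature names are assumptions.
def shallow_features(src_w, src_pos, tgt_w, tgt_pos, s_idx, t_idx, window=2):
    feats = {}
    for off in range(-window, window + 1):          # context words and POS (i)
        if 0 <= s_idx + off < len(src_w):
            feats['src_w%+d' % off] = src_w[s_idx + off]
            feats['src_p%+d' % off] = src_pos[s_idx + off]
        if 0 <= t_idx + off < len(tgt_w):
            feats['tgt_w%+d' % off] = tgt_w[t_idx + off]
            feats['tgt_p%+d' % off] = tgt_pos[t_idx + off]
    feats['src_verb_is_3s'] = (src_pos[s_idx] == 'VBZ')  # 3S vs. non-3S cue (vi)
    return feats
```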
For the deep features, we first perform semantic role labeling and dependency parsing of the source sentence with the semantic parser of Lund University 3, and then project this information to the target side using the alignment. In the case of alignment to multiple words, we use the lexical model probabilities to decide the target word that corresponds to the source dependency. In total we use 310 different deep features, such as pA (parent agent), cSBJ (child subject), cOB (child object), pNMOD (parent modifier), and pA1 pos (POS of the parent agent A1), among others.
Classifier framework
The generation system is implemented by means of classification models that predict the person, number and gender from the extracted features. Typical algorithms to deal with this task are Conditional Random Fields (McCallum and Li, 2003), MaxEnt classifiers (Della Pietra et al., 1997) or Support Vector Machines (Platt et al., 2000). All of them usually represent the set of features as a binary array.
We discard CRFs because the prediction case described in this paper is not a structured/sequential problem, as we only focus on predicting verb forms; they usually do not influence each other within the sentence, and therefore each of them becomes a root constituent itself.

3 http://nlp.cs.lth.se/software/

Figure 2: Decision DAG to find the best class out of four classes related to gender and number.
We chose SVMs instead of MaxEnt because the feature vectors are high-dimensional. Concretely, the binary vector size is 380k for the shallow features and 755k for the vectors that combine shallow and deep features. The SVM approximates the decision boundary by means of support vectors, which allow curvature in the feature space when it is high-dimensional. This was confirmed in some preliminary experiments, where we found better performance, with the number of support vectors being about 5% of the total training database. On the other hand, the MaxEnt classifier is based on simple hyperplanes, which assumes that the underlying boundary between classes is linear. In addition, the MaxEnt model assumes a particular distribution of the dot product between the feature vector and the set of weights of the classifier, which in the model is reflected by the use of an exponential nonlinearity. This assumption is rather limiting and might not be correct.
Among the different multiclass SVM approaches, we implemented the generation system using Decision Directed Acyclic Graphs (DDAG) (Platt et al., 2000) composed of binary SVM classifiers. A DDAG combines many two-class classifiers into a multiclass classification task. The structure is as follows: for an N-class problem, the DDAG contains N(N-1)/2 nodes, one for each pair of classes (one-vs-one classifiers). The DAGSVM algorithm was proposed by Platt et al. (2000). An example of the structure of a DDAG is shown in Figure 2.
The classifiers can be ordered following different criteria, such as the misclassification rate, the balance between samples of each class, or the reliability of the decisions taken by the classifiers. In this paper we follow the latter criterion: after processing the features with all classifiers simultaneously, the most confident decision from all binary classifiers is taken first, then the second best is considered, and so on, until the final class is determined by the binary decisions. The experiments are explained in section 4.4.
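A minimal sketch of this DDAG inference, assuming scikit-learn binary SVMs and the most-confident-decision-first ordering described above, could look as follows (training data and feature vectors are assumed):

```python
# Illustrative DDAG over one-vs-one SVMs; at each step the loser of the
# most confident pairwise decision (largest margin) is eliminated.
from itertools import combinations
from sklearn.svm import SVC

def train_pairwise(X, y, classes):
    models = {}
    for a, b in combinations(classes, 2):         # N(N-1)/2 binary classifiers
        pair = tuple(sorted((a, b)))
        Xp = [x for x, l in zip(X, y) if l in pair]
        yp = [l for l in y if l in pair]
        models[pair] = SVC(kernel='rbf').fit(Xp, yp)
    return models

def ddag_predict(models, x, classes):
    alive = list(classes)
    while len(alive) > 1:
        # pick the remaining pairwise classifier with the largest margin
        pair = max((tuple(sorted(p)) for p in combinations(alive, 2)),
                   key=lambda p: abs(models[p].decision_function([x])[0]))
        winner = models[pair].predict([x])[0]
        alive.remove(pair[0] if winner == pair[1] else pair[1])
    return alive[0]
```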
Experiments
The experiments were carried out in three distinct stages. First, we analyzed the impact of morphological generalization on the decoder models, both with texts of the training domain and with out-of-domain texts. Then, we studied the generation system accuracy with fluent text sets, and finally, we studied the overall improvement achieved by the whole strategy under the different scenarios. We based our experiments on the framework of a factored decoder (Moses - Koehn and Hoang (2007)). Concretely, we translate the source words into target words plus their POS tags (Factored Moses from 0 to 0,2), using two separate language models to improve the fluency of the output. We performed the alignment on stems with mGIZA (Gao and Vogel, 2008). We used the material from the WMT12 (Callison-Burch et al., 2012) MT Shared Task for training. We used the Freeling analyzer (Padró et al., 2010) to tokenize, lemmatize and POS-tag both sides of the corpus (English and Spanish).
Baseline systems
Similarly, we use the Freeling libraries to conjugate the verbs. We trained the language models (LM) with the SRILM toolkit (Stolcke, 2002) at the 5-gram level for words and the 7-gram level for POS tags.
In order to study the impact of morphology at different training levels, we considered two different scenarios. First, we trained a system only with texts from the European Parliament, a limited-resource scenario consisting of small-sized corpora, hereafter EPPS. Secondly, we considered a state-of-the-art scenario, hereafter WMT12, using all the material available. Corpus details are given in Table 2. Weights were tuned on the development material of WMT'12 (7567 news sentences from 2008 to 2010). The news material for the years 2011 and 2012 was left aside for testing purposes, as explained later.
All these steps were performed identically for both the baseline and the simplified-verb-forms decoders. Note that for the latter, the POS factor is also simplified. In addition, we also needed to simplify the development texts for tuning the system.
Test scenarios
We set up different evaluation test sets: news tests from WMT11 and WMT12 (Callison-Burch et al., 2012) for the in-domain evaluation, and weblog translations from the FAUST project (Pighin et al., 2012) for the out-of-domain evaluation. The news sets from WMT consist of 3003 human-revised translations each. They are referred to as n11 and n12 in this paper. Regarding the weblog translations, we considered 998 translation requests in English into Spanish submitted to Softissimo's online translation portal 4. Two independent human translators corrected the most obvious typos and provided reference translations into Spanish for all of them, along with the clean versions of the input requests. Thus, we consider four different test sets from this material:
i) Weblog Raw (wr): the noisy weblog input. It contains misspellings, slang and other input noise typical of chats, forums, etc. These translations are evaluated with the corresponding references provided by the two translators (two references).
ii) Weblog Clean i (w0 and w1): the cleaned version of the input text provided by each translator on the source side. Cleaned versions may differ due to the interpretation of the translators (e.g. If you dont like to chat → If you don't like chatting / If you don't want to chat).
iii) Weblog Clean0.1 (w0.w1): in this case we mix the criteria of the different translators. The cleaned versions are concatenated (making up a set of 1,996 sentences) and evaluated with their respective translations (two references).
Impact of morphology generalization on the Decoder
We analyzed the effect of morphology generalization on the decoder's models across two different aspects. First, we analyzed to what extent the morphology generalization reduces the perplexity of the language models built upon words and POS tags. Secondly, we analyzed the downsizing of the sparsity within the Moses lexical models. Results of the perplexity and sparsity reduction are detailed in Table 3. The EPPS results detail the reduction within the constrained decoder, and the WMT12 ones detail the reduction within the fully-trained decoder. In general terms, word-level perplexities are reduced by 6-7% when working with formal News data (in-domain) and by 12-17% when working with weblog data. We observed that the perplexity reduction is relatively more important for the constrained system. For the POS language models we observed less margin of reduction for the in-domain News sets (3-6%) and similar results for the weblog dataset (11.5-18%). With respect to the lexical models, we observed a reduction of the number of unique Spanish entries in the model. For the constrained system (EPPS) the entries are reduced from 164.13k to 140.10k, and for the fully trained (WMT12) system the entries are reduced from 660.59k to 626.36k. The ratios of the lexical models show that the sparsity is clearly defined in the constrained system, while it becomes balanced with a larger training corpus. In the latter case the generalization causes a negative sparsity relation.

Table 3: Evaluation of perplexity and lexical entries reduction obtained by the morphology generalization strategy.
Generation System
After analyzing the impact of the generalization strategy on the decoder models, we evaluated the DDAG accuracy in predicting the morphology of the verb forms. Previous studies (de Gispert and Mariño, 2008) showed that the learning curve for predicting verb forms stabilizes with 300,000 verb samples for PN and 150,000 verb samples for NG. As the purpose of this paper is to analyze the suitability of the morphology-generalization strategy when addressing out-of-domain translations, we did not consider studying a new learning curve. We trained the generation system with clean and fluent corpora (not MT output). Details of the different corpora studied are given in Table 4.
First, we trained a baseline generation system with the same corpora as WMT12. We homogeneously sampled 300,000 sentences from the parallel corpus, containing 678k verbs. We used 450,000 verbs for training the generation system (300,000 for person and number (PN) and 150,000 for number and gender (NG)), setting aside 228k verbs (188k for PN and 40k for NG) for testing purposes.
We coped with second person morphology (tú / vosotros) by using the OpenSubtitles corpora as training material, as they contain plenty of dialogs. In that case we also needed to align the sentences. We performed all the steps of mGIZA starting from the previously trained WMT12 models.
We used the OpenSubtitles corpora in two different ways: entirely, or partially combined with the WMT12 corpora. However, the Subtitles corpora do not have enough verb forms for training the number and gender system, causing a smaller training set for the standalone system and not allowing an equal contribution (50%) for the combined version.

Table 6: Classification scores for the best accuracy configurations.
We also tested the prediction task on sets other than the verbs left apart from the training data. Concretely, we used the development material of WMT12 (n08-10) and the weblog test data.
Results are shown in Table 5. Regarding the feature sets used, as explained in section 3.1, we analyzed the accuracy both with shallow features alone and combining them with deep projected features (Shallow+Dep) based on syntactic and semantic dependencies. We also analyzed the precision, recall and F1 scores for each class on the w0.w1 test set (Table 6). These results are from the best configurations achieved (PN: Shallow+Dep trained only with Subtitles, and NG: Shallow trained with the combined sets (WMT12+Sub)).
Results for predicting person and number indicate that models trained only with subtitles yield the best accuracies for weblog data, whereas the models trained with the WMT12+Sub combined set yield the best results for the News domain. In addition, we observed that the best results are obtained with the help of the deep features, indicating that they are important for the prediction task.
However, deep features do not help in the prediction of number and gender for the weblog and News test sets. With respect to the training material, the best results are achieved by the combined training set WMT12+Sub for the weblog tests and by the standalone WMT12 set for the News test set. This behavior is explained by the small amount of number and gender samples in the subtitles set.
Consequently, we analyzed the most important features of the DDAG-SVM models, i.e. those features with significant weight values in the support vectors of the classifiers. Regarding the PN classifiers, we found that the shallow features were among the 9 most important features of the PN models. Dependency features were less important, with the POS, surface form and lemma of the subject being the 10th, 13th and 16th most important features, respectively. Predicate features had a minimal presence in the models, with the POS of the APP0 being the 24th most important feature. As presumed, for the NG classifiers the impact of the deep features was less important. In that case the POS of the NMOD and PMOD were in the 14th and 17th positions, respectively, and the POS of A1 was the 18th most important feature.
With respect to the correctness of the classifiers per class (Table 6), we observed that the 1P and SM classes are the ones with the highest F1 score. However, the 2P class cannot be predicted due to its small presence (≈ 0.6%) in both the training and testing sets. When analyzing the results in detail, we found considerable confusion between 3P-3S, 2S-3S, and SM-SF. This latter case is caused by the presence of female proper nouns that the system is not able to classify accordingly (e.g. Tymoshenko) and therefore assigns to the majority class (SM). All the F1 scores are between 0.35 and 0.45 per class, with the exception of 2P, which cannot be predicted properly.
Translation
Before analyzing the improvement of the strategy as a whole, we performed an oracle analysis without the generation system. In that case, we evaluated the oracle translations by simplifying the reference translations and comparing them to the output of the simplified models. We detail the BLEU oracles in Table 7. For the constrained system we observed a potential improvement of 0.5 to 0.7 BLEU points for the News sets and an improvement of 1 to 1.3 BLEU points for the weblog datasets. For the fully trained system we observed a similar improvement for the News sets (between 0.5 and 0.7 BLEU points) but a larger improvement, between 2 and 3 BLEU points, for the out-of-domain weblog data. These oracles demonstrate the potential of morphology generalization as a good strategy for dealing with out-of-domain data.
After analyzing the oracles, we studied the overall translation performance of the strategy. We analyzed the results with BLEU and METEOR (Denkowski and Lavie, 2011). However, METEOR's synonymy and paraphrasing properties make it unsuitable for evaluating the oracles against the simplified references. Table 7 details the results for the full generation strategy. In general terms, we observe better improvements for the weblog (out-of-domain) data than for the News data. For the constrained system, the weblog test sets improve by 0.55 BLEU/0.20 METEOR points, while the News test sets improve by only 0.25 BLEU/0.14 METEOR points. For the fully trained system, the out-of-domain improvement is 1.49 BLEU/1.27 METEOR points on average, while the News (in-domain) sets achieve an improvement of 0.62 BLEU/0.56 METEOR points. These results are discussed next.
Discussion
The comparison of the different experiments shows that a larger improvement in the language model perplexity does not lead to a larger improvement in the oracles obtained. Concretely, the EPPS constrained language models achieved a higher improvement with respect to the perplexities, whereas the fully trained WMT12 decoder achieved better improvement oracles. These results point to the importance of morphology generalization for the phrase-based and lexical models, beyond the language models.
In addition, when considering the full strategy, the non-constrained system (WMT12) achieves higher improvements than the constrained decoder in most of the metrics. The constrained decoder provides a less fluent (and more noisy) translation compared to the fully trained decoder. Consequently, the morphology prediction task becomes more difficult in the constrained scenario due to the high presence of noise in the context of the generalized verbs. The presence of noise in the MT output also explains why the deep features do not help to obtain better translations. The main difference between the accuracy and translation experiments is the typology of the text where the prediction takes place. Whereas the accuracy experiments are performed with human references, the generation system has to deal with the decoder output, which is noisy and less fluent, making the shallow features more robust. Thus, the strategy becomes more relevant when a decoder of better quality is available, because a more fluent MT output eases the task of morphology prediction.
The combined training set (WMT12+Sub) achieves the most stable improvement across all the metrics and trained scenarios. The WMT12 generation system worsens the baseline results, making the Subtitles corpus a crucial part of the training material for achieving a high improvement with the fully trained system, due to, among other reasons, the lack of second person inflected forms in the training material.
We conducted a posterior analysis of the cases in which the generation system worsened the oracle. We found that in 25% of these cases the generation was correctly performed, but there was a change of subject between the reference and the output. For example, the English phrase "Good people are willing" translated as "Las buenas personas están" has a worse score than "Las buenas personas está" with the reference "La gente buena está". In that example the metric penalizes the correct agreement instead of the verb correspondence with the reference, which is obviously not correct.
Conclusions and Future Work
This paper presents a strategy based on morphology generalization as a good method to deal with out-of-domain translations, while providing stability for in-domain translations. The experiments point to morphological sparseness, along with language model perplexity, as a crucial issue to address when performing domain adaptation in SMT into morphologically richer languages.
In addition, we have shown that training morphology generation systems with the help of noisy data (OpenSubtitles) can help to obtain better translations without compromising the quality of the models. Morphology generation systems can be trained with a relatively small amount of parallel data compared to standard SMT training corpora.
We have also shown the importance of projected deep features for predicting the correct verb morphology on clean and fluent text. However, the projection of deep features is sensitive to the fluency of the sentence, making them unreliable when applied to noisy MT output.
We have also shown that the morphology generation system becomes more relevant with high-quality MT systems, because their output is more fluent, making the shallow and deep features more reliable for guiding the classifier.
Future plans include providing an n-best list or a lattice to the generation system to expand its search. We also plan to study the projection heuristics in order to make the deep features less sensitive to MT-output noise. Finally, we want to extend our study to the generalization of common nouns, function words, and adjectives; in that case we should study the suitability of sequential learning frameworks such as CRFs or probabilistic graphical models (PGMs).
Table 1: Example of morphology generalization steps taken for Spanish verbs.
Table 2: Details of the different corpora used for training the models. The counts are computed before generalization.
Table 5: Accuracy scores achieved by the DDAG learner trained with different clean and aligned corpora (wmt12, Subtitles, and combined) and different feature sets (Shallow and Shallow+Dependencies). The best results are depicted in bold.

              PN                 NG
              Train     Test     Train     Test
WMT12         300k      189k     150k      40k
Subtitles     300k      82k      30k       7k
Combined
  WMT12       150k      339k     120k      70k
  Subtitles   150k      232k     30k       7k
Total         300k      570k     150k      77k
Table 4: Details of the number of verbs per corpus and task used for training the generation system. PN stands for Person and Number, and NG for Number and Gender.
Table 7: Evaluation scores for English-Spanish translations considering the Baseline, Oracle, and Morphology Generation configurations. The best results are depicted in bold.
http://www.lsi.upc.edu/~nlp/tools/parole-eng.html
http://www.reverso.net
Acknowledgments
We would like to thank the anonymous reviewers for their insightful comments. We also want to thank Daniele Pighin for his valuable advice. This research has been partially funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 247762 (FAUST project, FP7-ICT-2009-4-247762) and by the Spanish Government (Buceador, TEC2009-14094-C04-01).
E. Avramidis and P. Koehn. 2008. Enriching morphologically poor languages for statistical machine translation. In Proc. of ACL-08: HLT, pages 763-770.
O. Bojar and A. Tamchyna. 2011. Forms Wanted: Training SMT on Monolingual Data. In Workshop of Machine Translation and Morphologically-Rich Languages, January.
C. Callison-Burch, P. Koehn, C. Monz, M. Post, R. Soricut, and L. Specia. 2012. Findings of the 2012 workshop on statistical machine translation. In Proc. of the 7th Workshop on Statistical Machine Translation, pages 10-51, Montréal, Canada. ACL.
A. Clifton and A. Sarkar. 2011. Combining morpheme-based machine translation with post-processing morpheme prediction. In Proc. of the 49th Annual Meeting of the ACL-HLT, Portland, OR, USA.
A. de Gispert and J. Mariño. 2008. On the impact of morphology in English to Spanish statistical MT. Speech Communication, 50(11-12):1034-1046.
A. de Gispert, S. Virpioja, M. Kurimo, and W. Byrne. 2009. Minimum Bayes risk combination of translation hypotheses from alternative morphological decompositions. In Proc. of Human Language Technologies: The 2009 Annual Conference of the NAACL, Short Papers, pages 73-76, Stroudsburg, PA, USA.
S. Della Pietra, V. Della Pietra, and J. Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393.
M. Denkowski and A. Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proc. of the 6th Workshop on Statistical Machine Translation, pages 85-91. ACL.
M. Farrús, M. R. Costa-jussà, J. B. Mariño, M. Poch, A. Hernández, C. A. Henríquez Q., and J. A. R. Fonollosa. 2011. Overcoming statistical machine translation limitations: error analysis. Language Resources and Evaluation, 45(2):165-179, May.
Q. Gao and S. Vogel. 2008. Parallel implementations of word alignment tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing, pages 49-57, Columbus, Ohio, June. ACL.
S. Green and J. DeNero. 2012. A class-based agreement model for generating accurately inflected translations. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-155, Jeju Island, Korea, July. ACL.
B. Haddow and P. Koehn. 2012. Analysing the effect of out-of-domain data on SMT systems. In Proc. of the 7th Workshop on Statistical Machine Translation, pages 422-432, Montréal, Canada. ACL.
P. Koehn and H. Hoang. 2007. Factored translation models. In Proc. of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 868-876, Prague, Czech Republic, June. ACL.
A. McCallum and W. Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proc. of the 7th Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, pages 188-191. ACL.
Ll. Padró, M. Collado, S. Reese, M. Lloberes, and I. Castellón. 2010. FreeLing 2.1: Five years of open-source language processing tools. In Proc. of the 7th Language Resources and Evaluation Conference (LREC 2010), La Valletta, Malta, May. ELRA.
D. Pighin, Ll. Màrquez, and Ll. Formiga. 2012. The FAUST corpus of adequacy assessments for real-world machine translation output. In Proc. of the 8th International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey, May. European Language Resources Association (ELRA).
J. Platt, N. Cristianini, and J. Shawe-Taylor. 2000. Large margin DAGs for multiclass classification. In Advances in Neural Information Processing Systems, pages 547-553. MIT Press.
M. Popovic and H. Ney. 2004. Towards the use of word stems and suffixes for statistical machine translation. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC'04), pages 1585-1588, May.
S. Rodríguez and J. Carretero. 1996. A formal approach to Spanish morphology: the COI tools. Procesamiento del Lenguaje Natural, 19:119.
A. Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proc. of the ICSLP, pages 311-318, Denver, Colorado, September.
K. Toutanova, H. Suzuki, and A. Ruopp. 2008. Applying morphology generation models to machine translation. In Proc. of ACL-08: HLT, pages 514-522, Columbus, Ohio, June. ACL.
N. Ueffing and H. Ney. 2003. Using POS information for statistical machine translation into morphologically rich languages. In Proc. of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL '03), pages 347-354, Stroudsburg, PA, USA. ACL.
S. Virpioja, J. J. Väyrynen, M. Creutz, and M. Sadeniemi. 2007. Morphology-aware statistical machine translation based on morphs induced in an unsupervised manner. In Machine Translation Summit XI, pages 491-498. |
||
21,687,487 | Annotated Corpus of Scientific Conference's Homepages for Information Extraction | In this paper, we present a new corpus that contains 943 homepages of scientific conferences, 14794 including subpages, with annotations of interesting information: name of a conference, its abbreviation, place, and several important dates; that is, submission, notification, and camera ready dates. The topics of conferences included in the corpus are equally distributed over five areas: artificial intelligence, natural language processing, computer science, telecommunication, and image processing. The corpus is publicly available. Beside the characteristics of the corpus, we present the results of information extraction from the corpus using SVM and CRF models as we would like this corpus to be considered a reference data set for this type of task. | [
16514634,
10298501
] | Annotated Corpus of Scientific Conference's Homepages for Information Extraction
Piotr Andruszkiewicz p.andruszkiewicz@ii.pw.edu.pl
Institute of Computer Science, Warsaw University of Technology, Warsaw, Poland
Rafał Hazan r.hazan@stud.elka.pw.edu.pl
Institute of Computer Science, Warsaw University of Technology, Warsaw, Poland
Annotated Corpus of Scientific Conference's Homepages for Information Extraction
annotated corpus, scientific conference's homepages, information extraction
In this paper, we present a new corpus that contains 943 homepages of scientific conferences, 14794 including subpages, with annotations of interesting information: name of a conference, its abbreviation, place, and several important dates; that is, submission, notification, and camera ready dates. The topics of conferences included in the corpus are equally distributed over five areas: artificial intelligence, natural language processing, computer science, telecommunication, and image processing. The corpus is publicly available. Beside the characteristics of the corpus, we present the results of information extraction from the corpus using SVM and CRF models as we would like this corpus to be considered a reference data set for this type of task.
Introduction
Up-to-date information about conferences is important for scientists who track conferences of interest and check, e.g., conference dates and deadlines, which can change, especially during the submission period. Thus, a system that gathers such information could ease scientists' lives. The system should collect data about conferences and keep it up-to-date. Moreover, it should provide the data in a structured way to facilitate searching for conferences and obtaining information about any changes. The crucial part of such a system is a set of methods for collecting data about conferences automatically, e.g., the homepages of a conference for the current and previous years, when and where a conference will be held, and the submission, notification, and camera-ready dates. In this paper, we present a new corpus that contains homepages of conferences with annotations of interesting information, e.g., the name of a conference, its abbreviation, and several important dates for the conference. The motivation behind this task was that, to our knowledge, there is no publicly available corpus in this domain. The corpus can be used to train a tool for information extraction from unstructured sources containing data describing conferences. We chose conference homepages as a source because they contain up-to-date information. Structured services, such as WikiCFP, do not always update information (e.g., when a deadline changes) and cannot be used in a real system for gathering up-to-date information about conferences. Besides the characteristics of the corpus, we present results of information extraction as a baseline and as proof that this corpus can be used as a reference data set for information extraction from homepages of conferences. The corpus is publicly available and can be downloaded from the following website: http://ii.pw.edu.pl/~pandrusz/data/conferences/. The remainder of this paper is organised as follows. In Section 2. we describe the corpus we created. In Section 3. the preprocessing and features are described. The experimental results are discussed in Section 4.. Section 5. presents related work. Finally, Section 6. summarizes the conclusions of the study and outlines avenues to explore in the future.
The Corpus
On the internet one can find corpora for information extraction, e.g., corpora for information extraction from researchers' homepages and from seminar announcements (Califf and Mooney, 1999; Freitag and McCallum, 1999). However, we could not find any publicly available corpus for scientific conferences. Therefore, we created an annotated corpus for the task of information extraction from homepages of scientific conferences. This corpus is publicly available and can be found on the website http://ii.pw.edu.pl/~pandrusz/data/conferences/. Our decision to collect homepages of conferences, rather than Calls For Papers (CFPs), is based on the following findings. We verified 100 past conferences from our corpus for which we were able to find a running homepage and determine the important dates. We then compared the data from each homepage with the data from the WikiCFP service. It appeared that in WikiCFP about 70% of conferences do not have up-to-date information about important dates, mostly the submission date, as this date changes most often as the deadline approaches or passes. The dates are stable until the submission date comes; then the dates are changed on the homepage but are not updated in WikiCFP. Furthermore, the data provided in CFPs is limited; e.g., it usually lacks information about sponsors. In (Xin et al., 2008) the authors stated that fewer than 10% of the CFPs they analysed presented information about sponsors. Moreover, a service might not have information about the conference we are looking for because it is field-specific or covers only a small part of all conferences in the field. According to the authors of (Xin et al., 2008), it was possible to find only 40% of the textual CFPs of the top 293 computer science conferences listed at Citeseer (http://citeseer.ist.psu.edu/impact.html) when searching such conference services. In our work, CFPs proved useful for gathering non-detailed information about conferences; namely, the list of conference webpage addresses. During the process of building the corpus, we wanted to make it as automatic as possible. To that end, as a first step we gathered a list of conferences from a conference hub. We chose WikiCFP (http://wikicfp.com/cfp/) for that purpose, as it is a well-known service and contains CFPs for the areas we are interested in. Then we downloaded the homepage link and other data about each conference from WikiCFP. After that, we downloaded the homepages and their subpages (with the crawl depth restricted to one) within the same domain as the main page.
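A minimal sketch of the depth-one crawl described above, assuming the requests and BeautifulSoup libraries; names are illustrative, not the authors' actual tooling:

```python
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl_conference(homepage_url):
    """Download a conference homepage and its subpages, restricting
    the crawl to depth one and to the homepage's own domain."""
    pages = {homepage_url: requests.get(homepage_url, timeout=10).text}
    domain = urlparse(homepage_url).netloc
    soup = BeautifulSoup(pages[homepage_url], "html.parser")
    for a in soup.find_all("a", href=True):
        link = urljoin(homepage_url, a["href"])
        if urlparse(link).netloc == domain and link not in pages:
            try:
                pages[link] = requests.get(link, timeout=10).text
            except requests.RequestException:
                pass  # skip dead links
    return pages
```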
In the next step of corpus creation, for each conference (sub)page and each entity, e.g., a submission date equal to 15 January 2015, we automatically found all instances of this entity and annotated them in the HTML source code of the web page. The method of searching for an instance of an entity could not be a simple character-by-character comparison, for several reasons. For instance, there are different ways of writing dates, and the name of a conference provided in WikiCFP may differ slightly from the name on the page. Thus, we employed the following method for conference name comparison. We removed all conference name stop-words, e.g., the word Workshop from Workshop on Collaborative Online Organizations; this led to Collaborative Online Organizations being searched. The conference name stop-word list was created manually and consists of: The, International, Conference, Workshop. Moreover, we allowed a single other word to appear between the searched words, and we applied a case-sensitive search. When conference name stop-words were neighbours of found instances, we added them to the annotation, as they constitute part of the name; however, we did not annotate the year and the ordinal number of the conference if they appeared at the beginning of the name. To deal with different date formats, we employed the GATE tool and its default JAPE (Java Annotation Patterns Engine) rules (Kenter and Maynard, 2005). After the automatic process, the annotations were verified by three persons and manually corrected or added where necessary. This step is necessary because, as already explained, WikiCFP may not have up-to-date information. In cases of disagreement, majority voting was used. The corpus we created contains 943 annotated homepages of scientific conferences, 14794 pages including subpages. Hence, there are more than 15 pages per conference on average. The topics of the conferences are equally distributed over five areas; namely, artificial intelligence, natural language processing, computer science, telecommunication, and image processing. The following entities were annotated: the name and abbreviation of the conference, the place, the dates of the conference, and the submission, notification, and final version due dates; the tags used in the corpus are cname, abbre, where, when, subm, notf, and finv, respectively. The annotated entity types are the most important ones for a system that gathers information about conferences and is used by scientists to track conferences of their interest. However, we plan to annotate additional entity types, e.g., general and local chairs, invited speakers, and sponsors. The statistics of the corpus are presented in Table 1. The column Avg. length presents the average token length of an entity type. Thus, an abbreviation contains one token. The name of a place where a conference is held sometimes consists of two tokens, which is consistent with the names of cities or countries, e.g., New Zealand. The length of the name of a conference is almost 7. The length of each important date is about 3.5 and is consistent across the dates; the date of a conference is longer (about 4.8) because it contains a range of days. The next two columns, Inst. and Inst. per conference, present the number of instances of an entity type in the corpus and the average number per conference. Important dates are less frequent on homepages. The most frequent entity is place, and surprisingly it is mentioned over 80 times per conference.
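The name-matching step described above can be sketched with a regular expression that drops the conference name stop-words and tolerates one extra word between the remaining words. A sketch (illustrative only, not the exact implementation used for the corpus; lowercase function words are also filtered here for simplicity):

```python
import re

STOPWORDS = {"The", "International", "Conference", "Workshop"}

def name_pattern(conference_name):
    """Build a case-sensitive regex for a conference name: stop-words
    and lowercase function words are dropped, and a single extra word
    may appear between the remaining words."""
    words = [w for w in conference_name.split()
             if w not in STOPWORDS and w[0].isupper()]
    gap = r"(?:\s+\S+)?\s+"  # optionally one word between matched words
    return re.compile(gap.join(re.escape(w) for w in words))

pattern = name_pattern("Workshop on Collaborative Online Organizations")
print(bool(pattern.search("2nd Collaborative and Online Organizations meeting")))  # True
```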
Preprocessing and Features
To reduce the number of features, we use the Snowball stemmer (Porter, 2001) in the preprocessing phase. We also remove words from a custom stoplist. Words that often occur in conference names, e.g., 'the', 'and', 'on', are not included in the stoplist. We extract the main article or paragraphs from a web page using the Boilerpipe (Kohlschütter et al., 2010) library. Text is not removed from the web page, in order to avoid situations in which important elements are removed by mistake. In our approach, we distinguish four groups of features; namely, local, offset, layout, and dictionary features.
Local Features
Local features are created based on the current word being analysed. The most common feature is the word itself; this feature is not created for words from the stoplist or for tokens that contain non-alphabetic characters. The second feature contains the part-of-speech (POS) tag of the current word, calculated by the Penn POS tagger from the factorie package (McCallum et al., 2009). The Short word feature is assigned the value true for words containing 2 to 5 characters. The Shape of a word feature represents numbers with 1, capital letters with A, and small letters with a; if the same character occurs more than twice in a row in the value of this feature, the run is reduced to two characters. For the Type of a word feature we created eight types of words. Short phrase is set for words belonging to a sequence of one or two words, for instance, two-word named entities, e.g., Carl Brunto. Long phrase indicates words of sequences with at least three words. We distinguish between short and long phrases because conference names are usually long phrases and locations of conferences are usually short. Date indicates dates that are present on a web page. The other types are: Number, assigned to numbers, e.g., 12, 1st; Acronym, indicating words of the following shapes: AAaa, AaAA, AAa, AA, AaaAaa, AaaAA, AA1AA, AAaAA; Punctuation mark; and Special char, representing non-alphanumeric characters that are not punctuation marks. The remaining words are marked with the standard word type and represent words that are probably not interesting for the information we want to extract.
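A minimal sketch of the shape feature under the rules stated above (a direct reading of the description, not the authors' code):

```python
import re

def word_shape(token):
    """Map digits to '1', capitals to 'A', lowercase to 'a', and
    collapse runs of more than two identical shape characters to two,
    e.g. 'WikiCFP' -> 'AaaaAAA' -> 'AaaAA'."""
    shape = "".join(
        "1" if c.isdigit() else "A" if c.isupper() else "a" if c.islower() else c
        for c in token
    )
    return re.sub(r"(.)\1{2,}", r"\1\1", shape)

print(word_shape("LREC"))      # 'AA'
print(word_shape("Warsaw"))    # 'Aaa'
print(word_shape("ICML2015"))  # 'AA11'
```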
Offset Features
The Predecessor feature represents features calculated for the word that precedes the current word; we take into account only the type of a word feature, for one predecessor. The Successor feature is constructed in an analogous way. Important dates of conferences represented as lists or tables are easy for humans to understand but hard for machine learning algorithms to process: a date can appear to the left, right, below, or above its description. We created the date surrounding words feature to help machine learning algorithms with important date extraction. It describes a date by up to six words before the date; if the date is followed by a colon, it instead contains up to six words after the date. The words from the date surrounding words feature are used to calculate features for the current word. We create these features only for dates, because we do not want to increase the number of features too much.
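A sketch of this feature under the stated rules, assuming tokens is a page's token list and date_idx marks a date token (names are illustrative):

```python
def date_surrounding_words(tokens, date_idx, window=6):
    """Collect up to six words before the date; if the date is
    followed by a colon, collect up to six words after it instead."""
    if date_idx + 1 < len(tokens) and tokens[date_idx + 1] == ":":
        return tokens[date_idx + 2 : date_idx + 2 + window]
    return tokens[max(0, date_idx - window) : date_idx]
```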
Layout Features
The Block feature indicates which block a word belongs to. We assign a separate value for each of the following blocks: head title, title, subtitle, paragraph, list, and table, as the distribution over blocks differs between the entities of interest. The paragraph number of a word is represented by the Paragraph number feature. We consider only the first six paragraphs, because, based on the corpus, more than half of the interesting entities are present in these paragraphs. This feature is important for detecting conference names and abbreviations, dates, and locations of conferences, as these entities often occur at the beginning of a web page, according to our corpus. The important dates usually lie in further parts of a web page.
One of the subpages of a main conference homepage may contain entities of interest; therefore, subpages are added to the training data. We restrict subpages to only those accessible through links with the following names: index, home, call for papers, registration, important dates. Moreover, each word from a subpage is marked by a subpage feature containing the anchor text, e.g., SUB=index.
Words modified by the following HTML tags: STRONG, B (bold), U (underlined), and FONT (different fonts) are marked with the Emphasised feature. This feature is meant for dates of a conference, as they are more often underlined; abbreviations and names of conferences do not show this regularity. Links (the A HTML tag) are represented by the Hyperlink feature. Surprisingly, this feature indicates links to other conferences rather than information important for our task; statistics calculated on our corpus confirm this.
Dictionary Features
Within the location features, a LOC=true feature is created for a location found on a web page, COUNTRY=true for a country, and CITY=true for a city. To calculate these features, the gazetteer from the ANNIE module of GATE (Kenter and Maynard, 2005) is used, and location names from the corpus are added. These features are helpful for conference location extraction. Words that have not been found in the dictionary are marked with the Out of dictionary feature. Our dictionary of English words contains 112,505 words. This feature is designed for abbreviation extraction, because this entity type has the highest fraction of words not found in the dictionary; it also suggests the location entity, which has the second-highest such fraction. We created word dictionaries for the place, date, name, and abbreviation of a conference. They contain the words that occur most often in sentences containing an important entity of a given type. The Promising surrounding words feature marks words from sentences that contain at least one word from the dictionaries. As the dictionaries are not mutually exclusive, this feature indicates that a word is important for entity extraction in general rather than for a specific entity type.
Multi-token Sequences
In the description of the features above, we assumed that a single token (a word, a number, or a non-alphanumeric character) is the base object handled by a model and assigned one of the interesting entity types, including other, which means the object is of none of the interesting entity types. This leads to cases where a sequence of tokens may have different entity types assigned even though it forms a single entity of, e.g., the conference name type. For instance, the sequence International Conference on Artificial Intelligence & Applications may have the following entity types assigned: International - conference name, Conference - conference name, on - other, Artificial - conference name, and so on. Therefore, we expand the base object of a model to a sequence of tokens that groups the words forming one entity instance. While the detection of dates is an easy task, finding sequences that represent other named entities is not trivial. Hence, we prepared a heuristic algorithm, customised for finding token sequences on conference web pages, based on the following rules: each sequence consists of words that begin with a capital letter; these words may be separated by one word that starts with a small letter; sequences are found within a sentence; a sequence cannot be separated by a comma, dash, or colon. For example, the words 'International Conference on Advancements in Information Technology' are treated by this algorithm as one sequence.
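A sketch of the sequence heuristic under the four rules above (an illustrative implementation, operating on the tokens of one sentence):

```python
def find_sequences(sentence_tokens):
    """Group capitalized words into candidate entity sequences; one
    lowercase word may appear inside a sequence, and any other token
    (commas, dashes, colons, sentence end) breaks the sequence."""
    sequences, current = [], []
    for tok in sentence_tokens + ["."]:          # sentinel flushes last run
        if tok and tok[0].isupper():
            current.append(tok)
        elif tok.islower() and current and not current[-1].islower():
            current.append(tok)                  # allow one lowercase word
        else:
            if current and current[-1].islower():
                current.pop()                    # drop trailing connector
            if current:
                sequences.append(" ".join(current))
            current = []
    return sequences

print(find_sequences(
    "International Conference on Advancements in Information Technology".split()))
# ['International Conference on Advancements in Information Technology']
```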
For sequences with at least two words we need to calculate features in one of the
Experiments
In our experiments we divide the corpus into training and test sets in the proportion 70/30. For the Support Vector Machine (SVM) model (Cortes and Vapnik, 1995), cross-validation is performed on the training set in order to find the best parameters; the model is then trained on the whole training set. We use the LibSVM implementation (Chang and Lin, 2011). For multiclass classification we employ the one-versus-the-rest approach (Fan et al., 2008). For a web page, we choose only the one entity of a given type that has the highest score among those indicated by the algorithm. Only the location entity may have two instances, because usually both a country and a city are provided on a web page as the location of a conference.
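A minimal sketch of this setup, using scikit-learn's LinearSVC (a LIBLINEAR wrapper) in one-vs-rest mode in place of LibSVM; the feature dicts are toy stand-ins for the real feature groups:

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# toy training data: per-candidate feature dicts and entity labels
X_train = [{"word": "warsaw", "LOC": True},
           {"shape": "Aa11", "DATE": True},
           {"word": "registration", "block": "title"}]
y_train = ["where", "when", "other"]

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X_train), y_train)  # one-vs-rest

def best_per_type(candidates):
    """For one web page, keep only the candidate with the highest
    decision score for each entity type (the paper additionally
    allows two instances for the location entity)."""
    scores = clf.decision_function(vec.transform(candidates))
    return {label: candidates[int(np.argmax(scores[:, i]))]
            for i, label in enumerate(clf.classes_)}
```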
Importance of Features
In our first group of experiments we verify, using SVM, how important the feature groups customised for information extraction from scientific conference web pages are. We want to show how domain-specific features influence the final results. As the features within the groups are sparse, a model with only one group of features would obtain very low accuracy, and a comparison of models built with only one group of features would not be reliable. Therefore, in each iteration we use all groups of features but one, in order to estimate how relevant the left-out group is. The results of the experiments are shown in Table 2. For all entity types but Notification, we obtain the best results with all feature types included. For Notification we achieve the best results without dictionary features; however, the results with all feature types are not far behind (0.49 vs. 0.46 in terms of F1). The results show that each group of features carries information that is important for at least one interesting entity type. Thus, we could say that it is crucial to prepare features that are specific to a given domain. As the results show, the lack of some features may reduce the accuracy for some entity types to zero, for instance, the lack of offset features for important dates.
For scientific conference web pages, local features identify more general objects, such as dates and named entities, that contain desired information. Offset features describe the surroundings of a word, its context, which is necessary for important date extraction. Layout features generate important feature functions that inform on the localisation of a given word within a web page; they help when an entity is not placed in the main text of a web page. Dictionary features improve the results mostly through the location feature, which indicates potential places where a conference is held.
Models Comparison
Having verified the influence of the features, we investigate the applicability of different models with regard to the basic objects they use; namely, single tokens and sequences. In this set of experiments we use the preprocessing and all the groups of features mentioned in Section 3.. For the SVM model we start with a comparison of single tokens and sequences used as the basic objects the model works with. The results for the linear SVM classifier run on single tokens as basic objects 1 are shown in the first row of Table 3. The accuracy of the model, also a linear SVM, that uses sequences as basic objects is presented in the second row of the same table. The single-token SVM performs significantly worse than the sequence SVM for the name of a conference and for important dates. The reason is that the former assigns a label to each single token independently, while the mentioned entities consist of several tokens. We try to ease this task for the SVM by incorporating offset features; however, this is not enough to help the single-token SVM extract entities that consist of several consecutive words. By providing the SVM with already extracted potential sequences, we overcome this problem. For the sequence SVM we also observe a 6 percentage point (p.p.) decrease in F1 for abbreviation detection, where the single-token linear SVM performs best. We present only the results of the linear SVM, because the nonlinear SVM with an RBF kernel function did not obtain significantly better results; therefore, we stay with the linear one due to its lower complexity and shorter training time. Our model has a high number of features, hence there is no need to increase the dimensionality by applying a kernel function (Hsu et al., 2003).
In our experiments we also use linear Conditional Random Fields, CRF (Lafferty et al., 2001), with three different templates of factors. The first template connects factors with an input variable and an output variable. The second represents the relation between consecutive output variables. The third has only one argument, an output variable. The single-token CRF (Lin. CRF in Table 3) models the sequence of labels (SVM lacks this property). However, for entities that do not consist of several consecutive words we have not observed improvements; on the contrary, we notice a small decrease for place and date. Surprisingly, the single-token CRF cannot handle important date extraction, just as the single-token SVM cannot. However, the sequence CRF (Lin. CRF seq. in Table 3) discovers important dates at a level comparable to the sequence SVM. Both sequence-based models handle important dates significantly better because the sequence discovery algorithm extracts potential entities, which may have different formats, very well. Moreover, sequences also help the CRF in date extraction (the best results obtained), as they do for the SVM. Sequence discovery for names is not as good as sequence discovery for important dates; that is why we observe a 9 p.p. decrease in the extraction of that entity for the sequence-based CRF compared to the single-token one. However, sequences slightly increase the CRF results for abbreviation and place. Summarising, the linear CRF based on single tokens outperforms the other models for name, and the linear SVM, also based on single tokens, obtains the best results for abbreviation.
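A sketch of a comparable linear-chain CRF using the sklearn-crfsuite package (the original work used a different implementation; the three factor templates correspond to input-output features, transitions between consecutive labels, and per-label biases, all of which crfsuite models internally):

```python
import sklearn_crfsuite

# each sentence is a list of per-token feature dicts; labels in parallel
X_train = [[{"word": "warsaw", "LOC": True},
            {"word": "poland", "LOC": True}]]
y_train = [["where", "where"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # [['where', 'where']]
```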
Dates are extracted better with models based on sequences than with those based on single tokens. For place the winner is the SVM, on both single tokens and sequences (the SVM on sequences outperforms the SVM on single tokens by only 1 p.p.); however, all the other models are worse by no more than 8 p.p. in terms of F1. Thus, different models may be used for specific entity types in order to achieve the best cumulative results.
Related work
Previous works in this field focused mostly on information extraction from CFPs using different approaches. Extracting information from CFPs has the drawbacks mentioned in Section 2.. In (Lazarinis, 1998) a rule-based method was employed to extract the date and country from a CFP. A linear CRF was used in (Schneider, 2006) to extract seven attributes of conferences from CFPs with the use of layout features. However, in that approach only the plain text of CFPs was used, and the layout features were based on lines of text, indicating, e.g., the first token in a line or the first line in the text. We use the HTML source code of web pages, including formatting; thus, our data has a much richer layout. In (Ireson et al., 2005) a general platform for performing and assessing information extraction from workshop CFPs was described.
The platform was used in the Pascal Challenge on Evaluating Machine Learning for Information Extraction. The organizers of the challenge provided a standardised corpus of CFPs, a set of tasks, and a methodology for evaluation; the results of the challenge can be found in the aforementioned paper. Issertial and Tsuji (2011) also focused on information extraction from CFPs, including those that arrive via e-mail. They used rule-based methods to extract information about conferences from conference services, like WikiCFP, and combined them in one system in order to facilitate the process of finding conferences of interest to a user. In contrast to the aforementioned works, (Xin et al., 2008) extracted information about conferences from web pages with Constrained Hierarchical Conditional Random Fields. However, the set of homepages used in their experiments has not been published. We created an annotated corpus, performed extraction, and made both the corpus and the results public in order to encourage researchers to improve the baseline for this corpus. Many approaches have been proposed for information extraction. One of them is the rule-based method employed in (Ciravegna, 2001; Hazan and Andruszkiewicz, 2013). A Support Vector Machine (SVM) classifier has also been applied to information extraction from web pages (Andruszkiewicz and Nachyla, 2013). A variety of Conditional Random Fields (CRF) methods have been widely used (Wu and Weld, 2010; Li et al., 2011; Rocktäschel et al., 2013; Wang and Feng, 2013; Andruszkiewicz and Nachyla, 2013; Cuong et al., 2015). The constrained CRF applied in (Xin et al., 2008) allows a miner to specify constraints on the extracted entities. Furthermore, Markov Logic Networks (MLNs) have been used in information extraction from web pages (Andruszkiewicz and Nachyla, 2013).
Conclusions and Future Work
To sum up, we created a corpus of 943 annotated homepages of scientific conferences and made it publicly available. Moreover, we performed experiments with single- and multi-token SVM and CRF on this set in order to establish a baseline for this corpus.
Entity type      Tag     Avg. length   Inst.   Inst. per conference
Name             cname   6.93          9954    10.6
Abbreviation     abbre   1.00          52222   55.4
Place            where   1.07          79091   83.9
Date             when    4.78          11261   11.9
Submission       subm    3.54          3196    3.4
Notification     notf    3.56          2081    2.2
Final ver. due   finv    3.54          3851    4.1

Table 1: The characteristics of the corpus.
Features         Name   Abbrev.  Place  Date   Submission  Notification  Final ver. due
All              0.36   0.76     0.67   0.80   0.60        0.46          0.65
Without local    0.09   0.55     0.66   0.33   0.50        0.35          0.52
Without offset   0.33   0.68     0.62   0.67   0.00        0.00          0.00
Without layout   0.26   0.52     0.54   0.71   0.58        0.48          0.60
Without dict.    0.33   0.74     0.55   0.69   0.56        0.49          0.58

Table 2: The importance of feature groups for entities (F1 measure; the best results marked in bold).
Table 3: The results of extraction for entities (the best F1 results marked in bold).
It means that the model assigns a label; that is, a type of entity, to a single token.
In future work, we plan to apply other models, e.g., hierarchical CRF and MLNs, to obtain better results. In particular, we want to focus on important date extraction by experimenting with different models and gathering more instances of these entity types. We would also like to extend our corpus by adding new conferences and annotations, e.g., chairs and committee members, in order to encourage researchers to experiment on our corpus.
Bibliographical References
P. Andruszkiewicz and B. Nachyla. 2013. Automatic extraction of profiles from web pages. In Intelligent Tools for Building a Scientific Information Platform - Advanced Architectures and Solutions, pages 415-431.
M. E. Califf and R. J. Mooney. 1999. Relational learning of pattern-match rules for information extraction. In Proceedings of the Sixteenth National Conference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Artificial Intelligence, Orlando, Florida, USA, pages 328-334. AAAI Press / The MIT Press.
C. Chang and C. Lin. 2011. LIBSVM: A library for support vector machines. ACM TIST, 2(3):27.
F. Ciravegna. 2001. (LP)2, an adaptive algorithm for information extraction from web-related texts. In Proceedings of the IJCAI-2001 Workshop on Adaptive Text Extraction and Mining.
C. Cortes and V. Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273-297.
N. V. Cuong, M. K. Chandrasekaran, M. Kan, and W. S. Lee. 2015. Scholarly document information extraction using extensible features for efficient higher order semi-CRFs. In Proceedings of the 15th ACM/IEEE-CS Joint Conference on Digital Libraries, Knoxville, TN, USA, pages 61-64. ACM.
R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.
D. Freitag and A. K. McCallum. 1999. Information extraction with HMMs and shrinkage. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 31-36.
R. Hazan and P. Andruszkiewicz. 2013. Home pages identification and information extraction in researcher profiling. In Intelligent Tools for Building a Scientific Information Platform - Advanced Architectures and Solutions, pages 41-51.
C.-W. Hsu, C.-C. Chang, and C.-J. Lin. 2003. A practical guide to support vector classification.
N. Ireson, F. Ciravegna, M. E. Califf, D. Freitag, N. Kushmerick, and A. Lavelli. 2005. Evaluating machine learning for information extraction. In Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, pages 345-352. ACM.
L. Issertial and H. Tsuji. 2011. Information extraction and ontology model for a 'call for paper' manager. In iiWAS'2011 - The 13th International Conference on Information Integration and Web-based Applications and Services, Ho Chi Minh City, Vietnam, pages 539-542. ACM.
T. Kenter and D. Maynard. 2005. Using GATE as an Annotation Tool, January.
C. Kohlschütter, P. Fankhauser, and W. Nejdl. 2010. Boilerplate detection using shallow text features. In Proceedings of the Third International Conference on Web Search and Web Data Mining, WSDM 2010, New York, NY, USA, pages 441-450. ACM.
J. D. Lafferty, A. McCallum, and F. C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289. Morgan Kaufmann.
F. Lazarinis. 1998. Combining information retrieval with information extraction for efficient retrieval of calls for papers. In 20th Annual BCS-IRSG Colloquium on IR, Autrans, France, Workshops in Computing. BCS.
Y. Li, J. Jiang, H. L. Chieu, and K. M. A. Chai. 2011. Extracting relation descriptors with conditional random fields. In Fifth International Joint Conference on Natural Language Processing, IJCNLP 2011, Chiang Mai, Thailand, pages 392-400. The Association for Computer Linguistics.
A. McCallum, K. Schultz, and S. Singh. 2009. FACTORIE: probabilistic programming via imperatively defined factor graphs. In Advances in Neural Information Processing Systems 22 (NIPS 2009), Vancouver, British Columbia, Canada, pages 1249-1257. Curran Associates, Inc.
M. F. Porter. 2001. Snowball: A language for stemming algorithms.
T. Rocktäschel, T. Huber, M. Weidlich, and U. Leser. 2013. WBI-NER: The impact of domain-specific features on the performance of identifying and classifying mentions of drugs. In Proceedings of the 7th International Workshop on Semantic Evaluation, pages 356-363.
K. Schneider. 2006. Information extraction from calls for papers with conditional random fields and layout features. Artificial Intelligence Review, 25(1-2):67-77.
J. Tang, J. Zhang, L. Yao, J. Li, L. Zhang, and Z. Su. 2008. ArnetMiner: extraction and mining of academic social networks. In KDD, pages 990-998. ACM.
G. Wang and X. Feng. 2013. Tool wear state recognition based on linear chain conditional random field model. Engineering Applications of Artificial Intelligence, 26(4):1421-1427.
F. Wu and D. S. Weld. 2010. Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), Uppsala, Sweden, pages 118-127. The Association for Computer Linguistics.
X. Xin, J. Li, J. Tang, and Q. Luo. 2008. Academic conference homepage understanding using constrained hierarchical conditional random fields. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM 2008, Napa Valley, California, USA, pages 1301-1310. ACM. |
34,700,045 | Multivariate Linear Regression of Symptoms-related Tweets for Infectious Gastroenteritis Scale Estimation | To date, various Twitter-based event detection systems have been proposed. Most of their targets, however, share common characteristics: they are seasonal or global events such as earthquakes and flu pandemics. In contrast, this study targets unseasonal and local disease events. Our system investigates the frequencies of disease-related words such as "nausea," "chill," and "diarrhea" and estimates the number of patients using regression on these word frequencies. Experiments conducted over 47 Japanese areas from January 2017 to April 2017 revealed that the detection of small and unseasonal events is extremely difficult (r = 0.13). However, we found that the event scale and the detection performance show high correlation in specific cases (in phases of patient increase or decrease). The results also suggest that when 150 or more patients appear in a high-population area, we can expect our social sensors to detect the outbreak. Based on these results, we can infer that social sensors can reliably detect unseasonal and local disease events under certain conditions, just as they can for seasonal or global events. | [
12248612,
2876214,
7418935
] | Multivariate Linear Regression of Symptoms-related Tweets for Infectious Gastroenteritis Scale Estimation
Ryo Takeuchi takeuchi.ryo.tj7@is.naist.jp
Hayate Iso iso.hayate.id3@is.naist.jp
Kaoru Ito kito@is.naist.jp
Shoko Wakamiya wakamiya@is.naist.jp
Eiji Aramaki aramaki@is.naist.jp
Multivariate Linear Regression of Symptoms-related Tweets for Infectious Gastroenteritis Scale Estimation
Proceedings of the International Workshop on Digital Disease Detection using Social Media 2017 (DDDSM-2017), Taipei, Taiwan, AFNLP, November 27, 2017.
To date, various Twitter-based event detection systems have been proposed. Most of their targets, however, share common characteristics: they are seasonal or global events such as earthquakes and flu pandemics. In contrast, this study targets unseasonal and local disease events. Our system investigates the frequencies of disease-related words such as "nausea," "chill," and "diarrhea" and estimates the number of patients using regression on these word frequencies. Experiments conducted over 47 Japanese areas from January 2017 to April 2017 revealed that the detection of small and unseasonal events is extremely difficult (r = 0.13). However, we found that the event scale and the detection performance show high correlation in specific cases (in phases of patient increase or decrease). The results also suggest that when 150 or more patients appear in a high-population area, we can expect our social sensors to detect the outbreak. Based on these results, we can infer that social sensors can reliably detect unseasonal and local disease events under certain conditions, just as they can for seasonal or global events.
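As a rough illustration of the regression step described in the abstract, the sketch below fits a multivariate linear model from symptom-word tweet counts to patient counts using scikit-learn; all numbers are toy values, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# toy daily tweet counts for ["nausea", "chill", "diarrhea"] in one area
X = np.array([[12, 3, 25], [30, 8, 60], [7, 2, 14], [45, 12, 90]])
y = np.array([40, 110, 25, 160])   # reported patient counts (toy values)

model = LinearRegression().fit(X, y)
print(model.predict([[20, 5, 40]]))  # estimated patients for new counts
```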
Introduction
Nowadays, the concept of social sensors (Sakaki et al., 2010) has been shown to have great potential for various practical applications. In particular, disease detection is a core target of social-sensor-based studies. To date, detection has been demonstrated for influenza (Aramaki et al., 2011; Paul et al., 2014; Lampos et al., 2015; Iso et al., 2016; Zhang et al., 2017; Lampos et al., 2017), E. coli (Diaz-Aviles and Stewart, 2012), and H1N1-type flu (Culotta, 2013; Lampos and Cristianini, 2010).
In this field, infectious diseases have drawn much attention mainly for the following two reasons. First, from a practical perspective, infectious disease prevention is a crucially important mission for a nation because infectious diseases, especially influenza, cause many deaths and spread rapidly. Next, from the perspective of informatics, epidemics of these diseases are suitable targets because some epidemics have the following characteristics that make them easy to ascertain from social media:
1. Seasonal Event: some epidemics are seasonal diseases that basically have one big peak per year (e.g., influenza).
2. Large Scale Event: some epidemics infect thousands of people. Accordingly, the scale of information related to the disease on Twitter also becomes large (e.g., more than 100,000 flu-related Japanese tweets per day).
Compared with previous works, this study tackles a more challenging task: detection of outbreaks of infectious gastroenteritis (in the rest of the paper, we simply call it gastroenteritis). Outbreaks of gastroenteritis are often caused by pathogens such as Norovirus and Campylobacter. Symptoms include various combinations of complaints such as diarrhea, vomiting, abdominal pain, fever, and dehydration, which typically last less than two weeks. These features of gastroenteritis make the task more difficult: unlike the flu, the name of a particular disease agent is rarely tweeted. The increased number of patients must instead be estimated from tweets related to several symptoms.
Although gastroenteritis is sometimes called the stomach flu, its characteristics from a social-media perspective contrast sharply with those of the flu:
1. Unseasonal Event: An outbreak of gastroenteritis is not seasonal. It can burst at any time of a year. Moreover, there can be many peaks during a single year.
2. Local Event: The scale of gastroenteritis outbreaks varies, from small events involving a couple of patients to large events involving thousands of patients.
A comparison of influenza and gastroenteritis is presented in Figure 1. These characteristics also make it difficult to apply a method intended for influenza detection to gastroenteritis detection. This study investigates the estimation performance for events smaller than previous targets. The results reveal that the event size is a core factor affecting social sensor performance. From the experimentally obtained results, small events (involving about 150 people) were detected with high accuracy (the correlation ratio between the social sensor estimation and the actual value is 0.8).
This result contributes to social sensor reliability. This paper is the first to report the overall relation between social sensor performance and its contributing factors. Although the detection of small and unseasonal events is difficult, the sensor can be applied in specified situations.
Related Work
Detection of infectious diseases is an important part of national health control. Detection tasks are classifiable into two types: (1) Seasonal infection for diseases such as influenza, and (2) Unseasonal infection such as food poisoning (infectious gastroenteritis) and bio-terror attacks.
For the earliest possible detection, most countries have infection prevention centers: The U.S. has the Centers for Disease Control and Prevention (CDC). The E.U. has its European Influenza Surveillance Scheme (EISS). Japan has its Infection Disease Surveillance Center (IDSC). For each of them, surveillance systems rely on virology and clinical data. For instance, the IDSC gathers influenza patient data from 5,000 clinics and releases summary reports. Such manual systems typically have a 1-2 week reporting lag, which is sometimes pointed out as a major flaw.
In an attempt to provide earlier infectious detection, various new approaches have been proposed to date, such as telephone triage based estimation (Espino et al., 2003) and over the counter drug sales based estimation (Magruder, 2003).
The first web-based infectious disease surveillance was Google Flu Trends (GFT), which uses the Google query log dataset to predict the number of flu patients (Ginsberg et al., 2009). Although GFT has illustrated the effectiveness of web-based surveillance, the Google query log is not a public dataset.
Recent advances in Web-based infectious disease surveillance depend mainly on open datasets such as those of Twitter (Zhang et al., 2017; Lampos et al., 2017; Iso et al., 2016; Paul et al., 2014). Zhang et al. (2017) use several indicator information resources and report the prediction performance obtained for the U.S., Italy, and Spain. Lampos et al. (2017) use word embeddings (Mikolov et al., 2013) to enrich the feature selection of the flu model and thereby increase the inference performance. In Japan, the first successful system is that of Aramaki et al. (2011), which classifies whether a user is infected by the flu or not for each tweet that includes a flu-related word. Wakamiya et al. (2016) examine the popularity difference between urban and rural cities for finer-grained infectious disease surveillance.
A state-of-the-art system for use with a Japanese infectious disease model by Iso et al. (2016) uses a time lag for improving nowcasting and for extending the forecasting model. However, they merely examine the prevalence rate throughout Japan; they do not consider the scale of user popularity.
This paper presents an examination of Twitter data through various scales of events, from infections of a few people to an epidemic affecting thousands of people, to assess Twitter-based detection performance.
Method
Extracting Tweets by Patients
To detect outbreaks of gastroenteritis from tweets, we estimate the number of patients. First, the system collects Japanese tweets via the Twitter API 1. Then we select keyword sets for the following three typical patient complaints: "nausea", "chill", and "diarrhea". These keyword sets were selected in preliminary experiments using 11 major complaints (Chester et al., 2011). Using the tweet corpus collected in the previous step, we built a classifier that judges whether a given tweet is sent by a patient (positive) or not (negative). This task is binary sentence classification. We used an SVM-based classifier under the bag-of-words (BOW) representation (Cortes and Vapnik, 1995; Joachims, 1998). We split each Japanese sentence into a sequence of words using the Japanese morphological analyzer MeCab 2 (ver. 0.98) with IPADic (ver. 2.7.0) (Kudo et al., 2004). The polynomial kernel (d=2) is used as the kernel function. To build the training set, a human annotator assigned either a positive or negative label to each tweet. For the labeling process, we followed the conditions used in our previous study (Aramaki et al., 2011). Table 1 presents samples of tweets with labels.
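To make the pipeline concrete, the following is a minimal sketch of such a classifier using scikit-learn. The toy tweets and labels are purely illustrative (not the paper's data), and MeCab tokenization is assumed to have already produced space-separated words.

```python
# Sketch of the patient-tweet classifier: bag-of-words features with an
# SVM using a polynomial kernel (degree 2), as described above. Assumes
# tweets are already tokenized (e.g., by MeCab) into space-separated
# words; the toy data below is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

train_tweets = [
    "felt chilly after bath might be a cold",   # sent by a patient
    "I hate mantis more than pigeons chilly",   # not sent by a patient
]
train_labels = [1, 0]  # 1 = positive (patient), 0 = negative

vectorizer = CountVectorizer()                  # BOW representation
X_train = vectorizer.fit_transform(train_tweets)

clf = SVC(kernel="poly", degree=2)              # polynomial kernel, d = 2
clf.fit(X_train, train_labels)

new_tweet = ["I have diarrhea and feel nauseous"]
print(clf.predict(vectorizer.transform(new_tweet)))
```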
Finally, we classified tweets into areas for area-based disease surveillance. The area is resolved based on metadata attached to a tweet, as follows:
GPS Information: A tweet includes GPS data if the user allows the use of the location function. However, most users turn off this function for privacy reasons. Currently, the ratio of tweets with GPS information is only 0.46% (=35,635/7,666,201) in our dataset.
Profile Information: Several users include an address in their profile. We regard the user as being located at the profile address. The ratio of tweets with a profile location is 26.2% (=2,010,605/7,666,201). To disambiguate location names, we use the Geocoding service provided by Google Maps 3.
We removed tweets without inferred geolocation from the study.
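For illustration, a sketch of the profile-based resolution step is given below. The Geocoding endpoint and the "administrative_area_level_1" component type follow the public Google Maps API, but the wrapper function, the placeholder key, and the prefecture-extraction rule are assumptions for this sketch, not the paper's implementation.

```python
# Sketch of resolving a user's profile address to an area via the
# Google Maps Geocoding API. The key, the example address, and the
# prefecture-extraction rule are illustrative assumptions.
import requests

API_KEY = "YOUR_KEY"  # hypothetical placeholder

def resolve_area(profile_address: str):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": profile_address, "key": API_KEY},
    )
    results = resp.json().get("results", [])
    if not results:
        return None  # tweets without inferred geolocation are dropped
    # Look for the prefecture-level component in the first candidate.
    for comp in results[0]["address_components"]:
        if "administrative_area_level_1" in comp["types"]:
            return comp["long_name"]
    return None

print(resolve_area("Nagano, Japan"))  # e.g., "Nagano Prefecture"
```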
Linear Regression Analysis of Patient Numbers
Next we investigate the relation between the number of infected people and the number estimated using positive tweets. We use the number of infected people reported by the National Institute of Infectious Diseases (NIID) 4. The number of infected people in each area is reported weekly per sentinel. To remove the population bias across areas, we calculate the Incidence Rate (IR) of people in an area during a week as follows.
$$IR_{repo}(a, t) = \frac{pat_{a,t}}{pop_a} \times 10^k \quad (1)$$
In that equation, $pat_{a,t}$ is the total number of all patients reported in the specified area $a$ within the week index $t$, $pop_a$ is the area's population, and $k$ is a constant for correcting the value. In the experiment, $k$ is set to 5.
Then, we estimate the linear association between $IR_{repo}$ and the estimated one, $IR_{est}$, by multivariate linear regression:
$$IR_{est} = b^{s_1}_a x^{s_1}_a + b^{s_2}_a x^{s_2}_a + b^{s_3}_a x^{s_3}_a + b_p, \quad (2)$$
where $x^{s}_a$ represents the number of positive tweets containing the specific word $s$, and $b^{s}_a$ and $b_p$ are the coefficients to be estimated.
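A small sketch of Eqs. (1)-(2) for a single area follows; it assumes weekly patient counts and per-keyword positive-tweet counts are already aggregated, and all numbers are illustrative.

```python
# Sketch of Eq. (1) and Eq. (2): compute the reported incidence rate
# IR_repo for one area, then fit a multivariate linear regression from
# the weekly counts of positive tweets for the three symptom keywords.
import numpy as np
from sklearn.linear_model import LinearRegression

k = 5                                 # correction constant of Eq. (1)
pop_a = 2_000_000                     # area population (illustrative)
pat = np.array([120, 150, 180, 160])  # weekly patient counts
ir_repo = pat / pop_a * 10**k         # Eq. (1)

# Weekly positive-tweet counts for "nausea", "chill", "diarrhea".
X = np.array([[30, 12, 40],
              [35, 15, 48],
              [42, 20, 55],
              [38, 17, 50]], dtype=float)

reg = LinearRegression().fit(X, ir_repo)  # learns b^s_a and b_p of Eq. (2)
ir_est = reg.predict(X)                   # estimated incidence rate
print(reg.coef_, reg.intercept_, ir_est)
```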
Experiment
Setting
For this experiment, we estimated the Incidence Rate from positive tweets with the explanatory variables derived in Section 3.2. The experimental data consist of a training set and a test set, shown in Table 2. For training, we used 1,720,325 tweets over 52 weeks from March 19, 2016 through December 31, 2016 for each area (47 areas in Japan).
Results
The overall result is shown in Figure 2. Figures 3a and 3b present detailed results from two areas. Figure 3a presents a moderate example in the Nagano area, where tweet-based estimation correlates highly with the reported values. In contrast, Figure 3b, for the Tokyo area, reveals the weakness of our approach: the estimated values differ greatly from the reported values.
The difference between the two areas reflects the scale of the event. In Figure 3a, the reported values have one large peak (rising from 20 to 30 people). In contrast, Figure 3b shows only a slight increase in the reported values (from 13 to 15 over 15 weeks). These results indicate that the estimation of small events is difficult and causes numerous false positives.
Discussion
Event scale and Estimation Performance
Results reveal that Twitter-based estimation is often adversely affected by small events, yielding poor performance overall. However, in the case of large-scale events, the social sensor usually works well. For that reason, we investigated the relation between the (Sensor) Estimation Performance (EP) and the Event Scale (ES). For this work, we define these indicators as explained below.
Estimation Performance (EP): It is necessary to ascertain how accurately social sensors can estimate an event. We define this indicator as the correlation between $IR_{repo}$ and $IR_{est}$.
Event Scale (ES):
Fundamentally, a higher EP should be obtained when the epidemic scale (ES) is larger. We therefore assessed the correlation between the ES and the EP. We simply define ES based on the range of the IR in a time window as
$$ES = \max_{t \in T} IR(t) - \min_{t \in T} IR(t), \quad (3)$$
where $T$ stands for a time window in the target timeline and $IR(t)$ is a function indicating the IR at week index $t$.
For the time windows, we divided the test set into 4-week windows and calculated the IR per window. The correlation between EP and ES is shown in Figure 4 and is poor (only 0.13). This result suggests that no overall correlation exists between EP and ES.
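The two indicators can be computed along the following lines; the window size follows the paper (4 weeks), while the data and helper names are illustrative.

```python
# Sketch of the two indicators: EP as the Pearson correlation between
# IR_repo and IR_est within a window, and ES as the IR range (Eq. (3))
# over the same window.
import numpy as np

def ep(ir_repo: np.ndarray, ir_est: np.ndarray) -> float:
    return float(np.corrcoef(ir_repo, ir_est)[0, 1])

def es(ir: np.ndarray) -> float:
    return float(ir.max() - ir.min())          # Eq. (3)

WINDOW = 4
ir_repo = np.array([5.0, 6.5, 8.0, 7.0, 6.0, 4.5, 4.0, 3.5])  # toy data
ir_est  = np.array([5.2, 6.0, 8.5, 7.2, 5.5, 4.8, 4.2, 3.0])

for start in range(0, len(ir_repo), WINDOW):
    w = slice(start, start + WINDOW)
    print(ep(ir_repo[w], ir_est[w]), es(ir_repo[w]))
```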
Discussion based on Epidemic Pattern
Not only the event scale (ES) but also the event pattern may affect EP. We classified event patterns by epidemic phase, such as the beginning and the end of an epidemic:
Increasing is the phase at the beginning of the epidemic, in which the IR increases over the target time window.
Decreasing is the phase at the end of the epidemic, in which the IR decreases over the target time window.
Peak ∧ is the phase of the epidemic peak, in which the maximum of the IR is observed.
Between ∨ is the intermediate phase between two epidemic peaks. The detailed definitions are presented in Table 3; for example, a window in which the IR rises throughout (i.e., $IR_b - IR_i < 0$ and $IR_b - IR_e < 0$) is regarded as the increasing pattern. A sketch of this classification follows.
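Since the body of Table 3 is not reproduced here, the sketch below shows one plausible implementation of the four-way classification from the signs of $IR_b - IR_i$ and $IR_b - IR_e$; the exact sign-to-pattern mapping is an assumption for illustration.

```python
# Sketch of classifying a 4-week window into the four epidemic patterns
# from the signs of (IR_b - IR_i) and (IR_b - IR_e). The sign-to-pattern
# mapping here is an assumed, plausible reading of Table 3.
import numpy as np

def classify_window(ir: np.ndarray) -> str:
    ir_b, ir_e = ir[0], ir[-1]        # beginning and end of the window
    ir_i = ir[1:-1].mean()            # average of the interior weeks
    d_i, d_e = ir_b - ir_i, ir_b - ir_e
    if d_i < 0 and d_e < 0:
        return "increasing"           # IR rises through the window
    if d_i > 0 and d_e > 0:
        return "decreasing"           # IR falls through the window
    if d_i < 0 and d_e > 0:
        return "peak"                 # rises then falls (maximum inside)
    return "between"                  # falls then rises (valley inside)

print(classify_window(np.array([3.0, 4.0, 5.5, 6.0])))  # "increasing"
```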
The results are presented in Table 4, indicating the correlation between the EP and ES for each pattern. As the table shows, the performance diverges across patterns. For instance, the decreasing pattern shows relatively high correlation (r = 0.305). In contrast, the peak and between patterns show quite poor performance (near-zero or negative correlation).
Discussion based on Area Population
The number of tweets is related to the population. Therefore, we inferred that the EP is affected by the population of each area. We classified each window into four types based on population, defined as follows.
Super High population area (SHP): population of 2.5 million or more.
High population area (HP): population of 1.5 million to 2.5 million.
Low population area (LP): population of 1 million to 1.5 million.
Super Low population area (SLP): population of 1 million or less.
Table 5 shows the correlation between the EP and ES in each population area. From Table 5, weak correlation was found in the high population area (1.5 million to 2.5 million) (r = 0.214). Furthermore, the correlation between EP and ES is related to population.
Combination of Factors
As described above, we introduced three factors that affect the estimation performance (EP): (1) event scale (ES), (2) event pattern, and (3) area population. In this section, we combine these findings and investigate the correlation coefficient between the EP and the ES within each combination of event pattern (four types) and area population (four types). The 16 resulting combinations are shown in Table 6.
From Table 6, when the epidemic decreases greatly in areas with low population, the performance tends to be high. Especially, this trend is significant for decreasing pattern in low and super low population areas. In contrast, peak and between pattern show poor correlation.
From practical viewpoints, the increasing pattern is important because catching the increase or decrease of patients contributes to prevention. Figure 5 presents the relation between the EP and ES for the increasing pattern in the high population area, which shows moderate correlation (r = 0.378). In the figure, moderate performance (r > 0.8 on the X-axis) is obtained when the ES is greater than 7 on the Y-axis, which corresponds to an increase of more than 150 patients in a month. From this result, we can estimate the borderline of a reliable warning: an increase or decrease of 150 patients in the high population area.

Table 3: Patterns of Epidemic Phase. Each pattern is classified by three values: $IR_b$, $IR_i$, and $IR_e$ (columns: Pattern, $IR_b - IR_i$, $IR_b - IR_e$). $IR_b$ is the IR at the beginning of the target window. $IR_i$ is the average IR of all weeks except the beginning and the end; since the window size in the experiments is 4 weeks, $IR_i$ is the average IR of the second and third weeks. $IR_e$ is the IR at the end of the target window.
Practical Contribution and Future Direction
To date, social sensors have demonstrated their potential feasibility for various event detection tasks. However, practical applications are rarely launched. One reason is the lack of reliability of social sensors; in other words, social sensor-based information cannot yet be fully trusted.
Results of this study demonstrated that the event scale and the estimation performance of social sensors are related. We think this finding is practically important because this characteristic provides important information for the following two use cases:
1. In cases where a truly big epidemic occurs, we can trust that the system will detect signs of the epidemic.
2. In contrast, in cases where the system estimation is normal, we can at least infer that the current situation is not a crisis.
From a practical viewpoint, features that provide such safety guarantees are important. Based on these results, we are developing a surveillance service supported by the Infectious Disease Surveillance Center (IDSC). In the near future, we plan to report on the experience of running this system.
Conclusions
This report describes our attempt at detecting small and unseasonal disease events.
The method employs regression of disease-related word frequencies. Results of the experiment, based on 47 areas in Japan from January 2017 to April 2017, suggest that the detection of small events is difficult (r = 0.13). Although the overall performance is poor, the event scale (change in the number of patients) and the detection performance show correlation (the increasing phase of an epidemic in a high population area shows a correlation ratio of r = 0.38). We think this finding is practically important because it enables the realization of a practical system that is useful in the following two use cases: (1) if a truly large epidemic occurs, we can infer that the system will detect it; (2) if the system estimation is low, we can at least infer that the current situation is not severe. These characteristics are fundamentally important for protecting public safety.
In future work, we plan to apply other classification algorithms and compare the performance. Furthermore, we will examine the indicator to represent the ES more effectively.
Figure 1: Seasonal global event (a) vs. unseasonal local event (b). The X-axis shows the timeline (weekly); the Y-axis shows the incidence rate (IR), corresponding to the number of patients per area during the latest two years. The thin line with red square markers shows the incidence rates of 2016; the line with blue triangle markers shows the incidence rates of 2017. (a) Seasonal global event (influenza in Japan): influenza shows a single big peak. (b) Unseasonal local event (gastroenteritis in the same area as (a)): gastroenteritis shows numerous small peaks, making peak periods difficult to detect.
Figure 2: Result for each area. The X-axis indicates the timeline (week); the Y-axis indicates the Incidence Rate (IR). T indicates the time window (4 weeks).
Figure 3: Representative results in two areas: (a) Nagano and (b) Tokyo. The red line shows $IR_{repo}$; the blue one shows $IR_{est}$.
Figure 4: Relation between EP (X-axis) and ES (Y-axis). This revealed poor performance (r = 0.13).
Figure 5: Relation between EP (X-axis) and ES (Y-axis) for the Increasing Pattern in the High Population area (blue triangles) and the Decreasing Pattern in the Low Population area (red crosses). In these situations, performance depends on the scale.
Table 1: Samples of labeled tweets (the tweets in the table are Japanese, translated into English).

Tweet | keyword | P/N
"When I got out of the bath I felt chilly. So I am wearing long sleeves and long pants, but now it's hot ('-'). I changed clothes ('-') It might be a cold ..." | chill | P
"I feel nauseous... I thought it resulted from coccyx pain, but I wonder if I caught a cold." | nausea | P
"I have diarrhea. I am going to a public restroom." | diarrhea | P
"I really hate mantis. I hate them more than pigeons. I feel chilly when I think about it." | chill | N
"whole-body exposure: 1 Gy nausea, Death year exposure is 10 Gy 1 ms" | nausea | N
"Meanwhile, Chiba prefecture announced on January 1 that 39 people in Ichikawa City, Ibaraki prefecture, had group food poisoning complaining of symptoms such as diarrhea." | diarrhea | N
Table 2: Dataset statistics (values in brackets are the proportion of positive tweets).

keyword | training | test
nausea | 560,620 (53%) | 594,443 (50%)
chill | 378,652 (37%) | 498,748 (35%)
diarrhea | 781,053 (74%) | 493,693 (69%)
Total | 1,720,325 (34%) | 1,586,884 (51%)

For clinical research, we estimated $IR_{est}$ from the test set for 16 weeks from January 1, 2017 through April 19, 2017.
Table 4: Correlation between EP and ES in each pattern.

Pattern | correlation
Increasing | 0.098
Decreasing | 0.305
Peak ∧ | -0.024
Between ∨ | 0.058
Table 5: Correlation between EP and ES in each area population.

Population | correlation
SHP area | 0.125
HP area | 0.214
LP area | 0.185
SLP area | 0.059
Table 6: Correlation between EP and ES in the combination of event pattern and population areas. Bold font indicates significant correlation (** is p < 0.05, * is p < 0.10).

Pattern | SHP area | HP area | LP area | SLP area
Increasing | 0.255 | 0.378 | 0.01 | -0.112
Decreasing | -0.678 | 0.164 | 0.550** | 0.538*
Peak ∧ | 0.144 | 0.063 | -0.394 | 0.411
Between ∨ | 0.494 | 0.411 | -0.409 | 0.179
1 https://dev.twitter.com/overview/api
2 http://taku910.github.io/mecab/
3 https://developers.google.com/maps/documentation/geocoding/start
4 https://www.niid.go.jp/niid/en/
Acknowledgements
This work was supported by the Japan Agency for Medical Research and Development (Grant Number: 16768699) and JST ACT-I.
References
Eiji Aramaki, Sachiko Maskawa, and Mizuki Morita. 2011. Twitter catches the flu: Detecting influenza epidemics using Twitter. In Proc. of EMNLP, pp. 1568-1576.
Tammy L Stuart Chester, Marsha Taylor, Jat Sandhu, Sara Forsting, Andrea Ellis, Rob Stirling, and Eleni Galanis. 2011. Use of a web forum and an online questionnaire in the detection and investigation of an outbreak. Online Journal of Public Health Informatics 3(1).
Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning 20(3):273-297.
Aron Culotta. 2013. Lightweight methods to estimate influenza rates and alcohol sales volume from twitter messages. Lang. Resour. Eval. 47(1):217-238.
Ernesto Diaz-Aviles and Avaré Stewart. 2012. Tracking twitter for epidemic intelligence: Case study: EHEC/HUS outbreak in Germany, 2011. In Proc. of WebSci, pp. 82-85.
J. Espino, W. Hogan, and M. Wagner. 2003. Telephone triage: A timely data source for surveillance of influenza-like diseases. In Proc. of AMIA Annual Symposium, pp. 215-219.
Jeremy Ginsberg, Matthew H Mohebbi, Rajan S Patel, Lynnette Brammer, Mark S Smolinski, and Larry Brilliant. 2009. Detecting influenza epidemics using search engine query data. Nature 457(7232):1012-1014.
Hayate Iso, Shoko Wakamiya, and Eiji Aramaki. 2016. Forecasting word model: Twitter-based influenza surveillance and prediction. In Proc. of COLING, pp. 76-86.
Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In Proc. of ECML, pp. 137-142.
Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proc. of EMNLP, volume 4, pp. 230-237.
Vasileios Lampos and Nello Cristianini. 2010. Tracking the flu pandemic by monitoring the social web. In Proc. of CIP, pp. 411-416.
Vasileios Lampos, Andrew C Miller, Steve Crossan, and Christian Stefansen. 2015. Advances in nowcasting influenza-like illness rates using search query logs. Scientific Reports 5.
Vasileios Lampos, Bin Zou, and Ingemar Johansson Cox. 2017. Enhancing feature selection using word embeddings: The case of flu surveillance. In Proc. of WWW, pp. 695-704. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland.
S. Magruder. 2003. Evaluation of over-the-counter pharmaceutical sales as a possible early warning indicator of human disease. Johns Hopkins University APL Technical Digest (24).
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proc. of NIPS, pp. 3111-3119.
Michael J Paul, Mark Dredze, and David Broniatowski. 2014. Twitter improves influenza forecasting. PLoS Currents Outbreaks.
Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: Real-time event detection by social sensors. In Proc. of WWW, pp. 851-860.
Shoko Wakamiya, Yukiko Kawai, and Eiji Aramaki. 2016. After the boom no one tweets: Microblog-based influenza detection incorporating indirect information. In Proc. of EDB, pp. 17-25.
Qian Zhang, Nicola Perra, Daniela Perrotta, Michele Tizzoni, Daniela Paolotti, and Alessandro Vespignani. 2017. Forecasting seasonal influenza fusing digital indicators and a mechanistic disease model. In Proc. of WWW, pp. 311-319.
26,397,607 | Object-oriented Neural Programming (OONP) for Document Understanding | We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. Basically, OONP reads a document and parses it into a predesigned object-oriented data structure (referred to as ontology in this paper) that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural net-based Reader sequentially goes through the document, and during the process it builds and updates an intermediate ontology to summarize its partial understanding of the text it covers. OONP supports a rich family of operations (both symbolic and differentiable) for composing the ontology, and a big variety of forms (both symbolic and differentiable) for representing the state and the document. An OONP parser can be trained with supervision of different forms and strength, including supervised learning (SL), reinforcement learning (RL), and a hybrid of the two. Our experiments on both synthetic and real-world document parsing tasks have shown that OONP can learn to handle fairly complicated ontology with training data of modest sizes. * The work was done when the authors worked as interns at DeeplyCurious.ai. arXiv:1709.08853v4 [cs.LG] 8 Oct 2017 * This is not entirely accurate: since the Inline Memory can be modified during the reading process, it also records some of the state information. | [
12873739,
15412473,
5590763
] | Object-oriented Neural Programming (OONP) for Document Understanding
Zhengdong Lu
Haotian Cui
Xianggen Liu
Yukun Yan
Daqi Zheng
DeeplyCurious.ai
Department of Bio-medical Engineering, Tsinghua University
Object-oriented Neural Programming (OONP) for Document Understanding
We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. Basically, OONP reads a document and parses it into a predesigned object-oriented data structure (referred to as ontology in this paper) that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural net-based Reader sequentially goes through the document, and during the process it builds and updates an intermediate ontology to summarize its partial understanding of the text it covers. OONP supports a rich family of operations (both symbolic and differentiable) for composing the ontology, and a big variety of forms (both symbolic and differentiable) for representing the state and the document. An OONP parser can be trained with supervision of different forms and strength, including supervised learning (SL), reinforcement learning (RL), and a hybrid of the two. Our experiments on both synthetic and real-world document parsing tasks have shown that OONP can learn to handle fairly complicated ontology with training data of modest sizes. * The work was done when the authors worked as interns at DeeplyCurious.ai. arXiv:1709.08853v4 [cs.LG] 8 Oct 2017 * This is not entirely accurate: since the Inline Memory can be modified during the reading process, it also records some of the state information.
Introduction
Mapping a document into a structured "machine readable" form is a canonical and probably the most effective way for document understanding. There are quite some recent efforts on designing neural net-based learning machines for this purpose, which can be roughly categorized into two groups: 1) sequence-to-sequence models with the neural net as the black box [Dong and Lapata, 2016, Liang et al., 2017], and 2) the neural net as a component in a predesigned statistical model [Zeng et al., 2014]. We however argue that both approaches have their own serious problems and cannot be used on documents with relatively complicated structures. Towards solving this problem, we propose Object-oriented Neural Programming (OONP), a framework for semantically parsing in-domain documents. OONP is neural net-based, but it also has a sophisticated architecture and mechanism designed for taking and outputting discrete structures, hence nicely combining symbolism (for interpretability and formal reasoning) and connectionism (for flexibility and learnability). This ability, as we argue in this paper, is critical to document understanding.
OONP seeks to map a document to a graph structure with each node being an object, as illustrated in Figure 1. We borrow the name from Object-oriented Programming [Mitchell, 2003] to emphasize the central position of "objects" in our parsing model: indeed, the representation of objects in OONP allows neural and symbolic reasoning over complex structures, hence making it possible to represent much richer semantics. Similar to Object-oriented Programming, OONP has the concepts of "class" and "object", with the following analogies: 1) each class defines the types and organization of information it contains, and we can define inheritance for classes with different abstract levels as needed; 2) each object is an instance of a certain class, encapsulating a number of properties and operations; 3) objects can be connected with relations (called links) of pre-determined types. Based on objects, we can define the ontology and operations that reflect the intrinsic structure of the parsing task.
For parsing, OONP reads a document and parses it into this object-oriented data structure through a series of discrete actions along reading the document sequentially. OONP supports a rich family of operations for composing the ontology, and flexible hybrid forms for knowledge representation. An OONP parser can be trained with supervised learning (SL), reinforcement learning (RL), and a hybrid of the two. Our experiments on one synthetic dataset and two real-world datasets have shown the efficacy of OONP on document understanding tasks with a variety of characteristics. In addition to the work on semantic parsing mentioned above, OONP is also related to multiple threads of work in natural language processing and machine learning. It is inspired by [Daumé III et al., 2009] on modeling parsing as a decision process, and also by state-tracking models in dialogue systems [Henderson et al., 2014] for the mixture of symbolic and probabilistic representations of dialogue state. OONP is also related to [Johnson, 2017] for modeling the transition of symbolic state and [Henaff et al., 2016] for having explicit (although not thorough) modeling of entities. OONP is also obviously related to the recent work on neural-symbolism [Mou et al., 2017, Liang et al., 2017].
Overview of OONP
An OONP parser (as illustrated through the diagram in Figure 2) consists of a Reader equipped with read/write heads, Inline Memory that represents the document, and Carry-on Memory that summarizes the current understanding of the document at each time step. For each document to parse, OONP first preprocesses it and puts it into the Inline Memory, and then Reader controls the read-heads to sequentially go through the Inline Memory (possibly multiple times; see Section 6.3 for an example) and at the same time update the Carry-on Memory. The major components of OONP are described in the following:
• Memory: we have two types of memory, Carry-on Memory and Inline Memory. Carry-on Memory is designed to save the state* in the decision process and summarize the current understanding of the document based on the text that has been "read". Carry-on Memory has three compartments:
-Object Memory: denoted as $M_{obj}$, the object-based ontology constructed during the parsing process; see Section 2.1 for details;
-Matrix Memory: denoted as $M_{mat}$, a matrix-type memory with fixed size, for differentiable read/write by the controlling neural net [Graves et al., 2014]. In the simplest case, it could be just a vector as the hidden state of a conventional Recurrent Neural Network (RNN);
-Action History: denoted as $M_{act}$, saving the entire history of actions made during the parsing process.
Intuitively, $M_{obj}$ stores the extracted knowledge with defined structure and strong evidence, while $M_{mat}$ keeps the knowledge that is fuzzy, uncertain or incomplete, waiting for future information to confirm, complete and clarify. Inline Memory, denoted $M_{inl}$, is designed to save location-specific information about the document. In a sense, the information in Inline Memory is low level and unstructured, waiting for Reader to fuse and integrate it into a more structured representation.
• Reader: Reader is the control center of OONP, coordinating and managing all the operations of OONP. More specifically, it takes the input of different forms (reading), processes it (thinking), and updates the memory (writing). As shown in Figure 3, Reader contains Neural Net Controller (NNC) and multiple symbolic processors, and Neural Net Controller also has Policy-net as its sub-component. Similar to the controller in Neural Turing Machine [Graves et al., 2014], Neural Net Controller is equipped with multiple read-heads and write-heads for differentiable read/write over Matrix Memory and (the distributed part of) Inline Memory, with possibly a variety of addressing strategies [Graves et al., 2014]. Policy-net however issues discrete outputs (i.e., actions), which gradually build and update the Object Memory in time (see Section 2.1 for more details). The actions could also update the symbolic part of Inline Memory if needed. The symbolic processors are designed to handle information in symbolic form from Object Memory, Inline Memory, Action History, and Policy-net, while that from Inline Memory and Action History is eventually generated by Policy-net.
Figure 3: The overall diagram of OONP
We can show how the major components of OONP collaborate to make it work through the following sketchy example. While reading the text, OONP has reached the underlined word "BMW" in Inline Memory. At this moment, OONP has two objects (I01 and I02) for Audi-06 and BMW respectively in Object Memory. Reader determines that the information it is currently holding is about I02 (after comparing it with both objects) and updates its Status property to sold, along with other updates on both Matrix Memory and Action History.
OONP in a nutshell: The key properties of OONP can be summarized as follows 1. OONP models parsing as a decision process: as the "reading and comprehension" agent goes through the text it gradually forms the ontology as the representation of the text through its action;
2. OONP uses a symbolic memory with graph structure as part of the state of the parsing process. This memory will be created and updated through the sequential actions of the decision process, and will be used as the semantic representation of the text at the end;
3. OONP can blend supervised learning (SL) and reinforcement learning (RL) in tuning its parameters to suit the supervision signal in different forms and strength;
4. OONP allows different ways to add symbolic knowledge into the raw representation of the text (Inline Memory) and its policy net in forming the final structured representation of the text.
RoadMap of the paper: The rest of the paper is organized as follows. We will elaborate on the components of OONP in Section 2 and actions of OONP in Section 3. After that we will give a detailed analysis on the neural-symbolism in OONP in Section 4. Then in Section 5 we will discuss the learning for OONP , which is followed by experiments on three datasets in Section 6. Finally we conclude the paper in Section 7.
OONP: Components
In this section we will discuss the major components in OONP, namely Object Memory , Inline Memory and Reader. We omit the discussion on Matrix Memory and Action History since they are straightforward given the description in Section 1.1.
Object Memory
Object Memory stores an object-oriented representation of the document, as illustrated in Figure 4. Each object is an instance of a particular class†, which specifies the internal structure of the object, including internal properties, operations, and how this object can be connected with others. The internal properties can be of different types, for example string or category, which usually correspond to different actions in composing them: the string-type property is usually "copied" from the original text in Inline Memory, while the category properties usually need to be rendered by a classifier. The links are by nature bi-directional, meaning that a link can be added from both ends (e.g., in the experiment in Section 6.1), but for modeling convenience, we might choose to let it be one-directional (e.g., in the experiments in Sections 6.2 and 6.3). In Figure 4, there are six "linked" objects of three classes (namely, Person, Event, and Item). Taking Item-object I02 for example, it has five internal properties (Type, Model, Color, Value, Status), and is linked with two Event-objects through a stolen and a disposed link respectively. In addition to the symbolic part, each object also has its own distributed representation (named object-embedding), which serves as its interface with other distributed representations in Reader (e.g., those from the Matrix Memory or the distributed part of Inline Memory). For simplicity of description, we will refer to the symbolic part of this hybrid representation of objects as ontology, with some slight abuse of the word. Object-embedding serves as a dual representation to the symbolic part of an object, recording all the relevant information associated with it but not represented in the ontology, e.g., the context of text when the object is created. The representations in Object Memory, including the ontology and object embeddings, will be updated in time by the operations defined for the corresponding classes. Usually, the actions are the driving force in those operations, which not only initiate and grow the ontology, but also coordinate other differentiable operations. For example, the object-embedding associated with a certain object changes with any non-trivial action concerning this object, e.g., any update on the internal properties or the external links, or even a mention (corresponding to an Assign action described in Section 3) without any update.
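A minimal sketch of this hybrid object representation is given below; the class, property, and link names follow the running example, while the dict/list store and the embedding size are illustrative choices rather than the paper's implementation.

```python
# Sketch of an Object Memory entry: symbolic internal properties, typed
# links to other objects, and a distributed object-embedding.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class OONPObject:
    obj_id: str                     # e.g., "I02"
    cls: str                        # e.g., "Item", "Person", "Event"
    properties: dict = field(default_factory=dict)  # string/category props
    links: list = field(default_factory=list)       # (link_type, target_id)
    embedding: np.ndarray = field(
        default_factory=lambda: np.zeros(64))       # object-embedding

item = OONPObject("I02", "Item",
                  properties={"Type": "car", "Model": "BMW",
                              "Status": "sold"})
item.links.append(("stolen", "E01"))  # external link to an Event-object

object_memory = {obj.obj_id: obj for obj in [item]}
print(object_memory["I02"].properties["Status"])  # "sold"
```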
The representations in Object Memory, including the ontology and object embeddings, will be updated in time by the operations defined for the corresponding classes. Usually, the actions are the driving force in those operations, which not only initiate and grow the ontology, but also coordinate other differentiable operations. For example, object-embedding associated with a certain object changes with any non-trivial action concerning this object, e.g., any update on the internal properties or the external links, or even a mention (corresponding to an Assign action described in Section 3) without any update. † In this paper, we limit ourselves to a flat structure of classes, but it is possible and even beneficial to have a hierarchy of classes. In other words, we can have classes with different levels of abstractness, and allow an object to go from abstract class to its child class during the parsing process, with more and more information is obtained. According to the way the ontology evolves with time, the parsing task can be roughly classified into two categories • Stationary: there is a final ground truth that does not change with time. So with any partial history of the text, the corresponding ontology is always part of the final one, while the missing part is due to the lack of information. See task in Section 6.2 and 6.3 for example.
• Dynamical: the truth changes with time, so the ontology corresponding to a partial history of the text may be different from that of the final state. See the task in Section 6.1 for an example.
It is important to notice that this categorization depends not only on the text but also heavily on the definition of the ontology. Taking the text in Figure 1 for example: if we define an ownership relation between a Person-object and an Item-object, the ontology becomes dynamical, since ownership of the BMW changed from Tom to John.
Inline Memory
Inline Memory stores the relatively raw representation of the document that follows the temporal structure of the text, as illustrated in Figure 2. Basically, Inline Memory is an array of memory cells, each corresponding to a pre-defined language unit (e.g., word) in the same order as they appear in the original text. Each cell can have a distributed part and a symbolic part, designed to save 1) the results of preprocessing the text with different models, and 2) certain outputs from Reader, for example from previous reading rounds. The following are a few examples of preprocessing:
• Word embedding: context-independent vectorial representation of words
• Hidden states of NNs: we can encode context into the local representation of words through gated RNNs like LSTM [Greff et al., 2015] or GRU [Cho et al., 2014], or through particular designs of convolutional neural nets (CNN) [Yu and Koltun, 2015].
• Symbolic preprocessing: this refers to a big family of methods that yield symbolic results, including various sequential labeling models and rule-based methods. As a result, we may have tags on words, extracted sub-sequences, or even relations between two pieces of text.
During the parsing process, Reader can write to Inline Memory with its discrete or continuous outputs, a process we named "notes-taking". When the output is continuous, the notes-taking process is similar to the interactive attention in machine translation [Meng et al., 2016], performed by an NTM-style write-head [Graves et al., 2014] on Neural Net Controller. When the output is discrete, the notes-taking is essentially an action issued by Policy-net. Inline Memory provides a way to represent locally encoded "low level" knowledge of the text, which will be read, evaluated and combined with the global semantic representation in Carry-on Memory by Reader. One particular advantage of this setting is that it allows us to incorporate the local decisions of some other models, including "higher order" ones like local relations across two language units, as illustrated in the left panel of Figure 5. We can also have a rather "nonlinear" representation of the document in Inline Memory. As a particular example [Yan et al., 2017], at each location we can have the representation of the current word, the representation of the rest of the sentence, and the representation of the rest of the current paragraph, which enables Reader to see information of history and future at different scales, as illustrated in the right panel of Figure 5.
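As an illustration of building Inline Memory cells with both parts, the sketch below pairs bidirectional-LSTM hidden states (the distributed part) with tags from a symbolic preprocessor; the vocabulary, tags, and dimensions are toy assumptions.

```python
# Sketch of Inline Memory construction: each word position stores a
# contextualized vector from a bidirectional LSTM plus a symbolic tag.
import torch
import torch.nn as nn

words = ["Tom", "sold", "his", "BMW"]
tags = ["PER", "O", "O", "ITEM"]           # symbolic preprocessing result
vocab = {w: i for i, w in enumerate(words)}

emb = nn.Embedding(len(vocab), 32)          # context-independent embeddings
rnn = nn.LSTM(32, 32, bidirectional=True, batch_first=True)

ids = torch.tensor([[vocab[w] for w in words]])
hidden, _ = rnn(emb(ids))                   # contextualized representations

inline_memory = [
    {"word": w, "symbolic": t, "distributed": hidden[0, i]}
    for i, (w, t) in enumerate(zip(words, tags))
]
print(inline_memory[3]["symbolic"])         # "ITEM"
```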
Reader
Reader is the control center of OONP, which manages all the (continuous and discrete) operations in the OONP parsing process. Reader has three symbolic processors (namely, Symbolic Matching, Symbolic Reasoner, Symbolic Analyzer) and a Neural Net Controller (with Policy-net as its sub-component). All the components in Reader are coupled through intensive exchange of information, as shown in Figure 6. Below is a snapshot of the information processing at time t in Reader:
• STEP-1: let the processor Symbolic Analyzer check the Action History ($M^t_{act}$) to construct some symbolic features for the trajectory of actions;
• STEP-2: access Matrix Memory ($M^t_{mat}$) to get a vectorial representation for time t, denoted as $s_t$;
• STEP-3: access Inline Memory ($M^t_{inl}$) to get the symbolic representation $x^{(s)}_t$ (through location-based addressing) and the distributed representation $x^{(d)}_t$ (through location-based addressing and/or content-based addressing);
• STEP-4: feed $x^{(d)}_t$ and the embedding of $x^{(s)}_t$ to Neural Net Controller to fuse with $s_t$;
• STEP-5: get the candidate objects (some may have been eliminated by $x^{(s)}_t$) and let them meet $x^{(d)}_t$ through the processor Symbolic Matching for matching on the symbolic aspect;
• STEP-6: get the candidate objects (some may have been eliminated by $x^{(s)}_t$) and let them meet the result of STEP-4 in Neural Net Controller;
• STEP-7: Policy-net combines the results of STEP-6 and STEP-5 to issue actions;
• STEP-8: update $M^t_{obj}$, $M^t_{mat}$ and $M^t_{inl}$ with actions on both symbolic and distributed representations;
• STEP-9: put $M^t_{obj}$ through the processor Symbolic Reasoner for some high-level reasoning and logic consistency.
Note that we consider only a single action for simplicity, while in practice it is common to have multiple actions at one time step, which requires a slightly more complicated design of the policy as well as the processing pipeline. The actions issued by Policy-net can be generally categorized as follows:
• New-Assign: determine whether to create a new object (a "New" operation) for the information at hand or assign it to a certain existing object;
• Update2what : determine the content of the updating, which could be about string, category or links.
The typical order of actions is New-Assign → Update.X → Update2what, but it is very common to have a New-Assign action followed by nothing, when, for example, an object is mentioned but no substantial information is provided.
New-Assign
With any information at hand (denoted as $S_t$) at time t, the choices of New-Assign typically include the following three categories of actions: 1) creating (New) an object of a certain type, 2) assigning $S_t$ to an existing object, and 3) doing nothing for $S_t$ and moving on. For Policy-net, the stochastic policy is to determine the following probabilities:
$$prob(c, \mathrm{new}|S_t), \quad c = 1, 2, \cdots, |C|$$
$$prob(c, k|S_t), \quad \text{for } O^{c,k}_t \in M^t_{obj}$$
$$prob(\mathrm{none}|S_t)$$
where $|C|$ stands for the number of classes and $O^{c,k}_t$ stands for the $k$-th object of class $c$ at time $t$. Determining whether to create new objects always relies on the following two signals:
1. The information at hand cannot be contained by any existing objects;
Based on those intuitions, we take a score-based approach to determine the above-mentioned probabilities. More specifically, for a given $S_t$, Reader forms a "temporary" object with its own structure (denoted $\hat{O}_t$), including symbolic and distributed sections. In addition, we also have a virtual object for the New action for each class $c$, denoted $O^{c,\mathrm{new}}_t$, which is typically a time-dependent vector formed by Reader based on information in $M^t_{mat}$. For a given $\hat{O}_t$, we can then define the following $2|C| + 1$ types of score functions:

New an object of class $c$: $\mathrm{score}^{(c)}_{\mathrm{new}}(O^{c,\mathrm{new}}_t, \hat{O}_t; \theta^{(c)}_{\mathrm{new}})$, for $c = 1, 2, \cdots, |C|$
Assign to an existing object: $\mathrm{score}^{(c)}_{\mathrm{assign}}(O^{c,k}_t, \hat{O}_t; \theta^{(c)}_{\mathrm{assign}})$, for $O^{c,k}_t \in M^t_{obj}$
Do nothing: $\mathrm{score}_{\mathrm{none}}(\hat{O}_t; \theta_{\mathrm{none}})$
to measure the level of matching between the information at hand and existed objects, as well as the likeliness for creating an object or doing nothing. This process is pictorially illustrated in Figure 7. We therefore can define the following probability for the stochastic policy
$$prob(c, \mathrm{new}|S_t) = \frac{e^{\mathrm{score}^{(c)}_{\mathrm{new}}(O^{c,\mathrm{new}}_t, \hat{O}_t; \theta^{(c)}_{\mathrm{new}})}}{Z(t)}, \quad prob(c, k|S_t) = \frac{e^{\mathrm{score}^{(c)}_{\mathrm{assign}}(O^{c,k}_t, \hat{O}_t; \theta^{(c)}_{\mathrm{assign}})}}{Z(t)}, \quad prob(\mathrm{none}|S_t) = \frac{e^{\mathrm{score}_{\mathrm{none}}(\hat{O}_t; \theta_{\mathrm{none}})}}{Z(t)}$$
where
$$Z(t) = \sum_{c' \in C} e^{\mathrm{score}^{(c')}_{\mathrm{new}}(O^{c',\mathrm{new}}_t, \hat{O}_t; \theta^{(c')}_{\mathrm{new}})} + \sum_{(c',k') \in \mathrm{idx}(M^t_{obj})} e^{\mathrm{score}^{(c')}_{\mathrm{assign}}(O^{c',k'}_t, \hat{O}_t; \theta^{(c')}_{\mathrm{assign}})} + e^{\mathrm{score}_{\mathrm{none}}(\hat{O}_t; \theta_{\mathrm{none}})}$$
is the normalizing factor. Many actions are essentially trivial on the symbolic part, for example, when Policy-net chooses none in New-Assign, or assigns the information at hand to an existing object but chooses to update nothing in Update.X; but such an action will still affect the distributed operations in Reader, which in turn affect the representation in Matrix Memory or the object-embedding in Object Memory.
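A numpy sketch of this stochastic policy follows: one "new" candidate per class, one "assign" candidate per existing object, and a "none" option, scored and normalized with a softmax as in the equations above. The bilinear scoring form and all sizes are illustrative assumptions, not the paper's exact parameterization.

```python
# Sketch of the New-Assign policy: score the temporary object \hat{O}_t
# against every candidate action and softmax-normalize (Z(t) above).
import numpy as np

rng = np.random.default_rng(0)
D = 8
o_hat = rng.normal(size=D)                 # temporary object \hat{O}_t
candidates = {                             # candidate embeddings
    ("Person", "new"): rng.normal(size=D),
    ("Item", "new"): rng.normal(size=D),
    ("Item", "I02"): rng.normal(size=D),   # an existing object
}
theta = {key: rng.normal(size=(D, D)) for key in candidates}
theta_none = rng.normal(size=D)

scores = {key: float(o_hat @ theta[key] @ emb)   # bilinear matching score
          for key, emb in candidates.items()}
scores[("none", "-")] = float(theta_none @ o_hat)

vals = np.array(list(scores.values()))
probs = np.exp(vals - vals.max())          # numerically stable softmax
probs /= probs.sum()                       # division by Z(t)
for key, p in zip(scores, probs):
    print(key, round(float(p), 3))
```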
Updating objects: Update.X and Update2what
In the Update.X step, Policy-net needs to choose the property or external link (or none) to update for the object selected in the New-Assign step. If Update.X chooses to update an external link, Policy-net needs to further determine which object it links to. After that, Update2what updates the chosen property or link. In tasks with static ontology, most internal properties and links will be "locked" after they are updated for the first time, with some exceptions for a few semi-structured properties (e.g., the Description property in the experiment in Section 6.2). For dynamical ontology, on the contrary, many important properties and links are always subject to change. A link can often be determined from both ends; e.g., the link that states the fact that "Tina (a Person-object) carries an apple (an Item-object)" can be specified either from Tina (through adding the link "carry" to apple) or from apple (through adding the link "iscarriedby" to Tina), as in the experiment in Section 6.1. In practice, it is often more convenient to make it asymmetrical to reduce the size of the action space.
In practice, for a particular type of ontology, both Update.X and Update2what can often be greatly simplified: for example,
• when the selected object (in New-Assign step) has only one property "unlocked", the Update.X step will be trivial;
• in S t , there is often information from Inline Memory that tells us the basic type of the current information, which can often automatically decide the property or link.
An example
In Figure 8, we give an example of an entire episode of OONP parsing on the short text given in the example in Figure 1. Note that, different from our later treatment of actions, we let some selection actions (e.g., Assign) be absorbed into the updating actions to simplify the illustration.
OONP: Neural-Symbolism
OONP offers a way to parse a document that imitates the cognitive process of a human reading and comprehending a document: OONP maintains a partial understanding of the document as a mixture of symbolic representation (for clearly inferred structural knowledge) and distributed representation (for knowledge without complete structure or with great uncertainty). As shown in Figure 2, Reader takes and issues both symbolic signals and continuous signals, and they are entangled through Neural Net Controller. OONP has plenty of space for symbolic processing: in the implementation in Figure 6, it is carried out by the three symbolic processors. For each of the symbolic processors, the input symbolic representation could be rendered partially by neural models, therefore providing an intriguing way to entangle neural and symbolic components. Here are three examples we implemented for two different tasks:
1. Symbolic analysis in Action History: There are many symbolic summaries of history we can extract or construct from the sequence of actions, e.g., "The system just New-ed an object with Person-class five words ago" or "The system just put a paragraph starting with '(2)' into event-3". In the implementation of Reader shown in Figure 6, this analysis is carried out by the component called Symbolic Analyzer. Based on those more structured representations of history, Reader might be able to make an informed guess like "If the coming paragraph starts with '(3)', we might want to put it into event-2" based on symbolic reasoning. This kind of guess can be directly translated into features to assist Reader's decisions, resembling what we do with high-order features in CRF [Lafferty et al., 2001], but the sequential decision process makes it possible to construct a much richer class of features from symbolic reasoning, including those with recursive structure. One example of this can be found in [Yan et al., 2017], as a special case of OONP for event identification.
2. Symbolic reasoning on Object Memory: we can use an extra Symbolic Reasoner to take care of high-order logic reasoning after each update of the Object Memory caused by the actions. This can be illustrated through the following example (a minimal sketch follows this list). Tina (a Person-object) carries an apple (an Item-object), and Tina moves from the kitchen (a Location-object) to the garden (a Location-object) at time t. Supposing we have both the Tina-carry-apple and Tina-islocatedat-kitchen relations kept in Object Memory at time t, and OONP updates Tina-islocatedat-kitchen to Tina-islocatedat-garden at time t+1, the Symbolic Reasoner can help to update the relation apple-islocatedat-kitchen to apple-islocatedat-garden. This is feasible since the Object Memory is supposed to be logically consistent. This external logic-based update is often necessary since it is hard to let the Neural Net Controller see the entire Object Memory, due to the difficulty of finding a distributed representation of the dynamic structure there. Please see Section 6.1 for experiments.
3. Symbolic prior in New-Assign: When Reader determines a New-Assign action, it needs to match the information at hand ($S_t$) against existing objects. There is a rich set of symbolic priors that can be added to this matching process in the Symbolic Matching component. For example, if $S_t$ contains a string labeled as an entity name (in preprocessing), we can use some simple rules (part of the Symbolic Matching component) to determine whether it is compatible with an object with the internal property Name.
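The Symbolic Reasoner example from item 2 can be sketched as a simple propagation rule over the link store; the flat dictionary representation of Object Memory links is illustrative.

```python
# Sketch of the logic rule: carry(p, x) and islocatedat(p, l) implies
# islocatedat(x, l). When a Person-object moves, propagate the new
# location to every Item-object it carries.
links = {
    "Tina":  {"carry": ["apple"], "islocatedat": "kitchen"},
    "apple": {"islocatedat": "kitchen"},
}

def move_person(person: str, new_location: str) -> None:
    links[person]["islocatedat"] = new_location
    for item in links[person].get("carry", []):
        links[item]["islocatedat"] = new_location  # keep memory consistent

move_person("Tina", "garden")
print(links["apple"]["islocatedat"])  # "garden"
```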
Learning
The parameters of OONP models (denoted $\Theta$) include those for all operations and those for composing the distributed sections in Inline Memory. They can be trained with different learning paradigms: OONP takes both supervised learning (SL) and reinforcement learning (RL), while allowing different ways to mix the two. Basically, with supervised learning, the oracle gives the ground truth about the "right action" at each time step during the entire decision process, with which the parameters can be tuned to maximize the likelihood of the truth. In a sense, SL represents rather strong supervision, which is related to imitation learning [Stefan, 1999] and often requires the labeler (expert) to give not only the final truth but also when and where a decision is made. For supervised learning, the objective function is given as
$$J_{SL}(\Theta) = -\frac{1}{N}\sum_{i}^{N}\sum_{t=1}^{T_i} \log(\pi^{(i)}_t[a_t]) \quad (1)$$
where $N$ stands for the number of instances, $T_i$ stands for the number of steps in the decision process for the $i$-th instance, $\pi^{(i)}_t[\cdot]$ stands for the probabilities of the feasible actions at $t$ from the stochastic policy, and $a_t$ stands for the ground-truth action at step $t$.
With reinforcement learning, the supervision is given as rewards during the decision process, for which an extreme case is to give the final reward at the end of the decision process by comparing the generated ontology and the ground truth, e.g.,
$r_t^{(i)} = \begin{cases} 0, & \text{if } t \neq T_i \\ \text{match}\big(M_{obj}^{T_i},\, G_i\big), & \text{if } t = T_i \end{cases}$    (2)
where match(M_obj^{T_i}, G_i) measures the consistency between the ontology in M_obj^{T_i} and the ground truth G_i. We can use any policy search algorithm to maximize the expected total reward. With the commonly used REINFORCE [Williams, 1992] for training, the gradient is given by
$\nabla_\Theta J_{RL}(\Theta) = \mathbb{E}_{\pi_\Theta}\big[\nabla_\Theta \log \pi_\Theta(a_t^i \mid s_t^i)\, r_{t:T}^{(i)}\big] \approx \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T_i} \nabla_\Theta \log \pi_\Theta(a_t^i \mid s_t^i)\, r_{t:T_i}^{(i)}.$    (3)
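A minimal sketch of a surrogate loss whose gradient matches Eq. (3), assuming PyTorch; the per-episode interface (tensors of step log-probabilities and rewards-to-go, with the final match reward of Eq. (2) broadcast back to every step) is our assumption, not the paper's.

```python
import torch

def reinforce_loss(log_probs, rewards_to_go):
    """REINFORCE surrogate: minimizing it ascends the expected total reward.

    log_probs: list of 1-D tensors holding log pi(a_t | s_t) per episode.
    rewards_to_go: matching 1-D tensors holding r_{t:T} for each step.
    """
    n = len(log_probs)
    total = torch.zeros(())
    for lp, r in zip(log_probs, rewards_to_go):
        total = total + (lp * r.detach()).sum()  # no gradient through rewards
    return -total / n
```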
When OONP is applied to real-world tasks, there are often quite natural SL and RL signals. More specifically, for a "static ontology" one can often infer some of the right actions at certain time steps by observing the final ontology, based on some basic assumptions, e.g.,
• the system should New an object the first time it is mentioned,
• the system should put an extracted string (say, that for Name ) into the right property of right object at the end of the string.
For those that cannot be fully reverse-engineered, say the categorical properties of an object (e.g., Type for event objects), we have to resort to RL to determine the time of decision, while we also need SL to train Policy-net on the content of the decision. Fortunately, it is quite straightforward to combine the two learning paradigms in optimization. More specifically, we maximize the combined objective
$J(\Theta) = J_{SL}(\Theta) + \lambda J_{RL}(\Theta),$    (4)
where J_SL and J_RL are taken over the parameters within their own supervision modes and λ coordinates the weight of the two learning modes on the parameters they share. Equation 4 actually indicates a deep coupling of supervised learning and reinforcement learning, since for any episode the samples of actions related to RL might affect the inputs to the models under supervised learning. For a dynamical ontology (see Section 6.1 for an example), it is impossible to derive most of the decisions from the final ontology since they may change over time. For those, we have to rely mostly on step-level supervision to train the actions (supervised mode), or count on the model to learn the dynamics of the ontology evolution by fitting the final ground truth. Both scenarios are discussed in Section 6.1 on a synthetic task.
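A minimal sketch of one optimization step on the combined objective of Eq. (4), assuming PyTorch and the reinforce_loss sketch above; how actions are split into gold-derivable (SL) and RL-only parts is task-specific, and the λ value here is arbitrary.

```python
import torch

def training_step(optimizer, sl_log_probs, rl_log_probs, rewards_to_go, lam=0.5):
    """One gradient step on J_SL + lambda * J_RL (both written as losses).

    sl_log_probs: list of scalar tensors, log-probs of gold-derivable actions.
    rl_log_probs / rewards_to_go: per-episode inputs for reinforce_loss above.
    """
    j_sl = -torch.stack(sl_log_probs).mean()            # supervised NLL part
    j_rl = reinforce_loss(rl_log_probs, rewards_to_go)  # policy-gradient part
    loss = j_sl + lam * j_rl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```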
Experiments
We applied OONP to three document parsing tasks, to verify its efficacy in parsing documents with different characteristics and to investigate different components of OONP.
Task-I: bAbI Task
Data and task
We implemented OONP on an enriched version of the bAbI tasks [Johnson, 2017] with intermediate representations for histories of arbitrary length. In this experiment, we considered only the original bAbI task-2 [Weston et al., 2015], with an instance shown in the left panel of Figure 9. The ontology has three types of objects: Person-object, Item-object, and Location-object, and three types of links:
1. is-located-at_A: between a Person-object and a Location-object;
2. is-located-at_B: between an Item-object and a Location-object;
3. carry: between a Person-object and an Item-object;

each of which can be rendered by descriptions of different forms. All three types of objects have Name as the only internal property.

Figure 9: One instance of bAbI (6-sentence episode) and the ontology of two snapshots.
The task for OONP is to read an episode of the story and recover the trajectory of the evolving ontology. We chose this synthetic dataset because it has a dynamical ontology that evolves with time, with ground truth given for each snapshot, as illustrated in Figure 9. Compared with the real-world tasks we will present later, bAbI has almost trivial internal properties but relatively rich opportunities for links, considering that any two objects of different types could potentially have a link.
Implementation details
For preprocessing, we have a trivial NER to find the names of people, items and locations (saved in the symbolic part of Inline Memory) and a word-level bi-directional GRU for the distributed representations of Inline Memory. In the parsing process, Reader goes through the inline text word by word in the temporal order of the original text, makes a New-Assign action at every word, and leaves the Update.X and Update2what actions to the time steps when the read-head on Inline Memory reaches a punctuation mark (see more details of the actions in Table 1). For this simple task, we use an almost fully neural Reader (with MLPs for Policy-net) and a vector for Matrix Memory, with, however, a Symbolic Reasoner for some logic reasoning after each update of the links, as illustrated through the following example. Suppose at time t the ontology in M_obj^t contains the following three facts (among others):
• fact-1: John (a Person-object) is in the kitchen (a Location-object);
• fact-2: John carries apple (an Item-object);
• fact-3: John drops apple;
where fact-3 has just been established by Policy-net at t. The Symbolic Reasoner will then add a new is-located-at_B link between apple and kitchen based on domain logic ‡.
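A minimal sketch of this drop rule under the same toy triple-store view of Object Memory used in the earlier propagation sketch; the link names follow the bAbI ontology, while the function interface is ours.

```python
def on_drop(memory, person, item):
    """When a carry link is deleted, the item inherits the carrier's location
    (see the domain logic in the footnote: a held item has no location)."""
    memory.discard((person, "carry", item))
    locs = [loc for (p, r, loc) in memory
            if p == person and r == "is-located-at_A"]
    if locs:
        memory.add((item, "is-located-at_B", locs[0]))
    return memory

# fact-1 and fact-2 before fact-3 (the drop) is established:
mem = {("John", "is-located-at_A", "kitchen"), ("John", "carry", "apple")}
print(on_drop(mem, "John", "apple"))
# -> contains ("apple", "is-located-at_B", "kitchen")
```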
Action                             Description
NewObject(c)                       New an object of class-c.
AssignObject(c, k)                 Assign the current information to the existing object (c, k).
Update(c, k).AddLink(c', k', ℓ)    Add a link of type-ℓ from object-(c, k) to object-(c', k').
Update(c, k).DelLink(c', k', ℓ)    Delete the link of type-ℓ from object-(c, k) to object-(c', k').

Table 1: Actions for bAbI.
Results and Analysis
For training, we use 1,000 episodes with lengths evenly distributed from one to six. We use just REINFORCE with only the final reward, defined as the overlap between the generated ontology and the ground truth, while step-by-step supervision on actions yields almost perfect results (results omitted). For evaluation, we use the following two metrics:
• the Rand index [Rand, 1971] between the generated set of objects and the ground truth, which counts both duplicate objects and missing ones, averaged over all snapshots of all test instances (a minimal sketch of this metric follows this list);
• the F1 score [Rijsbergen, 1979] between the generated links and the ground truth, averaged over all snapshots of all test instances, since the links are typically sparse compared with all possible pairwise relations between objects.
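A minimal sketch of the pairwise Rand index used for the first metric; mapping generated and gold objects onto two labelings of the same underlying mentions is our assumption about how the comparison is set up.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of element pairs on which two clusterings agree [Rand, 1971]."""
    assert len(labels_a) == len(labels_b)
    pairs = list(combinations(range(len(labels_a)), 2))
    if not pairs:
        return 1.0
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)

# Four mentions; the generated ontology splits the last gold object in two.
print(rand_index([0, 0, 1, 1], [0, 0, 1, 2]))  # 0.833...
```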
The results are summarized in Table 2. OONP can learn fairly well to recover the evolving ontology with such a small training set and weak supervision (RL with only the final reward), which clearly shows that credit assignment to earlier snapshots does not cause much difficulty in the learning of OONP, even with a generic policy search algorithm. It is not so surprising to observe that the Symbolic Reasoner helps to improve the results on discovering the links, while it does not improve the performance on identifying the objects, although it is included in the learning process. It is quite interesting to observe that OONP achieves rather high accuracy on discovering the links while it performs relatively poorly on specifying the objects. This is probably due to the fact that the reward does not penalize errors on the objects.

6.2 Task-II: Parsing Police Reports
Data & task
We implemented OONP for parsing Chinese police reports (brief descriptions of criminal cases written by police officers), as illustrated in the left panel of Figure 10. We consider a corpus of 5,500 cases with a variety of crime categories, including theft, robbery, drug dealing and others. The ontology we designed for this task mainly consists of a number of Person-objects and Item-objects connected to an Event-object through several types of relations, as illustrated in the right panel of Figure 10. A Person-object has three internal properties: Name (string), Gender (categorical) and Age (number), and two types of external links (suspect and victim) to an Event-object. An Item-object has three internal properties: Name (string), Quantity (string) and Value (string), and six types of external links (stolen, drug, robbed, swindled, damaged, and other) to an Event-object. Compared with bAbI in Section 6.1, the police-report ontology has fewer pairwise links but much richer internal properties for all three types of objects. Although the language in this dataset is reasonably formal, the corpus covers a wide variety of topics and language styles, and has a high proportion of typos. The average length of a document is 95 Chinese characters, with a digit string (say, an ID number) counted as one character.

Figure 10: An example of police report and its ontology.
Implementation details
The OONP model is designed to generate the ontology illustrated in Figure 10 through a decision process with the actions in Table 3. As pre-processing, we performed regular NER with a third-party algorithm (therefore not part of the learning) and simple rule-based extraction to yield the symbolic part of Inline Memory, as shown in Figure 11. For the distributed part of Inline Memory, we used dilated CNNs with different choices of depth and kernel size [Yu and Koltun, 2015], all of which are jointly learned during training. In making the New-Assign decision, Reader considers the matching between two structured objects, as well as hints from the symbolic part of Inline Memory as features, as pictorially illustrated in Figure 7. In updating objects with string-type properties (e.g., Name for a Person-object), we use a Copy-Paste strategy for the extracted string (whose NER tag already specifies which property of an object it goes to) as Reader sees it. For undetermined category properties of existing objects, Policy-net determines the object to update (a New-Assign action without the New option), the property to update (an Update.X action), and the updating operation (an Update2what action) at milestones of the decision process, e.g., when reaching a punctuation mark. For this task, since all the relations are between the single default Event-object and the other objects, the relations can in practice be reduced to category-type properties of the corresponding objects. For category-type properties, we cannot recover the New-Assign and Update.X actions from the label (the final ontology), so we resort to RL for learning to determine that part, which is mixed with supervised learning for Update2what and the other actions for string-type properties.
Action                              Description
NewObject(c)                        New an object of class-c.
AssignObject(c, k)                  Assign the current information to the existing object (c, k).
UpdateObject(c, k).Name             Set the name of object-(c, k) with the extracted string.
UpdateObject(Person, k).Gender      Set the gender of the Person-object indexed k with the extracted string.
UpdateObject(Item, k).Quantity      Set the quantity of the Item-object indexed k with the extracted string.
UpdateObject(Item, k).Value         Set the value of the Item-object indexed k with the extracted string.
UpdateObject(Event, 1).Items.x      Set the link between the Event-object and an Item-object, where x ∈ {stolen, drug, robbed, swindled, damaged, other}.
UpdateObject(Event, 1).Persons.x    Set the link between the Event-object and a Person-object, where x ∈ {victim, suspect}.

Table 3: Actions for parsing police reports.
Results & discussion
We use 4,250 cases for training, 750 for validation, and a held-out 750 for test. We consider the following four metrics in comparing the performance of different models:
• Assignment Accuracy: the accuracy of the New-Assign actions made by the model;
• Category Accuracy: the accuracy of predicting the category properties of all the objects;
• Ontology Accuracy: the proportion of instances for which the generated ontology is exactly the same as the ground truth;
• Ontology Accuracy-95: the proportion of instances for which the generated ontology achieves 95% consistency with the ground truth.

These metrics measure the accuracy of the model in making discrete decisions as well as in generating the final ontology. We empirically examined several OONP implementations and compared them with a Bi-LSTM baseline, with results given in Table 4.

Table 4: OONP on parsing police reports.
The Bi-LSTM is essentially a simple version of OONP without a structured Carry-on Memory and without the designed operations (e.g., the sophisticated matching function in New-Assign). Basically, it consists of a Bi-LSTM Inline Memory encoder and a two-layer MLP on top of that, acting as a simple Policy-net for predicting actions. Since this baseline does not have an explicit object representation, it does not support category-type prediction. We hence only train this baseline model to perform New-Assign actions, and evaluate it with the Assignment Accuracy (first metric) and a modified version of the Ontology Accuracy (third and fourth metrics) that counts only the properties that can be predicted by the Bi-LSTM, hence in favor of the Bi-LSTM. We consider three OONP variants:
• OONP (neural): a simple version of OONP with only distributed representation in Reader when determining all actions;

• OONP (structured): OONP that considers the matching between two structured objects in New-Assign actions, with symbolic priors encoded in Symbolic Matching and other features for Policy-net;

• OONP (RL): another version of OONP (structured) that uses RL to determine the time for predicting the category properties, while OONP (neural) and OONP (structured) use a rule-based approach to determine the time.
As shown in Table 4, the Bi-LSTM baseline struggles to achieve around 73% Assignment Accuracy on the test set, while OONP (neural) boosts the performance to 88.5%. Arguably, this difference in performance is due to the fact that the Bi-LSTM lacks Object Memory, so all relevant information has to be stored in the Bi-LSTM hidden states along the reading process. When we start putting symbolic representations and operations into Reader, as shown in the results of OONP (structured), the performance is again significantly improved on all four metrics. More specifically, we have the following two observations (not shown in the table):
• Adding inline symbolic features as in Figure 11 improves performance by around 0.5% on New-Assign action prediction and 2% on category property prediction. The features we use include the type of the candidate strings and the relative distance to the marker character we chose.

Figure 11: Information in distributed and symbolic forms in Inline Memory.
• Using a matching function that can take advantage of the structure in objects helps generalization, since the objects in this task have multiple property slots such as Name, Gender, Quantity, and Value. We tried adding, as additional features, both the original text string of a property slot and its embedding, e.g., the length of the longest common substring between the candidate string and a relevant property of the object.
When using REINFORCE to determine when to make predictions for category properties, as shown in the results of OONP (RL), the prediction accuracy for category properties and the overall ontology accuracy are improved. It is quite interesting that this has some positive impact on the supervised learning task (i.e., learning the New-Assign actions) through shared parameters. The entanglement of the two learning paradigms in OONP is one topic for future research, e.g., the effect of predicting the right category property on the New-Assign actions when the predicted category property is among the features of the matching function for New-Assign actions.
6.3 Task-III: Parsing court judgment documents
Data and task
We also implemented OONP for parsing court judgements on theft. Unlike the two previous tasks, court judgements are typically much longer, containing multiple events of different types as well as bulks of irrelevant text, as illustrated in the left panel of Figure 12. The dataset contains 1,961 Chinese judgement documents, divided into training/dev/test sets with 1,561/200/200 texts respectively. The ontology we designed for this task mainly consists of a number of Person-objects and Item-objects connected through a number of Event-objects with several types of links. An Event-object has three internal properties: Time (string), Location (string), and Type (categorical, ∈ {theft, restitution, disposal}), four types of external links to Person-objects (namely, principal, companion, buyer, victim), and four types of external links to Item-objects (stolen, damaged, restituted, disposed). In addition to its external links to Event-objects, a Person-object has only Name (string) as an internal property. An Item-object has three internal properties: Description (array of strings), Value (string) and Returned (binary), in addition to its external links to Event-objects, where Description consists of the words describing the corresponding item, which could come from multiple segments across the document. A Person-object or an Item-object could be linked to more than one Event-object; for example, a person could be the principal suspect in event A and also a companion in event B. An illustration of the judgement document and the corresponding ontology can be found in Figure 12.
Implementation details
We use a model configuration similar to that in Section 6.2, with, however, the following important difference. In this experiment, OONP performs a two-round reading of the text. In the first round, OONP identifies the relevant events, creates empty Event-objects, and does Notes-Taking on Inline Memory to save the information about event segmentation (see [Yan et al., 2017] for more details). In the second round, OONP reads the updated Inline Memory, fills the Event-objects, creates and fills Person-objects and Item-objects, and specifies the links between them. When an object is created during a certain event, it is given an extra feature (not an internal property) indicating this connection, which is used in deciding links between this object and the Event-object, as well as in determining future New-Assign actions. The actions of the two reading rounds are summarized in Table 5.
Action for 1st round                 Description
NewObject(c)                         New an Event-object, with c = Event.
NotesTaking(Event, k).word           Put an indicator of event-k on the current word.
NotesTaking(Event, k).sentence       Put an indicator of event-k on the rest of the sentence, and move the read-head to the first word of the next sentence.
NotesTaking(Event, k).paragraph      Put an indicator of event-k on the rest of the paragraph, and move the read-head to the first word of the next paragraph.
Skip.word                            Move the read-head to the next word.
Skip.sentence                        Move the read-head to the first word of the next sentence.
Skip.paragraph                       Move the read-head to the first word of the next paragraph.

Action for 2nd round                 Description
NewObject(c)                         New an object of class-c.
AssignObject(c, k)                   Assign the current information to the existing object (c, k).
UpdateObject(Person, k).Name         Set the name of the k-th Person-object with the extracted string.
UpdateObject(Item, k).Description    Add the extracted string to the description of the k-th Item-object.
UpdateObject(Item, k).Value          Set the value of the k-th Item-object with the extracted string.
UpdateObject(Event, k).Time          Set the time of the k-th Event-object with the extracted string.
UpdateObject(Event, k).Location      Set the location of the k-th Event-object with the extracted string.
UpdateObject(Event, k).Type          Set the type of the k-th Event-object among {theft, disposal, restitution}.
UpdateObject(Event, k).Items.x       Set the link between the k-th Event-object and an Item-object, where x ∈ {stolen, damaged, restituted, disposed}.
UpdateObject(Event, k).Persons.x     Set the link between the k-th Event-object and a Person-object, where x ∈ {principal, companion, buyer, victim}.

Table 5: Actions for parsing court judgements.
Results and Analysis
We use the same metrics as in Section 6.2, and compare two OONP variants, OONP (neural) and OONP (structured), with the Bi-LSTM baseline. The Bi-LSTM is tested only on the second-round reading, while both OONP variants are tested on the full two-round reading. The results are shown in Table 6. The OONP parsers attain accuracy significantly higher than the Bi-LSTM models. Among them, OONP (structured) achieves over 64% accuracy on getting the entire ontology right and over 78% accuracy on getting 95% consistency with the ground truth.
Conclusion
We proposed Object-oriented Neural Programming (OONP), a framework for semantic parsing of in-domain documents. OONP is neural net-based, but equipped with a sophisticated architecture and mechanisms for document understanding, thereby nicely combining interpretability and learnability. Our experiments on both synthetic and real-world document parsing tasks have shown that OONP can learn to handle fairly complicated ontologies with training data of modest size.
Figure 1: Illustration of OONP on a parsing task.

Figure 2: The overall diagram of OONP, where S stands for symbolic representation, D stands for distributed representation, and S+D stands for a hybrid representation with both symbolic and distributed parts.

Figure 4: An example of the objects from three classes.

Figure 5: Left panel: Inline Memory with symbolic knowledge; right panel: one choice of nonlinear representation of the distributed part of Inline Memory used in [Yan et al., 2017].

Figure 6: A particular implementation of Reader in a closer look, which reveals some details about the entanglement of neural and symbolic components. Dashed lines stand for continuous signals and solid lines for discrete signals.

Figure 7: A pictorial illustration of what the Reader sees in determining whether to New an object and the relevant object when the read-head on Inline Memory reaches the last word in the sentence in Figure 2. The color of the arrow line stands for different matching functions for object classes, where the dashed lines are for the new object.

Figure 8: A pictorial illustration of a full episode of OONP parsing, where we assume the descriptions of cars (highlighted with shadow) are segmented in preprocessing.

Figure 12: Left panel: the judgement document, with the highlighted part being the description of the facts of the crime; right panel: the corresponding ontology.
Table 2: The performance of an implementation of OONP on bAbI task 2.
Model                 Assign Acc. (%)   Type Acc. (%)   Ont. Acc. (%)   Ont. Acc-95 (%)
Bi-LSTM (baseline)    84.66 ± 0.20      -               18.20 ± 1.01    36.88 ± 0.01
OONP (neural)         94.50 ± 0.24      97.73 ± 0.12    53.29 ± 0.26    72.22 ± 1.01
OONP (structured)     97.49 ± 0.43      97.43 ± 0.07    64.51 ± 0.99    78.61 ± 0.95

Table 6: OONP on judgement documents with the four metrics defined in Section 6.2.3.
‡ The logic says that an item is not "in" a location if it is held by a person.
References

[Cho et al., 2014] Cho, K., van Merrienboer, B., Gulcehre, C., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP, pages 1724-1734.
[Daumé III et al., 2009] Daumé III, H., Langford, J., and Marcu, D. (2009). Search-based structured prediction. Machine Learning, 75(3):297-325.
[Dong and Lapata, 2016] Dong, L. and Lapata, M. (2016). Language to logical form with neural attention. In Proceedings of ACL, pages 33-43.
[Graves et al., 2014] Graves, A., Wayne, G., and Danihelka, I. (2014). Neural Turing machines. CoRR, abs/1410.5401.
[Greff et al., 2015] Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., and Schmidhuber, J. (2015). LSTM: A search space odyssey. CoRR, abs/1503.04069.
[Henaff et al., 2016] Henaff, M., Weston, J., Szlam, A., Bordes, A., and LeCun, Y. (2016). Tracking the world state with recurrent entity networks. CoRR, abs/1612.03969.
[Henderson et al., 2014] Henderson, M., Thomson, B., and Young, S. (2014). Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292-299.
[Johnson, 2017] Johnson, D. D. (2017). Learning graphical state transitions. In Proceedings of the International Conference on Learning Representations (ICLR).
[Lafferty et al., 2001] Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning (ICML), pages 282-289.
[Liang et al., 2017] Liang, C., Berant, J., Le, Q., Forbus, K. D., and Lao, N. (2017). Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Proceedings of ACL.
[Meng et al., 2016] Meng, F., Lu, Z., Li, H., and Liu, Q. (2016). Interactive attention for neural machine translation. CoRR, abs/1610.05011.
[Mitchell, 2003] Mitchell, J. C. (2003). Concepts in Programming Languages. Cambridge University Press.
[Mou et al., 2017] Mou, L., Lu, Z., Li, H., and Jin, Z. (2017). Coupling distributed and symbolic execution for natural language queries. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 2518-2526.
[Rand, 1971] Rand, W. M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846-850.
[Rijsbergen, 1979] Rijsbergen, C. J. V. (1979). Information Retrieval. Butterworth-Heinemann, Newton, MA, USA, 2nd edition.
[Stefan, 1999] Stefan, S. (1999). Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3(6):233-242.
[Weston et al., 2015] Weston, J., Bordes, A., Chopra, S., and Mikolov, T. (2015). Towards AI-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698.
[Williams, 1992] Williams, R. J. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256.
[Yan et al., 2017] Yan, Y., Zheng, D., Lu, Z., and Song, S. (2017). Event identification as a decision process with non-linear representation of text. CoRR, abs/1710.00969.
[Yu and Koltun, 2015] Yu, F. and Koltun, V. (2015). Multi-scale context aggregation by dilated convolutions. CoRR, abs/1511.07122.
[Zeng et al., 2014] Zeng, D., Liu, K., Lai, S., Zhou, G., and Zhao, J. (2014). Relation classification via convolutional deep neural network. In Proceedings of COLING.
10,642,417 | Exploring Affect-Context Dependencies for Adaptive System Development | We use χ 2 to investigate the context dependency of student affect in our computer tutoring dialogues, targeting uncertainty in student answers in 3 automatically monitorable contexts. Our results show significant dependencies between uncertain answers and specific contexts. Identification and analysis of these dependencies is our first step in developing an adaptive version of our dialogue system. | [
12083964
] | Exploring Affect-Context Dependencies for Adaptive System Development
Kate Forbes-Riley, Mihai Rotaru, Diane J. Litman, Joel Tetreault
Learning R&D Ctr. and Computer Science Dpt., Univ. Pittsburgh, Pittsburgh, PA 15260
litman@cs.pitt.edu, mrotaru@cs.pitt.edu, tetreaul@pitt.edu

Exploring Affect-Context Dependencies for Adaptive System Development

Proceedings of NAACL HLT 2007, Companion Volume, Rochester, NY. Association for Computational Linguistics, April 2007.
We use χ 2 to investigate the context dependency of student affect in our computer tutoring dialogues, targeting uncertainty in student answers in 3 automatically monitorable contexts. Our results show significant dependencies between uncertain answers and specific contexts. Identification and analysis of these dependencies is our first step in developing an adaptive version of our dialogue system.
Introduction
Detecting and adapting to user affect is being explored by many researchers to improve dialogue system quality. Detection has received much attention (e.g., (Litman and Forbes-Riley, 2004;Lee and Narayanan, 2005)), but less work has been done on adaptation, due to the difficulty of developing responses and applying them at the right time. Most work on adaptation takes a context-independent approach: use the same type of response after all instances of an affective state. For example, Liu and Picard (2005)'s health assessment system responds with empathy to all instances of user stress.
Research suggests, however, that it may be more effective to take a context-dependent approach: develop multiple responses for each affective state, whose use depends on the state's context. E.g., in the tutoring domain, Pon-Barry et al. (2006) show that human tutors use multiple responses to uncertain student answers, depending on the answer's correctness and prior context. In the information-seeking domain, it is commonly believed that while an apology is a good default response to user frustration (as in (Klein et al., 2002)), one context requires a different response: after several frustrated user turns, the call should be forwarded to a human operator.
A context-dependent approach to affect adaptation must address 2 issues: in what contexts to adapt, and what responses to use there. This paper addresses the first issue and targets student uncertainty in our computer tutoring dialogues. Although our dialogues have a Question-Answer format, our system contains 275 tutor questions. Treating each question as a context is too labor-intensive for adaptation development and creates a data sparsity issue. Instead we treat automatically monitorable question properties as contexts. Here we examine 3 contexts: the dialogue act interpretation, and the discourse structure depth and transition, of the prior tutor question. We use χ 2 to investigate the context dependency of uncertain student answers (correct and incorrect). Our results show that some contexts are significantly associated with uncertain answers. Our next step will be to use these significant dependencies to develop system responses to uncertain answers in these contexts. These responses will be based both on our hypotheses about why uncertainty is associated with these contexts, and on analyses of human tutor responses to uncertain answers in these contexts.
Student Uncertainty and Prior Context
ITSPOKE is a speech-enabled version of a text-based tutoring system (VanLehn et al., 2002). The student first types an essay answering one of 5 qualitative physics problems. ITSPOKE parses the essay, extracts dialogue topics concerning misconceptions, then engages the student in dialogue. In this study we used 2 ITSPOKE corpora containing 4590 student turns over 185 dialogues from 37 students. Figure 1 shows an annotated dialogue excerpt.
Uncertainty and Correctness Annotations
ITSPOKE, like most computer tutors, responds only to student correctness. ITSPOKE labels each answer as correct or incorrect1. If correct, ITSPOKE moves on to the next question. If incorrect, then for questions on simple topics, ITSPOKE gives the correct answer and moves on, while for questions on complex topics (ITSPOKE4, Figure 1), ITSPOKE initiates a sub-dialogue with remediation questions (ITSPOKE5-ITSPOKE6), before moving on.
Recent computer tutoring research has shown interest in responding to student affect 2 over correctness. Uncertainty is of particular interest: researchers hypothesize that uncertainty and incorrectness each create an opportunity to learn (VanLehn et al., 2003). They cannot be equated, however. First, an uncertain answer may be correct or incorrect (Pon-Barry et al., 2006). Second, uncertainty indicates that the student perceives a possible misconception in their knowledge. Thus, system responses to uncertain answers can address both the correctness and the perceived misconception.
In our ITSPOKE corpora, each student answer has been manually annotated as uncertain or non-uncertain3: uncertain is used to label answers expressing uncertainty or confusion about the material; non-uncertain is used to label all other answers.

1 We have also manually labeled correctness in our data; agreement between ITSPOKE and human is 0.79 Kappa (90%).
2 We use 'affect' to cover emotions and attitudes that affect how students communicate. Although some argue 'emotion' and 'attitude' should be distinguished, some speech researchers find the narrow sense of 'emotion' too restrictive because it excludes states where emotion is present but not full-blown, including arousal and attitude (Cowie and Cornelius, 2003).
3 A second annotator relabeled our dataset, yielding interannotator agreement of 0.73 Kappa (92%).
Context Annotations
Here we examine 3 automatically monitorable tutor question properties as our contexts for uncertainty.

Tutor Question Acts: In prior work, one annotator labeled 4 Tutor Question Acts in one ITSPOKE corpus (Litman and Forbes-Riley, 2006)4: Short (SAQ), Long (LAQ), and Deep Answer Question (DAQ) distinguish the question in terms of content and the type of answer it requires; Repeat (RPT) labels variants of "Can you repeat that?" after rejections. From these annotations we built a hash table associating each ITSPOKE question with a Question Act label; with this table we automatically labeled ITSPOKE questions in our second ITSPOKE corpus.

Discourse Structure Depth/Transition: In prior work we showed that the discourse structure Depth and Transition for each ITSPOKE turn can be automatically annotated (Rotaru and Litman, 2006). E.g., as shown in Figure 1, ITSPOKE4,7 have depth 1 and ITSPOKE5,6 have depth 2. We combine levels 3 and above (3+) due to data sparsity. Six Transition labels represent the turn's position relative to the prior ITSPOKE turn: NewTopLevel labels the first question after an essay. Advance labels questions at the same depth as the prior question (ITSPOKE4,6).
Push labels the first question in a sub-dialogue (after an incorrect answer) (ITSPOKE5). After a sub-dialogue, ITSPOKE asks the original question again, labeled PopUp (ITSPOKE7), or moves on to the next question, labeled PopUpAdv. SameGoal labels both ITSPOKE RPTs (after rejections) and repeated questions after timeouts.
Uncertainty Context Dependencies
We use the χ2 test to investigate the context dependency of uncertain (unc) or non-uncertain (nonunc) student answers that are correct (C) or incorrect (I). First, we compute an overall χ2 value between each context variable and the student answer variable. For example, the Question Act variable (QACT) has 4 values: SAQ, LAQ, DAQ, RPT. The answer variable (SANSWER) also has 4 values: uncC, uncI, nonuncC, nonuncI. Table 1 (last column) shows the χ2 value between these variables is 203.38, which greatly exceeds the critical value of 16.92 (p ≤ 0.05, df=9), indicating a highly significant dependency. Significance increases as the χ2 value increases. However, this does not tell us which variable values are significantly dependent. To do this, we create a binary variable from each value of the context and answer variables. E.g., the binary variable for LAQ has 2 values: "LAQ" and "Anything Else", and the binary variable for uncC has 2 values: "uncC" and "Anything Else". We then compute the χ2 value between the binary variables. Table 1 shows this value is 133.98, which greatly exceeds the critical value of 3.84 (p ≤ 0.05, df=1). The table also shows the observed (72) and expected (22) counts. Comparison determines the sign of the dependency: uncC occurs significantly more than expected (+) after LAQ. The "=" sign indicates a non-significant dependency.

Table 1 shows that uncertain answers (uncC and uncI) occur significantly more than expected after LAQs. In contrast, non-uncertain answers occur significantly less (-), or aren't significantly dependent (=). Also, uncI occurs significantly more than expected after DAQs. We hypothesize that LAQs and DAQs are associated with more uncertainty because they are harder questions requiring definitions or deep reasoning. Not surprisingly, uncertain (and incorrect) answers occur significantly less than expected after SAQs (easier fill-in-the-blank questions). Uncertainty shows very weak dependencies on RPTs.

Table 2 shows that Depth1 is associated with more correctness and less uncertainty overall. Both types of correct answer occur significantly more than expected, but this dependency is stronger for nonuncC. Both incorrect answers occur significantly less than expected, but this dependency is stronger for uncI. At Depths 2 and 3+, correct answers occur significantly less than expected or show no significance. Incorrect answers occur significantly more than expected, and the dependencies are stronger for uncI. We hypothesize that deeper depths are associated with increased uncertainty and incorrectness because they correspond to deeper knowledge gaps; uncertainty here may also relate to a perceived lack of cohesion between the sub-topic and the larger solution.

Table 3 shows that Pushes have the same dependencies as deeper depths (increased uncertainty and incorrectness); however, here the uncI dependency is only slightly stronger than nonuncI, which suggests that increased uncertainty at deeper depths is more reliably associated with remediation questions after the Push. Although uncertainty shows only weak dependencies on PopUps, after PopUpAdvs the uncI dependency is strong, with uncI occurring more than expected. We hypothesize that this dependency relates to students losing track of the original question/larger topic. Uncertainty shows only weak dependencies on Advances. After NewTopLevels, incorrect answers occur less than expected, but the dependency is stronger for nonuncI. After SameGoals, incorrect answers occur more than expected, but the dependency is stronger for nonuncI. Compared with the RPT results, the SameGoal results suggest students feel increased uncertainty after timeouts.
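As a concrete illustration, the following sketch (assuming scipy is available) reproduces the LAQ ∼ uncC binary test: the 2×2 counts follow from Table 1's observed counts, the LAQ row total (259), the uncC column total (386), and the 4590 student turns; Yates' continuity correction is disabled, which matches the reported value.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows {LAQ, Anything Else}, columns {uncC, Anything Else}.
observed = [[72, 187],      # the 259 LAQ answers in total
            [314, 4017]]    # the remaining 4331 of the 4590 student turns
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(round(chi2, 2), dof, round(float(expected[0][0]), 1))
# -> 133.98 1 21.8  (the chi-square and expected count reported above)
```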
Current Directions
We analyzed dependencies between uncertain student answers and 3 automatically monitorable contexts. We plan to examine more contexts, such as a Topic Repetition variable that tracks similar questions about a topic (e.g. gravity) across dialogues. Our next step will be to use the significant dependencies to develop system responses to uncertain answers in these contexts. These responses will be based both on our hypotheses about why uncertainty is significantly associated with these contexts, as well as on analyses of human tutor responses in these contexts, using our human tutoring corpus, which was collected with our first ITSPOKE corpus using the same experimental procedure.
We also plan to investigate context dependencies for other affective states, such as student frustration.
Figure 1: ITSPOKE Annotated Dialogue Excerpt
ITSPOKE4: Good. Does the packet have an acceleration? If yes please specify its direction. [SAQ, Advance, D1]
STUDENT4: horizontally forward yes [non-uncertain, incorrect]
ITSPOKE5: Well... To find the packet's acceleration we will use Newton's 2nd law. What is the definition of this law? [LAQ, Push, D2]
STUDENT5: an object in motion tends to stay in motion until it's act by an outside force [uncertain, incorrect]
ITSPOKE6: Well... Newton's 2nd law says that the net force acting on an object equals the mass times the acceleration of the object. We can define Newton's 2nd law as F=m*a. Hence what can we determine about the acceleration of the packet given that we know the net force acting on it is not zero and points straight down? [DAQ, Advance, D2]
STUDENT6: I don't know [uncertain, incorrect]
ITSPOKE7: Well... <further discussion of Newton's 2nd law...> So once again on the original question. Does the packet have an acceleration? If yes please specify its direction. [SAQ, PopUp, D1]
Dependency          Sign   Obs.   Exp.      χ2
QACT ∼ SANSWER                            203.38
LAQ ∼ uncC           +       72     22    133.98
LAQ ∼ uncI           +       43     27     11.17
LAQ ∼ nonuncC        -       96    151     50.13
LAQ ∼ nonuncI        =       48     60      3.10
DAQ ∼ uncC           =       22     22      0.01
DAQ ∼ uncI           +       37     27      4.57
DAQ ∼ nonuncC        =      135    149      3.53
DAQ ∼ nonuncI        =       63     59      0.35
SAQ ∼ uncC           -      285    328     41.95
SAQ ∼ uncI           -      377    408     17.10
SAQ ∼ nonuncC        +     2368   2271     66.77
SAQ ∼ nonuncI        -      875    898      5.31
RPT ∼ uncC           -        7     14      4.15
RPT ∼ uncI           =       22     18      1.25
RPT ∼ nonuncC        -       70     98     20.18
RPT ∼ nonuncI        +       70     39     33.59

Table 1: Tutor Question Act Dependencies (p≤.05: critical χ2=16.92 (df=9); critical χ2=3.84 (df=1))
Table 2: Depth Dependencies (p≤.05: critical χ2=12.59 (df=6); critical χ2=3.84 (df=1))
Table 3: Transition Dependencies (p≤.05: critical χ2=25.00 (df=15); critical χ2=3.84 (df=1))
Our Acts are based on related work (Graesser et al., 1995). Two annotators labeled the Acts in 8 dialogues in a parallel human tutoring corpus, with agreement of 0.75 Kappa (90%).
Acknowledgments
NSF (#0631930, #0354420 and #0328431) and ONR (N00014-04-1-0108) support this research.
References

R. Cowie and R. R. Cornelius. 2003. Describing the emotional states that are expressed in speech. Speech Communication, 40:5-32.
A. Graesser, N. Person, and J. Magliano. 1995. Collaborative dialog patterns in naturalistic one-on-one tutoring. Applied Cognitive Psychology, 9:495-522.
J. Klein, Y. Moon, and R. Picard. 2002. This computer responds to user frustration: Theory, design, and results. Interacting with Computers, 14:119-140.
C. M. Lee and S. Narayanan. 2005. Towards detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2), March.
D. Litman and K. Forbes-Riley. 2004. Predicting student emotions in computer-human tutoring dialogues. In Proc. ACL, pages 352-359.
D. J. Litman and K. Forbes-Riley. 2006. Correlations between dialogue acts and learning in spoken tutoring dialogues. Natural Language Engineering, 12(2).
K. Liu and R. W. Picard. 2005. Embedded empathy in continuous, interactive health assessment. In CHI Workshop on HCI Challenges in Health Assessment.
H. Pon-Barry, K. Schultz, E. Bratt, B. Clark, and S. Peters. 2006. Responding to student uncertainty in spoken tutorial dialogue systems. International Journal of Artificial Intelligence in Education, 16:171-194.
M. Rotaru and D. Litman. 2006. Exploiting discourse structure for spoken dialogue performance analysis. In Proceedings of EMNLP, Sydney, Australia.
K. VanLehn, P. W. Jordan, C. P. Rosé, et al. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proceedings of ITS.
K. VanLehn, S. Siler, and C. Murray. 2003. Why do only some events cause learning during human tutoring? Cognition and Instruction, 21(3):209-249.
7,184,744 | Integrating a Phrase-based SMT Model and a Bilingual Lexicon for Human in Semi-Automatic Acquisition of Technical Term Translation Lexicon | This paper presents an attempt at developing a technique of acquiring translation pairs of technical terms with sufficiently high precision from parallel patent documents. The approach taken in the proposed technique is based on integrating the phrase translation table of a state-of-the-art statistical phrasebased machine translation model, and compositional translation generation based on an existing bilingual lexicon for human use. Our evaluation results clearly show that the agreement between the two individual techniques definitely contribute to improving precision of translation candidates. We then apply the Support Vector Machines (SVMs) to the task of automatically validating translation candidates in the phrase translation table. Experimental evaluation results again show that the SVMs based approach to translation candidates validation can contribute to improving the precision of translation candidates in the phrase translation table. | [
10311619,
14421001,
8884845,
15034416,
12146323,
4593718,
2357627,
5219389
] | Integrating a Phrase-based SMT Model and a Bilingual Lexicon for Human in Semi-Automatic Acquisition of Technical Term Translation Lexicon
Yohei Morishita, Takehito Utsuro, Mikio Yamamoto
Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba, 305-8573 JAPAN
Integrating a Phrase-based SMT Model and a Bilingual Lexicon for Human in Semi-Automatic Acquisition of Technical Term Translation Lexicon
This paper presents an attempt at developing a technique of acquiring translation pairs of technical terms with sufficiently high precision from parallel patent documents. The approach taken in the proposed technique is based on integrating the phrase translation table of a state-of-the-art statistical phrasebased machine translation model, and compositional translation generation based on an existing bilingual lexicon for human use. Our evaluation results clearly show that the agreement between the two individual techniques definitely contribute to improving precision of translation candidates. We then apply the Support Vector Machines (SVMs) to the task of automatically validating translation candidates in the phrase translation table. Experimental evaluation results again show that the SVMs based approach to translation candidates validation can contribute to improving the precision of translation candidates in the phrase translation table.
Introduction
For both high-quality machine and human translation, a large-scale and high-quality bilingual lexicon is the most important key resource. Since manual compilation of a bilingual lexicon requires plenty of time and huge manual labor, in the research area of knowledge acquisition from natural language text, automatic bilingual lexicon compilation has been studied for more than a decade. Techniques invented so far include translation term pair acquisition based on statistical co-occurrence measures from parallel sentences (Matsumoto and Utsuro, 2000), translation term pair acquisition from comparable corpora (Fung and Yee, 1998), compositional translation generation based on an existing bilingual lexicon for human use (Tonoike et al., 2006), and translation term pair acquisition by collecting partially bilingual texts through a search engine (Huang et al., 2005).
However, most of the techniques invented so far have not been reliable enough for any practical situation of semi-automatically developing a bilingual lexicon. This is especially true for techniques which use resources other than parallel sentences, since searching comparable corpora or search engine snippets for a translation of a term into another language is much harder than searching parallel sentences for a translation pair. Even in the case of techniques for translation term pair acquisition from parallel sentences, those techniques do not seem to be reliable enough for those who are actually working on semi-automatically or manually compiling a bilingual lexicon using parallel sentences.
For example, we have been working with a Japanese organization which is responsible for translating Japanese patent applications published by the Japanese Patent Office (JPO) into English. Among the various document genres where machine and/or human translation of documents is really required in industrial situations, patent documents are among the most important and have substantial impact in a number of practical applications and services, such as cross-lingual patent retrieval and filing patent applications to foreign countries.
Through our personal communication with the organization, it is claimed that automatic techniques for translation term pair acquisition are mostly useless. This is because it is often necessary to manually validate acquired translation term pairs by referring to parallel sentences, where this validation process usually takes as much time as when without automatic translation term pair acquisition techniques. According to the organization, when employing certain statistical techniques on automatic acquisition of translation pairs of technical terms from parallel patent sentences, the primary requirement is precision rather than recall. This is because when translation candidates suggested by such statistical techniques are with more than 90% precision, it saves time for persons who work on compiling bilingual lexicon to searching for English translation of a Japanese technical term. Even with relatively low recall, the organization has sufficient number of patent documents so that, for many years, they can continue working on compiling bilingual lexicon only by accepting translation candidates highly confidently suggested by a statistical technique, but rejecting those suggested with less confidence.
Based on such a requirement from the organization working on compiling a bilingual lexicon of technical terms from parallel patent documents, this paper presents an attempt at developing a technique for acquiring translation pairs of technical terms with sufficiently high precision from parallel patent documents. The approach taken in the proposed technique is based on integrating the phrase translation table of a state-of-the-art statistical phrase-based machine translation model (Koehn et al., 2007) and compositional translation generation based on an existing bilingual lexicon for human use (Tonoike et al., 2006).
In this approach, we first simply evaluate the translation candidates in the phrase translation table as well as those generated by compositional translation generation based on an existing bilingual lexicon for human use. We also evaluate the agreement between the translation candidates from those two individual techniques, which differ from each other with respect to their approaches as well as the resources used. Our evaluation results clearly show that the agreement between the two individual techniques definitely contributes to improving the precision of translation candidates. We then apply Support Vector Machines (SVMs) to the task of automatically validating translation candidates in the phrase translation table, where features from various sources, such as translation candidates for each constituent word found in the existing bilingual lexicon for human use, as well as statistics from the whole set of parallel sentences used for learning the phrase translation table, are used.
Japanese-English Parallel Patent Documents
In the NTCIR-7 workshop, a Japanese-English patent translation task was organized (Fujii et al., 2008), where parallel patent documents and sentences were provided by the organizer. Those parallel patent documents were collected from 10 years of unexamined Japanese patent applications published by the Japanese Patent Office (JPO) and 10 years of patent grant data published by the U.S. Patent & Trademark Office (USPTO) in 1993-2000. The numbers of documents are approximately 3,500,000 for Japanese and 1,300,000 for English. Because the USPTO documents consist only of patents that have been granted, the number of these documents is smaller than that of the JPO documents. From these document sets, patent families are automatically extracted and the fields of "Background of the Invention" and "Detailed Description of the Preferred Embodiments" are selected. This is because the text of those fields is usually translated on a sentence-by-sentence basis. Then, the method of (Utiyama and Isahara, 2007) is applied to the text of those fields, and Japanese and English sentences are aligned. Table 1 shows the distribution of the IPC (International Patent Classification) categories in the whole set of parallel documents and sentences (about 1.8M sentences in total).
Techniques of Generating Translation Candidates
Techniques based on a Bilingual Lexicon for Human Use
A Bilingual Lexicon: Eijiro
As an existing Japanese-English translation lexicon for human use, we use Eijiro (http://www.eijiro.jp/, Ver. 79, with 1.6M translation pairs).
Compositional Translation Generation
In compositional translation generation (Tonoike et al., 2006), translation candidates of a term are compositionally generated by concatenating the translations of the constituents of the term. Here, as an existing bilingual lexicon for translating constituents, we use Eijiro and bilingual constituent lexicons (0.14M translation pairs) compiled from the translation pairs of Eijiro.
An example of compositional translation generation for the Japanese technical term " " is illustrated in Figure 1. First, the Japanese technical term " " is decomposed into its constituents by consulting an existing bilingual lexicon and retrieving Japanese headwords. In this case, the result of this decomposition can be given as the cases "a" and "b" in Figure 1. Then, each constituent is translated into the target language, and a confidence score is assigned to the translation of each constituent. Finally, translation candidates are generated by concatenating the translations of those constituents according to word ordering rules that take prepositional phrase construction into account.
Each constituent translation is assigned a score based on the number of morphemes and the frequencies of translation pairs in the bilingual constituent lexicons. The score of a concatenated translation candidate is then calculated as the product of the scores of its constituents. When more than one translation candidate is generated, as in the case of Figure 1, they are ranked in descending order of their scores.
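To make the scoring concrete, the following is a minimal Python sketch of compositional translation generation under the scheme described above; the constituent lexicon, its entries, and the scores are hypothetical stand-ins for the Eijiro-derived constituent lexicons used in the paper.

from itertools import product

# hypothetical constituent lexicon: Japanese constituent -> [(English, score)];
# in the paper the scores come from morpheme counts and lexicon frequencies
LEXICON = {
    "jp_noun_1": [("application", 0.9), ("use", 0.4)],
    "jp_noun_2": [("behavior", 0.8)],
    "jp_noun_3": [("analysis", 0.9), ("analyses", 0.3)],
}

def generate_candidates(constituents):
    # concatenate one translation per constituent; the candidate score is
    # the product of the constituent scores, ranked in descending order
    candidates = []
    for combo in product(*(LEXICON.get(c, []) for c in constituents)):
        english = " ".join(e for e, _ in combo)
        score = 1.0
        for _, s in combo:
            score *= s
        candidates.append((english, score))
    return sorted(candidates, key=lambda x: -x[1])

print(generate_candidates(["jp_noun_1", "jp_noun_2", "jp_noun_3"]))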
Phrase Translation Table of an SMT Model
As a toolkit for the phrase-based statistical machine translation model, we use Moses (Koehn et al., 2007) and apply it to the whole 1.8M parallel patent sentences. In Moses, first, word alignments of the parallel sentences are obtained with GIZA++ (Och and Ney, 2003) in both translation directions, and then the two alignments are symmetrised. Next, any phrase pair that is consistent with the word alignment is collected into the phrase translation table and a phrase translation probability is assigned to each pair (Koehn et al., 2003). We finally obtain 76M translation pairs with 33M unique Japanese phrases, i.e., 2.29 English translations per Japanese phrase on average, with Japanese-to-English phrase translation probabilities P(p_E | p_J) of translating a Japanese phrase p_J into an English phrase p_E. For each Japanese phrase, the multiple translation candidates in the phrase translation table are ranked in descending order of the Japanese-to-English phrase translation probability.
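The ranking step can be sketched as follows; this is a hedged illustration only, where the two-line table below is a made-up fragment in the Moses plain-text phrase-table format, keeping just the P(p_E | p_J) score.

from collections import defaultdict

# hypothetical phrase-table fragment: "source ||| target ||| P(e|j)"
phrase_table_lines = [
    "jp_phrase_1 ||| english phrase ||| 0.6",
    "jp_phrase_1 ||| another phrase ||| 0.3",
]

table = defaultdict(list)
for line in phrase_table_lines:
    src, tgt, prob = (f.strip() for f in line.split("|||"))
    table[src].append((tgt, float(prob)))

for translations in table.values():
    translations.sort(key=lambda x: -x[1])   # descending P(e|j)

print(table["jp_phrase_1"])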
Out of the whole 1.8M parallel sentences, we randomly select 400 for evaluating the translation generation techniques¹, with the restriction that they have a uniform distribution of IPC categories. Figure 2 illustrates the procedure of generating translations of technical terms in parallel patent sentences. First, we automatically extract noun phrases from the Japanese sentences by applying a simple regular expression for noun phrase extraction. Next, we manually extract 1,040 technical terms from those Japanese noun phrases².
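The paper gives no details of the regular expression used for noun phrase extraction; as an illustration only, the sketch below extracts maximal runs of noun tokens from a POS-tagged Japanese sentence (the toy tag set and the tagged string are our own assumptions, not the paper's actual pattern).

import re

tagged = "応用/N 動作/N 解析/N を/P 行う/V"   # hypothetical word/POS pairs
np_pattern = re.compile(r"(?:\S+/N\s*)+")    # one or more adjacent nouns

for m in np_pattern.finditer(tagged):
    noun_phrase = "".join(tok.split("/")[0] for tok in m.group().split())
    print(noun_phrase)                        # -> 応用動作解析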
To those 1,040 Japanese technical terms, the three techniques (i.e., A, B, and C in Figure 2) for generating English translation candidates are applied. Here, suppose that we are given a Japanese noun phrase t_J extracted from the Japanese sentence S_J of a parallel sentence pair ⟨S_J, S_E⟩, and that the techniques for generating English translation candidates are applied to t_J. Those translation candidates are then matched against the English sentence S_E of the parallel sentence pair, and those which are not found in the English part are filtered out. Finally, Support Vector Machines (SVMs) are applied to the task of validating translation candidates, based on features from various sources such as the existing bilingual lexicon for human use and statistics from the whole 1.8M aligned parallel sentences.

¹ Since our primary application is semi-automatic acquisition of a technical term bilingual lexicon from parallel sentences, it is quite usual that large-scale parallel sentences are provided and are used both for learning a phrase translation table and for generating technical term translation pairs. If one wants to consider another task, such as acquiring technical term translation pairs that do not appear in the parallel sentences used for learning the phrase translation table, it is necessary to devise a framework slightly different from the one we propose in this paper.

² In a situation where the technique proposed in this paper is applied in practice, we are planning to use a large-scale lexicon of Japanese technical terms when extracting the Japanese technical terms for which English translation candidates are to be generated.
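The filtering step above can be sketched as a simple substring test against the English half of the sentence pair; this is a simplification (in practice tokenization and casing need care), and all strings below are illustrative.

def filter_candidates(candidates, english_sentence):
    # keep only candidates that actually occur in the English sentence
    text = english_sentence.lower()
    return [c for c in candidates if c.lower() in text]

s_e = "We propose an application behavior analysis method ."
cands = ["application behavior analysis", "use behavior analyses"]
print(filter_candidates(cands, s_e))   # -> ['application behavior analysis']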
For each of the three techniques, Table 2 lists the number of Japanese noun phrases for which the technique can generate English translation candidates, as well as the number of generated English translation candidates. In Figure 3, out of the set (a) of the whole 1,040 Japanese noun phrases, we denote the set of Japanese noun phrases for which Eijiro can generate English translation candidates as E. We also denote the set of those for which compositional translation generation can generate English translation candidates as C, and the set of those for which the phrase translation table can generate English translation candidates as P. We further focus on the set (E ∩ P) of Japanese noun phrases for each of which all three techniques can generate the same English translation candidate, and on the set (C ∩ P) − E of Japanese noun phrases for each of which both compositional translation generation and the phrase translation table can generate the same English translation candidate, but Eijiro cannot. We also focus on the set P − (C ∩ P) of Japanese noun phrases for which only the phrase translation table can generate English translation candidates.
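These evaluation sets reduce to plain set operations; a small sketch with toy members follows. Note that set (b), written (E ∩ P) in the paper, is described as the noun phrases covered by all three techniques, so it is computed here as the three-way intersection (an interpretive assumption on our part).

E = {"t1", "t2"}              # covered by Eijiro
C = {"t1", "t3", "t4"}        # covered by compositional generation
P = {"t1", "t3", "t4", "t5"}  # covered by the phrase translation table

set_b = E & C & P        # all three techniques generate the candidate
set_c = (C & P) - E      # compositional + phrase table, but not Eijiro
set_d = P - (C & P)      # phrase table only
print(set_b, set_c, set_d)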
For a given Japanese noun phrase, both compositional translation generation and the phrase translation table generate English translation candidates that are ranked in descending order of certain scores or probabilities. As we show in Table 3, in the following sections we evaluate the 1st-ranked translation candidate. On the other hand, with the exception of only a few technical terms, Eijiro lists just one English translation per entry.³ In the exceptional cases where Eijiro lists more than one English translation for a Japanese technical term, we regard those multiple translations as equally correct in the evaluation of the subsequent sections.
Among the whole procedure in Figure 2, the next section presents the results of evaluating the recall/precision/F-measure of each of the three techniques individually, as well as those of the agreement among two or three of the individual techniques. Furthermore, Section 5 presents the results of applying SVMs to the task of validating translation candidates.

³ Out of the whole 1,040 Japanese noun phrases for evaluation, Eijiro includes 321. Each of those 321 Japanese noun phrases has about 2.31 English translations on average. However, among those 321 Japanese noun phrases, only 175 have their English translation in the English part of the parallel sentences used for learning the phrase translation table. Furthermore, almost all of them have only one translation in the English part.
Evaluation Results
In the left half of Table 3, we show the results of evaluating each of the individual techniques against (a) the whole 1,040 Japanese noun phrases, (b) the set (E ∩ P), (c) the set (C ∩ P) − E, and (d) the set P − (C ∩ P).
Against the whole set (a), both Eijiro and compositional translation generation based on Eijiro have very low recall, while their precisions are over 90%.
On the other hand, as can easily be expected, the phrase translation table has nearly 80% recall, but its precision is around 87%. Considering our primary application of semi-automatic acquisition of a technical term bilingual lexicon, we prefer precision to recall, and regard this precision (around 87%) of the phrase translation table against the whole set (a) as the baseline of the evaluation in this paper. Compared with this baseline, against the sets (b) (i.e., Japanese noun phrases for which all three techniques can generate English translation candidates) and (c) (i.e., Japanese noun phrases for which both compositional translation generation and the phrase translation table can generate English translation candidates, but Eijiro cannot), the agreements of the three or two techniques have precisions and F-measures over 90%. For both sets (b) and (c), the agreements of the three or two techniques essentially represent the agreement of two resources of quite different natures, i.e., a bilingual lexicon for human use and a statistical technique. Because of this difference in the nature of the resources, we can achieve high precision in their agreement. The union of the sets (b) and (c) covers 43% of the whole set of 1,040 Japanese noun phrases, and we obtain around 95% precision for the union in total. We can claim that such a high precision is definitely an advantage in terms of our application of semi-automatic acquisition of a technical term bilingual lexicon.
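For reference, the scores in Table 3 follow the usual definitions; the sketch below reproduces the phrase translation table row of Table 3 (a) from its raw counts.

def scores(correct, generated, total):
    recall = correct / total           # correct 1st-ranked candidates / all noun phrases
    precision = correct / generated    # correct 1st-ranked candidates / phrases with candidates
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure

r, p, f = scores(correct=825, generated=950, total=1040)
print(f"{100*r:.1f} {100*p:.1f} {100*f:.1f}")   # -> 79.3 86.8 82.9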
Validating Translation Candidates by SVMs
The Procedure
This section describes the procedure and the results of applying Support Vector Machines (SVMs) (Vapnik, 1998) to the task of validating translation candidates generated by the three techniques. As a tool for learning SVMs, we use TinySVM (http://chasen.org/~taku/software/TinySVM/). Each training/test instance of SVM learning is represented as a tuple ⟨t_J, t_E, c⟩, where t_J and t_E denote a Japanese noun phrase and an English translation candidate generated by at least one of the three techniques, and the class c denotes whether t_E is a correct translation of t_J found in the English part of the parallel sentence (i.e., "c = +") or not (i.e., "c = −"). Out of the whole 1,040 Japanese noun phrases, at least one English translation candidate is generated for 954 of them, and the total number of generated English translation candidates is 2,851. Thus, we have 2,851 instances in total for training/testing the SVMs. As the kernel function, we compare the linear and the polynomial (2nd-order) kernels, of which the latter performs better.
In the testing of an SVM classifier, given a Japanese noun phrase x_J, we collect all the tuples ⟨x_J, t_E, c⟩ which have x_J in the Japanese part, and classify each tuple with the SVM classifier. Here, we regard the distance from the separating hyperplane to each test instance as a confidence measure, and choose a tuple which satisfies the following: one for which the classifier outputs the class "+", and, furthermore, one with the greatest distance from the separating hyperplane. In the actual evaluation of the "Validation by SVMs" column in Table 3, we train/test an SVM classifier separately for each of the sets (c), of 272 Japanese noun phrases, and (d), of 504 Japanese noun phrases. For both sets, the "Validation by SVMs" column in Table 3 shows the evaluation results of 10-fold cross-validation.
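A minimal sketch of this train/test loop is given below, substituting scikit-learn's SVC for TinySVM (an assumption for illustration only; the feature vectors and labels are random stand-ins, and the 2nd-order polynomial kernel mirrors the choice above).

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))          # stand-in feature vectors
y_train = rng.integers(0, 2, size=200)       # stand-in +/- labels

clf = SVC(kernel="poly", degree=2)           # 2nd-order polynomial kernel
clf.fit(X_train, y_train)

def pick_best(candidates):
    # candidates: list of (english, feature_vector) for one Japanese phrase;
    # keep the positively classified candidate farthest from the hyperplane
    feats = np.array([f for _, f in candidates])
    dist = clf.decision_function(feats)      # signed distance to hyperplane
    best = int(np.argmax(dist))
    return candidates[best][0] if dist[best] > 0 else None

print(pick_best([("cand a", rng.normal(size=6)), ("cand b", rng.normal(size=6))]))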
Features

Table 4 lists the features used in SVM learning. As monolingual features, we use the number of morphemes constituting the Japanese noun phrase as well as the number of words constituting the English translation candidate. We evaluated these features for both of the sets (c) and (d) in Table 3; for the set (c) we had better performance without them, so we use these features only for the set (d).
Bilingual features can be classified into two types: one is based on translation knowledge in the bilingual lexicon Eijiro for human use, while the other is based on statistics obtained from the parallel sentences used for learning the phrase translation table. As bilingual features based on Eijiro, we first use the score and the rank that compositional translation generation gives to the English translation candidate, which are used only for the set (c). Second, for the set (d), although compositional translation generation cannot generate any translation candidate, we look up the bilingual lexicon Eijiro and examine whether any translation pair for a constituent of the Japanese noun phrase and a constituent of the English translation candidate can be found. We then use, as a bilingual feature, whether at least one such translation pair is included in Eijiro. For example, in the case of a Japanese technical term " " and its English translation candidate "application behavior analysis", the value of this feature is true if a translation pair such as " " and "analysis" is included in Eijiro. As bilingual features based on statistics obtained from the parallel sentences, we first use the probability and the rank that the phrase translation table gives to the English translation candidate. Second, as another type of bilingual feature based on statistics from the parallel sentences, we use statistics previously used for measuring the statistical co-occurrence of translation pairs, such as the mutual information, the φ² statistic, the Dice coefficient, and the log-likelihood ratio (Matsumoto and Utsuro, 2000). Given an English term t_E and a Japanese term t_J, as bilingual features we use the co-occurrence frequencies of t_E and t_J in the contingency table below:
           t_J                  ¬t_J
  t_E      freq(t_E, t_J)       freq(t_E, ¬t_J)
  ¬t_E     freq(¬t_E, t_J)      freq(¬t_E, ¬t_J)
We also evaluated the φ² statistic as a feature, but we had better performance without it, and thus we use the co-occurrence frequencies directly as features.
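For concreteness, the sketch below computes two of the association measures mentioned above from the four contingency-table cells; the counts are illustrative, and recall that the paper ultimately feeds the raw frequencies, not φ², to the classifier.

def dice(a, b, c):
    # a = freq(tE, tJ), b = freq(tE, ~tJ), c = freq(~tE, tJ)
    return 2.0 * a / (2.0 * a + b + c)

def phi_squared(a, b, c, d):
    # standard phi-squared over the 2x2 contingency table
    num = (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

print(dice(30, 5, 10), phi_squared(30, 5, 10, 955))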
Evaluation Results
First, for the set (c), we regard the result of the agreement between compositional translation generation and the phrase translation table as the baseline. Here, in the "Validation by SVMs" column of Table 3, we obtain an F-measure of 93.0, which is slightly higher than the baseline of 91.2, although the difference is not statistically significant.
Next, for the set (d), we regard the precision and the F-measure of the phrase translation table as the baseline. For the set (d), only the phrase translation table can generate English translation candidates, where the precision and the F-measure are 81.5%, lower than those for the other sets (b) and (c). In this case, the SVM classifier is required to validate the English translation candidates in the phrase translation table and to reject incorrect candidates. To realize this, we introduce a lower bound on the distance from the separating hyperplane to each test instance, where English translation candidates whose distance is smaller than the lower bound are rejected. By examining various values of this lower bound with other held-out data, we can achieve the highest precision of 90.1%, or a slightly lower precision of 87.1% with a higher F-measure. The differences between those precisions and the baseline are statistically significant at a level of 0.05. With these improvements in precision, we can again claim that the approach of applying SVM learning to the task of validating translation candidates definitely contributes to semi-automatic acquisition of a technical term bilingual lexicon.
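The effect of the lower bound can be sketched as a threshold sweep over the classifier's signed distances; all scores and gold labels below are synthetic stand-ins, and in the paper the bound is chosen on held-out data.

import numpy as np

rng = np.random.default_rng(1)
dist = rng.normal(size=500)                 # decision-function outputs
gold = (dist + rng.normal(size=500)) > 0    # noisy stand-in gold labels

for lower_bound in (0.0, 0.5, 1.0):
    kept = dist > lower_bound               # reject candidates below the bound
    correct = (gold & kept).sum()
    precision = correct / max(kept.sum(), 1)
    recall = correct / gold.sum()
    print(f"bound {lower_bound}: precision {precision:.2f}, recall {recall:.2f}")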
Related Works
Among the techniques studied so far in the research area of automatic bilingual lexicon compilation, as well as in empirical approaches to machine translation such as statistical machine translation models, (Itagaki et al., 2007) is the most closely related to the approach taken in this paper. (Itagaki et al., 2007) focused on automatic validation of translation pairs available in the phrase translation table learned by a statistical machine translation model. One of the major differences between (Itagaki et al., 2007) and the approach taken in this paper is that we focus on integrating the phrase translation table with compositional translation generation based on an existing bilingual lexicon for human use (Tonoike et al., 2006). As we showed in the experimental evaluation, the translation knowledge of an existing bilingual lexicon for human use definitely contributes to improving the precision of translation candidates, both in the agreement of two or three techniques and in validation by SVM learning.
The system combination approaches to machine translation (Rosti et al., 2007; Matusov et al., 2006) are another line of related research in a broader perspective. One of the major differences between such system combination approaches for whole-sentence MT and the task focused on in this paper is that we concentrate on the application of semi-automatic acquisition of a technical term bilingual lexicon, where the primary requirement is precision rather than recall of the acquired translation pairs.
Conclusion
This paper presented an attempt at developing a technique for acquiring translation pairs of technical terms with sufficiently high precision from parallel patent documents. The approach taken in the proposed technique is based on integrating the phrase translation table of a state-of-the-art statistical phrase-based machine translation model and compositional translation generation based on an existing bilingual lexicon for human use. Our evaluation results clearly showed that the agreement between the two individual techniques definitely contributes to improving the precision of translation candidates. We then applied Support Vector Machines (SVMs) to the task of automatically validating translation candidates in the phrase translation table. Experimental evaluation results again showed that the SVM-based approach to translation candidate validation can contribute to improving the precision of translation candidates in the phrase translation table.
Figure 2: Generating/Filtering/Validating Translation Candidates of Technical Terms in Parallel Patent Documents

Figure 3: Number of Japanese Noun Phrases for which Each Individual Technique can generate English Translation Candidates
Table 1: Distribution of IPC (International Patent Classification) Categories in 1.8M Parallel Patent Sentences
Table 2: Number of Translation Candidates generated by Individual Techniques

  Individual Techniques                   # of Japanese Noun Phrases for     # of Generated English Translation
                                          which English Translation          Candidates (rate per Japanese
                                          Candidates are Generated           Noun Phrase)
  Eijiro                                  175                                177 (1.01)
  Compositional Translation Generation    450                                465 (1.03)
  Phrase Translation Table                950                                2851 (3.00)
4 Evaluating Individual Techniques and their Agreements

4.1 The Procedure
Table 3: Recall/Precision/F-measure of 1st Ranked Translation Candidates (%)

(a) Individual techniques against the whole 1,040 Japanese noun phrases
                                        Recall             Precision          F-measure
  Eijiro                                16.3 (170/1040)    97.1 (170/175)     28.0
  Compositional Tran. Generation        40.3 (419/1040)    93.1 (419/450)     56.2
  Phrase Tran. Table                    79.3 (825/1040)    86.8 (825/950)     82.9

(b) Against the set (E ∩ P) (for each of which all of the three techniques can generate the same English translation candidate, 174 Japanese noun phrases)
                                        Recall             Precision          F-measure
  Eijiro                                97.7 (170/174)     97.7 (170/174)     97.7
  Compositional Tran. Generation        97.1 (169/174)     97.1 (169/174)     97.1
  Phrase Tran. Table                    96.0 (167/174)     96.0 (167/174)     96.0
  Agreements of the Three Techniques    96.0 (167/174)     98.8 (167/169)     97.4

(c) Against the set (C ∩ P) − E (for each of which both compositional translation generation and the phrase translation table can generate the same English translation candidate, but Eijiro cannot, 272 Japanese noun phrases)
                                        Recall             Precision          F-measure
  Compositional Tran. Generation        91.9 (250/272)     91.9 (250/272)     91.9
  Phrase Tran. Table                    90.8 (247/272)     90.8 (247/272)     90.8
  Agreements of Compositional Tran.
    Generation and Phrase Tran. Table   89.3 (243/272)     93.1 (243/261)     91.2
  Validation by SVMs                    93.0 (253/272)     93.0 (253/272)     93.0

(d) Against the set P − (C ∩ P) (for which only the phrase translation table can generate English translation candidates, 504 Japanese noun phrases)
                                        Recall             Precision          F-measure
  Phrase Tran. Table                    81.5 (411/504)     81.5 (411/504)     81.5
  Validation by SVMs                    57.5 (290/504)     90.1 (290/322)     70.2
  Validation by SVMs                    72.8 (366/504)     87.1 (366/420)     79.2
Table 4: Features of SVMs Learning

  Feature Type                                   Features
  Monolingual (for the set (d))                  number of morphemes in the Japanese noun phrase;
                                                 number of words in the English translation candidate
  Bilingual - based on Eijiro                    score and rank of compositional translation generation given
                                                 to the English translation candidate (for the set (c));
                                                 whether at least one translation pair of constituents of the
                                                 Japanese noun phrase and the English translation candidate
                                                 is included in Eijiro (for the set (d))
  Bilingual - based on statistics in the         probability and rank of the phrase translation table given to
  parallel sentences                             the English translation candidate;
                                                 frequencies freq(t_E, t_J), freq(t_E, ¬t_J), and freq(¬t_E, t_J)
                                                 in the contingency table
A. Fujii, M. Utiyama, M. Yamamoto, and T. Utsuro. 2008. Toward the evaluation of machine translation using patent information. In Proc. 8th AMTA.
P. Fung and L. Y. Yee. 1998. An IR approach for translating new words from nonparallel, comparable texts. In Proc. 17th COLING and 36th ACL, pages 414-420.
F. Huang, Y. Zhang, and S. Vogel. 2005. Mining key phrase translations from Web corpora. In Proc. HLT/EMNLP, pages 483-490.
M. Itagaki, T. Aikawa, and X. He. 2007. Automatic validation of terminology translation consistency with statistical method. In Proc. MT Summit XI, pages 269-274.
P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. HLT-NAACL, pages 127-133.
P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. 45th ACL, Companion Volume, pages 177-180.
Y. Matsumoto and T. Utsuro. 2000. Lexical knowledge acquisition. In R. Dale, H. Moisl, and H. Somers, editors, Handbook of Natural Language Processing, chapter 24, pages 563-610. Marcel Dekker Inc.
E. Matusov, N. Ueffing, and H. Ney. 2006. Computing consensus translation for multiple machine translation systems using enhanced hypothesis alignment. In Proc. 11th EACL, pages 33-40.
F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
A.-V. Rosti, S. Matsoukas, and R. Schwartz. 2007. Improved word-level system combination for machine translation. In Proc. 45th ACL, pages 312-319.
M. Tonoike, M. Kida, T. Takagi, Y. Sasaki, T. Utsuro, and S. Sato. 2006. A comparative study on compositional translation estimation using a domain/topic-specific corpus collected from the web. In Proc. 2nd Intl. Workshop on Web as Corpus, pages 11-18.
M. Utiyama and H. Isahara. 2007. A Japanese-English patent parallel corpus. In Proc. MT Summit XI, pages 475-482.
V. N. Vapnik. 1998. Statistical Learning Theory. Wiley-Interscience. |
236,778,532 | [] | MACHINE TRANSLATION IN SAUDI ARABIA
Mahmoud Esma'il Sieny, Ph.D.
Translation Centre
King Saud University
MACHINE TRANSLATION IN SAUDI ARABIA
It is a well-known fact that machine and machine-aided translation have been gaining ground at various public and private institutions since the mid-seventies in many parts of the world. People are realizing that only with the help of computers can they cope with the dazzling burst of information and the computer age. For more than a decade some governmental institutions in Saudi Arabia have been interested in computer applications in the field of translation. For example, since the late seventies both the University of Riyadh (now King Saud University) and the Saudi National Center for Science and Technology (now King Abdul-Aziz City for Science and Technology) have been investigating the possibilities of using computers in the translation of English texts into Arabic. This interest gained momentum from 1983, when both started some actual work in this direction. 1. The University of Riyadh: In the late seventies the University thought of MT as an essentially engineering and/or computational issue; hence, help was sought from the College of Engineering. But around 1984 it came to realize that it was more of a linguistic, or more specifically a computational-linguistic, problem. A linguist was assigned the responsibility of looking into the subject. By then, the University had already made arrangements with the University of Grenoble in France to train one of the lecturers in the College of Arts in the area of MT, since GETA was already involved in developing a pilot project to translate English into Arabic. In my capacity as supervisor of the project I tried to look into the various systems that had developed English-Arabic programmes. Since the University was not quite convinced of the viability of English-Arabic MT systems, all of which were originally developed for other language pairs, it did not take any practical steps in this direction. However, a committee was formed to look into setting up a terminology data bank. A translator was also sent to the University of Manchester Institute of Science and Technology to study for an M.A. in machine translation. For some reason or another, the mission was aborted at the insistence of the student. A new Translation Centre has just been established. One of the responsibilities of this Centre is to develop machine aids to translation, including a term bank and a computer-aided translation system. It is our hope that more practical steps will be taken in the direction of MT.
Translation Centre, King Saud University
It is a well-known fact that machine and machine-aided translation have been gaining ground at various public and private institutions since the mid-seventies in many parts of the world. People are realizing that only with the help of computers can they cope with the dazzling burst of information and the computer age. For more than a decade some governmental institutions in Saudi Arabia have been interested in computer applications in the field of translation. For example, since the late seventies both the University of Riyadh (now King Saud University) and the Saudi National Center for Science and Technology (now King Abdul-Aziz City for Science and Technology) have been investigating the possibilities of using computers in the translation of English texts into Arabic. This interest gained momentum from 1983, when both started some actual work in this direction.
1. The University of Riyadh:
In the late seventies the University thought of MT as an essentially engineering and/or computational issue; hence, help was sought from the College of Engineering. But around 1984 it came to realize that it was more of a linguistic, or more specifically a computational-linguistic, problem. A linguist was assigned the responsibility of looking into the subject. By then, the University had already made arrangements with the University of Grenoble in France to train one of the lecturers in the College of Arts in the area of MT, since GETA was already involved in developing a pilot project to translate English into Arabic. In my capacity as supervisor of the project I tried to look into the various systems that had developed English-Arabic programmes. Since the University was not quite convinced of the viability of English-Arabic MT systems, all of which were originally developed for other language pairs, it did not take any practical steps in this direction. However, a committee was formed to look into setting up a terminology data bank. A translator was also sent to the University of Manchester Institute of Science and Technology to study for an M.A. in machine translation. For some reason or another, the mission was aborted at the insistence of the student.
A new Translation Centre has just been established. One of the responsibilities of this Centre is to develop machine aids to translation, including a term bank and a computer aided translation system. It is our hope that more practical steps will be taken in the direction of MT.
2. King Abdul-Aziz City for Science and Technology:
In the late seventies KACST (then SANCST) sponsored an investigation into the possibilities of MT. But this interest lay dormant for a while until SANCST appointed two linguists from King Saud University in Riyadh as part-time consultants for the MT project in 1983. Two lines of approach were suggested to SANCST: the establishment of a terminology data bank and the convening of an international meeting on computer-aided translation. The present writer's plan for the terminology data bank was approved, and preliminary work was initiated in June, 1983. After a visit to various term banks in Europe and extensive discussions with the experts at those banks, we started actual work on the term bank, dubbed BASM (an Arabic acronym for "the Saudi Terminology Data Bank"). Since SANCST had already developed software for its bilingual data bases, this was adapted for BASM purposes. By September of 1983 an experimental version, with 600 terms in the four languages of BASM (Arabic, English, French, German), was developed. The program was further refined later on.
Presently, BASM has more than two hundred thousand terms in the four languages covering a wide range of scientific and technical fields.
The meeting on MT was convened in March, 1985. Eminent experts and scholars from around the world (Japan, Hong Kong, Britain, France, Canada and the U.S.A.) were invited to talk on the subject and for consultation sessions with a team of Saudi linguists and computer specialists. The proceedings were later published under the title, STUDIES ON MACHINE TRANSLATION. In the absence of adequate follow-up the MT project is temporarily dormant at KACST.
3. Other Governmental Agencies:
Late in 1984 and early in 1985 SANCST conducted a survey to find out which governmental agencies in the Kingdom were interested in MT. Three agencies were found to be interested in the field: King Saud University, the Presidency of Civil Aviation, and the National Guards. The latter two apparently had feasibility studies done for them. It seems the results of the investigation were not convincing enough for them to implement or practically pursue MT.
Other agencies, such as the Ministry of Finance National Information Center, have shown interest in and/or investigated the possibilities of making use of MT in the past few years. However, I have not seen any concrete steps towards adopting or developing MT taken by any of them so far.

Computational Linguistics:

Before we leave the Saudi scene let me point out that research in Arabic computational linguistics has received increasing attention in the past few years, as witnessed in the proceedings of the last two conferences held at King Saud University in April, 1987 (sponsored by the College of Computer Science) and King Fahd University of Petroleum, March, 1989 (11th National Computer Conference).

4. Outside Saudi Arabia:

Among the other Gulf States, Kuwait has been showing an increasing interest in both MT and computational linguistics. In fact, in April, 1985 the Kuwait Institute for Scientific Research hosted a workshop on Computer Processing of the Arabic Language. The University of Kuwait and KISR (Kuwait Institute for Scientific Research) as well as the IBM Scientific Center in Kuwait have been doing interesting work in the area of Arabic computational linguistics.

Mention should also be made of the Al-Alamiyyah computer company, based in Kuwait, which has been doing interesting work in the field. It has developed a very sophisticated Arabic morphology analyzer and the first spell-checker for Arabic.

The Scientific Studies and Research Center of Syria has been fairly active in the area of Arabic speech synthesis and recognition. The Center sponsored a few interesting "Summer Sessions" dealing with informatics and Arabic linguistics.

We should also mention the conference related to computational linguistics and MT that was held in Tunisia in March, 1983. Some interesting papers in the area of Arabic computational linguistics were also read at the conference sponsored by the Linguistic Society of Morocco in Rabat in October last year.
||
7,814,136 | Improved Phrase-based SMT with Syntactic Reordering Patterns Learned from Lattice Scoring | In this paper, we present a novel approach to incorporate source-side syntactic reordering patterns into phrase-based SMT. The main contribution of this work is to use the lattice scoring approach to exploit and utilize reordering information that is favoured by the baseline PBSMT system. By referring to the parse trees of the training corpus, we represent the observed reorderings with source-side syntactic patterns. The extracted patterns are then used to convert the parsed inputs into word lattices, which contain both the original source sentences and their potential reorderings. Weights of the word lattices are estimated from the observations of the syntactic reordering patterns in the training corpus. Finally, the PBSMT system is tuned and tested on the generated word lattices to show the benefits of adding potential sourceside reorderings in the inputs. We confirmed the effectiveness of our proposed method on a medium-sized corpus for Chinese-English machine translation task. Our method outperformed the baseline system by 1.67% relative on a randomly selected testset and 8.56% relative on the NIST 2008 testset in terms of BLEU score. | [
1111494,
751375,
16847508,
6826069,
7075805
] | Improved Phrase-based SMT with Syntactic Reordering Patterns Learned from Lattice Scoring
Jie Jiang jjiang@computing.dcu.ie
School of Computing
Dublin City University
Glasnevin, Dublin 9Ireland
Jinhua Du
School of Computing
Dublin City University
Glasnevin, Dublin 9Ireland
Andy Way CNGL
School of Computing
Dublin City University
Glasnevin, Dublin 9Ireland
Improved Phrase-based SMT with Syntactic Reordering Patterns Learned from Lattice Scoring
In this paper, we present a novel approach to incorporate source-side syntactic reordering patterns into phrase-based SMT. The main contribution of this work is to use the lattice scoring approach to exploit and utilize reordering information that is favoured by the baseline PBSMT system. By referring to the parse trees of the training corpus, we represent the observed reorderings with source-side syntactic patterns. The extracted patterns are then used to convert the parsed inputs into word lattices, which contain both the original source sentences and their potential reorderings. Weights of the word lattices are estimated from the observations of the syntactic reordering patterns in the training corpus. Finally, the PBSMT system is tuned and tested on the generated word lattices to show the benefits of adding potential sourceside reorderings in the inputs. We confirmed the effectiveness of our proposed method on a medium-sized corpus for Chinese-English machine translation task. Our method outperformed the baseline system by 1.67% relative on a randomly selected testset and 8.56% relative on the NIST 2008 testset in terms of BLEU score.
Introduction
To take the reordering problem between different language pairs into consideration, phrase-based statistical machine translation (PBSMT) systems (Koehn et al., 2003) incorporate two different methods: 1) learning phrase pairs with different word orders in the source and target sentences; 2) attempting potential target phrase orders during the decoding phase, and penalizing them using both distance-based and lexical reordering models. However, for some language pairs, this model is not powerful enough to capture the word order differences between the source and target sentences. To tackle this problem, previous studies (Wang et al., 2007a; Chang et al., 2009a) showed that syntactic reorderings can benefit state-of-the-art PBSMT systems by handling systematic differences in word order between language pairs. They conclude that, for the Chinese-English task, syntactic reorderings can greatly improve performance by explicitly modeling the structural differences between the language pair.
Interestingly, a large body of work has been reported on syntactic reordering, and similar conclusions have been drawn across it. These methods can be roughly divided into two main categories (Elming, 2008): the deterministic reordering approach and the non-deterministic reordering approach.
For the deterministic approach, syntactic reorderings take place outside the PBSMT system, and the corresponding PBSMT systems only deal with the reordered source sentences. In this approach, syntactic reorderings can be performed by manually created rules (Collins et al., 2005;Wang et al., 2007a), or by rules extracted automatically from parse trees (Collins et al., 2005;Habash, 2007). For some typical syntactic structures (e.g. DE construction in Chinese), classifiers (Chang et al., 2009b;Du et al., 2010) are built to carry out source reorderings.
For the non-deterministic approach, both the original and reordered source sentences are fed into the PBSMT decoders, and the decision is left to the decoders to choose the most appropriate one. (Crego et al., 2007) used syntactic structures to reorder the input into word lattices for N-gram-based statistical machine translation. (Zhang et al., 2007a; Zhang et al., 2007b) employed chunks and POS tags to extract reordering rules; language models and reordering models are also used to weight the generated word lattices. Weighted n-best lists generated from rules are also used in (Li et al., 2007) as input to the decoders, where the rules are created from a syntactic parser. On the other hand, using syntactic rules to score the output word order is adopted by (Elming, 2008; Elming, 2009) on both English-Danish and English-Arabic tasks, which confirmed the effectiveness of syntactic reorderings for distant language pairs. Another related piece of work applies syntactic reordering information extracted from phrase orientation classifiers as an extra feature in PBSMT systems (Chang et al., 2009b) for a Chinese-English task.
However, rewriting the source sentence cannot be undone by the decoders (Al-Onaizan et al., 2006), which makes the deterministic approach less flexible than the non-deterministic one. Nevertheless, for the non-deterministic approach, most of the work relies on syntactic information (e.g. parse trees, chunks, POS tags) but never addresses which kinds of rules are favoured by the decoders in SMT systems. Accordingly, the final systems might not benefit from many of the reordering rules.
In this paper, we adopt the lattice scoring approach proposed in (Jiang et al., 2010) to discover reorderings contained in phrase alignments that are favoured by a baseline PBSMT system. Given this, the central idea of this work is to feed these reorderings back to the baseline PBSMT system as optional reordering information on the source side, and to let the decoder choose better reorderings according to our inputs. To accomplish this, syntactic reordering patterns on the source side are used to represent the potential reorderings obtained from the lattice scoring outputs. These patterns are then used to transform the baseline inputs into word lattices that carry potential reorderings useful for PBSMT decoders.
The other main contributions of this work are:
• Syntactic reordering patterns are automatically extracted from lattice scoring outputs which show the preferences of the baseline PBSMT system, rather than heuristic rules.
• Our method is seamlessly incorporated with existing distance-based and lexical reordering models, as the potential reorderings are constructed on the source-side with word lattices.
The rest of this paper is organized as follows: in Section 2 we give a brief overview of the lattice scoring approach for PBSMT systems, as well as of the generated phrase alignments. In Section 3 we discuss the extraction of syntactic reordering patterns from phrase-aligned sentences in the training corpus. In Section 4 we present the way inputs are transformed into word lattices with the syntactic reordering patterns. After that, we present our experimental setup and results, together with a discussion, in Section 5. Finally, we give conclusions and future work in Section 6.
Lattice scoring for phrase alignments
The lattice scoring approach was previously proposed in (Jiang et al., 2010) for data cleaning. The idea of that work is to utilize word alignments to perform approximate decoding on the training corpus, and thus to calculate BLEU (Papineni et al., 2002) scores from the decoding results, which are subsequently used to filter out low-scoring sentence pairs. The lattice scoring procedure contains the following steps: 1) train an initial PBSMT model on the given corpus; 2) collect anchor pairs containing both the source- and target-side phrase positions from the word alignments generated in the training phase; 3) build source-side lattices from the anchor pairs and the translation model; 4) expand and search the source-side lattices to obtain an approximate decoding result; 5) calculate BLEU scores on the training set and filter out sentence pairs with lower scores.
Step 5 is only useful for data cleaning, but steps 1-4 can be reused to extract the reordering information employed in this paper.
By taking the lattice scoring steps above, it is interesting that in step 4 not only the approximate decoding results are obtained, but their corresponding phrase alignments can also be tracked. This is because the source-side lattices built in step 3 come from anchor pairs, so each edge in the lattices contains both source- and target-side phrase positions. Once the best paths are found in step 4, we can obtain sequences of phrase alignments between the source and target sentences. A sample of the phrase alignments generated from lattice scoring is illustrated in Figure 1.
In Figure 1, the source sentence (Chinese) is shown on the right-hand side of the alignments and the target sentence (English) on the bottom. Note that, unlike word alignments, the elements of the alignments in Figure 1 are phrases, and the alignment points in the figure indicate the relationship between source and target phrases as segmented by the lattice scoring approach. Not all phrases have alignment points, because implicit edges are chosen during the search phase of lattice scoring (Jiang et al., 2010).
Rather than using word alignments (Crego et al., 2007) or phrase alignments from heuristic rules (Xia et al., 2004), we use the phrase alignments generated from lattice scoring, because this incorporates the PBSMT model to score potential phrase segmentations and alignments: only phrase segmentations and alignments with higher model scores are selected, while reorderings from word alignments that are unlikely under the PBSMT model are filtered out before pattern extraction, so better reordering patterns can be obtained afterwards. In the following section, we use this information to extract reorderings, which therefore also indicate higher model scores under the PBSMT model.
Reordering patterns
In the last section, we obtained phrase alignments from the lattice scoring procedure. A reordering is visible in the non-monotonic region of Figure 1: between source words 8-13 and target words 7-12 there is a non-monotonic alignment region. Comparing the source and target texts within this region, there is a structural word order difference between Chinese and English, which is specified as the DE construction in (Chang et al., 2009a; Du et al., 2010). However, in this paper, instead of dealing with one specified reordering structure for a language pair, we aim at using reordering patterns to discover any kind of potential source-side syntactic reordering from the phrase alignments.
Reordering regions extraction
Unlike previous work (Wang et al., 2007a; Chang et al., 2009a), which is carried out directly on parse trees in a top-down manner, our work aims at utilizing the reordering information in phrase alignments. Accordingly, we use a bottom-up approach similar to (Xia et al., 2004; Crego et al., 2007). We start by locating the reordering regions in the non-monotonic areas of the phrase alignments, and thereafter use syntactic patterns to describe such reorderings.
As shown in Figure 1, to obtain the same phrase order on both sides, suppose we retain the target sentence order and try to adjust the phrase order on the source side; one possible reordering operation is then to swap the regions A and B on the source side, where regions A and B contain source words 8-10 and 11-13 respectively. In this paper, reordering regions A and B indicating swap operations on the source side are considered only as potential source-side reorderings; thus regions AB imply (1):
AB ⇒ BA (1)
on the source-side word sequences. For each non-monotonic area in the phrase alignments, all of its sub-areas are tried in order to extract reordering regions A and B, and each of them is fed into the pattern extraction process. The reason for doing this is that the phrase alignments from lattice scoring cannot always be perfectly matched with the parse trees (as specified in the next section), and reordering regions from sub-areas can sometimes produce more meaningful patterns.
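A minimal sketch of this region extraction is given below, under the simplifying assumption that a swap candidate is a pair of source blocks whose order is inverted relative to the target side; alignments are (source_span, target_span) tuples with inclusive word positions, and the example mirrors Figure 1.

alignments = [((1, 7), (1, 6)), ((11, 13), (7, 9)), ((8, 10), (10, 12))]

def swap_regions(alignments):
    # walk the phrase pairs in target order; a source block that starts
    # earlier than its target-side predecessor signals a potential AB swap
    by_target = sorted(alignments, key=lambda x: x[1][0])
    regions = []
    for (src1, _), (src2, _) in zip(by_target, by_target[1:]):
        if src2[0] < src1[0]:
            regions.append((src2, src1))  # (region A, region B) in source order
    return regions

print(swap_regions(alignments))   # -> [((8, 10), (11, 13))]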
Reordering patterns from parse trees
Reordering regions AB extracted from the non-monotonic areas of the phrase alignments cannot be used directly to perform source-side reorderings, because they are just sequences of source-side words. To extract useful information from them, we map the reordering regions onto parse trees to obtain syntactic reordering patterns, similar to previous work (Xia et al., 2004; Crego et al., 2007). However, in this paper the Chinese Treebank (Xue, 2005) tag set is used, and the aim is to extract appropriate patterns for the reordering type AB in formula (1). The following steps are taken to accomplish this:
1. Parse the source-side sentences into parse trees. We use the Berkeley parser (Petrov, 2006) for parsing, and all parse trees are right-binarized to generate simpler tree structures for pattern extraction.
2. For each of the reordering regions AB extracted in Section 3.1, denote N_A as the node set corresponding to the words in region A and N_B as that for region B. The objective is to find a minimum treelet T of the whole parse tree, where T satisfies the following two criteria: 1) there must exist a path from each node in N_A ∪ N_B to the root node of T; 2) no leaf node of T can be the ancestor of nodes in both N_A and N_B (which means each leaf node can only be the ancestor of nodes in N_A, in N_B, or in neither). A code sketch of this step is given after the list.
3. Transform T into a reordering pattern P by traversing it in preorder, and at the same time label all the leaf nodes of T with A or B as reordering options, which indicates that the descendants of nodes labeled A are to swap with the descendants of nodes labeled B.
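A hedged sketch of step 2 follows; the tree representation, the containment tests, and the helper names are our own, and the toy tree reproduces the NP(CP(IP DEC) NP) structure of Figure 2 with A covering words 8-10 and B covering words 11-13.

class Node:
    def __init__(self, tag, children=(), span=None):
        self.tag, self.children, self.span = tag, list(children), span  # span: inclusive word range

def contains(node, lo, hi):
    return node.span[0] <= lo and hi <= node.span[1]

def overlaps(node, lo, hi):
    return node.span[0] <= hi and lo <= node.span[1]

def minimum_treelet(root, a, b):
    # descend to the lowest node whose span still covers all of A and B
    node = root
    while True:
        inner = [c for c in node.children if contains(c, a[0], b[1])]
        if not inner:
            break
        node = inner[0]
    # expand children only while they mix A and B words; leaves get A/B labels
    def build(n):
        mixes = overlaps(n, *a) and overlaps(n, *b)
        if n.children and mixes:
            return (n.tag, [build(c) for c in n.children])
        label = "A" if overlaps(n, *a) else "B" if overlaps(n, *b) else None
        return (n.tag, label)
    return build(node)

ip, dec = Node("IP", span=(8, 10)), Node("DEC", span=(11, 11))
np2 = Node("@NP", span=(12, 13))
cp = Node("CP", [ip, dec], span=(8, 11))
np1 = Node("NP", [cp, np2], span=(8, 13))
print(minimum_treelet(Node("TOP", [np1], span=(0, 20)), (8, 10), (11, 13)))
# -> ('NP', [('CP', [('IP', 'A'), ('DEC', 'B')]), ('@NP', 'B')])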
Instead of subtrees, we use the term treelets to refer to the located parse-tree substructures, since treelets do not necessarily extend down to the leaf nodes. The extraction process is illustrated in Figure 2, which shows part of the source-side parse tree for the pattern extraction process of region AB in Figure 1. The parse tree is binarized, and the symbol @ is used to indicate the extra tags generated in tree binarization (for example, @NP in Figure 2).
As depicted in Figure 2, tree T (surrounded by dashed lines) is the minimum treelet of the parse tree that satisfies the two criteria in step 2 of Section 3.2. Note also that the leaf nodes of T are labeled A or B according to their descendants, e.g. IP (in region A) is labeled A, while DEC and @NP (in region B) are labeled B. After the tree T is found, we convert it into a syntactic reordering pattern by traversing it in preorder. At the same time, we collect the leaf nodes labeled A or B into reordering node sequences L_A or L_B respectively, to record the reordering operations. Furthermore, in order to generate larger sets of patterns, we do not distinguish tags generated in the parse tree binarization process from the original ones, which means that we treat @NP and NP as the same tag. Thus, we obtain a syntactic reordering pattern P from T as in (2):
P = {NP(CP(IP DEC) NP) | O = {L_A, L_B}}    (2)

where the first part of P is the NP with its tree structure, and the second part O indicates the reordering scheme, which implies that the source words corresponding to the descendants of L_A are to be swapped with those of L_B.
Context tags in reordering patterns
As specified at the end of Section 3.1, phrase alignments cannot always be perfectly matched with the parse tree topologies, especially when all sub-parts of the non-monotonic areas of the phrase alignments are considered as potential AB reordering regions. Figure 3 illustrates this situation, where there is no matched treelet for the reordering regions AB.
In this case, we expand AB to the right and/or the left with a limited number of words to find a minimum treelet as specified in step 2 of Section 3.2. In the figure, the tree node with tag P is selected when expanding region A one word to the left, such that the corresponding treelet T can be obtained. Note that in this situation, a minimum number of ancestors of the expanded tree nodes are kept in T, but they are assigned the same labels as the regions from which they have been expanded; e.g. in Figure 3, the node with tag P (not in region A) is expanded from region A, so it is kept in T but labeled A (linked with a dashed arrow in the figure).

Figure 3: Context tag in pattern extraction
P = {V P (P P (P N P ) V P )|O} (3)
However, the previous steps tend to generated duplicate reordering patterns because each sub-area of the non-monotonic phrase alignments are attempted and node expanding is carried out. To remove the duplications, a merge operation is carried out as follows: suppose treelets T 1 and T 2 are extracted from the same sentences while sharing the same root symbol, if T 1 is also a treelet of T 2 and their reordering regions AB overlap, then T 2 is merged into T 1 . However, not all the reordering regions will generated a pattern because some of them will not have a corresponding minimum treelet.
Pattern weights estimation
Syntactic reordering patterns are extracted from non-monotonic phrase alignments. However, in the training corpus there is not always a reordering whenever a treelet matches a pattern. To describe the chance of reordering p_reo when a treelet is matched with a pattern P, we count the occurrences of P in the training corpus, as well as the number of occurrences in which there is a reordering indicated by P, and estimate it as in (4):

p_reo(P) = count{P with reordering} / count{P observed}    (4)
By contrast, one syntactic pattern P usually contains more than one reordering scheme, coming from different reordering regions and parse trees, so we assign each reordering scheme O (specified in formula (2)) a weight as in (5):

w(O, P) = count{reordering O in P} / count{P with reordering}    (5)
Thus, in general, a syntactic reordering pattern is expressed as in (6):

P = {tree | p_reo | ⟨O_1, w_1⟩, ..., ⟨O_n, w_n⟩}    (6)

where tree indicates the tree structure of the pattern, which has a reordering probability p_reo and contains n reordering schemes with their weights.
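The counting behind formulas (4) and (5) can be sketched as follows; the pattern and scheme names are placeholders, and the toy counts give p_reo = 3/5 with scheme weights 2/3 and 1/3.

from collections import Counter

observed, reordered = Counter(), Counter()
scheme_counts = {}

def record(pattern, scheme=None):
    # scheme is None when the treelet matched but no reordering was observed
    observed[pattern] += 1
    if scheme is not None:
        reordered[pattern] += 1
        scheme_counts.setdefault(pattern, Counter())[scheme] += 1

for s in (None, "O1", "O1", "O2", None):
    record("NP(CP(IP DEC) NP)", s)

p = "NP(CP(IP DEC) NP)"
p_reo = reordered[p] / observed[p]                                    # formula (4)
weights = {o: c / reordered[p] for o, c in scheme_counts[p].items()}  # formula (5)
print(p_reo, weights)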
Applying syntactic reordering patterns
Similar to (Crego et al., 2007; Zhang et al., 2007a; Zhang et al., 2007b), we use the extracted patterns to transform source-side sentences into word lattices. Sentences in both the development and test sets are transformed into word lattices carrying potential reorderings wherever the tree structure of a pattern is a treelet of the source-side parse tree. A toy example is depicted in Figure 4. In the figure, treelet T′ of the source-side parse tree is matched with a pattern. Leaf nodes {a_1, ..., a_m} ∈ L_A of T′ span {w_1, ..., w_p} in the source sentence, while {b_1, ..., b_n} ∈ L_B span {v_1, ..., v_q}. Applying the reordering operation in formula (1), we add an edge from the start of w_1 to the end of v_q by swapping {w_1, ..., w_p} with {v_1, ..., v_q}.
For each source sentence, all matched patterns are sorted by the weights p_reo in formula (6), and a predefined number of reorderings is applied to generate the lattice. For each node in the lattice with an initial edge E_0 coming from the original source sentence, if there are outgoing edges generated from patterns {P_1, ..., P_i, ..., P_k}, the weight of E_0 is defined as in (7):
w(E_0) = α + Σ_{i=1}^{k} [ (1 − α)/k · (1 − p_reo(P_i)) ]    (7)

where α is a base probability that prevents w(E_0) from becoming zero, and p_reo(P_i) is the pattern weight defined in formula (4). By contrast, suppose that P_i has r reordering schemes corresponding to edges {E_s, ..., E_{s+r−1}}; then the weight of E_j is defined as in (8):

w(E_j) = (1 − α)/k · p_reo(P_i) · w_{j−s+1}(P_i) / Σ_{t=1}^{r} w_t(P_i)    (8)

where s ≤ j < s + r, and w_t(P_i) is the reordering scheme weight defined in formula (5). Here we assume equal probabilities for all possible reorderings that start at the same lattice node.
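A minimal sketch of these edge weights for a single lattice node follows; the inputs are illustrative, with each matched pattern given as its p_reo together with the raw counts of its reordering schemes. Note that the resulting weights over all outgoing edges sum to one.

ALPHA = 0.05   # base probability, as set in the experiments below

def node_edge_weights(patterns):
    # patterns: list of (p_reo, [scheme counts]) matched at this node
    k = len(patterns)
    # formula (7): the original edge keeps alpha plus the non-reordering mass
    w0 = ALPHA + sum((1 - ALPHA) / k * (1 - p) for p, _ in patterns)
    # formula (8): each reordering edge takes its share of the pattern's mass
    reorder = [(1 - ALPHA) / k * p * w / sum(schemes)
               for p, schemes in patterns for w in schemes]
    return w0, reorder

print(node_edge_weights([(0.6, [2.0, 1.0]), (0.4, [1.0])]))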
Experiments
The experiments are conducted on a medium-sized corpus for the Chinese-English task. The training data is the FBIS corpus, a multilingual paragraph-aligned corpus with LDC resource number LDC2003E14; we use the Champollion aligner (Ma, 2006) to perform sentence alignment, obtaining 256,911 sentence pairs. We randomly selected 2,000 pairs as a devset and another 2,000 pairs as a test set, which are referred to as the FBIS set in this paper. The rest of the data is used as the training set.
Evaluation results are reported on two different sets: the FBIS set and the NIST 2008 test data. For the FBIS set, only one reference translation is available for both the devset and the testset. For the NIST data, we use the NIST 2005 test set, which includes 1,082 sentences, as the devset, while the NIST 2008 set is used as the test set, with 1,357 sentences. In both the devset and the testset of the NIST data, there are four reference translations for each sentence.
Moses is used as the baseline. Word alignment is performed with GIZA++ 1 and refined with the "grow-diag-final" method (Koehn et al., 2005), while tuning is performed with minimum error rate training (MERT) (Och, 2003). We also use SRILM 2 to build 5-gram language models with modified Kneser-Ney smoothing (Kneser & Ney, 1995) for all the experiments.
The pattern extraction experiments and the results are reported in the following subsections.
Pattern extraction
The lattice scoring approach is performed in a similar manner to that of (Jiang et al., 2010). We use the same baseline system as specified above to carry out the lattice scoring procedure. However, instead of the NIST data, the initial PBSMT system is tuned with the FBIS devset to obtain weights for lattice scoring. After that, we collect anchor pairs and build source-side lattices based on the word alignments generated in the training phase. Then a Viterbi search is carried out to generate phrase alignments.
From the training corpus, 48,285 syntactic reordering patterns with a total of 57,861 reordering schemes are extracted from the phrase alignments. The average number of non-terminals over all patterns is 11.02. However, for reasons of computational efficiency, we pruned patterns with fewer than 3 or more than 9 non-terminals. This leaves 18,169 syntactic reordering patterns with 22,850 reordering schemes and an average of 7.6 non-terminals.
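The pruning step amounts to a simple filter over the extracted patterns; a sketch with the thresholds reported above (num_nonterminals is a hypothetical accessor on our pattern object):

```python
def prune_patterns(patterns, min_nt=3, max_nt=9):
    """Keep only patterns whose non-terminal count lies in [min_nt, max_nt]."""
    return [p for p in patterns if min_nt <= p.num_nonterminals() <= max_nt]
```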
Lattice building
We apply the pruned syntactic reordering patterns to both the devset and the testset, and convert the source sentences of both sets into word lattices. However, the lattice size increases dramatically with the number of applied patterns. To guarantee manageable word lattice inputs for the Moses decoder, we also constrain the lattice generation process with empirical parameters: for each source sentence, the maximum number of reordering schemes is set to 30, and the maximum span of a pattern is set to 30.
To calculate the weights of the word lattices, we set the base probability in formulas (7) and (8) to 0.05. The generated word lattices of the devset and the testset are fed into Moses for tuning and evaluation, respectively. No extra training steps are required.
The built-in reordering models (distance-based and lexical reordering) of Moses are also enabled while dealing with word lattice inputs, and their weights in the log-linear model (including lattice input weights) are tuned at the same time.
Results on FBIS set
To compare with the built-in reordering models of Moses, we set the distortion-limit (DL) parameter of Moses to {0, 6, 10, 12}; the evaluation results of the testset on the FBIS data are shown in Table 1. As shown in Table 1, for the BLEU, NIST and METEOR scores, the best performance of the baseline system is achieved with distortion limit 12 (underlined), and the best performance of our syntactic reordering method is obtained with distortion limit 10 (underlined). Our method outperformed the baseline by 0.41 (1.67% relative) BLEU points, 0.02 (0.30% relative) NIST points and 0.36 (0.66% relative) METEOR points, respectively. The comparison between the baseline system and our method with the same distortion limits shows that the improvements are consistent for all distortion limits (scores in bold face) except the NIST score with distortion limit 12. However, these results still confirm our proposed method on the FBIS data.
Table 1: Results on the FBIS testset (columns: System, DL, BLEU, NIST, METEOR).
Results on NIST set
As in the last section, we also adopt several distortion limit parameters, and report the NIST evaluation results in Table 2. As shown in Table 2, the best performance of the baseline system is achieved with distortion limit 12 (underlined), while for our method, the best BLEU and NIST scores are obtained with distortion limit 6 (underlined), and the best METEOR score with distortion limit 10 (underlined). Our proposed method significantly outperformed the baseline system by 1.36 (8.56% relative) BLEU points, 0.51 (8.28% relative) NIST points and 1.90 (4.14% relative) METEOR points, respectively. Similarly, the comparison between the baseline system and our method with the same distortion limits demonstrates that the improvements are also consistent for all distortion limits (scores in bold face). These results indicate the effectiveness of the syntactic reordering model on the NIST 08 data for our medium-sized corpus.
Discussion
From the results shown in the previous sections, we found that our method can benefit the baseline PBSMT system with its built-in reordering models. However, we observed that with a larger distortion limit, the improvements become less significant. This is because with a larger distortion limit, the baseline PBSMT system can try longer reorderings, while our method has a restriction on the range of the reordering patterns. In this case, the number of reorderings that are considered by our method but not tried by the baseline system becomes smaller, and thus the improvements of our method shrink.
However, we can still improve the system by 0.9 (3.8% relative) and 1.64 (10.5% relative) BLEU points on the two testsets with distortion limit 6, which is the default setting of Moses. And with all distortion limits, our method benefits the baseline system on the different automatic evaluation metrics. This indicates that our method provides extra reordering capabilities on top of the built-in reordering models of PBSMT. We also compare system performance with respect to the distortion limit parameter of Moses in Figures 5 and 6 for the FBIS testset and the NIST testset, respectively. In the figures, for each of the three automatic evaluation metrics, the baseline system tends to achieve better results with a larger distortion limit, while for lattice inputs, medium distortion limits lead to better performance. This indicates that, with lattice inputs which have already considered potential reorderings on the source side, large distortion limits do not further benefit the SMT system. From this point of view, it also indicates that long-range reordering may be captured well by syntactic reordering, whereas short-range reorderings are handled well by the distance-based and lexical reordering models. Thus, for our proposed syntactic-reordering-enhanced system, a medium distortion limit should be preferred. Nevertheless, in the experiments, our method does provide consistent improvements for all distortion limits.
Conclusion and future work
A novel approach to syntactic reordering for PBSMT systems is studied in this paper. It takes a bottom-up approach to extract syntactic reordering patterns from phrase alignments generated via lattice scoring, which indicates reorderings favoured by the baseline system. Word lattices are used to represent potential source-side reorderings. Pattern weights are estimated from the training corpus and are used to determine the edge weights in the word lattices. The proposed approach is integrated with the existing distance-based and lexical reordering models, and their weights in a log-linear model are tuned with MERT. Experiments on a medium-sized corpus showed consistent improvements for all distortion limits. Compared with the baseline system, we obtained improvements of 1.67% relative on a randomly selected testset and 8.56% relative on the NIST 2008 testset in terms of BLEU score.
In the future, we plan to carry out experiments on larger corpora. Furthermore, a larger range of reordering types will be examined to extract more fine-grained patterns. We will also try different methods of binarizing parse trees (Wang et al., 2007b) to improve the pattern extraction process still further.
Figure 1: Phrase alignments and reorderings
Figure 2: Reordering pattern extraction
Figure 4: Applying patterns
Figure 5: Score comparison on the FBIS testset (DL = distortion limit)
Figure 6: Score comparison on the NIST testset (DL = distortion limit)
Table 2: Results on the NIST testset

| System   | DL | BLEU  | NIST | METEOR |
|----------|----|-------|------|--------|
| Baseline | 0  | 14.43 | 5.75 | 45.03  |
| Baseline | 6  | 15.61 | 5.88 | 45.75  |
| Baseline | 10 | 15.73 | 5.78 | 45.27  |
| Baseline | 12 | 15.89 | 6.16 | 45.88  |
| Lattices | 0  | 16.77 | 6.54 | 47.16  |
| Lattices | 6  | 17.25 | 6.67 | 47.65  |
| Lattices | 10 | 17.15 | 6.64 | 47.78  |
| Lattices | 12 | 16.88 | 6.56 | 47.17  |
References

Yaser Al-Onaizan and Kishore Papineni. 2006.

IWSLT Speech Translation Evaluation. International Workshop on Spoken Language Translation: Evaluation Campaign on Spoken Language Translation (IWSLT 2005), Pittsburgh, PA, USA.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. ACL 2007: Proceedings of Demo and Poster Sessions, pp. 177-180, Prague, Czech Republic.

Chi-Ho Li, Dongdong Zhang, Mu Li, Ming Zhou, Minghui Li, and Yi Guan. 2007. A probabilistic approach to syntax-based reordering for statistical machine translation. ACL 2007: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pp. 720-727, Prague, Czech Republic.

Xiaoyi Ma. 2006. Champollion: A Robust Parallel Text Sentence Aligner. LREC 2006: Fifth International Conference on Language Resources and Evaluation, pp. 489-492, Genova, Italy.

Franz Josef Och and Hermann Ney. 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. ACL 2002: 40th Annual Meeting of the Association for Computational Linguistics, pp. 295-302, Philadelphia, PA.

Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. ACL 2003: 41st Annual Meeting of the Association for Computational Linguistics, pp. 160-167, Sapporo, Japan.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. ACL 2002: 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, Philadelphia, PA.

Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. Coling-ACL 2006: Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pp. 433-440, Sydney, Australia.

Chao Wang, Michael Collins, and Philipp Koehn. 2007a. Chinese syntactic reordering for statistical machine translation. EMNLP-CoNLL 2007: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 737-745, Prague, Czech Republic.

Wei Wang, Kevin Knight, and Daniel Marcu. 2007b. Binarizing Syntax Trees to Improve Syntax-Based Machine Translation Accuracy. EMNLP-CoNLL 2007: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic.

Fei Xia and Michael McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. Coling 2004: 20th International Conference on Computational Linguistics, pp. 508-514, University of Geneva, Switzerland.

Nianwen Xue, Fei Xia, Fu-dong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207-238.

Richard Zens, Franz Josef Och, and Hermann Ney. 2002. Phrase-based statistical machine translation. Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pp. 333-341, Suntec, Singapore.

Yuqi Zhang, Richard Zens, and Hermann Ney. 2007a. Chunk-level reordering of source language sentences with automatically learned rules for statistical machine translation. SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation, pp. 1-8, Rochester, NY.

Yuqi Zhang, Richard Zens, and Hermann Ney. 2007b. Improved chunk-level reordering for statistical machine translation. IWSLT 2007: International Workshop on Spoken Language Translation, pp. 21-28, Trento, Italy.
248,780,052 | Modeling U.S. State-Level Policies by Extracting Winners and Losers from Legislative Texts | Decisions on state-level policies have a deep effect on many aspects of our everyday life, such as health-care and education access. However, there is little understanding of how these policies and decisions are being formed in the legislative process. We take a data-driven approach by decoding the impact of legislation on relevant stakeholders (e.g., teachers in education bills) to understand legislators' decision-making process and votes. We build a new dataset for multiple US states that interconnects multiple sources of data including bills, stakeholders, legislators, and money donors. Next, we develop a textual graph-based model to embed and analyze state bills. Our model predicts winners/losers of bills and then utilizes them to better determine the legislative body's vote breakdown according to demographic/ideological criteria, e.g., gender. | [
233365325,
18790966
] | Modeling U.S. State-Level Policies by Extracting Winners and Losers from Legislative Texts
Maryam Davoodi mdavoodi@purdue.edu
Purdue University
Eric Waltenburg
Purdue University
Dan Goldwasser
Purdue University
Modeling U.S. State-Level Policies by Extracting Winners and Losers from Legislative Texts
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1), May 22-27, 2022. © 2022 Association for Computational Linguistics
Decisions on state-level policies have a deep effect on many aspects of our everyday life, such as health-care and education access. However, there is little understanding of how these policies and decisions are being formed in the legislative process. We take a data-driven approach by decoding the impact of legislation on relevant stakeholders (e.g., teachers in education bills) to understand legislators' decision-making process and votes. We build a new dataset for multiple US states that interconnects multiple sources of data including bills, stakeholders, legislators, and money donors. Next, we develop a textual graph-based model to embed and analyze state bills. Our model predicts winners/losers of bills and then utilizes them to better determine the legislative body's vote breakdown according to demographic/ideological criteria, e.g., gender.
Introduction
State-level legislation is the cornerstone of national policies and has long-lasting effects on residents of US states. Thus, decoding the processes that shape state bills is crucial yet involved. State legislatures vote on 23 times more bills than Federal legislatures, exceeding 120K bills per year (King, 2019). In addition, these state bills cover a broader range of local problems as each state possesses lawmaking power effective within its boundaries. E.g., the State of Washington Health Care Committee addresses health service issues including licensing and regulation of health care facilities and providers. Moreover, it regulates pharmacies, pharmaceutical drugs, state public health programs, and private/public insurance markets (House, 2021).
We argue that recent NLP architectures can provide new insights into state-level legislative efforts. In particular, contextualized graph and text embedding can better represent policies within and across states via a shared political context. However, most of the prior efforts are focused on analyzing congressional bills with traditional techniques, e.g., (Gerrish and Blei, 2011, 2012). A few state-level studies (Eidelman et al., 2018; Davoodi et al., 2020) took great steps in predicting the progression of state bills towards a vote on the floor and the breakdown of votes based on demographic metrics (e.g., gender). But their main downside is that they evaluate policies in a limited context and do not capture cross-state patterns.
Figure 1: A health bill leading to voting cleavage based on the party metric, primarily due to its specific losers (red) and winners (green). Bill excerpt: "Medical malpractice actions. Permits a patient to bring an action against a health care provider without submitting the complaint to the medical review board if: (1) the amount of the claim is not more than $15,000; (2) the cause of action is based on the removal of the wrong body part."
Winners-Losers analysis. In this work, we take a new data-driven approach to analyzing state legislation. Our key insight is that each state bill inevitably produces some winners and losers to provide practical solutions to specific in-state and local problems. Thus, we argue that it is important to examine state bills in the larger context of their impact on different population segments as well as commercial and professional stakeholders. To help clarify this idea, consider the example in Figure 1. This state bill makes it easier for patients (winners) to take legal actions against healthcare providers (losers). This analysis of winners and losers (WLs) can foster transparency in legislative efforts in each state, while interconnecting different states through common stakeholders and revealing cross-state patterns. In addition, the context of WLs can enable a new category of NLP models for predicting the roll-call behavior of legislators.
Downstream bill classification tasks. For instance, the political science community sees tremendous value in predicting voting cleavages, based on the ideological and demographic identities of legislators (Section 2). Each such metric (e.g., party, gender, district, ideology) splits legislators into groups. Measuring lack of consensus within and across these groups, which has political and social benefits, can be done using two classification tasks (Section 5): For a given metric, we say a bill is competitive (Figure 2) if the majority vote of legislators from a group (e.g., Democrat, male, urban, liberal) is different from that of the opposite group (e.g., Republican, female, rural, conservative). Similarly, a bill is inverse-competitive if there is a tie in votes of members of the same group (e.g., liberals). For instance, the health bill in Figure 1 resulted in a party-competitive vote. Another example is a state bill on abortion that "requires... physician performing an abortion to have admitting privileges at a hospital in the county," which resulted in a gender-competitive vote. We show that the context of winners/losers of these bills could hint at such cleavages prior to voting (Sections 4, 6).
Framework overview. To achieve this goal, we address multiple NLP challenges in our proposed framework: (1) Data: The legislative process in US states does not track the stakeholders of bills and the impact of bills on them. Thus, we design a reliable crowd-sourcing pipeline to extract and analyze winners and losers of state bills from their text and form a new annotated dataset.
(2) Modeling: To automate the WL analysis, next, we provide a nationwide graph abstraction to model the state legislative process, as well as a joint text and graph embedding architecture for predicting winners and losers. Our model captures the interplay of different entities, e.g., bills, stakeholders, legislators, and money donors, while maintaining dependencies between their textual attributes. We leverage RGCN (Schlichtkrull et al., 2018), a relational graph convolutional network, to represent diverse relations. We also adopt the RoBERTa transformer (Liu et al., 2019) after performing domain-adaptive pretraining on political texts using the MLM (Masked Language Model) task.
(3) Application: Finally, we showcase the ability of our WL analysis and prediction model in decoding the voting behavior of state legislators. In summary, we make three technical contributions:
• We provide the first definition and realization of winners/losers analysis for state bills using the latest NLP advances. (Sections 2, 3, 4).
• We developed a new joint graph and text embedding model that both predicts winners/losers of bills and legislators' votes. In particular, it incorporates the winners/losers inference into the vote prediction task, to evaluate bills in a broader context (Section 5).
• We operationalized the winners/losers analysis for several legislative topics (e.g., health) and created a new dataset. The extensive evaluation shows our approach delivers a higher F1, than existing models (Sections 3, 6)
Related Works
Our work is inspired by some promising studies: Roll-call classification. Eidelman et al. 2018 associate the bill's text with partisan information of its sponsors to predict the likelihood of a member of the U.S. Congress voting in support of a bill. Similarly, Gerrish and Blei 2011 embed the combined topic and text of Congress bills in the ideological space and develop ideal point models for inferring votes. Peng et al., 2016; Kornilova et al., 2018; Kraft et al., 2016; Patil et al., 2019; Karimi et al., 2019; and Pujari and Goldwasser, 2021 augment this model using data on social networks, thus generating better embeddings.
Bill text classification. Instead of leveraging bill text in models to describe the behavior of each legislator, Yano et al. 2012 include the bill's text in a model that directly predicts whether a bill comes out from a standing committee. Particularly, they develop features based on the urgency of the problem being solved by the bill and the set of legislators co-sponsoring the bill. Eidelman et al. 2018 conduct a similar study on US states.
Winners-losers analysis. Analyzing the impact of bills on its stakeholders is a well-studied topic in the political science literature. Gamm and Kousser, 2010 reveal state legislators are more likely to write bills aimed at a particular local stakeholder when the legislative body is dominated by one party. Similarly, Bagashka and Clark, 2016 show state legislators are motivated to introduce particularistic bills designed to help a specific geographical area within their district. Pennock, 1979 analyzes legislation based on its generalized and particularized impact on different interest groups. By leveraging recent NLP advances (e.g., contextualized language models, graph embedding, crowdsourcing), our work extends these studies and provides the first automated framework for the stakeholders analysis on state bills.
Voting cleavages. Research has covered multiple ways that the demographic background of legislators can affect roll-call voting. Frederick 2010 demonstrates that gender affects the roll-call vote in the Senate by changing the influence of partisanship for GOP women. Broach 1972 describes that urban-rural voting cleavages happen in less partisan states and on bills that separate urban and rural interests. Similar to us, Davoodi et al. 2020 build a textual graph to predict such cleavages. While our focus is on a different problem, stakeholders analysis, we outperform this prior study by representing bills in a broader context containing their stakeholders.
Graph embedding in NLP. Our work uses Graph convolutional networks (GCNs), which have been applied to various NLP tasks, e.g., Semantic role labeling (SRL) (Marcheggiani and Titov, 2017) and relation classification in clinical narratives. In these tasks, GCNs encode the syntactic structure of sentences. Similarly, Defferrard et al., 2016; Peng et al., 2018; and Henaff et al., 2015 use graph neural networks (GNNs) to represent a network of documents based on their references. Similar to our work but for a different problem and objective, Sawhney et al., 2020 analyze speech-level stance of members of the parliament, by performing node classification on graph attention networks (GATs), and Pujari and Goldwasser, 2021 analyze social media content generated by politicians using a graph transformer model.
Modeling
We first provide an overview of key players in the state-level legislative process. Then, we model them using an efficient text-based graph abstraction (Figure 3), which will enable us to embed and evaluate state policies in a broad context and perform the stakeholder and roll-call analysis on them.
Players in State Legislative Process
Our model, unlike prior works, fully captures the interplay of main players in the lawmaking process:
Figure 3: Nationwide, heterogeneous, and multi-relational graph abstraction for analysis and prediction of winners-losers and roll-calls (diagram panels: Winner-Loser Analysis, Roll-call Analysis).

1. Legislators. A state legislature typically consists of two "chambers": the House and the Senate. The legislative process starts with legislators sponsoring a bill in a chamber. The idea of a bill can come from different sources. Next, the bill goes through multiple roll-call votes in the origin chamber, where it can fail at any stage. It is first referred to the proper committee by the chamber leader. Committee members, before casting their votes, may set up a public hearing with the sponsors and interested parties. If the bill passes out of the committee, it reaches the second reading, where the full chamber debates, amends, and votes on the bill. If the bill passes by a majority vote, it is scheduled for the third reading and final vote. A bill must go through a similar procedure in the other chamber before it is acted on by the governor.
2. Contributors. While legislators navigate through bills, external contributors influence their decisions. Individual and corporate money donors aim at changing the outcome and theme of bills starting from election time. Lobbyists launch campaigns to persuade legislators towards certain policies. Such efforts inevitably lead to new bills or amendments to existing laws.
3. Stakeholders. A state bill cannot benefit everyone, and it produces beneficial or detrimental effects on its stakeholders. Identifying the winners and losers of a bill from its text is crucial and can hint at the fate of the bill. Notably, legislators do not always write bills themselves: corporations and interest groups (e.g., ALEC) sell fill-in-the-blank bills to legislators. Thus, we can see voting patterns on bills with the same winners and losers.
Nationwide, Multi-Relational, and Heterogeneous Legislative Graph
To model these players and their interactions, we design a legislative graph with three important properties ( Figure 3). First, since each of the players (e.g., stakeholders, legislators) has different textual attributes, our proposed graph supports heterogeneous textual nodes. Second, we form a nationwide graph to capture cross-state patterns (ablation study in Appendix A.2) by building common entities (e.g., stakeholders in Section 4). Finally, our abstraction supports multiple relations between each pair of entities (e.g., legislators voting and sponsoring a bill). With this overview, we present the nodes and relations that we will realize based on the real data: Node types. The nodes in the legislative graph contain a rich set of textual features: (1) Bill nodes embed title, abstract, and body of state bills.
(2) Stakeholder nodes come with short texts on political interests and constituent entities of stakeholders of policies in bills (will be detailed shortly).
(3) Legislator nodes contain diverse textual information on legislators, e.g., their biography, political interests, committee assignments, and demographic profile (e.g., party, gender, ideology, and district).
(4) Contributors nodes have text-based attributes on money donors covering their specific/general business interests, party, and their type (individual or non-individual).
Relation types. Based on the legislative process, legislator and bill nodes participate in Bill Sponsorship, 'No' Vote, and 'Yes' Vote relations in the graph (See Appendix A.4 for handling abstain votes.) A stakeholder node forms Winner, Loser, or Neutral relations with a bill node, which we will extract it based on the bill text. Similarly, we form two types of relations between contributors and legislators: Positive Donation realized based on the real donation data, and Negative Donation, which we infer when a contributor shows a lack of interest in a demographic of legislators (e.g., never donates to women). We sample and connect such legislators and the contributor via a negative relation.
Data-Driven Stakeholders Analysis
Next, we describe how we build up the legislative graph, by collecting data on legislators, bills, and contributors. US states do not record the impact of bills on relevant stakeholders. Thus, we explain how to derive stakeholders from bill nodes, perform winners-losers analysis on them, and interconnect different US states by forming common stakeholder nodes. We highlight how our analysis can be used (1) to inform the public about the dynamic and direction of state policies, and (2) to determine legislators' roll-call behavior with different demographic and ideological profiles.
Data Collection & Bootstrapping Graph
Bills. From the LegiScan website (LegiScan, 2019), we collected data on bills introduced in Indiana, Oregon, and Wisconsin from 2011 through 2018 (details in Appendix 7). We developed a crawler that uses the LegiScan API to fetch legislative information on every bill, including: (1) bill metadata, e.g., the bill type, title, description, sponsors, and links to its texts; (2) vote metadata, e.g., legislator's roll-call vote; and (3) legislator metadata, e.g., party and district info. Then, our crawler converts bill texts in PDF format to text files. In total, we collected ∼35k bills and sampled 58% of them that had both roll-call data and full texts. Our focus is on the 2nd/3rd reading, in which the full chambers vote, so we selected 32% of the bills for building the legislative graph (Table 1). In LegiScan, each bill is associated with a main topic (e.g., health), used for referral to a proper committee. For the four most frequent topics (Table 2), we will define a group of generic stakeholders for the winners-losers analysis.
Legislators. Our crawler also used Ballotpedia (Ballotpedia, 2019) to collect text information on each legislator's biography, political interests, and committee assignments. It also consumed other publicly available datasets to identify a legislator's demographic profile, e.g., ideology, gender, and district. The ideology scores for legislators (Shor and McCarty, 2011) were grouped into conservatives, moderates, and liberals. The district identifier was combined with GIS census data (Census, 2019) to identify each legislator as representing an urban or rural district. Table 3 summarizes the distribution of legislators' demographic profiles. Contributors: FollowTheMoney (FollowTheMoney, 2019) records donations to state legislators and candidates. Our crawler collected the information of donors for each legislator in our dataset (see Table 1). This includes multiple textual attributes for each contributor: type (individual or non-individual), general party, and economic and business information. While the contributor data can be utilized in more sophisticated ways, we focused on major contributors by setting a minimum donation threshold and pruning donors who contributed to a single legislator. We set the fraction of negative donations (Section 3) to 30% of the positive ones extracted from the data.
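As one possible realization, the negative-donation relations could be materialized as in the sketch below; it simplifies the demographic-based selection described in Section 3 to uniform sampling over legislators the contributor never donated to, with the 30% ratio stated above (all names are illustrative):

```python
import random

def sample_negative_donations(contributor, legislators, positive_edges, ratio=0.3):
    """Create 'Negative Donation' edges for one contributor by sampling
    legislators it never donated to; the number of negatives is `ratio`
    times the contributor's observed positive donations."""
    donated_to = {leg for (c, leg) in positive_edges if c == contributor}
    candidates = [leg for leg in legislators if leg not in donated_to]
    n_neg = int(ratio * len(donated_to))
    sampled = random.sample(candidates, min(n_neg, len(candidates)))
    return [(contributor, leg, "Negative Donation") for leg in sampled]
```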
Stakeholders Extraction & Annotation
For each select bill topic, we (the authors) randomly sampled 10% of bills and carefully analyzed their texts. We recorded the entities discussed in the bill texts as well as the detrimental or beneficial impact of the suggested policies on them (regardless of the legislative outcome, i.e., passed in a vote or not). To interconnect different states and optimize the legislative graph, we deduplicated entities and clustered those whose interests align (e.g., surgeons, doctors, dentists, etc.) into generic ones (e.g., healthcare providers). Table 4 shows the final list of stakeholders for the select topics. With detailed annotation guidelines, we leveraged Amazon MTurk for labeling ∼4k bill texts from these topics (Table 2), where 3-5 workers identified the effect of the suggested policies in each bill on the relevant stakeholders. As will be detailed in Appendix A.1, we ensured that the accuracy of the labeled data is 90%+.
Benefits of Winners-Losers Analysis
Based on the outcomes of the previous two steps, we formed a legislative graph for our target states. We briefly provide two results from the winners-losers analysis on the graph to highlight its importance. First, we show the frequency distribution of the stakeholders as a winner vs. a loser for each topic in Table 4, which would inform the public about the dynamics and directions of state-level policies. E.g., under the education topic, students were the largest winners, while educational institutions were the major losers. For law bills, law enforcement agencies were the top losers given the recent nationwide focus on police use of force. Also, our winners-losers analysis captures the policy preferences of different ideological and demographic groups of legislators. For example, Democrats are more likely to support legislation benefiting teachers, compared to Republicans (GOP). This fact is also reflected in our models predicting voting cleavages in Section 6 (e.g., our naive model, WL-Correlation, outperforming other models in its category). Here, to motivate the need for building such models, we are interested in measuring the rate of 'Yes' votes from each demographic and ideological group of legislators on bills of a given topic, where a stakeholder is a loser and a winner. E.g., on education bills benefiting a stakeholder (e.g., Students) as a winner, we compute A = [# of yes votes]/[total # of votes] among GOP legislators. Similarly, on education bills where this stakeholder is a loser, we calculate B = [# of yes votes]/[total # of votes] for GOP. We then report the difference, A - B, in Table 5, where a large positive value indicates the stakeholder is being advantaged by the respective group of legislators. E.g., we see that the GOP has significantly more Yes votes when students are winners, compared to Yes votes when students are losers. By running queries on the legislative graph containing all players (e.g., donors), we were able to see that the voting behavior of the GOP could be motivated by major donations to this party from corporations representing students (e.g., School Choice).
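A minimal sketch of the differential A - B described above; the vote records and bill attributes are hypothetical stand-ins for our dataset fields:

```python
def yes_rate(votes):
    """Fraction of 'yes' votes in a list of vote strings."""
    return sum(v == "yes" for v in votes) / len(votes)

def winner_loser_differential(records, group, stakeholder, topic):
    """A - B: the group's yes-rate on `topic` bills where `stakeholder` is a
    winner, minus its yes-rate where the stakeholder is a loser.
    `records` holds (bill, legislator_group, vote) triples; bills expose
    hypothetical .topic, .winners, .losers attributes."""
    win_votes = [v for (bill, g, v) in records
                 if g == group and bill.topic == topic
                 and stakeholder in bill.winners]
    loss_votes = [v for (bill, g, v) in records
                  if g == group and bill.topic == topic
                  and stakeholder in bill.losers]
    return yes_rate(win_votes) - yes_rate(loss_votes)
```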
Embedding & Prediction Architecture
The stakeholder analysis based on human data annotation is expensive and time-consuming. To automate the analysis and better leverage its results in other applications, we build up a contextualized embedding architecture and define two classification tasks on the legislative graph:
Classification Tasks on Legislative Graph
Task 1: winners-losers prediction. Our first task is to predict the relation between a bill node and each relevant stakeholder node (based on its topic in Table 4). Such predicted relations will bring valuable insights into the bills, while also clarifying legislators' roll-call behavior (Section 6). Thus, we define the next task to showcase these benefits.
Task 2: bill cleavage and survival prediction. For a bill, we predict if (1) it shows identifiable voting cleavages and (2) it can advance by getting a pass. We achieve these by predicting and aggregating roll-call relations (between legislators and bills) in the graph. In particular, we assign 9 labels to each bill: (1) Competitive labels: For voting cleavages, we split legislators into groups based on their demographic and ideological profiles (party, gender, ideology, and the urban/rural nature of their district as defined in Section 4). For an attribute (e.g., gender), we say a voting round is "competitive" if the majority of legislators from one group (e.g., Women) and the majority of the opposite group (e.g., Men) cast different votes (Figure 2a). (2) Inverse-competitive labels: Similarly, for an attribute (e.g., gender), a voting round is inverse-competitive if we identify a partial or complete tie (Appendix A.4) in the vote of legislators of the same group (e.g., Men in Figure 2b).
(3) Survival label: Finally, a bill passes its current voting round by getting a majority vote.
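To make the label definitions concrete, the sketch below derives the competitive and inverse-competitive labels for a binary attribute such as gender; the tie band is an illustrative choice rather than the exact tie definition from Appendix A.4:

```python
def majority_yes(votes):
    """True if strictly more than half of the votes are 'yes'."""
    return sum(v == "yes" for v in votes) > len(votes) / 2

def cleavage_labels(roll_call, tie_band=0.1):
    """roll_call: {legislator: (group, vote)} for one binary attribute,
    e.g. group in {'M', 'F'}. Returns (competitive, inverse_competitive)."""
    groups = {}
    for group, vote in roll_call.values():
        groups.setdefault(group, []).append(vote)
    g1, g2 = list(groups.values())[:2]          # assumes two groups

    competitive = majority_yes(g1) != majority_yes(g2)

    def near_tie(votes):                        # (near-)tie inside one group
        rate = sum(v == "yes" for v in votes) / len(votes)
        return abs(rate - 0.5) <= tie_band

    inverse_competitive = near_tie(g1) or near_tie(g2)
    return competitive, inverse_competitive
```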
Figure 4: Joint text and graph embedding architecture (diagram components include RoBERTa encoders and average pooling).
Overview of Embedding & Training
At a high level, we propose a unified model to jointly embed and classify both roll-call and winner-loser relations in the legislative graph (Figure 4a): (a) We first train our model to predict relations between bill nodes and their stakeholders. One can use the result of this stage for further analysis of state policies (e.g., the analysis in Section 4). (b) Our key insight is that knowing winner-loser relations enhances the embedding of nodes in the legislative graph. Thus, we conduct inference on bills that lack such relations (if any) using the pretrained model from step (a) and add these predicted relations to the graph. (c) Next, we continue training on the updated graph to fine-tune the model for the roll-call (vote) prediction task. Finally, we aggregate the predicted votes for the bill cleavages/survival analysis. In all these steps, our model generates and jointly optimizes both text and graph embeddings for each node, and consumes them to classify the two types of relations. Thanks to jointly optimizing the tasks over the textual and graph information, our architecture outperforms existing models (Section 6). Hereafter, we detail the layers in our model using a bottom-up approach:
Contextualized Text Embedding Layers
The lower half of our model generates a contextual embedding for textual attributes of nodes in the legislative graph. We leverage the RoBERTa architecture (Liu et al., 2019). For improved performance, one of our contributions is that we pretrain RoBERTa on unlabeled bill texts using the MLM task (Section 6). In more detail, for each bill node, we feed three pieces of textual information to RoBERTa: title, abstract, and body. RoBERTa does not support input sequences longer than 512 tokens. Thus, we take the representation of each of these components separately (the embedding of its [CLS] token) and apply average pooling to obtain the final representation of the bill. Similarly, the text embedding of stakeholder, legislator, and contributor nodes is the average of the vectors representing their key textual attributes (Section 4).
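A sketch of the bill encoder using the Hugging Face transformers API (our own rendering of the pooling step; roberta-large is assumed since it matches the 1024-dimensional embeddings reported in Section 6):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
encoder = RobertaModel.from_pretrained("roberta-large")   # 1024-dim hidden states

def embed_bill(title, abstract, body):
    """Encode title/abstract/body separately (RoBERTa caps inputs at 512
    tokens), take the representation of each component's first token
    (RoBERTa's <s>, playing the role of [CLS]), and average-pool them."""
    reps = []
    for text in (title, abstract, body):
        batch = tokenizer(text, truncation=True, max_length=512,
                          return_tensors="pt")
        with torch.no_grad():
            out = encoder(**batch)
        reps.append(out.last_hidden_state[:, 0])   # first-token embedding
    return torch.stack(reps).mean(dim=0)           # final bill representation
```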
Relational Graph Convolutional Layers
On top of the text embedding layer, we place a Relational Graph Convolutional Network (RGCN) to create a graph embedding for each node. The RGCN uses the text embedding of each node to initialize its graph representation. In parallel, we build a feed-forward neural network (FFNN), taking the text embeddings of nodes to a concatenation layer for our joint text-graph optimization. The (non-relational) GCN has multiple layers and each layer performs two operations: propagation and aggregation. In the propagation, nodes update their neighbors by sharing their features or hidden states. In the aggregation, each node adds up the messages coming to it to update its representation. In GCN, at layer l + 1, the hidden representation of node i, h i l+1 , with neighbours N i is:
h_i^{l+1} = \sigma\left(\sum_{j \in N_i} \frac{1}{c_i} W^l h_j^l\right) \quad (1)
GCN uses the same weight matrix $W^l$ in each layer and the same normalization factor, $c_i = |N_i|$, for all relation types in a graph. We choose RGCN as it uses unique parameters for each relation type, thus better handling our multi-relational graph. In RGCN, the embedding of node $i$ in layer $(l+1)$ is:
h_i^{l+1} = \sigma\left(W_0^l h_i^l + \sum_{r \in R} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^l h_j^l\right) \quad (2)
A 3-layer RGCN turns out to be sufficient in our case to capture the 3rd order relations between contributors and stakeholder nodes.
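For clarity, one RGCN layer implementing equation (2) can be written as follows; in practice a library implementation such as DGL's RelGraphConv would be used, so this sketch only makes the per-relation weighting explicit:

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """h_i^{l+1} = sigma(W_0 h_i + sum_r sum_{j in N_i^r} W_r h_j / c_{i,r})."""
    def __init__(self, in_dim, out_dim, num_rels):
        super().__init__()
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)        # W_0
        self.rel = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_rels)]
        )                                                              # W_r

    def forward(self, h, edges_by_rel):
        """h: [num_nodes, in_dim]; edges_by_rel[r] = (src, dst) index tensors."""
        out = self.self_loop(h)
        for r, (src, dst) in enumerate(edges_by_rel):
            msg = self.rel[r](h[src])                          # W_r h_j
            agg = torch.zeros_like(out).index_add_(0, dst, msg)
            deg = torch.zeros(h.size(0), 1).index_add_(        # c_{i,r} = |N_i^r|
                0, dst, torch.ones(len(dst), 1))
            out = out + agg / deg.clamp(min=1)
        return torch.relu(out)
```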
Relations Prediction Layers
By combining the outputs of the RGCN and the FFNN, we train a relation classification layer using the DistMult scoring function (Schlichtkrull et al., 2018; Yang et al., 2014). For each relation $(s, r, d)$ being predicted, this layer computes $f(s, r, d) = e_s^T W_r e_d$, where $e_s$ and $e_d$ are the joint text and graph embeddings of the source and destination nodes and $W_r$ is a diagonal relational weight matrix. Our loss function is $L = L_{CLS} + L_{Text} + L_{Graph}$, enabling us to jointly optimize the text and graph embeddings as well as the relation prediction. $L_{CLS}$ is the cross-entropy loss of the relation classification; $L_{Graph}$ and $L_{Text}$ are the L2 regularizations of the RGCN's and the FFNN's weights, optimizing the graph and text representations, respectively.
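A sketch of the DistMult scorer and the joint loss; the binary cross-entropy over positive and sampled negative triples and the regularization weight lam are our assumptions about the training details:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistMult(nn.Module):
    """f(s, r, d) = e_s^T W_r e_d with a diagonal W_r per relation type."""
    def __init__(self, num_rels, dim=256):
        super().__init__()
        self.rel_diag = nn.Parameter(torch.randn(num_rels, dim))  # diagonals of W_r

    def forward(self, e_src, rel_ids, e_dst):
        return (e_src * self.rel_diag[rel_ids] * e_dst).sum(-1)   # triple scores

def joint_loss(scores, labels, rgcn_params, ffnn_params, lam=1e-4):
    """L = L_CLS + L_Graph + L_Text: relation classification loss (here,
    binary cross-entropy over positive and sampled negative triples) plus
    L2 regularizers on the RGCN and text-side FFNN weights."""
    l_cls = F.binary_cross_entropy_with_logits(scores, labels)
    l_graph = lam * sum(p.pow(2).sum() for p in rgcn_params)
    l_text = lam * sum(p.pow(2).sum() for p in ffnn_params)
    return l_cls + l_graph + l_text
```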
Experiments
We first evaluate the efficiency of our legislative graph abstraction and text+graph embedding model in the winners-losers prediction. Then, we show the benefits of our combined inference of stakeholders and roll-calls in decoding state bills.
Experimental Setup
Data split and metric. We split the legislative graph (formed in Section 4) based on bill nodes. We randomly select 20% of the bills for testing and keep the rest for training and validation. We study three settings in terms of the winners-losers (stakeholders) information in the graph: (a) Unknown winners-losers relations. (b) Known relations based on our human-labeled annotation. (c) Predicted: 30% of bills in the train graph come with such relations and we predict them for the rest of the bills. In Appendix A.3, we report the results of state- and time-based splits. Finally, given that our data is highly skewed, we choose macro F1 as the main metric over accuracy. Settings/parameters. We build our joint model (Figure 4) on top of PyTorch, DGL, and spaCy. We set the initial embedding dimension in RoBERTa and RGCN to 1024. The FFNN and RGCN take the embeddings to a 256-dimensional space. We use the Adam optimizer, and for each observed relation (Table 1), we sample a negative example.
Baseline Models
We devise robust baselines for both of our tasks:
1. Text-based models. We build a logistic regression (LR) classifier that takes the text embedding of a bill and predicts if it shows a certain cleavage or passes/fails. A similar classifier takes the text embeddings of a bill and a stakeholder to classify their relation. We evaluate three embedding architectures: (a) BoW, where unigram and bigram features (top 5K highest-scoring) are used to represent textual information. (b) RoBERTa (Liu et al., 2019). (c) Pretrained RoBERTa that we adapted its domain by applying MLM on 10K unlabeled state bills (39K sentences) (Gururangan et al., 2020). We study two additional variations of these models (only for winners-losers prediction due to limited space): Sponsors, where the bill sponsors are represented using a one-hot vector and concatenated to the bill text representation. Roll-Call, where we concatenate a vector containing cleavage/survival info. of each bill to its text embedding.
2. Graph-based models: We build a relation classifier over edge embeddings, generated by three widely-used graph models, to predict roll-call and winner-loser relations (for the bill cleavages/survival task, we aggregate votes): 3. Naive models. We evaluate three naive classifiers: (a) Majority: A baseline predicting the most frequent class in the train data. (b) Sponsor: An LR classifier that uses the one-hot embedding of bill sponsors to determine bill survival/cleavages (similarly winner/loser relations). (c) WL-Correlation (solely for the survival/cleavage task) predicts a legislator's vote on a test bill with known winners/losers based on his historical votes on train bills with the same winners/losers.
Exp 1: Winners-Losers Prediction
We compare these models in predicting relations between bills and their relevant stakeholders (Table 6). (1) In the vanilla text-based category, RoBERTa shows 2.9 higher F1 than BoW. Our pretrained RoBERTa generates more efficient contextual embedding for text information of bills and stakeholders (e.g., summary), and thus better determines the impact of a bill on its stakeholders. Including the sponsors' info in the pretrained RoBERTa leads to the best text model.
(2) In the graph-based models, Deepwalk/GCN exhibits a sharp drop in F1, by ignoring the heterogeneity of relations in the graph and thus producing inefficient representations for them. RGCN overcomes this issue and approaches the best text model with F1 of 63.9.
(3) Our joint text-graph model combines the strengths of the graph and text models and delivers 3.3 points higher F1.
Exp 2: Impact on Bill Cleavage Prediction
Next, we focus on the performance of different models in determining voting cleavages/survival, with Unknown, Known, and Predicted winners-losers in the legislative graph. In Table 7, we report the results for bill survival and party-based voting cleavages (results for the other cleavages are in Appendix Table 11). We can make a few observations: First, our stakeholder analysis helps all models to better decode state policies, when comparing the same model in the Unknown and Known winners-losers settings: (1) In the text-based models, prediction on the textual information of both bills and known winners-losers delivers a higher F1 than on the text of bills alone (e.g., the pretrained RoBERTa model gets a 5.4% boost in F1 in predicting party-competitive bills). (2) Similarly, in the graph-based models, RGCN overcomes the limitations of Deepwalk in handling heterogeneous relation types (winner-loser vs. roll-call) and delivers consistent gains in the setting Known.
(3) Our model has the best performance due to generating and optimizing a joint graph and text representation for legislators, bills, money donors, and stakeholders in the setting with known winners and losers. Second, by focusing on the models with the Predicted winners-losers information, we observe:
(5) Our model still beats the other baselines, due to our unified model for roll-call and winner-loser training as well as our text-based legislative graph abstraction (Section 5). Of course, there is an expected drop in F1 across different models including ours, when we consume predicted winner-loser relations instead of human-labeled ones. This drop could be tolerable in most cases, thus not hindering the automation of our stakeholder analysis and leveraging its results in downstream vote analysis tasks (ethical considerations in Section 7).
Conclusion
We took a new data-driven approach to analyze state legislation in the US. We showed that identifying the winners/losers of state bills can (1) inform the public on the directions of state policies, and (2) build a nationwide context for a better understanding of legislators' roll-call behaviors. Thus, we proposed a text-based graph abstraction to model the interplay of key players in the state legisla-tive process, e.g., bills, stakeholders, legislators, and donors to legislators' campaigns. Next, to automate our analysis, we developed a shared text and graph embedding architecture to jointly predict winners/losers of bills and legislators' votes on them. We created a new dataset using different data sources and human annotation and evaluated the strength of our architecture against existing models. We hope this work will provide a starting point for further studies examining the impact of policy decisions on individuals and groups, an important step towards making the democratic process more transparent.
Ethical Considerations
Analyzing state legislation is a sensitive task, where unexpected results of research and deployed ML systems can create misguided beliefs about government policies on important topics (e.g., health, education). Thus, we would like to discuss some ethical aspects related to our work in terms of data and model (considering potential scenarios suggested by Chandrabose et al., 2021): 1. Selection of data sources. While there can be different inherent imbalances in the state legislature (e.g., gender and party distribution), we did not find evidence that our data sources add systematic political and social biases to our study, e.g., towards demographic populations of legislators. All our data sources (e.g., LegiScan and FollowTheMoney) are publicly available and have been used by the political science community over the years. LegiScan (LegiScan, 2019) is a nonpartisan and impartial legislative tracking and reporting service for state bills. FollowTheMoney (FollowTheMoney, 2019) is a nonpartisan, nonprofit organization revealing the influence of campaign money on state-level elections and public policy in all US states. Finally, Ballotpedia (Ballotpedia, 2019) is a nonpartisan, nonprofit organization providing a brief introduction, biography, committee assignments, and general information on legislators across different years. Our study combined these data sources for analyzing state bills in a broad context, thus contributing to reduced data bias for all models evaluated in this paper.
2. Selection of states. In addition, to help mitigate the risk of data collection bias or topic preference that can be introduced through the choice of specific state legislatures, we randomly picked a "red", a "blue", and a "purple" state (indicating a significant majority for Republicans, Democrats or more balanced state legislature, respectively). There were some restrictions in terms of collecting the data from the above sources (e.g., FollowThe-Money and Ballotpedia). These data sources and services often limit the number of API calls and queries for retrieving the data for educational institutions. Besides this, annotating the data through Amazon MTurk was expensive for us so we conducted our study on four highly discussed topics in state bills (i.e., health, education, agriculture, and law). We will explore ways of expanding our dataset to more states and topics over time.
3. Disguised winners and losers. In theory, the authors of state bills (e.g., interest groups selling fill-in-the-blank bills to legislators) may try to reframe bills (disguise winners or losers) to further their political aims. At first glance, this could pose a challenge to our bill annotation, dataset, and stakeholder analysis. As described in Section 3, the state legislative process has a multi-stage reviewing process in two chambers (e.g., first reading, second reading, and third reading). Thus, we have observed that, in practice, it is hard to hide the impact of bills on their relevant stakeholders from our qualified annotators, i.e., the authors and multiple vetted MTurk workers for each example. In addition, our work on MTurk maps the impact of policies suggested by bills to winners and losers. Thus, it already considers those stakeholders that are not mentioned in the text explicitly (more details in Appendix A.1).
4. Winners and losers analysis. The analysis, aligning demographic cleavages with winners and losers preferences, is done at an aggregate level based on the data we annotated. These preferences could be influenced by other factors beyond demographics. Deriving conclusions from this analysis could require longitudinal studies capturing the change of these patterns over time, for example when analyzing policies intended to help correct inequities towards marginalized groups. Our goal is to provide a tool for domain experts that would point at nuanced, stakeholder-specific legislative preferences that can be studied further in order to determine their significance.
5. Handling abstain votes. There are abstain (absent and N/A) votes in our dataset. However, we did not include them in our study due to their extremely low frequency (for our proposed model and the other baseline models). We leave this evaluation as future work.
6. Handling other countries and languages. While our dataset is specific to the US, the problem we studied, stakeholder analysis, can be generalized to legislation from other countries and in different languages. Although we have not evaluated such bills (due to the lack of data sources), we expect such legislation to produce winners or losers to provide practical solutions to local problems. In particular, our framework offers a multi-relational graph abstraction and prediction models to analyze stakeholders of bills (winners/losers) and the voting behavior of legislators. These techniques can support non-US national and state-level legislative processes. To accommodate other languages, one could adopt cross-lingual embedding models, e.g., XLM-R (Conneau et al., 2019) instead of RoBERTa, in our architecture.
A Appendices
A.1 Data Annotation Pipeline
Our analysis on MTurk maps the policy described in the bills to potential winners and losers, i.e., stakeholders that would be positively or negatively impacted if the bill passes. The analysis is for the proposed policy, regardless of the legislative outcome (pass a vote or not). Due to lack of space, we did not mention certain aspects of our bill annotation task on MTurk in Section 4. We referred to it as a pipeline (in Section 4) because we fully automated the whole process (e.g., selection of MTurkers, publishing bills, collecting and analyzing results, etc.) in Python, based on MTurk APIs and other open-source libraries. Annotation of political bills, particularly our winners/losers analysis, turned out to be a challenging task for typical MTurk workers. Thus, we developed an automated quality assurance scheme to ensure high-quality annotations for our study. In particular, we built the following components in our pipeline:
1. We developed a Political Science Qualification test on MTurk to evaluate candidate MTurk workers. Our test consists of 20 questions (e.g., sentiment analysis on US political text, identifying winners and losers of US bills, and basic political knowledge questions). Table 8 shows the first four questions in the test.
2. Our pipeline selected 20 qualified English-speaking annotators who successfully completed 80% of the tasks, assigned them our qualification label on MTurk, and added them to our pool. We designed the test such that it must be completed by candidates in 30 minutes, and those who failed the test were not allowed to take it again. While location was not a determining metric for us in selecting annotators (we instead focused on evaluating their knowledge of US policies and politics), most of our qualified annotators were located in the US.
3. Next, for annotating each bill in our dataset, our pipeline randomly chose 3 annotators from the pool to determine the effect of the bill on the relevant stakeholders (generated based on the topic of the bill).
4. After collecting the results, for each bill, it computed the final winners and losers based on the majority rule (a minimal sketch of this aggregation appears after this list). For the 5% of bills with no agreement among annotators (each annotator selected different winners/losers), we automatically assigned these bills to two additional MTurkers, and then recalculated the final results/labels based on the majority rule.
5. For a small fraction of bills (around 1%), adding new annotators was not sufficient to reach an agreement, and thus we automatically rejected all the results and restarted the process from Step 3 with a new group of annotators. Finally, for a handful of bills, the authors performed the annotation manually.
6. To monitor the accuracy of our annotation, our pipeline sampled labeled bills from each batch and we (the authors) performed winners/losers analysis on them to validate the results. We observed that our pipeline generated fully correct labels (all winners and losers) for over 90% of bills. Figure 5 shows the distribution of winners and losers associated with the bills in our crowd-sourced pipeline.
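The majority-rule aggregation with escalation (steps 4-5) can be sketched as follows. This is an illustration under stated assumptions, not the authors' code: it assumes each annotation has already been fetched from MTurk (e.g., via boto3) as a set of (stakeholder, label) pairs, and `fetch_annotations`/`assign_extra_workers` are hypothetical helpers.

```python
# Minimal sketch of the majority-rule aggregation with escalation (steps 4-5).
from collections import Counter

def aggregate(annotations):
    """Return the (stakeholder, 'winner'/'loser') pairs chosen by a strict
    majority, or None when no pair reaches a majority (no agreement)."""
    votes = Counter()
    for ann in annotations:          # ann: set of (stakeholder, label) tuples
        votes.update(ann)
    n = len(annotations)
    majority = {pair for pair, count in votes.items() if count > n / 2}
    return majority or None

def label_bill(bill_id, fetch_annotations, assign_extra_workers):
    anns = fetch_annotations(bill_id, num_workers=3)
    result = aggregate(anns)
    if result is None:               # ~5% of bills: escalate to two more workers
        anns += assign_extra_workers(bill_id, num_workers=2)
        result = aggregate(anns)
    return result                    # still None for ~1%: restart with a new pool
```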
A.2 Ablation Study: Effect of Nationwide Context
As discussed in Section 3, proposed policies and legislative outcomes at the state level are influenced by the nationwide context. Corporations and lobbying groups coordinate their efforts across multiple states to influence legislators in similar ways. We capture this fact in our graph representation by interconnecting states through common/shared nodes, as described in Section 3. We conducted an ablation study to show the benefit of building a nationwide legislative graph: we split the common nodes in the legislative graph that were shared across states (e.g., stakeholders, money donors) into state-specific nodes, and then repeated a handful of the experiments from Section 6. In our classification tasks (both winners/losers prediction and bill cleavage/survival), we observed up to a 4.3-point drop in macro F1. This indicates that interconnecting states through common nodes (e.g., stakeholders and money donors) leads to better contextualized textual+graph embeddings. In addition, in another ablation study, we measured the effect of different relation types and textual attributes of nodes in the legislative graph. For example, our evaluation showed that the donors' information (and their relations with legislators) improves the F1 score by at least 2.1 points across different tasks (for the graph models listed in Table 6).
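A minimal sketch of the node-splitting ablation, assuming a networkx MultiDiGraph whose nodes carry a 'kind' attribute and whose edges record the state they were observed in (both attribute names are our assumptions):

```python
# Minimal sketch: split nodes shared across states into state-specific copies.
import networkx as nx

def split_shared_nodes(g: nx.MultiDiGraph, shared_kinds=("stakeholder", "donor")):
    split = nx.MultiDiGraph()
    for u, v, data in g.edges(data=True):
        state = data["state"]
        # Rename shared nodes per state, e.g. "Students" -> "Students@IN".
        u2 = f"{u}@{state}" if g.nodes[u].get("kind") in shared_kinds else u
        v2 = f"{v}@{state}" if g.nodes[v].get("kind") in shared_kinds else v
        split.add_edge(u2, v2, **data)
    return split
```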
A.3 Additional Results: State and Time-based Data Splits
In Section 6, we evaluated all models using a random split based on bill nodes. Here, we further evaluate the best model in each category (from Tables 6 and 7) with two different data splits: (1) Time-based, where the test bills are the 20% most recently introduced bills.
(2) State-based, where we choose the test bills from one specific state and the training bills from the other two states. In Table 9, we look at the winners/losers prediction task and the performance of the best model in each category (i.e., naive, text-based, and graph-based, as discussed in Section 6). Similarly, in Table 10, we study the best models (from Table 7) for classifying party-based competitive bills. Overall, we make several observations. First, the results for the time- and state-based splits still show that our pretrained textual+graph model outperforms the other models in both tasks (i.e., winners/losers prediction and voting cleavage classification). Second, we see a rather sharp drop in F1 score across all models for these two new splits. The reason is that time-based and state-based splits of bills lead to more unseen nodes (e.g., legislators, money donors), challenging graph models more than text models. For the time-based data split, the performance degradation is slightly smaller, as there were fewer unseen nodes in the test dataset. Third, when we use Oregon for testing, we observe a larger drop in model performance than when using Indiana; one potential reason is that Wisconsin and Indiana tend to be Republican states, while Oregon has been a Democratic one. Fourth, our graph abstraction and stakeholder analysis (relations) help even the baseline models better decode state policies, as seen when comparing model performance on the bill cleavage/survival tasks with Unknown, Known, and Predicted winners and losers in the legislative graph.

Table 11: Effect of winners/losers information on the graph and text-based models in different vote classification tasks. Extending the results based on the random split of bills in Table 7.
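The two alternative splits above can be sketched as follows (a minimal illustration, assuming a pandas DataFrame of bills with 'state' and 'introduced_date' columns; both column names are our assumptions, not the paper's):

```python
# Minimal sketch of the time-based and state-based evaluation splits.
import pandas as pd

def time_based_split(bills: pd.DataFrame, test_frac: float = 0.2):
    bills = bills.sort_values("introduced_date")
    cut = int(len(bills) * (1 - test_frac))
    return bills.iloc[:cut], bills.iloc[cut:]    # train, test (most recent 20%)

def state_based_split(bills: pd.DataFrame, test_state: str = "IN"):
    test = bills[bills["state"] == test_state]
    train = bills[bills["state"] != test_state]  # e.g. train on WI and OR
    return train, test
```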
A.4 Measuring Disagreement and Labeling Competitive Roll-Calls
As discussed in Section 5, roll-call votes occur when a state-level legislator votes "yea" or "nay" on a bill. In Sections 1 and 5, we defined two types of bill classification tasks to characterize voting cleavages, i.e., disagreement across and within different ideological and demographic groups of legislators. Here, we discuss how we measure the disagreement and label bills in each of these tasks: (1) Inverse-competitive bills: Consider a bill where 55% of men vote Yea and 45% vote Nay. We define the disagreement as the percentage of minority votes, or 45% in this case. When the disagreement within a group of legislators (e.g., men) is between 40% and 49%, we consider the bill an inverse-competitive bill with a partial tie in votes. A disagreement of 50% is a complete tie.
(2) Competitive bills: Next, consider a bill with 60% of women voting Yea and 80% of men voting Nay on a roll call. This bill is competitive because the majority of female legislators voted differently than the majority of male legislators (the cross-group disagreement is 20% = 80% - 60% in this case). Conceptually, we do not need to compute the cross-group disagreement to identify competitive bills: it suffices that the two group majorities point in opposite directions.
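A minimal sketch of these two labeling rules (our illustration; inputs are the fraction of 'yea' votes within each legislator group):

```python
# Minimal sketch of the A.4 labeling rules.
def inverse_competitive(yea_frac: float) -> bool:
    """Within-group partial tie (40-49% minority votes) or complete tie (50%)."""
    minority = min(yea_frac, 1.0 - yea_frac)   # e.g. 0.45 for a 55/45 split
    return minority >= 0.40

def competitive(yea_frac_group_a: float, yea_frac_group_b: float) -> bool:
    """The majorities of the two groups vote in opposite directions."""
    return (yea_frac_group_a > 0.5) != (yea_frac_group_b > 0.5)

# Examples from the text: a 55/45 split within men is inverse-competitive,
# and 60% women-Yea vs. 20% men-Yea (80% Nay) is competitive.
assert inverse_competitive(0.55)
assert competitive(0.60, 0.20)
```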
Figure 2: Two types of gender-based voting cleavages.
Figure 4: Joint text-graph embedding model for shared prediction of winners/losers and roll-calls in the legislative graph using RGCN & pretrained RoBERTa.
(a) DeepWalk (Perozzi et al., 2014), which generates embeddings for nodes and edges by running Skip-Gram on random walks on the graph.
(b) GCN (Kipf and Welling, 2016), a basic 3-layer GCN model with random node features in its first layer.
(c) RGCN (Schlichtkrull et al., 2018), the relational GCN handling different relation types in the legislative graph.
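Of these, only RGCN exploits the typed relations. A minimal sketch of such a relational GCN over the legislative graph, written with PyTorch Geometric's RGCNConv (an illustration, not the paper's implementation; the layer sizes are our assumptions):

```python
# Minimal sketch of a 2-layer relational GCN over the legislative graph.
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class LegislativeRGCN(torch.nn.Module):
    def __init__(self, num_nodes, num_relations, hidden=128, out=64):
        super().__init__()
        # Learned node features (the GCN baseline starts from random features).
        self.emb = torch.nn.Embedding(num_nodes, hidden)
        self.conv1 = RGCNConv(hidden, hidden, num_relations)
        self.conv2 = RGCNConv(hidden, out, num_relations)

    def forward(self, edge_index, edge_type):
        x = F.relu(self.conv1(self.emb.weight, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)
```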
Figure 5: Distribution of winner and loser relations between bills and their stakeholders. Most of the annotated bills had at least one winner or loser; only a small portion (3.5%) had neither. A bill does not necessarily have both a winner and a loser.
Table 1: Aggregated statistics of the legislative graph (Cont: Contributor, Leg: Legislator).
Table 2: Bills sampled for the stakeholders analysis.

Topic       Education  Health  Law   Agriculture
# of bills  957        942     1140  758
Table 3: Aggregated legislators' attributes (UR: Urban, RU: Rural, C: Conservative, M: Moderate, L: Liberal).
Topic        Stakeholders                             W (%)  L (%)
Education    Edu. companies & service providers       1.4    1
Education    Educational institutions and schools     23.9   8.7
Education    State education agencies                 6.3    8.6
Education    Teachers and education workers           13.2   1.3
Education    Students                                 34.2   1.6
Agriculture  Agriculture and food-related companies   4.5    4.1
Agriculture  Agricultural and food producers          24.4   6.9
Agriculture  End consumers or retail customers        11.6   11.2
Agriculture  State agriculture and food agencies      14.5   1.4
Agriculture  Grocery stores or food providers         11.6   9.8
Health       Healthcare facilities                    16.7   7.7
Health       Healthcare providers and professionals   6.8    3.3
Health       Insurance providers and companies        11.4   10.5
Health       Patients and insurance owners            16.7   6.3
Health       Pharma and medical device companies      4.6    0.5
Health       State healthcare agencies                11.7   4
Law          Law enforcement agencies and officers    15.7   24.7
Law          Judges                                   11.5   9.4
Law          Victims, offenders, suspects             9.9    11.2
Law          Lawyers                                  9.8    7.7

Table 4: Stakeholders of different bill topics and their frequency distribution as winners (W) and losers (L).
Table 5: Capturing policy preferences of different demographic and ideological groups of legislators on education bills, by measuring the change in the rate (%) of 'yes' votes when a stakeholder is a winner and a loser.
[Figure 4 diagram: RoBERTa-based textual node encoders embed each bill by concatenating its title, description, and body (e.g., "Emergency medications …", "The prescription of …", "A school nurse or …"); bill, contributor (cont.), legislator (Leg.), and stakeholder (Stake.) embeddings are linked by typed relations (R2, R4, R5, R7) in the legislative graph; an FFNN relation-prediction head combines the text and graph embeddings to output predicted winner/loser relations (winner-loser analysis) and feeds the cleavage/survival roll-call analysis.]
Table 6: Performance of different models in predicting winner-loser relations between bills and stakeholders. Among the graph-based models, RGCN overcomes DeepWalk and GCN.
Type              Winner/Loser  Embedding            Pass/Fail  Comp. (Party)  Inverse Comp. (Party)
Naive             Unknown       Majority             47.2       43.1           48.3
Naive             Unknown       Sponsor              50.6       52.3           53.4
Naive             Known         WL Correlation       58.7       51.3           54.8
Text-based        Unknown       BoW                  48.1       56.8           48.6
Text-based        Unknown       Pre. RoBERTa         49.8       58.2           49.3
Text-based        Known         BoW                  49.4       61.2           50.1
Text-based        Known         Pre. RoBERTa         51.9       63.6           51.8
Text-based        Predicted     BoW                  48.3       59.1           49.3
Text-based        Predicted     Pre. RoBERTa         50.1       61.4           49.8
Graph-based       Unknown       DeepWalk             59.9       49.1           49.8
Graph-based       Unknown       GCN                  60.8       50.4           48.2
Graph-based       Unknown       RGCN                 64.3       52.8           49.8
Graph-based       Known         DeepWalk             62.6       49.6           50.3
Graph-based       Known         GCN                  63.6       52.2           50.3
Graph-based       Known         RGCN                 66.3       53.7           52.4
Graph-based       Predicted     DeepWalk             61.3       47.7           50.1
Graph-based       Predicted     GCN                  62.3       51.2           49.5
Graph-based       Predicted     RGCN                 65.6       52.3           52.1
Our Joint         Unknown       Pre. RoBERTa + RGCN  69.8       69.9           59.1
Text + Graph      Known         Pre. RoBERTa + RGCN  73.7       71.8           59.8
                  Predicted     Pre. RoBERTa + RGCN  70.7       71.6           59.5
Table 7: Effect of winners/losers information on the graph and text-based models in different downstream vote classification tasks. See the results for other demographic voting cleavages in Table 11.
Table 8: Some of the questions used in our Political Science qualification test.

Questions/Tasks:
How many women currently serve on the US Supreme Court?
Which party currently has the majority of seats in the US Senate?
What is the topic of the following legislation? "Prevention and control of, emergency and involuntary commitment for, and treatment programs and services for drug dependence."
Select the entities that lose benefits from this bill: "Requires Oregon Health Authority to commission independent study of costs and impacts of operating basic health program in Oregon. Specifies parameters of study. Requires a report to Legislative Assembly by November 30, 2014. Appropriates money from General Fund to authority for contract costs to conduct study. Declares emergency, effective on passage."
Table 9: Performance of the best model in each category (defined in Section 6 and Table 6) in predicting winner/loser relations between bills and stakeholders, for time-based and state-based splits.

Type              Embedding                  Winner/Loser  State (Test: IN)  State (Test: OR)  Time (Test: 20%)
Naive             WL Correlation             Known         49.5              49.3              50.1
Text-based        Pretrained RoBERTa         Unknown       57.3              57.2              57.4
Text-based        Pretrained RoBERTa         Known         59.4              58.8              60.1
Text-based        Pretrained RoBERTa         Predicted     58.3              58.0              58.8
Graph-based       RGCN                       Unknown       49.5              49.0              51.1
Graph-based       RGCN                       Known         51.5              50.1              52.1
Graph-based       RGCN                       Predicted     50.7              49.3              51.5
Our Joint         Pretrained RoBERTa + RGCN  Unknown       62.9              61.3              62.8
Text + Graph      Pretrained RoBERTa + RGCN  Known         64.1              63.2              65.6
                  Pretrained RoBERTa + RGCN  Predicted     63.8              62.4              64.2
Table 10: Effect of winners/losers information on the graph and text-based models in different vote classification tasks, for time-based and state-based data splits.

[Figure 5 data: % of bills by number of winner stakeholders (0, 1, 2, 3): 9.2, 52.4, 36.3, 2.1; by number of loser stakeholders (0, 1, 2, 3): 21.3, 48.9, 27.2, 2.6. Axes: % of Bills vs. # of stakeholders.]
Nebraska's legislature is unique in the nation because it has a single-house system.
Acknowledgement
We would like to acknowledge the members of the PurdueNLP lab. We also thank the reviewers for their constructive feedback. The funding for the use of MTurk was part of the Purdue University Integrative Data Science Initiative: Data Science for Ethics, Society, and Policy Focus Area. This work was partially supported by an NSF CAREER award IIS-2048001.
Tanya Bagashka and Jennifer Hayes Clark. 2016. Electoral rules and legislative particularism: Evidence from US state legislatures. American Political Science Review, 110(3):441-456.
Ballotpedia. 2019. State-level political encyclopedia data. https://ballotpedia.org/.
Glen T. Broach. 1972. A comparative dimensional analysis of partisan and urban-rural voting in state legislatures. The Journal of Politics, 34(3):905-921.
GIS Census. 2019. GIS census data. https://www.nhgis.org/.
Aravindan Chandrabose, Bharathi Raja Chakravarthi, et al. 2021. An overview of fairness in data: illuminating the bias in data pipeline. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion, pages 34-45.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. XLM-R. arXiv preprint arXiv:1911.02116.
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.
Mikael Henaff, Joan Bruna, and Yann LeCun. 2015. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163.
WA State House. 2021. Washington State Health Care & Wellness Committee. https://leg.wa.gov/House/Committees/HCW/Pages/default.aspx. [Online; accessed 19-July-2021].
Hamid Karimi, Tyler Derr, Aaron Brookhouse, and Jiliang Tang. 2019. Multi-factor congressional vote prediction. In Advances in Social Networks Analysis and Mining (ASONAM).
Kevin King. 2019. State Legislatures Vs. Congress: Which Is More Productive? http://bit.ly/30YsKwT. [Online; accessed 19-July-2019].
Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Anastassia Kornilova, Daniel Argyle, and Vladimir Eidelman. 2018. Party matters: Enhancing legislative embeddings with author attributes for vote prediction. In Proceedings of ACL.
Peter Kraft, Hirsh Jain, and Alexander M. Rush. 2016. An embedding model for predicting roll-call votes. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
LegiScan. 2019. State-level legislative data. https://legiscan.com/.
Yifu Li, Ran Jin, and Yuan Luo. 2018. Classifying relations in clinical narratives using segment graph convolutional and recurrent neural networks (Seg-GCRNs). Journal of the American Medical Informatics Association, 26(3):262-268.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. arXiv preprint arXiv:1703.04826.
Pallavi Patil, Kriti Myer, Ronak Zala, Arpit Singh, Sheshera Mysore, Andrew McCallum, Adrian Benton, and Amanda Stent. 2019. Roll call vote prediction with knowledge augmented models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 574-581.
Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-CNN. In Proceedings of the 2018 World Wide Web Conference, pages 1063-1072.
1,905,325 | Comparison of Similarity Models for the Relation Discovery Task | We present results on the relation discovery task, which addresses some of the shortcomings of supervised relation extraction by applying minimally supervised methods. We describe a detailed experimental design that compares various configurations of conceptual representations and similarity measures across six different subsets of the ACE relation extraction data. Previous work on relation discovery used a semantic space based on a term-bydocument matrix. We find that representations based on term co-occurrence perform significantly better. We also observe further improvements when reducing the dimensionality of the term co-occurrence matrix using probabilistic topic models, though these are not significant. | [
5139774,
6204420,
1077383,
6305097,
2096243,
2480472,
10827006
] | Comparison of Similarity Models for the Relation Discovery Task
Association for Computational Linguistics, July 2006.
Ben Hachey bhachey@inf.ed.ac.uk
School of Informatics
University of Edinburgh
Buccleuch Place
EH8 9LW Edinburgh
Comparison of Similarity Models for the Relation Discovery Task
Proceedings of the Workshop on Linguistic Distances, Sydney, July 2006. Association for Computational Linguistics.
We present results on the relation discovery task, which addresses some of the shortcomings of supervised relation extraction by applying minimally supervised methods. We describe a detailed experimental design that compares various configurations of conceptual representations and similarity measures across six different subsets of the ACE relation extraction data. Previous work on relation discovery used a semantic space based on a term-bydocument matrix. We find that representations based on term co-occurrence perform significantly better. We also observe further improvements when reducing the dimensionality of the term co-occurrence matrix using probabilistic topic models, though these are not significant.
Introduction
This paper describes work that aims to improve upon previous approaches to identifying relationships between named objects in text (e.g., people, organisations, locations). Figure 1 contains several example sentences from the ACE 2005 corpus that contain relations and Figure 2 summarises the relations occurring in these sentences. So, for example, sentence 1 contains an employment relation between Lebron James and Nike, sentence 2 contains a sports-affiliation relation between Stig Toefting and Bolton and sentence 4 contains a business relation between Martha Stewart (she) and the board of directors (of Martha Stewart Living Omnimedia).
Possible applications include identifying companies taking part in mergers/acquisitions from business newswire, which could be inserted into a corporate intelligence database. In the biomedical domain, we may want to identify relationships between genes and proteins from biomedical publications, e.g. Hirschman et al. (2004), to help scientists keep up-to-date on the literature. Or, we may want to identify disease and treatment relations in publications and textbooks, which can be used to help formalise medical knowledge and assist general practitioners in diagnosis, treatment and prognosis (Rosario and Hearst, 2004). Another application scenario involves building networks of relationships from text collections that indicate the important entities in a domain and can be used to visualise interactions. The networks could provide an alternative to searching when interacting with a document collection. This could prove beneficial, for example, in investigative journalism. It might also be used for social science research using techniques from social network analysis (Marsden and Lin, 1982). In previous work, relations have been used for automatic text summarisation as a conceptual representation of sentence content in a sentence extraction framework (Filatova and Hatzivassiloglou, 2004).
In the next section, we motivate and introduce the relation discovery task, which addresses some of the shortcomings of conventional approaches to relation extraction (i.e. supervised learning or rule engineering) by applying minimally supervised methods. 1 A critical part of the relation discovery task is grouping entity pairs by their relation type. This is a clustering task and requires a robust conceptual representation of relation semantics and a measure of similarity between relations. In previous work (Hasegawa et al., 2004;Chen et al., 2005), the conceptual representation has been limited to term-by-document (TxD) models of relation semantics. The current work introduces a term co-occurrence (TxT) representation for the relation discovery task and shows that it performs significantly better than the TxD representation. We also explore dimensionality reduction techniques, which show a further improvement.
Section 3 presents a parameterisation of similarity models for relation discovery. For the purposes of the current work, this consists of the semantic representation for terms (i.e. how a term's context is modelled), dimensionality reduction technique (e.g. singular value decomposition, latent Dirichlet allocation), and the measure used to compute similarity.
We also build on the evaluation paradigm for relation discovery with a detailed, controlled experimental setup. Section 4 describes the experiment design, which compares the various system configurations across six different subsets of the relation extraction data from the automatic content extraction (ACE) evaluation. Finally, Section 5 presents results and statistical analysis.
The Relation Discovery Task
Conventionally, relation extraction is considered to be part of information extraction and has been approached through supervised learning or rule engineering (e.g., Blaschke and Valencia (2002), Bunescu and Mooney (2005)). However, traditional approaches have several shortcomings. First and foremost, they are generally based on predefined templates of what types of relations exist in the data and thus only capture information whose importance was anticipated by the template designers. This poses reliability problems when predicting on new data in the same domain, since the training data will be from a certain epoch in the past. Due to language change and topical variation, as time passes, it is likely that the new data will deviate more and more from the trained models. Additionally, there are cost problems associated with the conventional supervised approach when updating templates or transferring to a new domain, both of which require substantial effort in re-engineering rules or re-annotating training data.
The goal of the relation discovery task is to identify the existence of associations between entities, to identify the kinds of relations that occur in a corpus and to annotate particular associations with relation types. These goals correspond to the three main steps in a generalised algorithm (Hasegawa et al., 2004):
1. Identify co-occurring pairs of named entities
2. Group entity pairs using the textual context
3. Label each cluster of entity pairs

The first step is the relation identification task. In the current work, this is assumed to have been done already. We use the gold standard relations in the ACE data in order to isolate the performance of the second step. The second step is a clustering task and as such it is necessary to compute similarity between the co-occurring pairs of named entities (relations). In order to do this, a model of relation similarity is required, which is the focus of the current work.
We also assume that it is possible to perform the third step. 2 The evaluation we present here looks just at the quality of the clustering and does not attempt to assess the labelling task.

Modelling Relation Similarity

The possible space of models for relation similarity can be explored in a principled manner by parameterisation. In this section, we discuss several parameters including the term context representation, whether or not we apply dimensionality reduction, and what similarity measure we use.
Term Context
Representing texts in such a way that they can be compared is a familiar problem from the fields of information retrieval (IR), text mining (TM), textual data analysis (TDA) and natural language processing (NLP) (Lebart and Rajman, 2000). The traditional model for IR and TM is based on a term-by-document (TxD) vector representation. Previous approaches to relation discovery (Hasegawa et al., 2004;Chen et al., 2005) have been limited to TxD representations, using tf*idf weighting and the cosine similarity measure. In information retrieval, the weighted term representation works well as the comparison is generally between pieces of text with large context vectors. In the relation discovery task, though, the term contexts (as we will define them in Section 4) can be very small, often consisting of only one or two words. This means that a term-based similarity matrix between entity pairs is very sparse, which may pose problems for performing reliable clustering.
An alternative method widely used in NLP and cognitive science is to represent a term context by its neighbouring words as opposed to the documents in which it occurs. This term cooccurrence (TxT) model is based on the intuition that two words are semantically similar if they appear in a similar set of contexts (see e.g. Pado and Lapata (2003)). The current work explores such a term co-occurrence (TxT) representation based on the hypothesis that it will provide a more robust representation of relation contexts and help overcome the sparsity problems associated with weighted term representations in the relation discovery task. This is compared to a baseline term-by-document (TxD) representation which is a re-implementation of the approach used by Hasegawa et al. (2004) and Chen et al. (2005).
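As a minimal sketch of the TxT representation (our illustration; the window size and plain-count weighting are assumptions, not the paper's exact settings), one can build the co-occurrence matrix and represent an entity pair's context as the average of its words' co-occurrence vectors:

```python
# Minimal sketch: term co-occurrence (TxT) matrix and relation-context vectors.
import numpy as np

def cooccurrence_matrix(sentences, window=5):
    vocab = {w: i for i, w in enumerate(sorted({w for s in sentences for w in s}))}
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in sentences:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[vocab[w], vocab[sent[j]]] += 1
    return counts, vocab

def context_vector(context_words, counts, vocab):
    """Average co-occurrence vector over the words between the two entity heads."""
    vecs = [counts[vocab[w]] for w in context_words if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(counts.shape[1])
```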
Dimensionality Reduction
Dimensionality reduction techniques for document and corpus modelling aim to reduce description length and model a type of semantic similarity that is more linguistic in nature (e.g., see Landauer et al.'s (1998) discussion of LSA and synonym tests). In the current work, we explore singular value decomposition (Berry et al., 1994), a technique from linear algebra that has been applied to a number of tasks from NLP and cognitive modelling. We also explore latent Dirichlet allocation, a probabilistic technique analogous to singular value decomposition whose contribution to NLP has not been as thoroughly explored.
Singular value decomposition (SVD) has been used extensively for the analysis of lexical semantics under the name of latent semantic analysis (Landauer et al., 1998). Here, a rectangular matrix is decomposed into the product of three matrices ($X_{w \times p} = W_{w \times n} S_{n \times n} (P_{p \times n})^T$) with n 'latent semantic' dimensions. The resulting decomposition can be viewed as a rotation of the n-dimensional axes such that the first axis runs along the direction of largest variation among the documents (Manning and Schütze, 1999). W and P represent terms and documents in the new space, and S is a diagonal matrix of singular values in decreasing order.

Taking the product $W_{w \times D} S_{D \times D} (P_{p \times D})^T$ over the first D columns gives the best least squares approximation of the original matrix X by a matrix of rank D, i.e. a reduction of the original matrix to D dimensions. SVD can equally be applied to the word co-occurrence matrices obtained in the TxT representation presented in Section 2, in which case we can think of the original matrix as being a term × co-occurring term feature matrix.
While SVD has proved successful and has been adapted for tasks such as word sense discrimination (Schütze, 1998), its behaviour is not easy to interpret. Probabilistic LSA (pLSA) is a generative probabilistic version of LSA (Hofmann, 2001). This models each word in a document as a sample from a mixture model, but does not provide a probabilistic model at the document level. Latent Dirichlet Allocation (LDA) addresses this by representing documents as random mixtures over latent topics (Blei et al., 2003). Besides having a clear probabilistic interpretation, an additional advantage of these models is that they have intuitive graphical representations. Figure 3 contains a graphical representation of the LDA model as applied to TxT word co-occurrence matrices in standard plate notation. This models the word features f in the co-occurrence context (size N) of each word w (where w ∈ W and |W| = W) with a mixture of topics z. In its generative mode, the LDA model samples a topic from the word-specific multinomial distribution θ. Then, each context feature is generated by sampling from a topic-specific multinomial distribution φ_z. 3 In a manner analogous to the SVD model, we use the distribution over topics for a word w to represent its semantics and we use the average topic distribution over all context words to represent the conceptual content of an entity pair context.
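The paper uses Infomap for SVD and Steyvers and Griffiths' Matlab toolbox for LDA; the following is a minimal Python sketch of the two reduction routes (our substitution, using numpy for the truncated SVD and gensim for LDA; treating each term's co-occurrence neighbours as a "document" mirrors the setup, but the preprocessing details here are assumptions):

```python
# Minimal sketch of the two dimensionality reduction routes over the TxT matrix.
import numpy as np
from gensim import corpora, models

def truncated_svd(X: np.ndarray, D: int = 10) -> np.ndarray:
    """Rank-D least-squares approximation; rows are D-dimensional term vectors."""
    W, S, Pt = np.linalg.svd(X, full_matrices=False)
    return W[:, :D] * S[:D]

def lda_topic_vectors(cooccurrence_rows, D=50, beta=0.0001):
    """cooccurrence_rows: one list of co-occurring word tokens per target term."""
    dictionary = corpora.Dictionary(cooccurrence_rows)
    corpus = [dictionary.doc2bow(row) for row in cooccurrence_rows]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=D,
                          alpha=50.0 / D, eta=beta)  # alpha = 50/T, eta = beta
    return [lda.get_document_topics(bow, minimum_probability=0.0) for bow in corpus]
```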
Measuring Similarity
Cosine (Cos) is commonly used in the literature to compute similarities between tf*idf vectors:
$$Cos(p, q) = \frac{\sum_i p_i q_i}{\|p\|_2\,\|q\|_2}$$
In the current work, we use cosine over term and SVD representations of entity pair context. However, it is not clear which similarity measure should be used for the probabilistic topic models. Dagan et al. (1997) find that the symmetric information radius measure performs best on a pseudoword sense disambiguation task, while Lee (1999) finds that the asymmetric skew divergence, a generalisation of Kullback-Leibler divergence, performs best for improving probability estimates for unseen word co-occurrences.
In the current work, we compare KL divergence with two methods for deriving a symmetric measure.

3 The hyperparameters α and β are Dirichlet priors on the multinomial distributions for word features (φ ∼ Dir(β)) and topics (θ ∼ Dir(α)). The choice of the Dirichlet is explained by its conjugacy to the multinomial distribution, meaning that if the parameter (e.g. φ, θ) for a multinomial distribution is endowed with a Dirichlet prior then the posterior will also be a Dirichlet. Intuitively, it is a distribution over distributions used to encode prior knowledge about the parameters (φ and θ) of the multinomial distributions for word features and topics. Practically, it allows efficient estimation of the joint distribution over word features and topics P(f, z) by integrating out φ and θ.

The KL divergence of two probability distributions (p and q) over the same event space is defined as:
$$KL(p \| q) = \sum_i p_i \log \frac{p_i}{q_i}$$
In information-theoretic terms, KL divergence is the average number of bits wasted by encoding events from a distribution p with a code based on distribution q. The symmetric measures are defined as:
$$Sym(p, q) = \frac{1}{2}\left[KL(p \| q) + KL(q \| p)\right]$$
$$JS(p, q) = \frac{1}{2}\left[KL\!\left(p \,\middle\|\, \frac{p + q}{2}\right) + KL\!\left(q \,\middle\|\, \frac{p + q}{2}\right)\right]$$
The first is termed symmetrised KL divergence (Sym) and the second is termed Jensen-Shannon (JS) divergence. We explore KL divergence as well as the symmetric measures as it is not known in advance whether a domain is symmetric or not.
Technically, the divergence measures are dissimilarity measures as they calculate the difference between two distributions. However, they can be converted to increasing measures of similarity through various transformations. We treated this as a parameter to be tuned during development and considered two approaches. The first is from Dagan et al. (1997). For KL divergence, this function is defined as $Sim(p, q) = 10^{-\beta\, KL(p \| q)}$, where β is a free parameter, which is tuned on the development set (as described in Section 4.2). The same procedure is applied for symmetric KL divergence and JS divergence. The second approach is from Lee (1999). Here, similarity for KL is defined as $Sim(p, q) = C - KL(p \| q)$, where C is a free parameter to be tuned.
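A minimal sketch of the divergences and both divergence-to-similarity conversions (our illustration; the epsilon smoothing is our numerical-safety assumption, and the default C = 14 is the tuned value reported below for KL):

```python
# Minimal sketch of the divergence measures and similarity transformations.
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p||q); eps smoothing added here for numerical safety."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def sym_kl(p, q):
    return 0.5 * (kl(p, q) + kl(q, p))

def js(p, q):
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return 0.5 * (kl(p, m) + kl(q, m))

def sim_dagan(p, q, beta, div=kl):    # beta is tuned on development data
    return 10 ** (-beta * div(p, q))

def sim_lee(p, q, C=14, div=kl):      # C = 14 was optimal for KL divergence
    return C - div(p, q)
```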
Experimental Setup
Materials
Following Chen et al. (2005), we derive our relation discovery data from the automatic content extraction (ACE) 2004 and 2005 materials for evaluation of information extraction. 4 This is preferable to using the New York Times data used by Hasegawa et al. (2004) as it has gold standard annotation, which can be used for unbiased evaluation.
The relation clustering data is based on the gold standard relations in the information extraction data. We only consider data from newswire or broadcast news sources. We constructed six data subsets from the ACE corpus based on four of the ACE entities: persons (PER), organisations (ORG), geographical/social/political entities (GPE) and facilities (FAC). The six data subsets were chosen during development based on a lower limit of 50 for the data subset size (i.e. the number of entity pairs in the domain), ensuring that there is a reasonable amount of data. We also set a lower limit of 3 for the number of classes (relation types) in a data subset, ensuring that the clustering task is not too simple.
The entity pair instances for clustering were chosen based on several criteria. First, we do not use ACE's discourse relations, which are relations in which the entity referred to is not an official entity according to world knowledge. Second, we only use pairs with one or more non-stop words in the intervening context, that is the context between the two entity heads. 5 Finally, we only keep relation classes with 3 or more members. We use the Infomap tool 6 for singular value decomposition of TxT matrices and compute the conceptual content of an entity pair context as the average over the reduced D-dimensional representations of the co-occurrence vectors of the terms in the relation context. For LDA, we use Steyvers and Griffiths' Topic Modeling Toolbox. 7 The input is produced by a version of Infomap which was modified to output the TxT matrix. Again, we compute the conceptual content of an entity pair as the average over the topic vectors for the context words. As documents are explicitly modelled in the LDA model, we input a matrix with raw frequencies. In the TxD, unreduced TxT and SVD models we use tf*idf term weighting.
We use the same preprocessing when preparing the text for building the SVD and probabilistic topic models as we use for processing the intervening context of entity pairs. This consisted of Mx-Terminator (Reynar and Ratnaparkhi, 1997) for sentence boundary detection, the Penn Treebank sed script 8 for tokenisation, and the Infomap stop word list. We also use an implementation of the Porter algorithm (Porter, 1980) for stemming. 9
Model Selection
We used the ACE 2004 relation data to perform model selection. Firstly, dimensionality (D) needs to be optimised for SVD and LDA. SVD was found to perform best with the number of dimensions set to 10. For LDA, dimensionality interacts with the divergence-to-similarity conversion so they were tuned jointly. The optimal configuration varies by the divergence measure with D = 50 and C = 14 for KL divergence, D = 200 and C = 4 for symmetrised KL, and D = 150 and C = 2 for JS divergence. For all divergence measures, Lee's (1999) method outperformed Dagan et al.'s (1997) method. Also for all divergence measures, the model hyper-parameter β was found to be optimal at 0.0001. The α hyper-parameter was always set to 50/T (where T is the number of topics) following Griffiths and Steyvers (2004).
Clustering is performed with the CLUTO software 10 and the technique used is identical across models. Agglomerative clustering is used for comparability with the original relation discovery work of Hasegawa et al. (2004). This choice was also motivated by the fact that it is not known in advance how many clusters there should be in a new domain.
One way to view the clustering problem is as an optimisation process where an optimal clustering is chosen with respect to a criterion function over the entire solution. The criterion function used here was chosen based on performance on the development data. We compared a number of criterion functions including single link, complete link, group average, I1, I2, E1, H1 and H2. I1 is a criterion function that maximises the sum of pairwise similarities between relation instances assigned to each cluster, I2 is an internal criterion function that maximises the similarity between each relation instance and the centroid of the cluster it is assigned to, E1 is an external criterion function that minimises the similarity between the centroid vector of each cluster and the centroid vector of the entire collection, and H1 is a combined criterion function that consists of the ratio of I1 over E1. The I2, H1 and H2 criterion functions outperformed single link, complete link and group average on the development data. We use I2, which performed as well as H1 and H2 and is superior in terms of computational complexity (Zhao and Karypis, 2004).
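CLUTO's I2 criterion has no direct scikit-learn equivalent, so the following minimal sketch approximates the clustering step with average-link agglomerative clustering over cosine distances (an illustration, not a reimplementation of the paper's setup):

```python
# Minimal sketch of the clustering step (approximation of the CLUTO setup).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_distances

def cluster_relations(context_vectors: np.ndarray, n_clusters: int) -> np.ndarray:
    dist = cosine_distances(context_vectors)
    model = AgglomerativeClustering(
        n_clusters=n_clusters,
        metric="precomputed",   # 'affinity' in older scikit-learn versions
        linkage="average",
    )
    return model.fit_predict(dist)
```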
Experiment
Method
This section describes the experimental setup, which uses relation extraction data from ACE 2005 to answer four questions concerning the effectiveness of similarity models based on term co-occurrence and dimensionality reduction for the relation discovery task:
1. Do term co-occurrence models provide a better representation of relation semantics than the standard term-by-document vector space?

2. Do textual dimensionality reduction techniques provide any further improvements?
3. How do probabilistic topic models perform with respect to SVD on the relation discovery task?
4. Does one similarity measure (for probability distributions) outperform the others on the relation discovery task?
System configurations are compared across six different data subsets (entity type pairs, i.e., organisation-geopolitical entity, organisation-organisation, person-facility, person-geopolitical entity, person-organisation, person-person) and evaluated following suggestions by Demšar (2006) for statistical comparison of classifiers over multiple data sets.
The dependent variable is the clustering performance as measured by the F-score. F-score accounts for both the amount of predictions made that are true (Precision) and the amount of true classes that are predicted (Recall). We use the CLUTO implementation of this measure for evaluating hierarchical clustering. Based on (Larsen and Aone, 1999), this is a balanced F-score ($F = \frac{2RP}{R+P}$) that computes the maximum per-class score over all possible alignments of gold standard classes with nodes in the hierarchical tree. The average F-score for the entire hierarchical tree is a micro-average over the class-specific scores weighted according to the relative size of the class.
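This class-to-cluster F-score can be sketched as follows for a flat clustering (the CLUTO version additionally maximises over all nodes of the hierarchical tree); the implementation is our illustration:

```python
# Minimal sketch of the class-to-cluster balanced F-score (flat clustering).
from collections import Counter

def clustering_fscore(gold_labels, pred_clusters):
    class_sizes, cluster_sizes = Counter(gold_labels), Counter(pred_clusters)
    total, score = len(gold_labels), 0.0
    for c, size in class_sizes.items():
        best = 0.0
        for k, ksize in cluster_sizes.items():
            overlap = sum(1 for g, p in zip(gold_labels, pred_clusters)
                          if g == c and p == k)
            if overlap:
                prec, rec = overlap / ksize, overlap / size
                best = max(best, 2 * prec * rec / (prec + rec))
        score += (size / total) * best  # micro-average weighted by class size
    return score
```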
Results
Table 3: F-score performance on the test data (ACE 2005) using agglomerative clustering with the I2 criterion function.

Table 3 contains F-score performance on the test set (ACE 2005). The columns contain results from the different system configurations. The column labels in the top row indicate the different representations of relation similarity, the column labels in the second row indicate the dimensionality reduction technique used, and the column labels in the third row indicate the similarity measure used, i.e. cosine (Cos) and KL (KL), symmetrised KL (Sym) and JS (JS) divergence. The rows contain results for the different data subsets. While we do not use them for analysis of statistical significance, we include micro and macro averages over the data subsets. 11 We also include the average ranks, which show that the LDA system using KL divergence performed best.
Initial inspection of the table shows that all systems that use the term co-occurrence semantic space outperform the baseline system that uses the term-by-document semantic space. To test for statistical significance, we use non-parametric tests proposed by Demšar (2006) for comparing classifiers across multiple data sets. The use of non-parametric tests is safer here as they do not assume normality and outliers have less effect. The first test we perform is a Friedman test (Friedman, 1940), a multiple comparisons technique which is the non-parametric equivalent of the repeated-measures ANOVA. The null hypothesis is that all models perform the same and observed differences are random. With a Friedman statistic ($\chi^2_F$) of 21.238, we reject the null hypothesis at p < 0.01.
The first question we wanted to address is whether term co-occurrence models outperform the term-by-document representation of relation semantics. To address this question, we continue with post-hoc analysis. The objective here is to compare several conditions to a control (i.e., compare the term co-occurrence systems to the term-by-document baseline), so we use a Bonferroni-Dunn test. At a significance level of p < 0.05, the critical difference for the Bonferroni-Dunn test for comparing 6 systems across 6 data sets is 2.782. We conclude that the unreduced term co-occurrence system and the LDA systems with KL and JS divergence all perform significantly better than baseline, while the SVD system and the LDA system with symmetrised KL divergence do not.

11 Averages over data sets are unreliable where it is not clear whether the domains are commensurable (Webb, 2000). We present averages in our results but avoid drawing conclusions based on them.
The second question asks whether SVD and LDA dimensionality reduction techniques provide any further improvement. We observe that the systems using KL and JS divergence both outperform the unreduced term co-occurrence system, though the difference is not significant.
The third question asks how the probabilistic topic models perform with respect to the SVD models. Here, Holm-corrected Wilcoxon signed-ranks tests show that the KL divergence system performs significantly better than SVD while the symmetrised KL divergence and JS divergence systems do not.
The final question is whether one of the divergence measures (KL, symmetrised KL or JS) outperforms the others. With a statistic of $\chi^2_F = 9.336$, we reject the null hypothesis that all systems are the same at p < 0.01. Post-hoc analysis with Holm-corrected Wilcoxon signed-ranks tests shows that the KL divergence system and the JS divergence system both perform significantly better than the symmetrised KL system at p < 0.05, while there is no significant difference between the KL and JS systems.
Discussion
An interesting aspect of using the ACE corpus is the wealth of linguistic knowledge encoded. With respect to named entities, this includes class information describing the kind of reference the entity makes to something in the world (i.e., specific referential, generic referential, under-specified referential) and it includes mention type information (i.e., names, quantified nominal constructions, pronouns). It also includes information describing the lexical condition of a relation (i.e., possessive, preposition, pre-modifier, formulaic, verbal). Based on a mapping between gold standard and predicted clusters, we assigned each case a value of 1 or 0 to indicate whether it is a correct or incorrect classification. We then carried out detailed statistical analysis 12 to test for effects of the entity and relation information described above on each system in each domain.
Overall, the effects were fairly small and do not generalise across domains or systems very well. However, there were some observable tendencies. With respect to entity class, relations with specific referential entities tend to correlate positively with correct classifications while under-specified referential entities tend to correlate negatively with correct classifications. With respect to entity mention type, relations with entities that consist of names tend to correlate positively with correct classifications while pronouns tend to correlate negatively with correct classifications, though this is only reliably observed in the PER-GPE domain. Finally, with respect to lexical condition, we observe that possessive-conditioned relations tend to correlate negatively, especially in the PER-GPE and PER-ORG domains, with the PER-PER domain also showing some effect. Pre-modifier-conditioned relations also tend to correlate negatively in the PER-GPE domain. The effect with verbally conditioned relations is mixed. This is probably due to the fact that verbal relations tend to have more words occurring between the entity pair, which provides more context but can also be misleading when the key terms describing the relation do not occur between the entity pair (e.g., the first sentence in Figure 1).
It is also informative to look at overall properties of the entity pair domains and compare this to the system performance.

12 For this analysis, we used the Phi coefficient, which is a measure of relatedness for binomial variables that is interpreted like correlation.

Table 4: System score, type-to-token ratio (TTR) and relation type entropy (Entrpy) for entity pair domains.

Table 4 contains, for each domain, the F-score of the LDA+KL system, the type-to-token ratio, and the entropy of the relation type distribution. Type-to-token ratio (TTR) is the number of word types divided by the number of word instances and indicates how much repetition there is in word use. Since TTR can vary depending on the size of the text, we compute it on a random sample of 75 tokens from each domain. Entropy can be interpreted as a measure of the uniformity of a distribution: low entropy indicates a more spiked distribution while high entropy indicates a more uniform one. Though there is not enough data to make a reliable conclusion, it seems that the system does poorly on domains that have both a high type-to-token ratio and a high entropy (a uniform relation type distribution), while it performs very well on domains that have low TTR or low entropy.
Conclusions and Future Work
This paper presented work on the relation discovery task. We tested several systems for the clustering subtask that use different models of the conceptual/semantic similarity of relations. These models included a baseline system based on a term-by-document representation of term context, which is equivalent to the representation used in previous work by Hasegawa et al. (2004) and Chen et al. (2005). We hypothesised that this representation suffers from a sparsity problem and showed that models that use a term co-occurrence representation perform significantly better. Furthermore, we investigated the use of singular value decomposition and latent Dirichlet allocation for dimensionality reduction. It has been suggested that representations using these techniques are able to model a similarity that is less reliant on specific word forms and therefore more semantic in nature. Our experiments showed an improvement over a term co-occurrence baseline when using LDA with KL and JS divergence, though it was not significant. We also found that LDA with KL divergence performs significantly better than SVD.
Comparing the different divergence measures for LDA, we found that KL and JS perform significantly better than symmetrised KL divergence. Interestingly, the performance of the asymmetric KL divergence and the symmetric JS divergence is very close, which makes it difficult to conclude whether the relation discovery domain is a symmetric domain or an asymmetric domain like Lee's (1999) task of improving probability estimates for unseen word co-occurrences.
A shortcoming of all the models described here is that they are derived from the basic bag-of-words models and as such do not account for word order or other notions of syntax. Related work on relation discovery by Zhang et al. (2005) addresses this shortcoming by using tree kernels to compute similarity between entity pairs. In future work we will extend our experiment to explore the use of syntactic and semantic features following the framework of Pado and Lapata (2003). We are also planning to look at non-parametric versions of LDA that address the model order selection problem and to perform an extrinsic evaluation of the relation discovery task.
Figure 1: Example sentences from ACE 2005.
Figure 2: Example entity pairs and relation types.
Figure 3: Graphical representation of LDA.
Table 4.1 contains the full list of relation types from the subsets of ACE that we used. (Refer to Table 4.2 for definition of the relation type abbreviations.)
Table 2: Overview of ACE relations with abbreviations used here.
Figure 1 examples:
1. As for that $90 million shoe contract with Nike, it may be a good deal for James.
2. Toefting transferred to Bolton in February 2002 from German club Hamburg.
3. Toyoda founded the automaker in 1937 ... .
4. In a statement, she says she's stepping aside in the best interest of the company, but she will stay on the board of directors.
The relation discovery task is minimally supervised in the sense that it relies on having certain resources such as named entity recognition. The focus of the current paper is the unsupervised task of clustering relations.
2 Previous approaches select labels from the collection of context words for a relation cluster (Hasegawa et al., 2004; Zhang et al., 2005). Chen et al. (2005) use discriminative category matching to make sure that selected labels are also able to differentiate between clusters.
http://www.nist.gov/speech/tests/ace/
Following results reported by Chen et al. (2005), who tried unsuccessfully to incorporate words from the surrounding context to represent a relation's semantics, we use only intervening words. 6 http://infomap.stanford.edu/ 7 http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm
http://www.cis.upenn.edu/~treebank/tokenizer.sed 9 http://www.ldc.usb.ve/~vdaniel/porter.pm 10 http://glaros.dtc.umn.edu/gkhome/cluto/cluto/overview
Acknowledgements
This work was supported by Scottish Enterprise Edinburgh-Stanford Link grant R37588 as part of the EASIE project. I would like to thank Claire Grover, Mirella Lapata, Gabriel Murray and Sebastian Riedell for very useful comments and discussion on this work. I would also like to thank the anonymous reviewers for their comments.
Michael W. Berry, Susan T. Dumais, and Gavin W. O'Brien. 1994. Using linear algebra for intelligent information retrieval. SIAM Review, 37(4):573-595.
Christian Blaschke and Alfonso Valencia. 2002. The frame-based module of the suiseki information extraction system. IEEE Intelligent Systems, 17:14-20.
David Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3.
Razvan C. Bunescu and Raymond J. Mooney. 2005. Subsequence kernels for relation extraction. In Proceedings of the 19th Conference on Neural Information Processing Systems, Vancouver, BC, Canada.
Jinxiu Chen, Donghong Ji, Chew Lim Tan, and Zhengyu Niu. 2005. Automatic relation extraction with model order selection and discriminative label identification. In Proceedings of the 2nd International Joint Conference on Natural Language Processing.
Ido Dagan, Lillian Lee, and Fernando Pereira. 1997. Similarity-based methods for word sense disambiguation. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, Madrid, Spain.
Janez Demšar. 2006. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1-30, Jan.
Elena Filatova and Vasileios Hatzivassiloglou. 2004. Event-based extractive summarization. In Proceedings of the ACL-2004 Text Summarization Branches Out Workshop, Barcelona, Spain.
Milton Friedman. 1940. A comparison of alternative tests of significance for the problem of m rankings. The Annals of Mathematical Statistics, 11:86-92.
Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235.
Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering relations among named entities from large corpora. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics.
Lynette Hirschman, Alexander Yeh, Christian Blaschke, and Alfonso Valencia. 2004. Overview of BioCreAtIvE: Critical assessment of information extraction for biology. In Proceedings of the Critical Assessment of Information Extraction Systems in Biology Workshop (BioCreAtIvE), Granada, Spain.
Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42:177-196.
Thomas K. Landauer, Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25:259-284.
Buornar Larsen and Chinatsu Aone. 1999. Fast and effective text mining using linear-time document clustering. In Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA.
Ludovic Lebart and Martin Rajman. 2000. Computing similarity. In Robert Dale, Hermann Moisl, and Harold Somers, editors, Handbook of Natural Language Processing, pages 477-505. Marcel Dekker, New York.
Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, College Park, MD, USA.
Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.
Peter V. Marsden and Nan Lin, editors. 1982. Social Structure and Network Analysis. Sage, Beverly Hills.
Sebastian Pado and Mirella Lapata. 2003. Constructing semantic space models from parsed corpora. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan.
Martin F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.
Jeffrey C. Reynar and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the 5th Conference on Applied Natural Language Processing, Washington, D.C., USA.
Barbara Rosario and Marti Hearst. 2004. Classifying semantic relations in bioscience text. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain.
Hinrich Schütze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):91-124.
Geoffrey I. Webb. 2000. Multiboosting: A technique for combining boosting and wagging. Machine Learning, 40(2):159-196.
Min Zhang, Jian Su, Danmei Wang, Guodong Zhou, and Chew Lim Tan. 2005. Discovering relations from a large raw corpus using tree similarity-based clustering. In Proceedings of the 2nd International Joint Conference on Natural Language Processing.
Ying Zhao and George Karypis. 2004. Empirical and theoretical comparisons of selected criterion functions for document clustering. Machine Learning, 55:311-331. |
15,123,272 | A Comparative Study on Reordering Constraints in Statistical Machine Translation | In statistical machine translation, the generation of a translation hypothesis is computationally expensive. If arbitrary word-reorderings are permitted, the search problem is NP-hard. On the other hand, if we restrict the possible word-reorderings in an appropriate way, we obtain a polynomial-time search algorithm. In this paper, we compare two different reordering constraints, namely the ITG constraints and the IBM constraints. This comparison includes a theoretical discussion on the permitted number of reorderings for each of these constraints. We show a connection between the ITG constraints and the Schröder numbers, which have been known since 1870. We evaluate these constraints on two tasks: the Verbmobil task and the Canadian Hansards task. The evaluation consists of two parts: First, we check how many of the Viterbi alignments of the training corpus satisfy each of these constraints. Second, we restrict the search to each of these constraints and compare the resulting translation hypotheses. The experiments will show that the baseline ITG constraints are not sufficient on the Canadian Hansards task. Therefore, we present an extension to the ITG constraints. These extended ITG constraints increase the alignment coverage from about 87% to 96%. | [
14386564,
2650085,
5284722,
284436,
74294,
912349
] | A Comparative Study on Reordering Constraints in Statistical Machine Translation
Richard Zens zens@cs.rwth-aachen.de
Chair of Computer Science VI, RWTH Aachen - University of Technology
Hermann Ney ney@cs.rwth-aachen.de
Chair of Computer Science VI, RWTH Aachen - University of Technology
A Comparative Study on Reordering Constraints in Statistical Machine Translation
In statistical machine translation, the generation of a translation hypothesis is computationally expensive. If arbitrary word-reorderings are permitted, the search problem is NP-hard. On the other hand, if we restrict the possible word-reorderings in an appropriate way, we obtain a polynomial-time search algorithm. In this paper, we compare two different reordering constraints, namely the ITG constraints and the IBM constraints. This comparison includes a theoretical discussion on the permitted number of reorderings for each of these constraints. We show a connection between the ITG constraints and the Schröder numbers, which have been known since 1870. We evaluate these constraints on two tasks: the Verbmobil task and the Canadian Hansards task. The evaluation consists of two parts: First, we check how many of the Viterbi alignments of the training corpus satisfy each of these constraints. Second, we restrict the search to each of these constraints and compare the resulting translation hypotheses. The experiments will show that the baseline ITG constraints are not sufficient on the Canadian Hansards task. Therefore, we present an extension to the ITG constraints. These extended ITG constraints increase the alignment coverage from about 87% to 96%.
Introduction
In statistical machine translation, we are given a source language ('French') sentence $f_1^J = f_1 \ldots f_j \ldots f_J$, which is to be translated into a target language ('English') sentence $e_1^I = e_1 \ldots e_i \ldots e_I$. Among all possible target language sentences, we will choose the sentence with the highest probability:

$$\hat{e}_1^I = \operatorname{argmax}_{e_1^I} \{ Pr(e_1^I | f_1^J) \} \quad (1)$$
$$\phantom{\hat{e}_1^I} = \operatorname{argmax}_{e_1^I} \{ Pr(e_1^I) \cdot Pr(f_1^J | e_1^I) \} \quad (2)$$

The decomposition into two knowledge sources in Eq. 2 is the so-called source-channel approach to statistical machine translation (Brown et al., 1990). It allows an independent modeling of the target language model $Pr(e_1^I)$ and the translation model $Pr(f_1^J | e_1^I)$. The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. It can be further decomposed into alignment and lexicon model. The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. We have to maximize over all possible target language sentences.
In this paper, we will focus on the alignment problem, i.e. the mapping between source sentence positions and target sentence positions. As the word order in source and target language may differ, the search algorithm has to allow certain word-reorderings. If arbitrary word-reorderings are allowed, the search problem is NP-hard (Knight, 1999). Therefore, we have to restrict the possible reorderings in some way to make the search problem feasible. Here, we will discuss two such constraints in detail. The first constraints are based on inversion transduction grammars (ITG) (Wu, 1995;Wu, 1997). In the following, we will call these the ITG constraints. The second constraints are the IBM constraints (Berger et al., 1996). In the next section, we will describe these constraints from a theoretical point of view. Then, we will describe the resulting search algorithm and its extension for word graph generation. Afterwards, we will analyze the Viterbi alignments produced during the training of the alignment models. Then, we will compare the translation results when restricting the search to either of these constraints.
Theoretical Discussion
In this section, we will discuss the reordering constraints from a theoretical point of view. We will answer the question of how many word-reorderings are permitted for the ITG constraints as well as for the IBM constraints. Since we are only interested in the number of possible reorderings, the specific word identities are of no importance here. Furthermore, we assume a one-to-one correspondence between source and target words. Thus, we are interested in the number of word-reorderings, i.e. permutations, that satisfy the chosen constraints. First, we will consider the ITG constraints. Afterwards, we will describe the IBM constraints.
ITG Constraints
Let us now consider the ITG constraints. Here, we interpret the input sentence as a sequence of blocks. In the beginning, each position is a block of its own. Then, the permutation process can be seen as follows: we select two consecutive blocks and merge them into a single block by choosing between two options: either keep them in monotone order or invert the order. This idea is illustrated in Fig. 1. The white boxes represent the two blocks to be merged. Now, we investigate how many permutations are obtainable with this method. A permutation derived by the above method can be represented as a binary tree where the inner nodes are colored either black or white. At black nodes the resulting sequences of the children are inverted. At white nodes they are kept in monotone order. This representation is equivalent to the parse trees of the simple grammar in (Wu, 1997). We observe that a given permutation may be constructed in several ways by the above method. For instance, let us consider the identity permutation of 1, 2, ..., n. Any binary tree with n nodes and all inner nodes colored white (monotone order) is a possible representation of this permutation. To obtain a unique representation, we pose an additional constraint on the binary trees: if the right son of a node is an inner node, it has to be colored with the opposite color. With this constraint, each of these binary trees is unique and equivalent to a parse tree of the 'canonical-form' grammar in (Wu, 1997).
In (Shapiro and Stephens, 1991), it is shown that the number of such binary trees with n nodes is the (n-1)th large Schröder number $S_{n-1}$. The (small) Schröder numbers were first described in (Schröder, 1870) as the number of bracketings of a given sequence (Schröder's second problem). The large Schröder numbers are just twice the Schröder numbers. Schröder remarked that the ratio between two consecutive Schröder numbers approaches $3 + 2\sqrt{2} = 5.8284\ldots$. A second-order recurrence for the large Schröder numbers is:

$$(n+1) S_n = 3(2n-1) S_{n-1} - (n-2) S_{n-2}$$

with $n \ge 2$ and $S_0 = 1$, $S_1 = 2$.
The Schröder numbers have many combinatorial interpretations. Here, we will mention only two of them. The first one is another way of viewing the ITG constraints. The number of permutations of the sequence 1, 2, ..., n which avoid the subsequences (3, 1, 4, 2) and (2, 4, 1, 3) is the large Schröder number $S_{n-1}$. More details on forbidden subsequences can be found in (West, 1995). The interesting point is that a search with the ITG constraints cannot generate a word-reordering that contains one of these two subsequences. In (Wu, 1997), these forbidden subsequences are called 'inside-out' transpositions.
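To make the connection concrete, the following minimal Python sketch (not from the paper) computes the large Schröder numbers via the recurrence above and verifies, by brute force for small n, that $S_{n-1}$ indeed counts the permutations of 1, ..., n avoiding (3, 1, 4, 2) and (2, 4, 1, 3):

from itertools import combinations, permutations

def large_schroeder(n_max):
    # S_0, S_1, ... via the second-order recurrence above
    S = [1, 2]
    for n in range(2, n_max + 1):
        S.append((3 * (2 * n - 1) * S[n - 1] - (n - 2) * S[n - 2]) // (n + 1))
    return S

def avoids(perm, pattern):
    # True iff no subsequence of perm is order-isomorphic to pattern
    k = len(pattern)
    order = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        if sorted(range(k), key=lambda i: sub[i]) == order:
            return False
    return True

S = large_schroeder(7)
for n in range(1, 8):
    count = sum(1 for p in permutations(range(1, n + 1))
                if avoids(p, (3, 1, 4, 2)) and avoids(p, (2, 4, 1, 3)))
    assert count == S[n - 1]   # e.g. for n = 4: 22 of the 24 permutations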
Another interpretation of the Schröder numbers is given in (Knuth, 1973): the number of permutations that can be sorted with an output-restricted double-ended queue (deque) is exactly the large Schröder number. Additionally, Knuth presents an approximation for the large Schröder numbers:

$$S_n \approx c \cdot (3 + \sqrt{8})^n \cdot n^{-3/2} \quad (3)$$

where c is set to $\frac{1}{2}\sqrt{(3\sqrt{2} - 4)/\pi}$. This approximation function confirms the result of Schröder, and we obtain $S_n \in \Theta((3 + \sqrt{8})^n)$, i.e. the Schröder numbers grow like $(3 + \sqrt{8})^n \approx 5.83^n$.
IBM Constraints
In this section, we will describe the IBM constraints (Berger et al., 1996). Here, we mark each position in the source sentence either as covered or uncovered.
In the beginning, all source positions are uncovered. Now, the target sentence is produced from bottom to top. A target position must be aligned to one of the first k uncovered source positions. The IBM constraints are illustrated in Fig. 2. For most of the target positions there are k permitted source positions. Only towards the end of the sentence is this reduced to the number of remaining uncovered source positions. Let n denote the length of the input sequence and let $r_n$ denote the permitted number of permutations with the IBM constraints. Then, we obtain:

$$r_n = \begin{cases} k^{n-k} \cdot k! & n > k \\ n! & n \le k \end{cases} \quad (4)$$

Typically, k is set to 4. In this case, we obtain an asymptotic upper and lower bound of $4^n$, i.e. $r_n \in \Theta(4^n)$.
In Tab. 1, the ratio of the number of permitted reorderings for the discussed constraints is listed as a function of the sentence length. We see that for longer sentences the ITG constraints allow for more reorderings than the IBM constraints. For sentences of length 10 words, there are about twice as many reorderings for the ITG constraints than for the IBM constraints. This ratio steadily increases. For longer sentences, the ITG constraints allow for much more flexibility than the IBM constraints.
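The ratios in Tab. 1 can be reproduced with a few lines of Python; the sketch below assumes k = 4, as in the text, and repeats the Schröder recurrence for self-containment:

from math import factorial

def large_schroeder(n_max):
    S = [1, 2]
    for n in range(2, n_max + 1):
        S.append((3 * (2 * n - 1) * S[n - 1] - (n - 2) * S[n - 2]) // (n + 1))
    return S

def r(n, k=4):
    # number of permutations permitted by the IBM constraints (Eq. 4)
    return factorial(n) if n <= k else k ** (n - k) * factorial(k)

S = large_schroeder(11)
for n in range(6, 12):
    print(n, round(S[n - 1] / r(n), 1))
# prints the ratios 1.0, 1.2, 1.4, 1.7, 2.1, 2.6 for n = 6, ..., 11, as in Tab. 1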
Search
Now, let us get back to more practical aspects. Reordering constraints are more or less useless if they do not allow the maximization of Eq. 2 to be performed in an efficient way. Therefore, in this section, we will describe different aspects of the search algorithm for the ITG constraints. First, we will present the dynamic programming equations and the resulting complexity. Then, we will describe pruning techniques to accelerate the search. Finally, we will extend the basic algorithm for the generation of word graphs.
Algorithm
The ITG constraints allow for a polynomial-time search algorithm. It is based on the following dynamic programming recursion equations. During the search a table $Q_{j_l,j_r,e_b,e_t}$ is constructed. Here, $Q_{j_l,j_r,e_b,e_t}$ denotes the probability of the best hypothesis translating the source words from position $j_l$ (left) to position $j_r$ (right) which begins with the target language word $e_b$ (bottom) and ends with the word $e_t$ (top). This is illustrated in Fig. 3.

Here, we initialize this table with monotone translations of IBM Model 4. Therefore, $Q^0_{j_l,j_r,e_b,e_t}$ denotes the probability of the best monotone hypothesis of IBM Model 4. Alternatively, we could use any other single-word based lexicon as well as phrase-based models for this initialization. Our choice is the IBM Model 4 to make the results as comparable as possible to the search with the IBM constraints.

We introduce a new parameter $p_m$ (m = monotone), which denotes the probability of a monotone combination of two partial hypotheses.

$$Q_{j_l,j_r,e_b,e_t} = \max_{j_l \le k < j_r,\ e',e''} \left\{ Q^0_{j_l,j_r,e_b,e_t},\ Q_{j_l,k,e_b,e'} \cdot Q_{k+1,j_r,e'',e_t} \cdot p(e''|e') \cdot p_m,\ Q_{k+1,j_r,e_b,e'} \cdot Q_{j_l,k,e'',e_t} \cdot p(e''|e') \cdot (1 - p_m) \right\} \quad (5)$$

We formulated this equation for a bigram language model, but of course, the same method can also be applied for a trigram language model. The resulting algorithm is similar to the CYK-parsing algorithm. It has a worst-case complexity of $O(J^3 \cdot E^4)$. Here, J is the length of the source sentence and E is the vocabulary size of the target language.
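A minimal Python sketch of this dynamic program is given below. The helpers q0 (monotone seed hypotheses for a span, as (probability, e_bottom, e_top) triples), lm (bigram probability p(e''|e')) and the parameter p_m are assumed to be supplied by the caller; they are placeholders, not part of the paper:

def itg_search(J, q0, lm, p_m):
    # Q[(jl, jr, eb, et)] = probability of the best hypothesis covering source
    # positions jl..jr that begins with target word eb and ends with et
    Q = {}
    for jl in range(1, J + 1):
        for jr in range(jl, J + 1):
            for prob, eb, et in q0(jl, jr):      # monotone seed hypotheses
                key = (jl, jr, eb, et)
                Q[key] = max(Q.get(key, 0.0), prob)

    def entries(a, b):
        # snapshot of the hypotheses covering one span (a real implementation
        # would index Q by span instead of scanning it)
        return [(k, p) for k, p in Q.items() if k[0] == a and k[1] == b]

    for length in range(2, J + 1):               # combine shorter spans first
        for jl in range(1, J - length + 2):
            jr = jl + length - 1
            for k in range(jl, jr):
                A, B = entries(jl, k), entries(k + 1, jr)
                # monotone: A below B (weight p_m); inverted: B below A
                for bottom, top, pen in ((A, B, p_m), (B, A, 1.0 - p_m)):
                    for (_, _, eb, e1), p1 in bottom:
                        for (_, _, e2, et), p2 in top:
                            cand = p1 * p2 * lm(e2, e1) * pen
                            key = (jl, jr, eb, et)
                            Q[key] = max(Q.get(key, 0.0), cand)
    return Q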
Pruning
Although the described search algorithm has a polynomial-time complexity, even with a bigram language model the search space is very large. A full search is possible but time consuming. The situation gets even worse when a trigram language model is used. Therefore, pruning techniques are obligatory to reduce the translation time.
Pruning is applied to hypotheses that translate the same subsequence $f_{j_l}^{j_r}$ of the source sentence. We use pruning in the following two ways. The first pruning technique is histogram pruning: we restrict the number of translation hypotheses per sequence $f_{j_l}^{j_r}$. For each sequence $f_{j_l}^{j_r}$, we keep only a fixed number of translation hypotheses. The second pruning technique is threshold pruning: the idea is to remove all hypotheses that have a low probability relative to the best hypothesis. Therefore, we introduce a threshold pruning parameter q, with $0 \le q \le 1$. Let $Q^*_{j_l,j_r}$ denote the maximum probability of all translation hypotheses for $f_{j_l}^{j_r}$. Then, we prune a hypothesis iff:

$$Q_{j_l,j_r,e_b,e_t} < q \cdot Q^*_{j_l,j_r}$$
Applying these pruning techniques the computational costs can be reduced significantly with almost no loss in translation quality.
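Both pruning steps amount to a simple filter over the hypotheses of one source span. A minimal sketch, with a hypothetical data layout mapping (e_bottom, e_top) pairs to probabilities:

def prune(hyps, q, beam_size):
    # hyps: {(e_bottom, e_top): probability} for one source span; assumed non-empty
    best = max(hyps.values())
    # threshold pruning: drop hypotheses far below the best one
    kept = {k: p for k, p in hyps.items() if p >= q * best}
    # histogram pruning: keep at most beam_size hypotheses
    top = sorted(kept.items(), key=lambda item: item[1], reverse=True)
    return dict(top[:beam_size])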
Generation of Word Graphs
The generation of word graphs for a bottom-top search with the IBM constraints is described in (Ueffing et al., 2002). These methods cannot be applied to the CYK-style search for the ITG constraints. Here, the idea for the generation of word graphs is the following: assuming we already have word graphs for the source sequences $f_{j_l}^{k}$ and $f_{k+1}^{j_r}$, then we can construct a word graph for the sequence $f_{j_l}^{j_r}$ by concatenating the partial word graphs either in monotone or inverted order. Now, we describe this idea in a more formal way. A word graph is a directed acyclic graph (dag) with one start and one end node. The edges are annotated with target language words or phrases. We also allow ε-transitions. These are edges annotated with the empty word. Additionally, edges may be annotated with probabilities of the language or translation model. Each path from start node to end node represents one translation hypothesis. The probability of this hypothesis is calculated by multiplying the probabilities along the path.

During the search, we have to combine two word graphs in either monotone or inverted order. This is done in the following way: we are given two word graphs $w_1$ and $w_2$ with start and end nodes $(s_1, g_1)$ and $(s_2, g_2)$, respectively. First, we add an ε-transition $(g_1, s_2)$ from the end node of the first graph $w_1$ to the start node of the second graph $w_2$ and annotate this edge with the probability of a monotone concatenation $p_m$. Second, we create a copy of each of the original word graphs $w_1$ and $w_2$. Then, we add an ε-transition $(g_2, s_1)$ from the end node of the copied second graph to the start node of the copied first graph. This edge is annotated with the probability of an inverted concatenation $1 - p_m$. Now, we have obtained two word graphs: one for a monotone and one for an inverted concatenation. The final word graph is constructed by merging the two start nodes and the two end nodes, respectively.

Let $W(j_l, j_r)$ denote the word graph for the source sequence $f_{j_l}^{j_r}$. This graph is constructed from the word graphs of all subsequences of $(j_l, j_r)$. Therefore, we assume these word graphs have already been produced. For all source positions k with $j_l \le k < j_r$, we combine the word graphs $W(j_l, k)$ and $W(k+1, j_r)$ as described above. Finally, we merge all start nodes of these graphs as well as all end nodes. Now, we have obtained the word graph $W(j_l, j_r)$ for the source sequence $f_{j_l}^{j_r}$. As initialization, we use the word graphs of the monotone IBM4 search.
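The following sketch illustrates the combination step with a deliberately simple, hypothetical graph representation: a graph is a (start, end, edges) triple with edges as (src, dst, word, prob) tuples, and merging start and end nodes is emulated with ε-edges of probability 1:

EPS = ""                                     # label of an epsilon transition
_next_id = [0]

def fresh():
    _next_id[0] += 1
    return _next_id[0]

def relabel(graph):
    # copy a word graph with fresh node ids so the two copies do not collide
    start, end, edges = graph
    mapping = {}
    def f(node):
        if node not in mapping:
            mapping[node] = fresh()
        return mapping[node]
    new_edges = [(f(a), f(b), w, p) for a, b, w, p in edges]
    return f(start), f(end), new_edges

def combine(w1, w2, p_m):
    s, g = fresh(), fresh()                  # merged start and end node
    s1, g1, e1 = relabel(w1)                 # monotone copy: w1 then w2
    s2, g2, e2 = relabel(w2)
    s1i, g1i, e1i = relabel(w1)              # inverted copy: w2 then w1
    s2i, g2i, e2i = relabel(w2)
    edges = e1 + e2 + e1i + e2i + [
        (g1, s2, EPS, p_m),                  # monotone concatenation
        (g2i, s1i, EPS, 1.0 - p_m),          # inverted concatenation
        (s, s1, EPS, 1.0), (g2, g, EPS, 1.0),
        (s, s2i, EPS, 1.0), (g1i, g, EPS, 1.0),
    ]
    return s, g, edges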
Extended ITG constraints
In this section, we will extend the ITG constraints described in Sec. 2.1. This extension will go beyond basic reordering constraints.
We already mentioned that the use of consecutive phrases within the ITG approach is straightforward. The only thing we have to change is the initialization of the Q-table. Now, we will extend this idea to phrases that are non-consecutive in the source language. For this purpose, we adopt the view of the ITG constraints as a bilingual grammar as, e.g., in (Wu, 1997). For the baseline ITG constraints, the resulting grammar is:
A → [AA] | ⟨AA⟩ | f/e | f/ε | ε/e

Here, [AA] denotes a monotone concatenation and ⟨AA⟩ denotes an inverted concatenation.

Let us now consider the case of a source phrase consisting of two parts $f_1$ and $f_2$. Let e denote the corresponding target phrase. We add the productions

A → [e/f_1 A ε/f_2] | ⟨e/f_1 A ε/f_2⟩

to the grammar. The probabilities of these productions are, dependent on the translation direction, $p(e|f_1, f_2)$ or $p(f_1, f_2|e)$, respectively. Obviously, these productions are not in the normal form of an ITG, but with the method described in (Wu, 1997), they can be normalized.
Corpus Statistics
In the following sections we will present results on two tasks. Therefore, in this section we will show the corpus statistics for each of these tasks.
Verbmobil
The first task we will present results on is the Verbmobil task (Wahlster, 2000). The domain of this corpus is appointment scheduling, travel planning, and hotel reservation. It consists of transcriptions of spontaneous speech. Table 2 shows the corpus statistics of this corpus. The training corpus (Train) was used to train the IBM model parameters. The remaining free parameters, i.e. $p_m$ and the model scaling factors (Och and Ney, 2002), were adjusted on the development corpus (Dev). The resulting system was evaluated on the test corpus (Test).
Canadian Hansards
Additionally, we carried out experiments on the Canadian Hansards task. This task contains the proceedings of the Canadian parliament, which are kept by law in both French and English. About 3 million parallel sentences of this bilingual data have been made available by the Linguistic Data Consortium (LDC). Here, we use a subset of the data containing only sentences with a maximum length of 30 words. Table 3 shows the training and test corpus statistics.
Evaluation in Training
In this section, we will investigate for each of the constraints the coverage of the training corpus alignment. For this purpose, we compute the Viterbi alignment of IBM Model 5 with GIZA++ (Och and Ney, 2000). This alignment is produced without any restrictions on word-reorderings. Then, we check for every sentence whether the alignment satisfies each of the constraints. The ratio of the number of satisfied alignments and the total number of sentences is referred to as the coverage. Tab. 4 shows the results for the Verbmobil task and for the Canadian Hansards task. It contains the results for both translation directions German-English (S→T) and English-German (T→S) for the Verbmobil task and French-English (S→T) and English-French (T→S) for the Canadian Hansards task, respectively.
For the Verbmobil task, the baseline ITG constraints and the IBM constraints result in a similar coverage. It is about 91% for the German-English translation direction and about 88% for the English-German translation direction. A significantly higher coverage is obtained with the extended ITG constraints. On the Canadian Hansards task, the coverage is considerably lower. Especially for the English-French translation direction, the ITG coverage of 73.6% is very low. Again, the extended ITG constraints obtained the best results. Here, the coverage increases from about 87% for the IBM constraints to about 96% for the extended ITG constraints.
Translation Experiments
Evaluation Criteria
In our experiments, we use the following error criteria:
• WER (word error rate):
The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the target sentence (a small sketch of this computation follows the list).
• PER (position-independent word error rate): A shortcoming of the WER is the fact that it requires a perfect word order. The PER compares the words in the two sentences ignoring the word order.
• mWER (multi-reference word error rate): For each test sentence, not only a single reference translation is used, as for the WER, but a whole set of reference translations. For each translation hypothesis, the WER to the most similar sentence is calculated (Nießen et al., 2000).
• BLEU score: This score measures the precision of unigrams, bigrams, trigrams and fourgrams with respect to a whole set of reference translations with a penalty for too short sentences (Papineni et al., 2001). BLEU measures accuracy, i.e. large BLEU scores are better.
• SSER (subjective sentence error rate): For a more detailed analysis, subjective judgments by test persons are necessary. Each translated sentence was judged by a human examiner according to an error scale from 0.0 to 1.0 (Nießen et al., 2000).
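As referenced in the WER item above, here is a minimal sketch of the WER computation: the standard Levenshtein distance over words, normalized by the reference length. For the mWER, one would take the minimum over all reference translations.

def wer(hyp, ref):
    h, r = hyp.split(), ref.split()
    # d[i][j] = edit distance between the first i hypothesis words
    # and the first j reference words
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i                        # deletions
    for j in range(len(r) + 1):
        d[0][j] = j                        # insertions
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            sub = d[i - 1][j - 1] + (h[i - 1] != r[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(h)][len(r)] / len(r)

print(wer("yes , I would be the flight", "yes , I would suggest the flight"))
# one substitution out of seven reference words: 1/7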
Translation Results
In this section, we will present the translation results for both the IBM constraints and the baseline ITG constraints. We used a single-word based search with IBM Model 4. The initialization for the ITG constraints was done with monotone IBM Model 4 translations. So, the only difference between the two systems are the reordering constraints. In Tab. 5 the results for the Verbmobil task are shown. We see that the results on this task are similar. The search with the ITG constraints yields slightly lower error rates.
Some translation examples of the Verbmobil task are shown in Tab. 6. We have to keep in mind that the Verbmobil task consists of transcriptions of spontaneous speech. Therefore, the source sentences as well as the reference translations may have an unorthodox grammatical structure. In the first example, the German verb-group ("würde vorschlagen") is split into two parts. The search with the ITG constraints is able to produce a correct translation. With the IBM constraints, it is not possible to translate this verb-group correctly, because the distance between the two parts is too large (more than four words). As we see in the second example, in German the verb of a subordinate clause is placed at the end ("übernachten"). The IBM search is not able to perform the necessary long-range reordering, as it is done with the ITG search.
Related Work
The ITG constraints were introduced in (Wu, 1995). The applications were, for instance, the segmentation of Chinese character sequences into Chinese "words" and the bracketing of the source sentence into sub-sentential chunks. In (Wu, 1996) the baseline ITG constraints were used for statistical machine translation. The resulting algorithm is similar to the one presented in Sect. 3.1, but here, we use monotone translation hypotheses of the full IBM Model 4 as initialization, whereas in (Wu, 1996) a single-word based lexicon model is used. In (Vilar, 1998) a model similar to Wu's method was considered.
Conclusions
We have described the ITG constraints in detail and compared them to the IBM constraints. We draw the following conclusions: especially for long sentences the ITG constraints allow for higher flexibility in word-reordering than the IBM constraints. Regarding the Viterbi alignment in training, the baseline ITG constraints yield a similar coverage as the IBM constraints on the Verbmobil task. On the Canadian Hansards task the baseline ITG constraints were not sufficient. With the extended ITG constraints the coverage improves significantly on both tasks. On the Canadian Hansards task the coverage increases from about 87% to about 96%.
We have presented a polynomial-time search algorithm for statistical machine translation based on the ITG constraints and its extension for the generation of word graphs. We have shown the translation results for the Verbmobil task. On this task, the translation quality of the search with the baseline ITG constraints is already competitive with the results for the IBM constraints. Therefore, we expect the search with the extended ITG constraints to outperform the search with the IBM constraints.
Future work will include the automatic extraction of the bilingual grammar as well as the use of this grammar for the translation process.
Figure 1: Illustration of monotone and inverted concatenation of two consecutive blocks.

Figure 2: Illustration of the IBM constraints.

Figure 3: Illustration of the Q-table.
Table 1: Ratio of the number of permitted reorderings with the ITG constraints S_{n-1} and the IBM constraints r_n for different sentence lengths n.

n                1...6    7     8     9     10    11    ...   20
S_{n-1}/r_n ≈     1.0    1.2   1.4   1.7   2.1   2.6    ...
Table 2: Statistics of training and test corpus for the Verbmobil task (PP=perplexity, SL=sentence length).

                        German    English
Train  Sentences             58 073
       Words            519 523    549 921
       Vocabulary         7 939      4 672
       Singletons         3 453      1 698
       average SL           8.9        9.5
Dev    Sentences                276
       Words              3 159      3 438
       Trigram PP             -       28.1
       average SL          11.5       12.5
Test   Sentences                251
       Words              2 628      2 871
       Trigram PP             -       30.5
       average SL          10.5       11.4
Table 3: Statistics of training and test corpus for the Canadian Hansards task (PP=perplexity, SL=sentence length).

                        French    English
Train  Sentences               1.5M
       Words                24M        22M
       Vocabulary       100 269     78 332
       Singletons        40 199     31 319
       average SL          16.6       15.1
Test   Sentences               5432
       Words             97 646     88 773
       Trigram PP             -      179.8
       average SL          18.0       16.3
Table 4: Coverage [%] on the training corpus for alignment constraints for the Verbmobil task (VM) and for the Canadian Hansards task (CH).
Table 5: Translation results on the Verbmobil task (WER, PER, mWER and BLEU are automatic criteria; the SSER is a human judgment).

System   WER [%]   PER [%]   mWER [%]   BLEU [%]   SSER [%]
IBM        46.2      33.3       40.0       42.5       40.8
ITG        45.6      33.9       40.0       37.1       42.0

Table 6: Verbmobil: translation examples.

source     ja, ich würde den Flug um viertel nach sieben vorschlagen.
reference  yes, I would suggest the flight at a quarter past seven.
ITG        yes, I would suggest the flight at seven fifteen.
IBM        yes, I would be the flight at quarter to seven suggestion.

source     ich schlage vor, dass wir in Hannover im Hotel Grünschnabel übernachten.
reference  I suggest to stay at the hotel Grünschnabel in Hanover.
ITG        I suggest that we stay in Hanover at hotel Grünschnabel.
IBM        I suggest that we are in Hanover at hotel Grünschnabel stay.
A. L. Berger, P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, J. R. Gillett, A. S. Kehler, and R. L. Mercer. 1996. Language translation apparatus and method of using context-based translation models, United States patent, patent number 5510981, April.
P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85, June.
K. Knight. 1999. Decoding complexity in word-replacement translation models. Computational Linguistics, 25(4):607-615, December.
D. E. Knuth. 1973. The Art of Computer Programming, volume 1 - Fundamental Algorithms. Addison-Wesley, Reading, MA, 2nd edition.
S. Nießen, F. J. Och, G. Leusch, and H. Ney. 2000. An evaluation tool for machine translation: Fast evaluation for MT research. In Proc. of the Second Int. Conf. on Language Resources and Evaluation (LREC), pages 39-45, Athens, Greece, May.
F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440-447, Hong Kong, October.
F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 295-302, July.
K. A. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, September.
E. Schröder. 1870. Vier combinatorische Probleme. Zeitschrift für Mathematik und Physik, 15:361-376.
L. Shapiro and A. B. Stephens. 1991. Bootstrap percolation, the Schröder numbers, and the n-kings problem. SIAM Journal on Discrete Mathematics, 4(2):275-280, May.
N. Ueffing, F. J. Och, and H. Ney. 2002. Generation of word graphs in statistical machine translation. In Proc. Conf. on Empirical Methods for Natural Language Processing, pages 156-163, Philadelphia, PA, July.
J. M. Vilar. 1998. Aprendizaje de Transductores Subsecuenciales para su empleo en tareas de Dominio Restringido. Ph.D. thesis, Universidad Politecnica de Valencia.
W. Wahlster, editor. 2000. Verbmobil: Foundations of speech-to-speech translations. Springer Verlag, Berlin, Germany, July.
J. West. 1995. Generating trees and the Catalan and Schröder numbers. Discrete Mathematics, 146:247-262, November.
D. Wu. 1995. Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora. In Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI), pages 1328-1334, Montreal, August.
D. Wu. 1996. A polynomial-time algorithm for statistical machine translation. In Proc. of the 34th Annual Conf. of the Association for Computational Linguistics (ACL '96), pages 152-158, Santa Cruz, CA, June.
D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403, September. |
18,473,670 | SAP-RI: A Constrained and Supervised Approach for Aspect-Based Sentiment Analysis | We describe the submission of the SAP Research & Innovation team to the SemEval 2014 Task 4: Aspect-Based Sentiment Analysis (ABSA). Our system follows a constrained and supervised approach for aspect term extraction, categorization and sentiment classification of online reviews, and the details are included in this paper. | [
13886408,
61955135
] | SAP-RI: A Constrained and Supervised Approach for Aspect-Based Sentiment Analysis
SemEval 2014. August 23-24, 2014
Nishtha Malhotra nishtha.malhotra@sap.com
Research & Innovation
SAP Asia
Singapore
Nanyang Technological University
Singapore
Akriti Vij akriti.vij@sap.com
Research & Innovation
SAP Asia
Singapore
Nanyang Technological University
Singapore
Naveen Nandan naveen.nandan@sap.com
Research & Innovation
SAP Asia
Singapore
Daniel Dahlmeier d.dahlmeier@sap.com
Research & Innovation
SAP Asia
Singapore
SAP-RI: A Constrained and Supervised Approach for Aspect-Based Sentiment Analysis
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, August 23-24, 2014
We describe the submission of the SAP Research & Innovation team to the SemEval 2014 Task 4: Aspect-Based Sentiment Analysis (ABSA). Our system follows a constrained and supervised approach for aspect term extraction, categorization and sentiment classification of online reviews, and the details are included in this paper.
Introduction
The increasing popularity of the internet as a source of information, and e-commerce as a way of life, has led to a major surge in the number of reviews that can be found online, for a wide range of products and services. Consequently, more and more consumers have taken to consulting these online reviews as part of their pre-purchase research before deciding on availing services from a local business or investing in a product from a particular brand. This calls for innovative techniques for the sentiment analysis of online reviews so as to generate accurate and relevant recommendations. Sentiment analysis has been extensively studied and applied in different domains. Predicting the sentiment polarity (positive, negative, neutral) of user opinions by mining user reviews (Hu and Liu, 2004;Liu, 2012;Pang and Lee, 2008;Liu, 2010) has been of high commercial and research interest. In these studies, sentiment analysis is often conducted at one of the three levels: document level, sentence level or attribute level.
Through the SemEval 2014 Task 4 on Aspect Based Sentiment Analysis (Pontiki et al., 2014), we explore sentiment analysis at the aspect level.* The task consists of four subtasks: in subtask 1, aspect term extraction, participants needed to identify the aspect terms present in a sentence and return a list containing all distinct aspect terms; in subtask 2, aspect term polarity, participants were to determine the polarity of each aspect term in a sentence; in subtask 3, aspect category detection, participants had to identify the aspect categories discussed in a given sentence; and in subtask 4, aspect category polarity, participants were to determine the polarity of each aspect category. The polarity classification subtasks consider sentiment analysis to be a three-way classification problem between positive, negative and neutral sentiment. On the other hand, the aspect category detection subtask is a multi-label classification problem where one sentence can be labelled with more than one aspect category.

* The work was done during an internship at SAP.
In this paper, we describe the submission of the SAP-RI team to the SemEval 2014 Task 4. We make use of supervised techniques to extract the aspects of interest (Jakob and Gurevych, 2010), categorize them (Lu et al., 2011) and predict the sentiment of customer online reviews on Laptops and Restaurants. We developed a constrained system for aspect-based sentiment analysis of these online reviews. The system is constrained in the sense that we only use the training data that was provided by the challenge organizers and no other external data sources. Our system performed reasonably well, especially with an F1 score of 75.61% for the aspect category polarity subtask, a 79.04% F1 score on the aspect category detection task and a 66.61% F1 score on the aspect term extraction task.
Subtask 1: Aspect Term Extraction
Given a review with annotated entities in the training set, the task was to extract the aspect terms for reviews in the test set. For this subtask, training, development and testing were conducted for both the laptop and the restaurant domain.
Features
Each review was represented as a feature vector made up of the following features:
• Word N-grams: all unigrams, bigrams and trigrams from the review text
Method
We approach the task by casting it as a sequence tagging task where each token in a candidate sentence is labelled as either Beginning, Inside or Outside (BIO). We then employ conditional random fields (CRF), which is a discriminative, probabilistic model for sequence data with state-of-the-art performance (Lafferty et al., 2001). A linear-chain CRF tries to estimate the conditional probability of a label sequence y given the observed features x, where each label $y_t$ is conditioned on the previous label $y_{t-1}$. In our case, we use BIO CoNLL-style tags (Sang and De Meulder, 2003). During development, we split the training data in the ratio of 60:20:20 as training, development (dev) and testing (dev-test) sets. We train the CRF model on the training set of the data, perform feature selection based on the dev set, and test the resulting model on the dev-test set. In all experiments, we use the CRF++ 1 implementation of conditional random fields with the parameter c = 4.0. This value was chosen based on manual observation. We perform a feature ablation study and the results are reported in Table 1. Features listed in section 2.1 were those that were retained for the final run.

1 code.google.com/p/crfpp/
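As an illustration of this sequence-labeling setup (a toy sketch, not the exact features or CRF++ template used in our experiments), the following derives BIO tags for the tokens of a sentence from its annotated aspect terms:

def bio_tags(tokens, aspect_terms):
    # mark every occurrence of an aspect term with B (begin) / I (inside)
    tags = ["O"] * len(tokens)
    for term in aspect_terms:
        t = term.split()
        for i in range(len(tokens) - len(t) + 1):
            if tokens[i:i + len(t)] == t:
                tags[i] = "B"
                tags[i + 1:i + len(t)] = ["I"] * (len(t) - 1)
    return tags

tokens = "The battery life is great .".split()
print(list(zip(tokens, bio_tags(tokens, ["battery life"]))))
# -> The/O battery/B life/I is/O great/O ./O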
Subtask 2: Aspect Term Polarity Estimation
For this subtask, the training, development and testing were done using reviews on laptops and restaurants. Given the aspect terms in a sentence, the task was to predict their sentiment polarities.
Features
For each review, we used the following features:
• Word N-grams: all lowercased unigrams, bigrams and trigrams from the review text
• Polarity of neighbouring adjectives: word sentiment extracted from the SentiWordNet lexicon (Baccianella et al., 2010)
• Neighbouring POS tags: the POS tags of up to 3 neighbouring words
• Parse dependencies and relations: parse dependency relations of the aspects, i.e., presence/absence of adjectives and adverbs in the dependency parse tree
Method
For each aspect term of a sentence, the aforementioned features were extracted. For example, for the term Sushi in the sentence "Sushi was delicious.", the following feature vector is constructed: {aspect: 'sushi', advmod: 'null', amod: 'delicious', uni sushi: 1, uni was: 1, uni delicious: 1, uni the: 0, ...}. We then treat the aspect sentiment polarity estimation as a multi-class classification task where each instance is labelled as either positive, negative or neutral. For the classification task, we experimented with Naive Bayes and Support Vector Machines (SVM), both linear and RBF kernels, and it was observed that the linear SVM performed best. Hence, we use a linear SVM for the classification task. Table 2 summarizes the results obtained from our experiments for various feature combinations. The classifiers used are implementations from scikit-learn 2, which is also used for the remaining tasks.
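A toy sketch of this pipeline using scikit-learn, with made-up training examples (the real system uses the full feature set of section 3.1):

from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

train_feats = [
    {"aspect": "sushi", "amod": "delicious", "uni_delicious": 1},
    {"aspect": "service", "amod": "slow", "uni_slow": 1},
]
train_labels = ["positive", "negative"]

vec = DictVectorizer()                 # one-hot encodes the string-valued features
X = vec.fit_transform(train_feats)
clf = LinearSVC().fit(X, train_labels)
print(clf.predict(vec.transform([{"aspect": "rolls", "amod": "delicious",
                                  "uni_delicious": 1}])))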
Subtask 3: Aspect Category Detection
Given a review with annotated entities or aspect terms, the task was to predict the aspect categories. As one sentence in a review could belong to multiple aspect categories, we model the task as a multi-label classification problem, i.e., given an instance, predict all labels that apply to it.
Features
We experimented with different features, for example unigrams, dependency tree relations, bigrams, POS tags and the sentiment of the words (SentiWordNet), but using unigrams alone yielded the best result. The feature vector was merely a bag-of-words vector indicating the presence or absence of a word in an instance.
Method
The training instances were divided into 5 sets based on the aspect categories, thereby treating the multi-label classification task as 5 different binary classification tasks. Hence, we used an ensemble of binary classifiers for the multi-label classification. An SVM model was trained using one classifier per class to distinguish it from all other classes. For the binary classification tasks, directly estimating a linear separating function (such as a linear SVM) gave better results, as shown in Table 3. Finally, the results of the 5 binary classifiers were combined to label the test instance.
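The following toy sketch (with an invented mini-corpus) illustrates this one-binary-classifier-per-category setup with bag-of-words features:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

sentences = ["the sushi was great", "the waiter was rude",
             "cheap and tasty food", "lovely decor",
             "they take reservations online"]
gold = [{"food"}, {"service"}, {"food", "price"}, {"ambience"},
        {"miscellaneous"}]
categories = ["food", "service", "price", "ambience", "miscellaneous"]

vec = CountVectorizer()                     # bag-of-words unigram features
X = vec.fit_transform(sentences)
# one binary classifier per category (category member vs. everything else)
clfs = {c: LinearSVC().fit(X, [c in g for g in gold]) for c in categories}

def predict(sentence):
    x = vec.transform([sentence])
    return {c for c, clf in clfs.items() if clf.predict(x)[0]}

print(predict("the food was tasty"))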
The category Miscellaneous was observed to have the lowest accuracy, probably due to the fact that Miscellaneous captures all those aspect terms that do not have a clearly defined category.
Subtask 4: Aspect Category Polarity Detection
For each review with pre-labelled aspect categories, the task was to produce a model which predicts the sentiment polarity of each aspect category.
Features
The training data contains reviews with the polarity for the corresponding aspect category. The models performed best on using just unigram and bigram features.
Method
The training instances were split into 5 sets based on the aspect categories. We make use of the sentiment polarity classifier, as described in section 3.2, thereby training one sentiment polarity classifier for each aspect category. Table 4 indicates the performance of different classifiers for this task, using features as discussed in section 5.1.

Table 4: Training-phase experimental results (F1 score) for Subtask 4.

Results

Table 5 gives an overview of the performance of our system in this year's task based on the official scores from the organizers. We see that our system performs relatively well for subtasks 1, 3 and 4, while for subtask 2 the F1 scores are behind the best system by about 12%. As observed, a sentence could have more than one aspect, and each of these aspects could have different polarities expressed. Including features that preserve the context of the aspect could probably improve the performance in subtask 2. In most cases, a simple set of features was enough to result in a high F1 score; for example, in subtask 3 a bag-of-words feature set proved to yield a relatively high F1 score. In general, for the classification tasks, we observe that the linear SVM performs best.
Conclusion
In this paper, we have described the submission of the SAP-RI team to the SemEval 2014 Task 4. We model the classification tasks using linear SVM and the term extraction task using CRF in order to develop an aspect-based sentiment analysis system that performs reasonably well.
Table 2: Training-phase experimental results (Accuracy) for Subtask 2.
Table 3: Training-phase experimental results (F1 score) for Subtask 3.

Restaurants Category   Naive Bayes   AdaBoost   LinearSVC
Food                      0.7136       0.6711      0.7417
Service                   0.6733       0.5244      0.6688
Miscellaneous             0.4756       0.3170      0.4756
Ambience                  0.6574       0.7232      0.6885
Price                     0.7477       0.7752      0.6651
Table 5: Results (F1 score and ranking) for the SemEval-2014 test set.
2 scikit-learn.org/stable/
Acknowledgement
The research is partially funded by the Economic Development Board and the National Research Foundation of Singapore.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10), volume 10, pages 2200-2204.
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168-177.
Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single- and cross-domain setting with conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1035-1045.
John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 282-289.
Bing Liu. 2010. Sentiment analysis and subjectivity. In Handbook of Natural Language Processing, pages 627-666. Chapman & Hall, 2nd edition.
Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1-167.
Bin Lu, Myle Ott, Claire Cardie, and Benjamin Tsou. 2011. Multi-aspect sentiment analysis with topic models. In Proceedings of Sentiment Elicitation from Natural Text for Information Retrieval and Extraction, pages 81-88.
Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.
Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 Task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014).
Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL, pages 142-147. |
17,876,020 | On the reliability and inter-annotator agreement of human semantic MT evaluation via HMEANT | We present analyses showing that HMEANT is a reliable, accurate and fine-grained semantic frame based human MT evaluation metric with high inter-annotator agreement (IAA) and correlation with human adequacy judgments, despite only requiring a minimal training of about 15 minutes for lay annotators. Previous work shows that the IAA on the semantic role labeling (SRL) subtask within HMEANT is over 70%. In this paper we focus on (1) the IAA on the semantic role alignment task and (2) the overall IAA of HMEANT. Our results show that the IAA on the alignment task of HMEANT is over 90% when humans align SRL output from the same SRL annotator, which shows that the instructions on the alignment task are sufficiently precise, although the overall IAA where humans align SRL output from different SRL annotators falls to only 61% due to the pipeline effect on the disagreement in the two annotation tasks. We show that using an automatic algorithm instead of manually aligning the semantic roles not only helps maintain the overall IAA of HMEANT at 70%, but also provides a finer-grained assessment of the phrasal similarity of the semantic role fillers. This suggests that HMEANT equipped with automatic alignment is reliable and accurate for humans to evaluate MT adequacy while achieving higher correlation with human adequacy judgments than HTER. | [
2464290,
2331493,
5280872,
974899,
12617973,
14287994,
40322238
] | On the reliability and inter-annotator agreement of human semantic MT evaluation via HMEANT
Chi-Kiu Lo
Department of Computer Science and Engineering
HKUST Human Language Technology Center
Hong Kong University of Science and Technology Clear Water Bay
Kowloon, Hong Kong
Dekai Wu
Department of Computer Science and Engineering
HKUST Human Language Technology Center
Hong Kong University of Science and Technology Clear Water Bay
Kowloon, Hong Kong
On the reliability and inter-annotator agreement of human semantic MT evaluation via HMEANT
semantic MT evaluation; HMEANT; inter-annotator agreement
We present analyses showing that HMEANT is a reliable, accurate and fine-grained semantic frame based human MT evaluation metric with high inter-annotator agreement (IAA) and correlation with human adequacy judgments, despite only requiring a minimal training of about 15 minutes for lay annotators. Previous work shows that the IAA on the semantic role labeling (SRL) subtask within HMEANT is over 70%. In this paper we focus on (1) the IAA on the semantic role alignment task and (2) the overall IAA of HMEANT. Our results show that the IAA on the alignment task of HMEANT is over 90% when humans align SRL output from the same SRL annotator, which shows that the instructions for the alignment task are sufficiently precise, although the overall IAA where humans align SRL output from different SRL annotators falls to only 61% due to the pipeline effect of the disagreement in the two annotation tasks. We show that using an automatic algorithm instead of manually aligning the semantic roles not only helps maintain the overall IAA of HMEANT at 70%, but also provides a finer-grained assessment of the phrasal similarity of the semantic role fillers. This suggests that HMEANT equipped with automatic alignment is reliable and accurate for humans to evaluate MT adequacy while achieving higher correlation with human adequacy judgments than HTER.
Introduction
HMEANT is a human metric that fully realizes a semantic frame based approach to MT evaluation that we originally envisioned in Lo and Wu (2010) and then subsequently implemented and refined over a substantial series of development cycles (Lo and Wu, 2011a,d). In this paper we present new focused empirical analyses showing that HMEANT achieves high inter-annotator agreement (IAA) and correlation with human judgments of translation adequacy, despite requiring only minimal training for inexpensive lay annotators. Through extensive IAA analyses, particularly on the semantic frame alignment task (an interesting question raised for example by Birch et al. (2013)), we show that annotators align semantic frames consistently when the SRL output comes from the same SRL annotator, although the pipeline effect of accumulating the disagreement in the two annotation tasks significantly degrades the overall IAA of HMEANT when the alignment annotators align SRL output from different SRL annotators. However, our results further show that using an automatic algorithm in the alignment task instead of manually aligning the semantic roles not only helps maintain a high overall IAA for HMEANT, but at the same time also provides a finer-grained assessment of the phrasal similarity of semantic role fillers, such that HMEANT achieves higher correlation with human adequacy judgments than HTER or any automatic metric. These results indicate that HMEANT equipped with automatic alignment is a reliable and accurate methodology for human subjective evaluation of MT adequacy. The MEANT family also includes fully automatic approximations of HMEANT: accurate, inexpensive, and tunable semantic frame based MT evaluation metrics quantifying the semantic similarity between reference and machine translations in terms of how well their semantic frames match. HMEANT (Lo and Wu, 2011a,d), the human variant in the family, correlates better with human adequacy judgments than HTER at a significantly lower labor cost. MEANT, the fully automatic metric in the family, correlates better with human adequacy judgments than other commonly used automatic MT evaluation metrics such as BLEU (Papineni et al., 2002), NIST (Doddington, 2002), or TER (Snover et al., 2006). Since a high MEANT score is contingent on correct lexical choices as well as syntactic and semantic structures, tuning MT systems against MEANT improves both adequacy and fluency and outperforms BLEU-tuned and TER-tuned systems across different languages and different genres, such as formal newswire, informal web forum and informal public speech (Lo et al., 2013a; Lo and Wu, 2013a; Lo et al., 2013b). As we continue to investigate how to leverage the MEANT family of metrics to improve actual MT utility, we revisit in this paper one of the important concerns about using HMEANT as a human MT evaluation metric: is HMEANT reliable? Given only minimal instructions on the SRL and alignment annotation tasks, humans might label and align the semantic roles inconsistently, which would reduce the reliability of HMEANT. Lo and Wu (2011a) carried out an extensive IAA analysis on the SRL task showing that monolingual annotators labeling the semantic roles achieve 79% IAA on average, while bilingual annotators labeling the semantic roles achieve 70% IAA on average.
We avoid directly diving into aggregating the overall IAA, which might risk prematurely jumping to the conclusion that HMEANT is not reliable; instead, we take a cautious approach and first analyze the IAA solely on the semantic frame alignment task to ensure that any inconsistency is not caused by the fundamental design of HMEANT. Based on our finding that MEANT equipped with automatic SRL and automatic semantic role alignment outperforms HMEANT equipped with automatic SRL and manual semantic role alignment, we evaluate the feasibility and reliability of replacing human semantic frame alignment with an automatic alignment algorithm in HMEANT. Since automatic alignment aligns semantic frames more consistently and measures the phrasal similarity of the role fillers in a finer-grained manner, we believe HMEANT using human SRL and automatic alignment will be more reliable in terms of IAA and more accurate in correlation with human adequacy judgments. In this paper we therefore focus on the problem of inter-annotator agreement for the semantic frame alignment task in HMEANT, and on evaluating the feasibility and reliability of replacing HMEANT's human semantic role filler alignment step with a cheaper yet more accurate automatic alignment algorithm.

Figure 1: Examples of human semantic frame annotation. Semantic parses of the Chinese input and the English reference translation are from the Propbank gold standard. The MT output is semantically parsed by monolingual lay annotators according to the HMEANT guidelines. There are no semantic frames for MT3 because there is no predicate.
Related work
2.1. The MEANT family of metrics

2.1.1. HMEANT

HMEANT, proposed in Lo and Wu (2011a,c,d), has been found to correlate significantly better with human adequacy judgments than other commonly used automatic MT evaluation metrics, as well as other human metrics like HTER (Snover et al., 2006). HMEANT consists of two manual steps: (1) human semantic role labeling, which labels aspects of the meaning of the reference and machine translations in terms of semantic predicate-argument structure; and (2) human semantic frame alignment, which aligns the annotated semantic predicates and role fillers. Monolingual (or bilingual) human annotators label the semantic roles and fillers in both the reference and machine translations, so that human semantic frame aligners can align the predicates and semantic role fillers in the MT output to the reference translations. These human annotations (semantic role labeling and semantic frame alignment) allow HMEANT to aggregate the translation accuracy for each role into semantic frame accuracy, which is then aggregated into the overall sentence accuracy in meaning. The HMEANT score is simply defined in terms of a modified weighted f-score over these aligned predicates and role fillers. More precisely, HMEANT is computed as follows:
1. Human labelers annotate the shallow semantic structures of both the references and MT output.
2. Human aligners align the semantic frames between the references and MT output by judging the correctness of Table 1: Example of SRL annotation for the MT2 output from figure 1 along with the human judgements of translation correctness for each argument. *Notice that although the decision made by the human judge for "in mainland China" in the reference translation and "the mainland of China" in MT2 is "correct", nevertheless the HMEANT computation will not count this as a match since their role labels do not match. the predicates.
3. For each pair of aligned semantic frames, (a) Human aligners determine the translation correctness of the semantic role fillers.
(b) Human aligners align the semantic role fillers between the reference and MT output according to the correctness of the semantic role fillers.
4. Compute the weighted f-score over the matching role labels of these aligned predicates and role fillers.
\begin{align*}
m_i &\equiv \frac{\#\text{ tokens filled in aligned frame } i \text{ of MT}}{\text{total } \#\text{ tokens in MT}} &
r_i &\equiv \frac{\#\text{ tokens filled in aligned frame } i \text{ of REF}}{\text{total } \#\text{ tokens in REF}} \\
M_{i,j} &\equiv \text{total } \#\ \text{ARG}_j \text{ of aligned frame } i \text{ in MT} &
R_{i,j} &\equiv \text{total } \#\ \text{ARG}_j \text{ of aligned frame } i \text{ in REF} \\
C_{i,j} &\equiv \#\text{ correct ARG}_j \text{ of aligned frame } i \text{ in MT} &
P_{i,j} &\equiv \#\text{ partially correct ARG}_j \text{ of aligned frame } i \text{ in MT} \\
w_{\text{pred}} &\equiv \text{weight of similarity of predicates} &
w_j &\equiv \text{weight of similarity of ARG}_j
\end{align*}

\[
\text{precision} = \frac{\sum_i m_i \dfrac{w_{\text{pred}} + \sum_j w_j \left(C_{i,j} + w_{\text{partial}} P_{i,j}\right)}{w_{\text{pred}} + \sum_j w_j M_{i,j}}}{\sum_i m_i}
\qquad
\text{recall} = \frac{\sum_i r_i \dfrac{w_{\text{pred}} + \sum_j w_j \left(C_{i,j} + w_{\text{partial}} P_{i,j}\right)}{w_{\text{pred}} + \sum_j w_j R_{i,j}}}{\sum_i r_i}
\]

where
$m_i$ and $r_i$ are the weights for frame $i$ in the MT and REF respectively. These weights estimate the degree of contribution of each frame to the overall meaning of the sentence. $M_{i,j}$ and $R_{i,j}$ are the total counts of arguments of type $j$ in frame $i$ in the MT and REF respectively. $C_{i,j}$ and $P_{i,j}$ are the counts of correctly and partially correctly translated arguments of type $j$ in frame $i$ in the MT output. $w_{\text{pred}}$ and $w_j$ are the weights for the aligned predicates and the aligned arguments of type $j$ between the reference translations and the MT output. There are a total of 12 weights for the set of semantic role labels in HMEANT as defined in Lo and Wu (2011c), and they are determined in a supervised learning manner by optimizing the correlation with human adequacy judgments through simple grid search (Lo and Wu, 2011a). Figure 1 shows examples of human semantic frame annotation on reference and machine translations as used in HMEANT. Table 1 shows examples of human judges' decisions on semantic frame alignment and translation correctness for each semantic role, for the "MT2" output from Figure 1. Birch et al. (2013) reported that the final IAA of HMEANT drops below 50% due to the pipelining effect, where annotation disagreements in the SRL task and the semantic role alignment task accumulate.
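To make the aggregation concrete, the following minimal Python sketch computes the HMEANT precision from per-frame counts; the data layout, example weights, and default w_partial value are illustrative assumptions rather than the original evaluation tool.

```python
# Minimal sketch of the HMEANT precision aggregation over aligned frames.
# Recall is computed analogously with r_i and R_{i,j} on the reference side,
# and the final score is the f-score of the two. Data layout and weights
# here are illustrative assumptions, not the original implementation.

def hmeant_precision(frames, w_pred, w_arg, w_partial=0.5):
    """frames: list of dicts with keys 'm' (frame weight m_i) and
    'M', 'C', 'P' mapping argument type j -> counts M_ij, C_ij, P_ij."""
    num = den = 0.0
    for f in frames:
        matched = w_pred + sum(
            w_arg[j] * (f["C"].get(j, 0) + w_partial * f["P"].get(j, 0))
            for j in f["M"])
        total = w_pred + sum(w_arg[j] * n for j, n in f["M"].items())
        num += f["m"] * matched / total
        den += f["m"]
    return num / den if den else 0.0

# One aligned frame: predicate plus ARG0 (correct) and ARG1 (partially correct).
frames = [{"m": 0.8, "M": {"ARG0": 1, "ARG1": 1},
           "C": {"ARG0": 1}, "P": {"ARG1": 1}}]
print(hmeant_precision(frames, w_pred=1.0, w_arg={"ARG0": 1.0, "ARG1": 1.0}))
# -> 0.8333...: (1 + 1*1 + 1*0.5*1) / (1 + 1 + 1)
```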
MEANT and UMEANT
Unlike HMEANT, MEANT is fully automatic; nevertheless, it adheres to HMEANT's principles of Occam's razor simplicity and representational transparency and outperforms BLEU, NIST, METEOR, WER, CDER and TER in correlation with human adequacy judgments. MEANT automates HMEANT by replacing the human semantic role labelers with shallow semantic parsers and replacing the human semantic frame aligners with a maximum weighted bipartite matching algorithm based on a context vector model that computes the lexical similarity of the semantic role fillers. The minimal changes to the formulas from HMEANT to MEANT are as follows:
\begin{align*}
S_{i,\text{pred}} &\equiv \text{predicate similarity in aligned frame } i \\
S_{i,j} &\equiv \text{ARG}_j \text{ similarity in aligned frame } i
\end{align*}

\[
\text{precision} = \frac{\sum_i m_i \dfrac{w_{\text{pred}} S_{i,\text{pred}} + \sum_j w_j S_{i,j}}{w_{\text{pred}} + \sum_j w_j M_{i,j}}}{\sum_i m_i}
\qquad
\text{recall} = \frac{\sum_i r_i \dfrac{w_{\text{pred}} S_{i,\text{pred}} + \sum_j w_j S_{i,j}}{w_{\text{pred}} + \sum_j w_j R_{i,j}}}{\sum_i r_i}
\]
where $S_{i,\text{pred}}$ and $S_{i,j}$ are the lexical and phrasal similarities, based on a context vector model, of the predicates and of the role fillers of the arguments of type $j$ between the reference translations and the MT output. The lexical similarities of the semantic role fillers can be computed using different statistical similarity measures, while the phrasal similarities can be aggregated from the lexical similarities using different heuristics, such as the geometric mean or the normalized phrasal aggregation (Mihalcea et al., 2006) used subsequently in Lo et al. (2013a), Lo and Wu (2013a), and Lo et al. (2013b). In MEANT, the weights $w_{\text{pred}}$ and $w_j$ are estimated in the same way as in HMEANT, i.e. by optimizing the correlation with human adequacy judgments through simple grid search. As for UMEANT (Lo and Wu, 2013b), these weights are estimated in an unsupervised manner using the relative frequency of each semantic role label in the reference translations. UMEANT can thus be used when human judgments on the adequacy of the development set are unavailable. Previous results show that fully automated MEANT outperforms semi-automated HMEANT (automatic SRL and human semantic frame alignment) in correlating with human adequacy judgments. Recent studies (Lo et al., 2013a; Lo and Wu, 2013a; Lo et al., 2013b) show that tuning MT systems against MEANT produces more robustly adequate translations than the common practice of tuning against BLEU or TER across different data genres, including formal newswire text, informal web forum text, and informal public speech. This work shows that the automatic alignment algorithm aligns semantic frames more consistently and measures the phrasal similarity of the role fillers in a finer-grained manner, and thus suggests that the reliability of HMEANT would be improved by automatically aligning the manually labeled semantic frames.
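As an illustration of the automatic alignment step that MEANT substitutes for human aligners, the sketch below performs maximum weighted bipartite matching over a role-filler similarity matrix; the token-overlap similarity function is a stand-in assumption for MEANT's context vector model, not the actual implementation.

```python
# Sketch of maximum weighted bipartite matching for semantic role alignment.
# The Dice token-overlap similarity is a placeholder assumption for the
# context vector model used in MEANT.
import numpy as np
from scipy.optimize import linear_sum_assignment

def similarity(a, b):
    """Toy phrasal similarity: token Dice coefficient (placeholder)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 2 * len(ta & tb) / (len(ta) + len(tb)) if (ta or tb) else 0.0

def align(ref_fillers, mt_fillers):
    sim = np.array([[similarity(r, m) for m in mt_fillers]
                    for r in ref_fillers])
    rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
    return [(r, c, sim[r, c]) for r, c in zip(rows, cols) if sim[r, c] > 0]

ref = ["the security personnel", "in mainland China", "until after the Olympics"]
mt = ["security staff", "the mainland of China"]
for r, c, s in align(ref, mt):
    print(ref[r], "<->", mt[c], round(s, 2))
```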
Other human MT evaluation
HTER (Snover et al., 2006) is only used in large-scale MT evaluation campaigns because of its high labor cost. It not only requires well-trained professional human translators to read and understand both the reference translation and the MT output, but also relies on the minimum edits made by those translators on the MT output so as to match the meaning expressed in the edited MT output with that in the reference translation. This requirement of heavy manual decision making greatly increases the cost of evaluation.
In contrast, the task based human MT evaluation of Voss and Tate (2006) reduces labor cost by requiring human evaluators to complete simple question answering tasks after reading the MT output. However, task based human MT evaluation does not generalize across different test sets.
IAA on the alignment task in HMEANT
To address the interesting questions raised by Birch et al. (2013), we systematically analyze the IAA for the semantic frame alignment task by asking the alignment annotators to align SRL output from the same SRL annotators. This avoids directly diving into a rough aggregation of the overall IAA for the entire evaluation pipeline, which might misleadingly jump to the conclusion that HMEANT is not reliable.

Table 2: IAA on the alignment task

Annotator pairs     IAA
S1-A1 vs. S1-A2     90%
S2-A1 vs. S2-A2     91%

Table 3: IAA on the overall annotation pipeline

Annotator pairs     IAA
S1-A1 vs. S2-A2     63%
S2-A1 vs. S1-A2     61%
Setup
For our benchmark comparison, the evaluation data for our experiments is the same set of sentences, GALE-A, that was used in our previous experiments. The reference and each of the MT system outputs are labeled by two SRL annotators for IAA analysis. For the purpose of cross-validation, we set up two rounds of alignment tasks. In the first round, two alignment annotators align the SRL output from the first SRL annotator. In the second round, the two alignment annotators align the SRL output from the second SRL annotator. As described in Lo and Wu (2011b), in both the human SRL task and the alignment task, we supplement the annotators with one double-sided sheet with three examples. As a result, we have alignment output from the four combinations of the two SRL annotators and the two alignment annotators. For inter-annotator agreement, we follow the definition in Lo and Wu (2011a).
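For concreteness, span-level agreement of this kind is commonly computed as an F-measure between the two annotators' span sets; the sketch below follows that general scheme and is only an approximation of the exact definition in Lo and Wu (2011a), whose matching criteria may differ in detail.

```python
# Sketch of span-level inter-annotator agreement as an F-measure, treating
# annotator A as gold and annotator B as prediction, with exact matching.
# The precise criteria of Lo and Wu (2011a) may differ, e.g. on partial matches.

def iaa(spans_a, spans_b):
    """spans_*: sets of (start, end, label) tuples from each annotator."""
    matched = len(spans_a & spans_b)
    p = matched / len(spans_b) if spans_b else 0.0
    r = matched / len(spans_a) if spans_a else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

a = {(0, 2, "ARG0"), (3, 4, "PRED"), (5, 9, "ARG1")}
b = {(0, 2, "ARG0"), (3, 4, "PRED"), (6, 9, "ARG1")}
print(round(iaa(a, b), 2))  # 0.67: two of three spans match exactly
```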
Results
Table 2 shows that the IAA on the alignment task of HMEANT is consistently over 90% when the alignment annotators align the SRL output from the same SRL annotator. This shows that the instructions for the alignment task are sufficient and effective. However, Table 3 shows that the final IAA where the alignment annotators align the SRL output from different SRL annotators falls to only 61% due to the pipelining effect of the disagreement in the two annotation tasks. Previous work reported that MEANT equipped with automatic SRL and automatic semantic frame alignment outperforms HMEANT equipped with automatic SRL and manual semantic role alignment. The natural question following such findings is whether the reliability of HMEANT improves by replacing human semantic role alignment with an automatic alignment algorithm, and if so, to what extent it helps.
Don't align semantic frames manually
Setup
We run MEANT's automatic alignment algorithm on the SRL output from the two SRL annotators in the previous experiment. We use an HMEANT implementation along the lines previously described, except that the set of weights is estimated in an unsupervised manner as in UMEANT (Lo and Wu, 2013b).
Results
Table 4 shows that the IAA of HMEANT using the automatic semantic role alignment algorithm rises to 70%. These results are expected because the automatic alignment algorithm handles partial alignment more consistently, especially in cases where the role filler of a semantic role in the reference is split across the fillers of more than one role in the MT output. Table 5 shows that performing the semantic frame alignment automatically yields higher correlation with human adequacy judgments than aligning manually. The results are in line with previous findings. Since the automatic alignment algorithm aligns semantic roles more consistently and measures the phrasal similarity of the role fillers in a finer-grained manner, we believe HMEANT using human SRL and automatic alignment is more reliable in terms of IAA and more accurate in correlating with human adequacy judgments.
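The sentence-level correlations reported in Table 5 are Kendall correlations between metric scores and human adequacy judgments; as a hedged sketch, they can be computed along the following lines (the score vectors below are made up for illustration, not the GALE-A data).

```python
# Sketch: sentence-level Kendall correlation between a metric's scores and
# human adequacy judgments, as reported in Table 5. Illustrative values only.
from scipy.stats import kendalltau

human_adequacy = [4, 2, 5, 3, 1]              # human ranking of 5 MT outputs
metric_scores = [0.61, 0.35, 0.70, 0.44, 0.52]
tau, p_value = kendalltau(metric_scores, human_adequacy)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.2f})")
```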
Conclusion
We have shown that HMEANT is a reliable, accurate and fine-grained semantic frame based human MT evaluation metric with high IAA and correlation with human adequacy judgments, despite requiring only minimal training for lay annotators. Our results show that the IAA on the semantic frame alignment task of HMEANT is over 90% when the human annotators align SRL output from the same SRL annotator, although the final IAA of HMEANT based on alignment results from annotators aligning SRL output from different SRL annotators falls to only 61% due to the pipelining effect of the disagreement in the two annotation tasks. More importantly, we have shown that, to improve the reliability of HMEANT, completely replacing the manual semantic frame alignment with fully automatic alignment not only helps to maintain the overall IAA of HMEANT at the 70% level, but also provides a finer-grained assessment of the phrasal similarity of the semantic role fillers, so that HMEANT achieves higher correlation with human judgments of translation adequacy than HTER. This has the additional important benefit of making HMEANT even more cost effective. The results show that HMEANT equipped with an automatic alignment algorithm is a highly reliable and accurate methodology for MT evaluation.
Acknowledgments
This material is based upon work supported in part by the Defense Advanced Research Projects Agency (DARPA) under BOLT contract nos. HR0011-12-C-0014 and HR0011-12-C-0016, and GALE contract nos. HR0011-06-C-0022 and HR0011-06-C-0023; by the European Union under the FP7 grant agreement no. 287658; and by the Hong Kong Research Grants Council (RGC) research grants GRF620811, GRF621008, and GRF612806. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, the EU, or RGC.
Table 4: IAA on the overall annotation pipeline where the human alignment annotators are replaced by an automatic alignment algorithm

Annotator pairs       IAA
S1-auto vs. S2-auto   70%

Table 5: Sentence-level correlation with human adequacy judgment on GALE-A

Metric              Kendall
Human metrics
  HMEANT(S2-auto)   0.53
  HMEANT(S1-auto)   0.53
  HMEANT(S2-A2)     0.49
  HMEANT(S2-A1)     0.49
  HMEANT(S1-A1)     0.49
  HMEANT(S1-A2)     0.47
  HTER              0.43
Automatic metrics
  MEANT             0.39
  NIST              0.29
  METEOR            0.20
  BLEU              0.20
  TER               0.20
  PER               0.20
  CDER              0.12
  WER               0.10
Alexandra Birch, Barry Haddow, Ulrich Germann, Maria Nadejde, Christian Buck, and Philipp Koehn. 2013. The feasibility of HMEANT as a human MT evaluation metric. In 8th Workshop on Statistical Machine Translation (WMT 2013).
George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In The Second International Conference on Human Language Technology Research (HLT '02), pages 138-145, San Diego, California.
Chi-kiu Lo and Dekai Wu. 2010. Evaluating machine translation utility via semantic role labels. In The Seventh International Conference on Language Resources and Evaluation (LREC 2010).
Chi-kiu Lo and Dekai Wu. 2011. MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles. In 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL HLT 2011).
Chi-kiu Lo and Dekai Wu. 2011. A radically simple, effective annotation and alignment methodology for semantic frame based SMT and MT evaluation. In International Workshop on Using Linguistic Information for Hybrid Machine Translation (LIHMT-2011).
Chi-kiu Lo and Dekai Wu. 2011. SMT vs. AI redux: How semantic frames evaluate MT more accurately. In Twenty-second International Joint Conference on Artificial Intelligence (IJCAI-11).
Chi-kiu Lo and Dekai Wu. 2011. Structured vs. flat semantic role representations for machine translation evaluation. In Fifth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-5).
Chi-kiu Lo and Dekai Wu. 2012. Unsupervised vs. supervised weight estimation for semantic MT evaluation metrics. In Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-6).
Chi-kiu Lo and Dekai Wu. 2013. Can informal genres be better translated by tuning on automatic semantic metrics? In 14th Machine Translation Summit (MT Summit XIV).
Chi-kiu Lo and Dekai Wu. 2013. MEANT at WMT 2013: A tunable, accurate yet inexpensive semantic frame based MT evaluation metric. In 8th Workshop on Statistical Machine Translation (WMT 2013).
Chi-kiu Lo, Anand Karthik Tumuluru, and Dekai Wu. 2012. Fully automatic semantic MT evaluation. In 7th Workshop on Statistical Machine Translation (WMT 2012).
Chi-kiu Lo, Karteek Addanki, Markus Saers, and Dekai Wu. 2013. Improving machine translation by training against an automatic semantic frame based evaluation metric. In 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013).
Chi-kiu Lo, Meriem Beloucif, and Dekai Wu. 2013. Improving machine translation into Chinese by tuning against Chinese MEANT. In International Workshop on Spoken Language Translation (IWSLT 2013).
Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In The Twenty-first National Conference on Artificial Intelligence (AAAI-06), volume 21, page 775.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In 40th Annual Meeting of the Association for Computational Linguistics (ACL-02), pages 311-318, Philadelphia, Pennsylvania, July 2002.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA 2006), pages 223-231, Cambridge, Massachusetts, August 2006.
Anand Karthik Tumuluru, Chi-kiu Lo, and Dekai Wu. 2012. Accuracy and robustness in measuring the lexical similarity of semantic role fillers for automatic semantic MT evaluation. In 26th Pacific Asia Conference on Language, Information, and Computation (PACLIC 26).
Clare R. Voss and Calandra R. Tate. 2006. Task-based evaluation of machine translation (MT) engines: Measuring how well people extract who, when, where-type elements in MT output. In 11th Annual Conference of the European Association for Machine Translation (EAMT 2006), pages 203-212, Oslo, Norway, June 2006. |
235,258,283 | [] | Fine-grained Named Entity Annotation for Finnish
Jouni Luoma jouni.a.luoma@utu.fi
TurkuNLP group Department of Computing
Faculty of Technology
University of Turku
Finland
Li-Hsin Chang lhchan@utu.fi
TurkuNLP group Department of Computing
Faculty of Technology
University of Turku
Finland
Filip Ginter
TurkuNLP group Department of Computing
Faculty of Technology
University of Turku
Finland
Sampo Pyysalo sampo.pyysalo@utu.fi
TurkuNLP group Department of Computing
Faculty of Technology
University of Turku
Finland
Fine-grained Named Entity Annotation for Finnish
We introduce a corpus with fine-grained named entity annotation for Finnish, following the OntoNotes guidelines to create a resource that is cross-lingually compatible with existing resources for other languages. We combine and extend two NER corpora recently introduced for Finnish and revise their custom annotation scheme through a combination of automatic and manual processing steps. The resulting corpus consists of nearly 500,000 tokens annotated for over 50,000 mentions categorized into 18 name and numeric entity types. We evaluate this resource and demonstrate its compatibility with the English OntoNotes annotations by training state-of-the-art mono-, bi-, and multilingual deep learning models, finding both that the corpus allows highly accurate tagging at 93% F-score and that a comparable level of performance can be achieved by a bilingual Finnish-English NER model. 1
Introduction
Named Entity Recognition (NER), the identification and typing of text spans referring to entities such as people and organizations in text, is a key task in natural language processing. State of the art NER approaches apply supervised machine learning methods trained on corpora that have been manually annotated for mentions of entity names of interest. While extensive corpora with fine-grained NER annotation have long been available for high-resource languages such as English, NER for many lesser-resourced languages has been limited by smaller, lower-coverage corpora with comparatively coarse annotation.
A degree of language independence has long been a central goal in NER research. One notable example is the series of CoNLL shared tasks on Language-Independent Named Entity Recognition in 2002 and 2003 (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003). The Spanish, Dutch, English and German datasets introduced in these shared tasks were all annotated for the same types of entity mentions (persons, organizations, locations, and miscellaneous), and the datasets still remain key benchmarks for evaluating NER methods today (e.g. Devlin et al. (2019)). Nevertheless, until recently most NER methods aimed for language independence only in that they supported training on corpora of more than one language, resulting in multiple separate monolingual models.
In recent years, advances in deep learning have made it possible to create multilingual language models that achieve competitive levels of performance when trained and applied on texts representing more than one language (e.g. Kondratyuk and Straka (2019)). One notable model is the multilingual version of the influential BERT model (Devlin et al., 2019), mBERT, trained on more than 100 languages. mBERT performs well in zero-shot cross-lingual transfer experiments, including NER experiments (Wu and Dredze, 2019). Moon et al. (2019) propose an mBERT-based model trained simultaneously on multiple languages. Training and validating on the OntoNotes v5.0 corpus (see Section 2.3) and the CoNLL datasets, they show that multilingual models outperform models trained on a single language and have cross-lingual zero-shot ability. The zero-shot cross-lingual transfer ability of mBERT has also sparked interest in the study of multilingual representations, both for mBERT (Pires et al., 2019; K et al., 2020) and for multilingual encoders in general (Ravishankar et al., 2019; Zhao et al., 2020; Choenni and Shutova, 2020). In this paper, we aim to assess and realize the potential benefits of cross- and multilingual NER for Finnish, a lesser-resourced language that currently lacks NER resources annotated compatibly with larger similar resources in other languages. Recently, two NER corpora were introduced for Finnish: FiNER (Ruokolainen et al., 2019), focusing on the technology news domain, and the Turku NER corpus, covering 10 different text domains. The two corpora are both annotated in the same custom variant of the CoNLL'02 and '03 scheme, making them mutually compatible, but incompatible with resources existing for other languages. This incompatibility has so far made it impossible to directly evaluate the performance of cross- and multilingually trained NER methods on manually annotated Finnish resources. To solve this incompatibility issue, we combine and extend these two corpora and adjust the annotations to follow the OntoNotes scheme. The resulting corpus has close to 500,000 tokens annotated for over 50,000 mentions assigned to the 18 OntoNotes name and numeric entity types. We show that our OntoNotes Finnish NER corpus is compatible with the English OntoNotes annotations by training state-of-the-art bi- and multilingual NER models on the combination of these two resources.
Data
In the following, we introduce the corpora used in this study, additional text sources for the new corpus, and the pre-trained models used in our experiments. The properties and key statistics of the corpora are presented in Table 1.

2.1 FiNER corpus

FiNER (Ruokolainen et al., 2019) is a Finnish NER corpus consisting mainly of texts from the Finnish technology news source Digitoday, with an additional test set of Wikipedia documents used to assess the cross-domain performance of methods trained on the FiNER training section.
Turku NER corpus
The Turku NER corpus ) is a Finnish NER corpus initially created on the basis of the Universal Dependencies (Nivre et al., 2016) representation of the manually annotated Turku Dependency Treebank (TDT) (Haverinen et al., 2014;Pyysalo et al., 2015), a multi-domain corpus spanning ten different genres.
The Turku NER annotation follows the types and annotation guidelines of the FiNER corpus. An earlier evaluation demonstrated the compatibility of the two Finnish NER corpora by showing that models trained on the simple concatenation of the two corpora outperformed ones trained on either resource in isolation.
OntoNotes corpus
OntoNotes (Hovy et al., 2006; Weischedel et al., 2013) is a large, multilingual (English, Chinese, and Arabic), multi-genre corpus annotated with several layers covering text structure as well as shallow semantics. In this work, we focus exclusively on the OntoNotes English language NER annotation and refer to this part of the data simply as OntoNotes for brevity. Specifically, we use the NER annotations of the OntoNotes v5.0 release (Weischedel et al., 2013), cast into a CoNLL-like format. 2 Sections of the corpus lacking NER annotation (such as the Old and New Testament texts) are excluded.
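For concreteness, data in this CoNLL-like format is conventionally stored one token per line with the IOB tag in the last column and blank lines between sentences; a minimal reader might look like the sketch below (the exact column layout is an assumption, as it varies between conversions).

```python
# Minimal reader for CoNLL-style NER data: one token per line, blank lines
# between sentences, IOB tag in the last whitespace-separated column.
# The column layout is an assumption; it varies between conversions.

def read_conll(path):
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("-DOCSTART-"):
                if tokens:
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            fields = line.split()
            tokens.append(fields[0])
            tags.append(fields[-1])
    if tokens:
        sentences.append((tokens, tags))
    return sentences

# e.g. read_conll("train.tsv")
# -> [(["John", "lives", ...], ["B-PERSON", "O", ...]), ...]
```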
The OntoNotes NER annotation uses a superset of the ACE entity annotation representation (LDC, 2008), applying the 18 types summarized in Table 2. We note that while OntoNotes PERSON, EVENT and DATE largely correspond one-to-one to types annotated in the Finnish NER corpora, the great majority of the types either require a more complex mapping or need to be annotated without support from existing data to create OntoNotes annotation for Finnish.
Additional texts
During annotation, we noted that the FiNER and Turku NER corpora contained relatively few mentions of laws, which could potentially lead to methods trained on the combined revised corpus performing poorly on the recognition of LAW entity mentions. To address this issue, we augmented the combined texts of the two corpora with a random selection of 60 current acts and decrees of Finnish Acts of Parliament, 3 totaling approximately 24K tokens.
Pre-trained models
We perform NER tagging experiments by finetuning monolingual and multilingual BERT models. Specifically, for monolingual models, we tested English and Finnish (FinBERT) models, and for multilingual models, we tested the mBERT model trained on 104 languages, and a bilingual model trained on only English and Finnish (biBERT). Devlin et al. (2019) trained the original English BERT on the BooksCorpus (Zhu et al., 2015) and English Wikipedia. FinBERT is trained on an internet crawl, news, as well as online forum discussions (Virtanen et al., 2019). The bilingual BERT is trained on English Wikipedia and a reconstructed BooksCorpus, as well as the data used to train FinBERT (Chang et al., 2020). The multilingual BERT is trained on the Wikipedia dump for languages with the largest Wikipedias. The pre-trained models and their key statistics are summarized in Table 3. We note that while a number of variations and improvements to the pre-training of transformer- 2020)), BERT remains by far the most popular choice for training monolingual deep language models and an important benchmark for evaluating methods for tasks such as NER. As the focus of our evaluation is more on assessing the quality and compatibility of corpora through the application of comparable models rather than optimizing absolute performance, we have here opted to use exclusively BERT models. For the same reason, we only consider BERT base models instead of a mix of base and large models.
Annotation
We next summarize the primary steps performed to revise and extend the annotation of the two source corpora to conform with the OntoNotes NER guidelines (Weischedel et al., 2013).

Trivial mappings Of the mentions annotated in the existing Finnish NER corpora, effectively all annotations with the type PER are valid OntoNotes PERSON annotations. Similarly, most EVENT and DATE annotations were valid as-is as OntoNotes annotations of the same names. These annotations were carried over into the initial revised data, changing only the type name when required.
Conditional mappings By contrast to the types allowing trivial mapping from existing to revised annotation, LOC, ORG and PRO required more complex mapping rules. For example, the existing annotations mark both geo-political entities (GPEs) and other locations with the type LOC without distinguishing between the two. To create OntoNotes-compatible annotation, source LOC annotations were mapped to either LOC or GPE annotations on the basis of the annotated text using manually created rules. For example, Suomi/LOC ("Finland") was mapped to Suomi/GPE and Välimeri/LOC ("Mediterranean") to Välimeri/LOC. Similar rules were implemented to distinguish e.g. FAC from ORG and LOC as well as WORK OF ART and LAW from PRO.
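As a minimal illustration of such a rule (not the actual rules used in the revision), a LOC annotation can be remapped to GPE when its mention string appears in a manually compiled list of geo-political names:

```python
# Sketch of a conditional LOC -> GPE remapping based on a manually compiled
# list of geo-political names. The list below is a tiny stand-in for the
# actual hand-built rules used in the corpus revision.
GPE_NAMES = {"suomi", "helsinki", "ruotsi"}  # illustrative entries only

def remap_loc(mention_text, source_type):
    if source_type == "LOC" and mention_text.lower() in GPE_NAMES:
        return "GPE"
    return source_type

print(remap_loc("Suomi", "LOC"))      # GPE
print(remap_loc("Välimeri", "LOC"))   # LOC
```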
Dictionary-based tagging Not all mentions in scope of the OntoNotes annotation guidelines are in scope of the FiNER annotation guidelines applied to mark the previously introduced Finnish NER corpora. In addition to most OntoNotes numeric types (see below), in particular nationalities, religious and political groups (NORP in OntoNotes) and languages (LANGUAGE) were not annotated in the source corpora. To create initial OntoNotes annotation for these semi-closed categories of mentions, we performed dictionary-based tagging using lists compiled from sources such as Wikipedia and manually translated OntoNotes English terms tagged with the relevant types. 4
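A minimal sketch of this kind of dictionary-based pre-annotation is given below: a longest-match scan of the token sequence against type-specific term lists. The dictionary entries and matching details are illustrative assumptions, not the lists actually used.

```python
# Sketch of dictionary-based pre-annotation: longest-match scan of the
# token sequence against type-specific term lists (e.g. NORP, LANGUAGE).
# Dictionary contents are illustrative only.
DICT = {("suomalainen",): "NORP", ("suomen", "kieli"): "LANGUAGE"}
MAX_LEN = max(len(k) for k in DICT)

def dict_tag(tokens):
    tags, i = ["O"] * len(tokens), 0
    while i < len(tokens):
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            key = tuple(t.lower() for t in tokens[i:i + n])
            if key in DICT:
                tags[i] = "B-" + DICT[key]
                for k in range(i + 1, i + n):
                    tags[k] = "I-" + DICT[key]
                i += n
                break
        else:
            i += 1
    return tags

print(dict_tag(["Hän", "puhuu", "Suomen", "kieli"]))
# ['O', 'O', 'B-LANGUAGE', 'I-LANGUAGE']
```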
Numeric types To annotate OntoNotes numeric types (CARDINAL, ORDINAL, etc.) in the Turku NER corpus section of the data, we mapped the manual part-of-speech and feature annotation of the source corpus (TDT) to initial annotations that were then manually revised to identify the more specific types such as PERCENT, QUANTITY and MONEY based on context. For the FiNER texts, annotation for these types followed a similar process with the exception that automatic part-of-speech and feature annotation created by the Turku neural parser (Kanerva et al., 2018) was used as a starting point as no manual syntactic annotation was available for the texts.
Fine-grained tokenization The FiNER annotation guidelines specify that annotated name mentions must start and end on the boundaries of syntactic words. As hyphenated compound words that include names as a part, such as Suomi-fani ("fan of Finland"), are comparatively common in Finnish, the FiNER guidelines have a somewhat complex set of rules for the annotation of such compound words (we refer to Ruokolainen et al. (2019) and the relevant guidelines for details). In the revised corpus, we chose to apply a fine-grained tokenization where punctuation characters (including hyphens) are separate tokens, eliminating most of the issues with names as parts of hyphenated compounds. To map FiNER-style annotation to the fine-grained version, we wrote a custom tool using regular expressions and manually compiled white- and blacklists of suffixes that can and cannot be dropped from name mention spans. 5

5 The implementation is available from https://github.com/spyysalo/finer-postprocessing
6 search.py -cm and -ct options.
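A minimal sketch of the punctuation-splitting tokenization described above follows; the actual tool additionally consults the suffix white- and blacklists, which are omitted here as assumptions outside this illustration.

```python
# Sketch of fine-grained tokenization that splits punctuation (including
# hyphens) into separate tokens, so the name part of a hyphenated compound
# such as "Suomi-fani" falls on its own token boundary.
import re

def fine_tokenize(text):
    # Word characters stay together; every punctuation mark becomes a token.
    return re.findall(r"\w+|[^\w\s]", text)

print(fine_tokenize("Suomi-fani osti iPhonen."))
# ['Suomi', '-', 'fani', 'osti', 'iPhonen', '.']
```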
Semi-automatic and manual revision After the initial automatic revisions, a series of semi-automatic and manual revision rounds were performed using the BRAT annotation tool (Stenetorp et al., 2012). In particular, the consistency of mention annotation and typing was checked using the search functionality of the tool, 6 and all cases where a string was inconsistently marked or typed were revisited and manually corrected when in error. Additionally, the automatically created pre-annotation for the newly added text (Section 2.4) was revised and corrected in a full, manual annotation pass. All manual revisions of the data were performed by a single annotator familiar with the corpora as well as the FiNER and OntoNotes guidelines. While the single-annotator setting regrettably precludes us from reporting inter-annotator agreement, our monolingual and cross-lingual results below suggest that the consistency of the annotation has not decreased from that of the source corpora.
Methods
We next present the applied NER method and detail the experimental setup.
NER method
We use the BERT-based named entity tagger introduced by Luoma and Pyysalo (2020). In brief, the method is based on adding a simple time-distributed dense layer on top of BERT to predict IOB2 named entity tags in a locally greedy manner. The model is both trained and applied with examples consisting of sentences concatenated with their context sentences, resulting in multiple predictions for each token (appearing in both "focus" and context sentences). These predictions are then summarized using majority voting. For brevity, we refer to Luoma and Pyysalo (2020) for further details. 7 Here, we do not use the document-wise wrapping of data as in Luoma and Pyysalo (2020), but in the bilingual experiments the Finnish and English data are separated with a document boundary token (-DOCSTART-) to avoid constructing examples where one input would contain sentences in two languages.
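The majority voting step can be sketched as follows; the token keys and the surrounding prediction loop are assumed scaffolding, not the referenced implementation itself.

```python
# Sketch of the majority-voting step: each token receives one predicted tag
# per input example it appears in (as focus or context); the final tag is
# the most frequent prediction. Token keys and the prediction loop are
# assumed scaffolding.
from collections import Counter, defaultdict

votes = defaultdict(list)  # (sentence_id, token_index) -> predicted tags

def record(sentence_id, token_index, tag):
    votes[(sentence_id, token_index)].append(tag)

# ... run the tagger over all examples, calling record() for every token ...
record(0, 0, "B-PERSON"); record(0, 0, "B-PERSON"); record(0, 0, "O")

final = {key: Counter(tags).most_common(1)[0][0]
         for key, tags in votes.items()}
print(final[(0, 0)])  # B-PERSON
```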
Experimental setup

For the bilingual models, we combine the Finnish and English training data, separating the data for the two languages with a document boundary token. The hyperparameters are selected based on a grid search following the same setup, with the exception that batch size 2 is omitted. The reason for this is that training on the large combined dataset with a small batch size is too time-consuming on the computational resources available. The parameter selection grid is therefore the following:
• Learning rate: 2e-5, 3e-5, 5e-5
• Batch size: 4, 8, 16
• Epochs: 1, 2, 3, 4
The size of the OntoNotes training set is considerably larger than e.g. that of the previously introduced Finnish corpora, and due to resource limitations (especially GPU computation time), we set the BERT maximum sequence length to 128 WordPiece tokens for all of our experiments.
Parameter selection is performed by evaluating on the development subsets of the corpora. The test sets are held out during preliminary experiments and parameter selection, and are only used to evaluate performance in the final experiments. All of the experiments are repeated 5 times, both for hyperparameter selection and the final test results. The reported results are means and standard deviations calculated from these repetitions. The hyperparameters for different final models are selected based on their performance on the target language development set as shown in Table 4.
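As a sketch of the evaluation, entity-level F-scores can be computed per repetition (here with the seqeval library, assuming IOB2 tag sequences) and then summarized as mean and standard deviation; the tag sequences below are illustrative only.

```python
# Sketch: entity-level F-score per repetition with seqeval, then mean and
# standard deviation over repetitions, as reported. Illustrative data only.
import statistics
from seqeval.metrics import f1_score

gold = [["B-PERSON", "I-PERSON", "O", "B-GPE"]]
runs = [
    [["B-PERSON", "I-PERSON", "O", "B-GPE"]],  # repetition 1 predictions
    [["B-PERSON", "O", "O", "B-GPE"]],         # repetition 2 predictions
]
scores = [f1_score(gold, pred) for pred in runs]
print(statistics.mean(scores), statistics.pstdev(scores))
```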
For testing the zero-shot cross-lingual performance on Finnish, we train the mBERT and biBERT models only on the English OntoNotes data and evaluate performance on the Finnish test set. The hyperparameters providing the best results on the English OntoNotes data are used in these experiments, thus reflecting a setting where no annotated Finnish data is available.
Results
We next present summary statistics of the newly introduced corpus and then the results of the machine learning experiments.
Corpus statistics

Table 5 summarizes the statistics of the new annotation. The combined, extended corpus with the revised OntoNotes-like annotation contains in total nearly 500,000 tokens of text annotated for approximately 55,000 mentions of names and numeric types. While the corpus represents a substantial increase in size and number of annotations over either of the two previously released Finnish NER corpora, the name-annotated subset of the English OntoNotes corpus remains four times larger in terms of token count and over three times larger in terms of the number of annotated entities (Table 1), motivating our exploration of training bilingual models with combined Finnish and English data.
Monolingual results
Table 6 summarizes the results of monolingual training and evaluation for the FinBERT model on the newly introduced Finnish NER corpus, with results for the original English BERT model on English OntoNotes for reference. For English OntoNotes, the applied method achieves an F-score of 88.74%, comparable to results for similar implementations reported in the literature: for example, Li et al. (2020) report 89.16% F-score for BERT-Tagger on English OntoNotes 5.0, an approx. 0.4% point difference. While more involved state-of-the-art methods building on BERT have been reported to outperform this result (e.g. 91.11% F-score for the BERT-MRC method of Li et al. (2020)), we are satisfied that the implementation used here is broadly representative of BERT used for NER in a standard sequence tagging setting.

For Finnish, we note that an earlier evaluation of the combination of the FiNER and Turku NER corpora with the comparatively coarse-grained six FiNER corpus NE types reported an F-score of 93.66% on the combined test set. While not perfectly comparable, the training and evaluation texts of that experiment are strict subsets of the Finnish training and evaluation data here, and we find the F-score of 92.99% on the 18 fine-grained OntoNotes-like annotation a very positive sign of its quality and consistency: using the newly introduced dataset, we can train models to recognize mentions of three times as many name and numeric entity types as previously, with only a modest decrease in overall tagging performance.
Bilingual results

Table 7 summarizes the results of the bi- and multilingual models trained on the combined Finnish and English data and evaluated on the two monolingual corpora. We first observe that the bilingual biBERT model achieves better results than the multilingual mBERT model, providing further support for the findings of Chang et al. (2020) indicating that multilingual training processes produce notably better models when only two languages are targeted. In the remainder, we focus on the results for the biBERT model. For Finnish, we find that the bilingual model fine-tuned on the combined bilingual training data falls just 0.2% points in F-score below the monolingual FinBERT model fine-tuned with monolingual data. For English, we unexpectedly find that the bilingually trained model outperforms the monolingual English model with an approx. 0.5% point absolute difference. These results indicate that the annotations of the English OntoNotes NER dataset and the newly introduced Finnish NER dataset are highly compatible, allowing bi- or multilingual methods trained on a bilingual dataset created by their simple concatenation to perform competitively with or even potentially to outperform monolingual NER models.

The detailed results presented in Table 8 further show that the performance of the monolingual and bilingual models tracks very closely, with the monolingual Finnish model slightly outperforming the bilingual one for most mention types. An exception to this pattern is seen for NORP, FAC, LANGUAGE, DATE and PERCENT, where the bilingual model shows better performance. These results further suggest that there are no notable annotation inconsistencies for individual types, and that multilingual training may still hold benefits for some entity types.
Zero-shot cross-lingual results
Finally, Table 9 provides the results of zero-shot cross-lingual transfer from English to Finnish, where a bi-or multilingual model is trained exclusively on English data but then evaluated on Finnish data. We again find that the biBERT model considerably outperforms the mBERT model. While the model performance at 77% falls far behind the over 90% F-scores achieved by the monolingual and bilingual models, it is nevertheless interesting to note that this level of performance can be achieved without any target language data. This cross-lingual transfer approach could potentially be applied e.g. to bootstrap initial annotations for manual revision when creating named entity annotation for languages lacking a corpus annotated with OntoNotes types.
Discussion and conclusions
We have introduced a new corpus for Finnish NER created by combining and extending two previously released corpora, FiNER and the Turku NER corpus, and by mapping their custom annotations into the fine-grained OntoNotes representation through a combination of automatic and manual processing steps. The resulting corpus consists of over 50,000 annotations for nearly 500,000 tokens of text representing a broad selection of genres, topics and text types, and is not only the largest resource for Finnish NER created to date, but also identifies three times as many distinct name and numeric entity mention types as the previously introduced Finnish NER corpora.
To assess the internal consistency of the newly created annotation and to provide a baseline for further experiments on the data, we evaluated the performance of a BERT-based NER system initialized with the FinBERT model and fine-tuned on the new Finnish data. These experiments indicated that the annotations of the new corpus can be automatically recognized at nearly 93% F-score, effectively matching previous results with much coarser-grained entity types. To further assess the compatibility of the newly introduced annotation with the original English OntoNotes corpus v5.0 name annotation, we fine-tuned bi- and multilingual BERT models on the combination of the Finnish and English corpora, finding that bilingual models can effectively match or potentially even outperform monolingual ones, thus confirming the compatibility of the newly created annotation with existing OntoNotes resources.
All resources introduced in the paper are available under open licenses from https:// github.com/TurkuNLP/turku-one
Figure 1: Example annotations.

Figure 1 shows visualizations of the annotation for selected sentences.
Corpus      Language  Tokens  Entities  Domain(s)
OntoNotes   English   2.0M    162K      News, magazines, conversation
FiNER       Finnish   290K    29K       Technology news, Wikipedia
Turku NER   Finnish   200K    11K       News, magazines, blogs, Wikipedia, speech, fiction, etc.

Table 1: Corpus features and statistics. OntoNotes token count only includes sections of the corpus annotated for name mentions. Entity counts also include non-name types such as DATE.
Table 2: OntoNotes name annotation types. Adapted from Weischedel et al. (2013).

Model            Language(s)          Vocab. size  Reference
BERT (original)  English              30K          Devlin et al. (2019)
FinBERT          Finnish              50K          Virtanen et al. (2019)
mBERT            104 languages        120K         Devlin et al. (2019)
biBERT           Finnish and English  80K          Chang et al. (2020)

Table 3: Pre-trained models. Cased base variants of all models are used.
Table 4: Combinations of models, training and evaluation data included in the experiments.
Table 5: Corpus annotation statistics.
Table 6: Monolingual NER evaluation results (percentages; standard deviation in parentheses).
Table 7: Bilingual NER model evaluation results (percentages; standard deviation in parentheses).

                 Monolingual               Bilingual
Type             Prec.   Rec.    F-score   Prec.    Rec.     F-score
PERSON           94.12   97.15   95.60     94.92    96.20    95.55
NORP             94.63   96.15   95.36     97.47    96.15    96.80
FAC              67.83   40.00   50.23     70.10    47.33    56.40
ORG              94.14   94.06   94.10     93.97    93.61    93.79
GPE              95.33   97.36   96.33     94.87    97.06    95.95
LOC              87.12   86.50   86.78     86.11    83.67    84.82
PRODUCT          87.53   88.08   87.81     87.11    88.34    87.72
EVENT            72.17   79.46   75.59     69.46    77.84    73.36
WORK OF ART      75.00   77.33   75.97     67.52    79.33    72.84
LAW              90.83   96.74   93.69     91.67    94.65    93.13
LANGUAGE         93.05   95.00   94.01     94.95    93.57    94.25
DATE             94.70   94.78   94.74     94.98    95.32    95.15
TIME             81.70   84.32   82.98     78.01    81.35    79.64
PERCENT          95.60   98.61   97.08     100.00   100.00   100.00
MONEY            95.36   94.79   95.08     95.80    91.60    93.65
QUANTITY         87.18   90.90   89.00     86.61    90.07    88.30
ORDINAL          90.33   91.37   90.84     89.56    90.21    89.88
CARDINAL         94.01   95.36   94.68     93.54    95.64    94.58

Table 8: Result details for Finnish data in the monolingual setting using FinBERT and the bilingual setting using biBERT (percentages).
Table 9: Zero-shot cross-lingual evaluation results from English to Finnish (percentages; standard deviation in parentheses).
The corpus is available under an open license from https://github.com/TurkuNLP/turku-one
https://github.com/ontonotes/conll-formatted-ontonotes-5.0
Available from https://finlex.fi/fi/laki/ajantasa/
The accuracy of this initial dictionary-based tagging step was not evaluated separately.
The implementation is available from https://github.com/jouniluoma/bert-ner-cmv
Acknowledgments

This work was funded in part by the Academy of Finland. We wish to thank CSC - IT Center for Science, Finland, for computational resources.
References

Li-Hsin Chang, Sampo Pyysalo, Jenna Kanerva, and Filip Ginter. 2020. Towards fully bilingual deep language modeling. arXiv preprint arXiv:2010.11639.
Rochelle Choenni and Ekaterina Shutova. 2020. What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties. arXiv preprint arXiv:2009.12862.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.
Katri Haverinen, Jenna Nyblom, Timo Viljanen, Veronika Laippala, Samuel Kohonen, Anna Missilä, Stina Ojala, Tapio Salakoski, and Filip Ginter. 2014. Building the essential resources for Finnish: the Turku dependency treebank. Language Resources and Evaluation, 48(3):493-531.
Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57-60.
Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual BERT: An empirical study. In International Conference on Learning Representations.
Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku neural parser pipeline: An end-to-end system for the CoNLL 2018 shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 133-142.
Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795.
LDC. 2008. ACE English annotation guidelines for entities. Technical report, Linguistic Data Consortium.
Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859.
Jouni Luoma, Miika Oinonen, Maria Pyykönen, Veronika Laippala, and Sampo Pyysalo. 2020. A broad-coverage corpus for Finnish named entity recognition. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4615-4624.
Jouni Luoma and Sampo Pyysalo. 2020. Exploring cross-sentence contexts for named entity recognition with BERT. In Proceedings of the 28th International Conference on Computational Linguistics, pages 904-914.
Taesun Moon, Parul Awasthy, Jian Ni, and Radu Florian. 2019. Towards lingua franca named entity recognition with BERT. arXiv preprint arXiv:1912.01389.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143-152.
Sampo Pyysalo, Jenna Kanerva, Anna Missilä, Veronika Laippala, and Filip Ginter. 2015. Universal Dependencies for Finnish. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NoDaLiDa 2015), pages 163-172.
Vinit Ravishankar, Memduh Gökırmak, Lilja Øvrelid, and Erik Velldal. 2019. Multilingual probing of deep pre-trained contextual encoders. In Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, pages 37-47, Turku, Finland.
Teemu Ruokolainen, Pekka Kauppinen, Miikka Silfverberg, and Krister Lindén. 2019. A Finnish news corpus for named entity recognition. Language Resources and Evaluation, pages 1-26.
Pontus Stenetorp, Sampo Pyysalo, Goran Topić, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102-107.
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv preprint arXiv:1912.07076.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. OntoNotes release 5.0. Linguistic Data Consortium, Philadelphia, PA, 23.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
Wei Zhao, Steffen Eger, Johannes Bjerva, and Isabelle Augenstein. 2020. Inducing language-agnostic multilingual representations. arXiv preprint arXiv:2008.09112.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.
||
8,813,310 | IMPLEMENTING THE GENERALIZED WORD ORDER GRAMMARS OF CHOMSKY AND DIDERICHSEN | [] | IMPLEMENTING THE GENERALIZED WORD ORDER GRAMMARS OF CHOMSKY AND DIDERICHSEN
Bengt Sigurd linglund@gemini.ldc.lu.se
Dept of Linguistics
Lund University
Helgonabacken 12, S-223 62 Lund, SWEDEN
IMPLEMENTING THE GENERALIZED WORD ORDER GRAMMARS OF CHOMSKY AND DIDERICHSEN
Many of the insights of Transformational Grammar (TG) concern the movability of constituents, and in recent versions (Government & Binding, GB; cf. Chomsky, 1982, Sells, 1985) the grammar, like Diderichsen's, keeps track of both the new and the original sites of moved constituents. Diderichsen thus observes that an adverb could introduce or be the fundament of a sentence, in which case the subject np "remains" in its "normal" position after the finite verb (Swedish example: Idag kom pojken; literally: Today came the boy). If the subject np introduces the sentence (Pojken kom idag), its "original" place after the finite verb must be empty. (For comparisons between Transformational Grammar and Diderichsen's grammar, cf. Teleman, 1972, Platzack, 1986.) Further conditions may concern the topic (focus), mode, clause type, lacking constituent, etc. of the sentence, and this information may also be gathered as arguments in slots to the left of the arrow.
The system to be presented in this paper also incorporates many of the ideas of Referent Grammar (RG; Sigurd, 1987), a functional generalized phrase structure grammar used in the automatic translation project Swetra (Sigurd & Gawronska-Werngren, 1988). I hereby acknowledge the help of Mats Eeg-Olofsson, Barbara Gawronska-Werngren and Per Warter in the Swetra group at Lund.
The generalized word order schemas of Chomsky and Diderichsen
As can be seen from articles and textbooks (e.g. Sells, 1985), a typical Chomskyan analysis can be illustrated with the Swedish sentence "Vem slog pojken" ("Whom hit the boy"). [The original tree diagram, with coindexed empty elements such as NP(e:i), is not recoverable from this extraction; its structure is described below.]
This simplified representation means that the object "vem" is found in a front slot called "XP", the finite verb is found in the slot called "C(omplement)" and the subject "pojken" is found in the "specifier" slot under IP. The "spec" under "VP" is empty and so are the verb slot under V' and the NP slot under V'.
The transformational (process) description would say that "vem" ("whom") has been moved from its final position, leaving a trace indexed with the same number (e:i) for reference. Similarly the transformational description would say that the finite verb "slog" and "pojken" have left coindexed traces (e:j, e:k) behind. The Swedish sentence "Vem slog pojken" is ambiguous and could also be interpreted as "Who hit the boy". In that case the question pronoun "vem" (now equivalent to
English "who") should be coindexed with a trace in the position where "pojken" was found in the first case and "pojken" should be found in the "object position" under V'.
Diderichsen uses a simpler model - he did his work long before Chomsky, when formal grammar was not as highly developed. He would have stated the facts in the following way: [Diderichsen's positional schema is not recoverable from this extraction.]
The second alternative (after ;) shows the case of "verba dicendi" (vd) as in "Pojken lovade flickan att gå" (literally: The boy promised the girl to go). In that case the first noun phrase after the finite verb (Np2) is taken as a dative object and the infinitive clause represented by "Sunt" as the direct object.
Discussion and conclusion
In GB, the sentence representations (trees) include both the site of the moved constituent and the site from where it has been moved; the original site of the moved constituent is marked as a trace (t) or empty (e, []). In the sentence schema (Field or Position Grammar) developed by the Danish linguist Paul Diderichsen (1946), there are also positions both for the new and the old site of moved constituents.
Such conditions may be called functional role conditions (f-conditions) as they build a functional structure (f-representation). This structure may be built in a certain slot (as an additional argument) to the left of the arrow.
For the first interpretation of the sentence the "object slot" S(ubstantive = nominal) is empty; for the second interpretation the subject slot s(ubstantive) is empty - besides the empty slots for sentence adverbs (a), non-finite verbs (V) and other adverbs (A), also marked by the minus sign (-). Diderichsen calls the first three slots "the nexus field" and the last three "the content field" (indholdsfeltet). This division suits sentences containing an auxiliary with infinitives or participles, but for other sentences the division between a nexus field and a content field is unfortunate. The objects (in S) get separated from the finite verb (v) in simple transitive sentences. In the model to be presented below, infinitives and participles are treated as subordinate (minor) clauses with their own objects and adverbs.

GWOG rules - a simple illustration

The following (simplified) Prolog (Definite Clause Grammar) rules illustrate how examples like those mentioned in the introduction can be handled. (The basic "sent" rule itself is not recoverable from this extraction; only its trailing comment /* Idag kom pojken */ survives.) This basic rule is a rewriting rule. It states that we get the information in the argument slots after "sent" if we find the (phrase or word) categories to the right of the arrow in the order they are given. Further phrase and word (lexical) rules defining an adverb (adv), an np, and an intransitive verb (vi), e.g. as described in Sigurd (1987), are needed. The lexical rules needed in order to generate our examples can have the following simplified form:

np(np(pojken)) --> [pojken].
np([]) --> [].
vi(kom) --> [kom].
adv(adv(idag)) --> [idag].
adv([]) --> [].

The categories np and adv may be empty ([]). The verb is obligatory. Diderichsen's "fundament" ("fund") is an initial position unspecified as a syntactic category. Both an np and an adverb may occur as fundament in our simple example, so the following two fundament rules are therefore needed:

fund(F) --> np(F).  /* an np is fundament */
fund(F) --> adv(F). /* an adv is fundament */

As can be seen, the schema would be overgenerating if no co-occurrence restrictions were introduced. Such restrictions or conditions are written within curly brackets ({ }) in Definite Clause Grammar, and they state which conditions are to hold, e.g. whether the second np (Np2) can be found after the intransitive finite verb (this is our way of stating that an np has been fronted). In addition to the co-occurrence restrictions, the sample rules illustrate how information about functional roles and topic is stated. In the first case the fundament (Fund) is assigned the functional role of subject. The value of the fundament is also assigned to the Topic variable (T). In the second alternative, given after the semicolon (;), an adverb is the fundament: adv(_,[Fund],[]). Then there must be an Np2 (Np2 cannot be empty: Np2 \= []). In that case the subject is assigned the value (Np2) and the adverb (Fund) is the topic of the sentence. The value of the adverb (Fund) is also assigned to the adverbial (Advl) of the functional representation. In both cases the Pred is assigned the value (V) of the verb, and in both cases the mode of the sentence is declarative, which is why M(ode) is set at d(eclarative). The two examples would both receive the following functional representation:

s(subj(pojken),pred(kom),advl(idag))

This functional representation agrees with the standard format of Referent Grammar used in machine translation. The order in an RG functional representation is fixed: subject, predicate, dative obj, direct object, sentence adverbials, other adverbials.
As can be seen there are slots for Mode, Topic and the Functional representation with "sent". The examples show how the slots may be filled differently in the output of the parsing of a sentence: "Idag kom inte pojken" (literally: Today came not the boy), "Gav pojken inte flickan hunden idag?" (literally: Gave the boy not the girl the dog today?), "Pojken lovade flickan att gå" (literally: The boy promised the girl to go). "Sunt" is the category containing subordinate clauses and minor (infinitive or participial) clauses. Compared to Diderichsen's model there is a longer sequence of categories, and nonfinite verbs are treated as subordinate clauses. Chomsky and his followers try to define functional roles configurationally, but our approach is rather a functional one; only a fragment of the corresponding rule survives in this extraction:

..., subj(Fund), dobj(Np2), obj(Sunt), M = d}. /* Pojken lovade flickan att gå */

The first condition states that if there is nothing (Fund = []) before a doubly transitive finite verb (vtt), the mode must be "q(uestion)" and the noun phrases are assigned the roles: subject, dative object (dobj) and direct object (obj) in that order. This covers our example "Gav pojken (inte) flickan hunden idag?" (literally: Gave the boy (not) the girl the dog today?).
Underlying both Chomskyan GB grammar and Diderichsen's Field Grammar is a grammatical system which consists of a general word or constituent order schema supplemented with co-occurrence restrictions. This type of system may be called Generalized Word Order Grammar (GWOG), and this paper deals with ways of implementing such a system on the computer using Definite Clause Grammar (DCG; Clocksin & Mellish, 1981), a formalism available in most Prolog versions. Definite Clause Grammar is a convenient rewriting system with an arrow (-->) familiar to generative linguists. It allows one to state the maximum sequence of constituents (the order schema) to the right of the arrow. A setup of constraining conditions can then be used to prohibit overgeneration. Such restrictions are stated within curly brackets in the Definite Clause Grammar formalism. Constraining conditions may require that certain slots be filled or empty, that a certain variable have a certain value, that certain constituents cannot occur at the same time (co-occurrence restrictions), etc. In addition one may have further conditions which state that a certain constituent is to have a certain functional role, e.g. be the subject or the object of the sentence.
It is clear that there is a trade-off between the extension (generality) of the order schema and the co-occurrence restrictions. A very general schema requires many constraining restrictions; several simpler schemas require fewer restrictions, but the overall system grows bigger. Chomsky and his followers seem to prefer to use one schema to cover all types of clauses in order to catch as many generalizations as possible. The node name "comp(lementizer)" clearly stems from subordinate clauses, but it has been generalized to all sentences in GB. Diderichsen used one general schema for all types of main sentences, but a separate schema for subordinate clauses. For a general discussion of the potential of positional systems in syntax, morphology and phonology see Brodda & Karlgren, 1964. Some of our restrictions and constraints on the value of certain variables and co-occurrence of constituents, etc. can be related to the constraining principles and filters used in GB. Swedish subordinate clauses differ from main clauses by having the sentence adverbs before the finite verb, and generally subordinate clauses are characterized by initial complementizers, such as subjunctions, infinitive markers or relative pronouns. In the current implementation subordinate clauses are treated by separate rules. In Swedish, almost all information about clause type, topic, and mode is to be found in the positions before the finite verb. It is clear that the GWOG model suits the Nordic and Germanic languages well, with their finite verb second and fairly fixed word order, but not languages with fairly free word order (e.g. Slavic languages) where the schema must allow for almost any combination of the words. The program illustrated works nicely for analysis, but when used for synthesis (generation) further conditions are needed and the components have to be rearranged somewhat. The program may be considered as an alternative to Pereira's Extraposition grammar (1981).
References

B. Brodda & H. Karlgren, 1964. Relative positions of elements in linguistic strings. SMIL 3, 49-101.
N. Chomsky, 1982. Some concepts and consequences of the theory of government and binding. Cambridge, Mass: MIT Press.
W. Clocksin & C. Mellish, 1981. Programming in Prolog. Berlin: Springer.
P. Diderichsen, 1946 (3rd ed). Nudansk grammatik. København: Gyldendal.
F. Pereira, 1981. Extraposition grammar. American Journal of Computational Linguistics 7(4), October-December 1981.
Chr. Platzack, 1986. Diderichsens positionsschema och generativ transformationsgrammatik. In: Heltoft & Andersen (eds), Sætningsskemaet og dets stilling - 50 år efter. Nydanske studier 16-17, Roskilde: Akademisk forlag.
P. Sells, 1985. Lectures on contemporary syntactic theories. Stanford: CSLI.
B. Sigurd, 1987. Referent Grammar. A generalized phrase structure grammar with built-in referents. Studia Linguistica 41:2, 115-135.
B. Sigurd & B. Gawronska-Werngren, 1988. The potential of SWETRA - a multilanguage MT-system. Computers and Translation 3, 238-250.
U. Teleman, 1972. Om Paul Diderichsens syntaktiska modell. In: Tre uppsatser om grammatik. Lund: Studentlitteratur.
|
256,461,137 | How (Un)Faithful is Attention? | Although attention weights have been commonly used as a means to provide explanations for deep learning models, the approach has been widely criticized due to its lack of faithfulness. In this work, we present a simple approach to compute the newly proposed metric AtteFa, which can quantitatively represent the degree of faithfulness of the attention weights. Using this metric, we further validate the effect of the frequency of informative input elements and the use of contextual vs. noncontextual encoders on the faithfulness of the attention mechanism. Finally, we apply the approach on several real-life binary classification datasets to measure the faithfulness of attention weights in real-life settings. | [
207556454,
982761,
6628106,
5590763,
18993998,
51979567,
1998416,
11212020,
67855860,
9549525,
15280949,
199552244,
182953113,
5959482,
2103669
] | How (Un)Faithful is Attention?
December 8, 2022
Hessam Amini hessam.amini@mail.concordia.ca
Department of Computer Science and Software Engineering
Computational Linguistics at Concordia (CLaC) Laboratory
Concordia University
MontrealCanada
Leila Kosseim leila.kosseim@concordia.ca
Department of Computer Science and Software Engineering
Computational Linguistics at Concordia (CLaC) Laboratory
Concordia University
MontrealCanada
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, December 8, 2022
Although attention weights have been commonly used as a means to provide explanations for deep learning models, the approach has been widely criticized due to its lack of faithfulness. In this work, we present a simple approach to compute the newly proposed metric AtteFa, which can quantitatively represent the degree of faithfulness of the attention weights. Using this metric, we further validate the effect of the frequency of informative input elements and the use of contextual vs. noncontextual encoders on the faithfulness of the attention mechanism. Finally, we apply the approach on several real-life binary classification datasets to measure the faithfulness of attention weights in real-life settings.
Introduction
Attention mechanism (Bahdanau et al., 2015) has become an indispensable part of many state-of-theart NLP models, and its application is becoming more and more prevalent in non-NLP use cases. In simple words and from a functionality perspective, attention can be described as a module which generates outputs from the representations of input elements by performing the following two steps:
1. Automatically compute weights corresponding to each input element
2. Use the computed weights to run a weighted average over the input representations

Due to attention's explicit mechanism to assign weights to input elements, attention weights have been frequently used as explanations for model predictions. A common approach has been to provide attention heat maps showing which input elements the attention component has attended to (e.g. Wang et al., 2016; Lee et al., 2017; Lin et al., 2017; Ghaeini et al., 2018).
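As a minimal illustration of these two steps, consider the following NumPy sketch (ours, not tied to any particular published implementation), which scores each input vector with an arbitrary scoring function and returns the weighted average:

```python
import numpy as np

def simple_attention(inputs, scorer):
    """inputs: array of shape (seq_len, dim); scorer: maps a (dim,) vector to a scalar."""
    # Step 1: compute a weight for each input element (softmax over scores).
    scores = np.array([scorer(x) for x in inputs])
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()
    # Step 2: weighted average of the input representations.
    return weights @ inputs  # shape: (dim,)
```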
However, the use of attention weights as explanations has been widely challenged, with regards to the observation that they are not faithful, meaning that different attention weights can result in similar model predictions (Jain and Wallace, 2019;Serrano and Smith, 2019;Wiegreffe and Pinter, 2019). Therefore, the explanations provided by the attention weights are neither unique nor closely related.
In this work, we extend the work of Wiegreffe and Pinter (2019) to a one-shot adversarial setup that can be used to compute a quantitative metric for the faithfulness of attention weights. We call the metric AtteFa which simply stands for Attention Faithfulness. We consider the adversarial training setup one-shot in the sense that it can provide us with the AtteFa metric by running the adversarial training only once.
To perform a sanity check on AtteFa, we run experiments in a controlled setting using synthetic datasets and two types of encoders (a noncontextual MLP and a contextual LSTM) that could help us validate if the values of this metric reflect what we expect it to. We later compute this metric on some real-life binary text classification datasets to validate how faithful the attention weights are in those settings.
Related Work
Since the rise of deep learning models, researchers have focused on devising techniques that could provide an explanation for the functioning of these so-called "black-box" models. Among different classes of explainability techniques, the following can be mentioned:
Gradient-based methods attribute model decisions to input features using gradient signals (Sundararajan et al., 2017;Selvaraju et al., 2017;Aubakirova and Bansal, 2016;Karlekar et al., 2018). Perturbation-based methods try to provide an explanation for the model behavior by evaluating its reactions to perturbations in input features (Ribeiro et al., 2016;Zintgraf et al., 2017).
Attention-based methods act as an intuitive way of interpreting the model's decision. They use the probability distribution or weights provided by an attention mechanism as a feature importance measure to find the features that the model is attending to (Luong et al., 2015;Xie et al., 2017;Mullenbach et al., 2018).
Despite the popularity of the attention-based explainability approaches, the reliability of these methods has been called into question, with a special focus on the faithfulness of the explanations provided by the attention mechanism. Jain and Wallace (2019) perform different experiments to evaluate the meaningfulness of explanations provided by attention weights. Their results show that attention weights are not correlated with gradient-based feature importance scores. Furthermore, they show that it is often possible to have different attention probability distributions that result in a similar output, arguing that a specific distribution cannot be treated as the definitive cause behind a model decision. Serrano and Smith (2019) investigate the ability of attention weights to act as importance measures through a different lens. They state that it is not sufficient for the weights to make sense to humans. The weights should also provide a faithful explanation for the model output in order to be considered reliable. Through performing multi-weight tests, they show that although there is a certain level of correlation between attention weights and the importance of features in the final prediction of the model, these weights in many cases cannot successfully identify the features that heavily impact a model's decision. Wiegreffe and Pinter (2019) propose additional tests for evaluating the ability of the attention mechanism to provide explainability. They challenged the findings reported by Jain and Wallace (2019) as they treated the attention as a stand-alone component within a network that is independent from the rest of the components. Through an end-to-end adversarial setup to train models to similar outputs while coming up with different attention distributions in binary classification tasks, they show that the explanations provided by attention are not as unfaithful as Jain and Wallace (2019) found them to be.
In this paper, we extend the adversarial setup by Wiegreffe and Pinter (2019) so that it can be used in a one-shot pass, i.e. training the adversarial models only once. This approach results in a metric, which we call AtteFa, that can provide us with a quantitative insight on how faithful the explanations by the attention component are, given a specific model and a specific dataset. To the best of our knowledge, this is the first work that provides such a quantitative measure to evaluate the faithfulness of attention.
Method
Base Model Training
First, we train a base model on the data. The base model is comprised of an embedding layer, followed by an encoder (LSTM or MLP), which is in turn followed by an attention component, and finally a classification head. To train the base model, cross-entropy loss is used, and training is done for 8 epochs. The final base model is the trained model at the end of the epoch where the ROC-AUC score on the test dataset is minimum.
Adversarial Model Training
With the base model at hand, we train an adversarial model with the same architecture as the base model, but with the following two characteristics:

1. Having predictions as similar as possible to the base model, and
2. Having attention weight distributions as different as possible from the base model

In order to measure the difference between the two models' predictions, namely ŷ_a and ŷ_b, we use Total Variation Distance (TVD), which is computed using Equation 1:
\[ \mathrm{TVD}(\hat{y}_a, \hat{y}_b) = \frac{1}{2} \sum_{j=1}^{|\mathcal{Y}|} \left| \hat{y}^{j}_a - \hat{y}^{j}_b \right| \tag{1} \]
where |Y| represents the number of output heads (which is equal to 1 in our binary classification setting).
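A direct NumPy rendering of Equation 1 could look as follows (a sketch; the function and variable names are ours):

```python
import numpy as np

def tvd(y_a, y_b):
    """Total Variation Distance (Equation 1) between two prediction vectors.
    With a single output head, y_a and y_b are arrays of length 1."""
    return 0.5 * float(np.abs(np.asarray(y_a) - np.asarray(y_b)).sum())
```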
To compute the difference between attention distributions α_a and α_b, Jensen-Shannon Divergence (JSD) is used, which is computed using Equation 2:
\[ \mathrm{JSD}(\alpha_a, \alpha_b) = \frac{1}{2}\,\mathrm{KL}(\alpha_a \parallel \bar{\alpha}) + \frac{1}{2}\,\mathrm{KL}(\alpha_b \parallel \bar{\alpha}) \tag{2} \]

where \( \bar{\alpha} = (\alpha_a + \alpha_b)/2 \) and the Kullback-Leibler (KL) divergence is computed using Equation 3:
and the Kullback-Leibler (KL) divergence is computed using Equation 3:
KL(α a ||α b ) = |α| k=1 α k a × log(α k a + ϵ) − log(α k b + ϵ)(3)
where |α| corresponds to the size of the attention weight vector. The inclusion of ϵ in the KL equation is to prevent the logs from becoming infinite in cases where the values of α become equal to zero due to mathematical underflow. In our experiments, we set the value of ϵ equal to 1e-10.
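Equations 2 and 3, including the ϵ term, translate into the following sketch (ours):

```python
import numpy as np

EPS = 1e-10  # as in the paper, keeps the logs finite when weights underflow to zero

def kl(alpha_a, alpha_b):
    """Kullback-Leibler divergence over attention weight vectors (Equation 3)."""
    return float(np.sum(alpha_a * (np.log(alpha_a + EPS) - np.log(alpha_b + EPS))))

def jsd(alpha_a, alpha_b):
    """Jensen-Shannon Divergence between two attention distributions (Equation 2)."""
    mean = 0.5 * (alpha_a + alpha_b)
    return 0.5 * kl(alpha_a, mean) + 0.5 * kl(alpha_b, mean)
```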
Having the TVD of the predictions and the JSD of the attention weight distributions, we design the loss function so that it tries to minimize TVD and maximize JSD. The final loss formula is given in Equation 4:
\[ \mathcal{L}(M_a, M_b)^{(i)} = \mathrm{sTVD}\left(\hat{y}^{(i)}_a, \hat{y}^{(i)}_b\right) - \mathrm{sJSD}\left(\alpha^{(i)}_a, \alpha^{(i)}_b\right) \tag{4} \]
In Equation 4, we use sTVD and sJSD to denote the scaled values of TVD and JSD, respectively. We apply the scaling in order to make sure that the value ranges for the TVD and JSD components of the loss are equal, and therefore the final value of the loss is affected equally by the two components. Knowing that the value of TVD is always between 0 and 0.5, sTVD is computed using Equation 5:
\[ \mathrm{sTVD}(\hat{y}_a, \hat{y}_b) = \mathrm{TVD}(\hat{y}_a, \hat{y}_b) / 0.5 \tag{5} \]
To compute sJSD Equation 6 is used:
\[ \mathrm{sJSD}(\alpha_a, \alpha_b) = \mathrm{JSD}(\alpha_a, \alpha_b) / \mathrm{JSD}_{\max} \tag{6} \]
where JSD_max is the calculated upper bound for JSD when Equations 2 and 3 are used. JSD_max is approximately equal to 0.6931, and is reached when α_a and α_b in Equation 2 are two one-hot vectors with the element 1 located in different indices.
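Putting Equations 4 to 6 together, the per-sample adversarial loss can be sketched as below (ours; it reuses the tvd and jsd helpers from the earlier sketches, and JSD_MAX follows the paper's stated value of approximately 0.6931):

```python
JSD_MAX = 0.6931  # approximate upper bound of JSD under Equations 2 and 3

def s_tvd(y_a, y_b):
    # Equation 5: TVD is bounded by 0.5, so divide by 0.5 to scale to [0, 1].
    return tvd(y_a, y_b) / 0.5

def s_jsd(alpha_a, alpha_b):
    # Equation 6: divide by the JSD upper bound to scale to [0, 1].
    return jsd(alpha_a, alpha_b) / JSD_MAX

def adversarial_loss(y_a, y_b, alpha_a, alpha_b):
    # Equation 4: keep predictions close (low sTVD), push attention apart (high sJSD).
    return s_tvd(y_a, y_b) - s_jsd(alpha_a, alpha_b)
```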
The value of the loss is computed per sample. In order to compute the backpropagated loss value for each batch, we compute the average over the per-sample losses in the batch.
The training process is continued until the loss value on the test data does not improve for 10 consecutive epochs, or a maximum number of 80 epochs is reached. To calculate the total loss on the test data, instead of computing the per-sample losses and averaging them over the dataset, for simplicity and to leverage the metric implementations by Wiegreffe and Pinter (2019), we first average over the per-sample TVD and JSD in this dataset, and then compute the total loss using these averages. As the final adversarial model, we pick the one from the training epoch with the lowest value of loss on test data.
The key difference between our adversarial training setup and the one from Wiegreffe and Pinter (2019) is in the way the adversarial loss is computed. In Wiegreffe and Pinter (2019), KL divergence is used instead of JSD to compute the distribution divergence between the base model's attentions and the adversarial one. Since the value of KL is unbounded, it is mandatory to use an additional hyperparameter λ to avoid the final value of the loss getting dragged fully towards the attention divergence. Knowing that JSD has a specific lower and upper bound, including it in the adversarial loss formula allows us to do away with the additional hyperparameter λ, and to be able to do the adversarial training in one shot, which in turn provides us with an easy and systematic way to compute a metric value for the attention faithfulness.
Computing AtteFa
Having the TVD of the predictions and the JSD of the attention distributions on the test data between the base model M_b and the adversarial version of the model M_a, we compute the faithfulness score AtteFa of the attention module A_M using Equation 7:
\[ \mathrm{AtteFa}(A_M) = \min\left( \frac{\mathrm{sTVD}(\hat{y}_a, \hat{y}_b)}{\mathrm{sJSD}(\alpha_a, \alpha_b)},\ 1 \right) \tag{7} \]
The formula is motivated by the assumption that the degree of attention faithfulness has a direct relation with the value of the TVD of predictions, and an inverse relation with the value of the JSD of the attention weights. In other words, if the attention is faithful, meaning that the attention can find a limited set of informative sources, the adversarial setup will either converge to a point where both the TVD of predictions and the JSD of attention weights are low, or one where both of them are high. We believe that the second scenario is more probable, as the adversarial model has a much higher degree of freedom in order to converge to a different attention distribution from the base model than to achieve a similar output prediction. It will later be shown in Section 6 that, with the current adversarial setup, the adversarial model usually achieves a JSD close to its maximum value.
With this assumption, we believe that in most cases the final value for sTVD(ŷ_a, ŷ_b) should be lower than sJSD(α_a, α_b), but we still do not rule out the opposite scenario, which is why we force the value of AtteFa to be bounded between 0 and 1 through the use of the min function in Equation 7.
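Assembled from the pieces above, AtteFa itself reduces to a few lines (our sketch; the zero-division guard is our addition and is not discussed in the paper):

```python
def attefa(s_tvd_test, s_jsd_test):
    """AtteFa (Equation 7), computed from the test-set averaged sTVD and sJSD
    between the base and adversarial models; bounded in [0, 1]."""
    if s_jsd_test == 0.0:  # guard added by us: an immovable attention is treated as faithful
        return 1.0
    return min(s_tvd_test / s_jsd_test, 1.0)
```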
Datasets
Synthetic Datasets
In principle, we hypothesize that the faithfulness of the attention has a direct relation with the rareness of the informative elements in the input. In the task of text classification, considering the input elements being textual tokens and with an attention that assigns weight to each token, if there are very few informative tokens that could help with the task, our assumption is that the attention should probably focus on those and not the other tokens, and finding alternative attention weight distributions that would lead to a similar outcome would be difficult. Whereas in cases when many input tokens are informative and helpful to the task, the attention can simply shift its focus from one set of tokens to another, therefore the faithfulness will be low.
In order to verify this scenario, we designed a set of synthetic sentiment analysis datasets that include different proportions of informative texts. To that end, we synthetically created samples in a way that a specific portion of their tokens are words with sentiment weights that align with the sentiment label of the sample 1 , while filling the rest of the token slots with the uninformative token "something". This results in a simple-to-classify sentiment dataset that allows us to investigate the effect of the frequency of informative input elements on the faithfulness of attention, without the need to take into account the effectiveness of attention for the task at hand.
Our Mock datasets are comprised of 8000 training and 1000 testing samples. The distribution of the positive/negative labels is 50/50 in the datasets, and each sample has a random length between 50 and 100 tokens. These synthetic datasets are comprised of Mock-1, Mock-2, Mock-5, and Mock-10 datasets with 1, 2, 5, and 10 informative tokens in each sample, respectively, and Mock-1q, Mock-2q, Mock-3q, and Mock-4q, in which 25%, 50%, 75%, and 100% of the tokens in each sample are informative.
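A generator for such samples might look as follows (our sketch; the sentiment word lists are hypothetical placeholders, as the paper does not list its exact vocabulary):

```python
import random

# Hypothetical sentiment word lists; the exact vocabulary used in the paper is not specified.
POS_WORDS = ["great", "wonderful", "excellent"]
NEG_WORDS = ["awful", "terrible", "horrible"]

def make_mock_sample(label, n_informative, min_len=50, max_len=100):
    """Build one synthetic sample: n_informative sentiment-bearing tokens whose
    polarity matches the label; every other slot holds the uninformative token."""
    length = random.randint(min_len, max_len)
    tokens = ["something"] * length
    vocab = POS_WORDS if label == 1 else NEG_WORDS
    for idx in random.sample(range(length), n_informative):
        tokens[idx] = random.choice(vocab)
    return tokens
```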
Real-life Datasets
The datasets used are the ones utilized in the work of Jain and Wallace (2019) and Wiegreffe and Pinter (2019). The description of the datasets is provided in section 3 of Jain and Wallace (2019).
Dataset Statistics
Experimental Setup
The LSTM models are comprised of the following components:
1. A 300d word embedding layer
2. A bidirectional LSTM layer (Hochreiter and Schmidhuber, 1997) with 128 units
3. The attention module
4. A fully-connected layer
The MLP models include embedding, attention, and fully-connected modules similar to the LSTM models, but utilize a feed-forward projection layer with 128 nodes followed by a tanh activation, instead of the bi-LSTM layer.
The attention module has a two-layer fully-connected network that first projects the input to half its size in its first layer, applies a tanh activation, and then maps it to a single logit in the second layer. A softmax function is then used to convert the logits to a probability distribution, which is used to compute a weighted average over the inputs and form the output of the attention.
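A PyTorch sketch of an attention module matching this description (ours; padding masks and other practical details are omitted) could be:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Project to half size, tanh, map to one logit per token, softmax,
    then take the weighted average of the inputs."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim // 2)
        self.score = nn.Linear(hidden_dim // 2, 1)

    def forward(self, h):
        # h: (batch, seq_len, hidden_dim)
        logits = self.score(torch.tanh(self.proj(h))).squeeze(-1)  # (batch, seq_len)
        alpha = torch.softmax(logits, dim=-1)                      # attention weights
        context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)      # (batch, hidden_dim)
        return context, alpha
```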
Similar to Jain and Wallace (2019) and Wiegreffe and Pinter (2019), for the Diabetes and Anemia datasets, 300d Word2Vec embeddings (Mikolov et al., 2013) are pre-trained on the combined text from the two datasets. The training is done using CBOW with a window size of 10. For the rest of the datasets, 300d publicly-pretrained FastText embeddings (Bojanowski et al., 2017) are used.
Adam (Kingma and Ba, 2015) is used as the optimizer during training, and the learning rate and weight decay rates are set to 1e-3 and 1e-5, respectively. Weight decay is applied to every component in the network except the attention module.
Results and Discussion
First, we have included the F1 scores achieved by the base models in Table 2. In order to verify the correctness of our experiments, we have also included in the table the F1 scores reported by Wiegreffe and Pinter (2019). Table 3 contains the results achieved by the adversarial setup. It includes the F1 scores of the adversarial models, the TVD of their predictions from the base models, the JSD of their attention distributions from the base models, the number of epochs that resulted in the best loss on test, and their attention faithfulness score AtteFa. The numbers are reported in terms of average and standard deviation over runs with 9 different random seeds. Individual results for each seed are available in Tables 4 and 5 in Appendix A.
Effect of Contextualization
Comparing the AtteFa columns for the LSTM and MLP models in Table 3, we can observe that the attentions incorporated in models with LSTM as their encoder are significantly less faithful than their counterparts in the models with MLP as their encoder. This observation was not surprising, as a lower degree of contextualization in token representations should inherently result in higher faithfulness in the attention that is applied on top of those representations.
To better understand this, imagine the task of detecting whether a text is about sports or fruits. Now imagine that you want to classify the following sample: football is life. We can simply agree that the only informative word in the sample is football, as it clearly indicates a sport. In an ideal scenario, a faithful attention should have a distribution highly centered on this word. Using an MLP encoder, the input tokens will retain their information, therefore the representation of token football retains its informativeness. This is, however, not necessarily the case if a contextual encoder such as LSTM is used to compute the token representations, as it can simply manipulate the tokens in a way that another word, such as is, has the informative representation.
Going back to our adversarial setting, when LSTM is used, the encoder has the capacity to manipulate the token representations so that a different set of tokens bear the useful information to achieve the task. In this setting, the attention can simply focus on the new set and obtain similar information. On the other hand, a non-contextual MLP encoder does not have the capacity that LSTM holds, and will retain the informativeness of the representation for each token. Therefore, it becomes more challenging for the attention to find a new set of tokens to attend to. That is why the prediction TVD in the MLP models is significantly lower than the LSTM ones, resulting in the MLP models having a noticeably higher AtteFa.
Simply put, our results show that attention components applied on top of contextual encoders are generally less faithful than the ones on top of noncontextual encoders.
Effect of the Frequency of Informative Sources
Looking at the rows corresponding to the results on the Mock-* datasets and the MLP model in Table 3, we can observe the general trend towards the reduction of AtteFa as the number of informative tokens increases. For the case of the MLP model, a relatively high AtteFa of 0.82 is achieved on the Mock-1 dataset, which only includes one informative token in each sample text, and the value drops to close to 0 for the case of the Mock-3q and Mock-4q datasets. This shows that the faithfulness of the attention mechanism has an inverse correlation with the number of informative sources in the input. The trend is still observable in the case of the LSTM models, but with a magnitude that is considerably lower than what we have for the MLP models, as the AtteFa on the Mock-1 dataset is only 0.03. As discussed in Section 6.1, the contextualized LSTM encoder has the flexibility to re-distribute the task-related information across different input tokens. Regardless of that, we can still observe the general trend towards the drop of AtteFa as we move from Mock-1 to Mock-4q, which shows that, even in the case of contextualization, the frequency of informative elements in the source input can still affect the faithfulness of the attention mechanism.
We can observe anomalies in the trend mentioned before. For example, we can observe bumps in the AtteFa in Mock-1 to Mock-2 and Mock-1q to Mock-2q for the case of the LSTM model, and from Mock-5 to Mock-10 in the case of the MLP model. This can be partially justified by the behavior of the base model in terms of how successful it is in detecting informative tokens. An example of this can be found in Table 2, where the MLP model has achieved a lower F1 score on Mock-5 in comparison to Mock-10, meaning that the attention used in the MLP model was more successful in identifying informative tokens in the Mock-5 dataset than in Mock-10.
We can also observe a 19% gap between the AtteFa of the MLP model trained on the Mock-1 dataset and the maximum value of AtteFa (i.e. 1). We argue that this is also related (at least partially) to how the base model performs. We can see in Table 2 that the base model trained on the Mock-1 dataset does not have an F1 score of 1 on the test dataset. This could partially be due to the failure of attention to detect the informative tokens and highly focus on them.
Overall, we conclude that there is generally an inverse relation between the frequency of informative sources in the input data and the faithfulness of the attention module trained on it. But there is still some noise in the AtteFa metric which is attributed to how well the base model performs.
Although we do not think that this rules out AtteFa as a suitable metric to compute the faithfulness of attention, we believe there is room for exploring alternative metrics that, for example, also incorporate the performance of the base models in their computation.
AtteFa on Real-life Datasets
Looking at Table 3, we can see that, for the case of the MLP models, the values of AtteFa on all the real-life datasets are significantly lower than the ones on Mock-1 to Mock-10. As discussed in Section 6.2, this could show that there is quite a large number of informative tokens in the samples belonging to these datasets, which allows the attention to shift its focus among them. This shows that the attention mechanism in MLP models trained on all these datasets is not very faithful.
For the case of the LSTM model, however, we can observe that the AtteFa on these real-life datasets is comparable to, and sometimes higher than, its counterparts on the Mock-* datasets. However, focusing only on the real-life datasets, the AtteFa of the LSTM models is still lower than that of the MLP ones. This can also be visually observed in Figure 1, which includes the violin plots of the distribution of AtteFa across the different datasets and models. We hypothesize that, in real-life datasets, we have a significantly lower number of completely uninformative tokens than we had in the Mock-* datasets. Although the LSTM encoder still retains its flexibility to redistribute information across different tokens, the lower number of completely uninformative tokens reduces the degree of the information redistribution capacity. This is something that we have not explored in our experiments with the synthetic datasets, and it therefore leaves room for more studies on this aspect.
One may argue that the number of input tokens on its own can affect the distribution of attention weights and can in turn affect the value of the attention JSD of the adversarial models, hence the final value of AtteFa. While we do not rule this out, we believe that it is not merely the input lengths that would affect the attention JSD, but rather the frequency of informative input tokens that could increase as the input lengths become higher. We also believe that the way information is distributed among their representations used by the attention component also plays a big role here.
In Figure 1, we can see that for the case of the MLP models, the values of AtteFa on datasets with lengthier samples, namely Diabetes and Anemia, are generally lower than the ones on the other datasets. This is, however, not the case for the LSTM models, as we can observe a relatively high AtteFa on the Anemia dataset with respect to the rest of the datasets. Even for the case of the MLP model, we can see that the AtteFa on the 20News dataset is higher than SST and AgNews that have lower average input lengths (see Table 1).
We therefore conclude that the distribution of task-related information across the input token representations used by the attention component plays a key role in the faithfulness of the attention.
Comparison of Our Adversarial Setup with Wiegreffe and Pinter's
In Figure 2, we have plotted the prediction TVD and attention JSD of our adversarial LSTM models against the results reported in Wiegreffe and Pinter (2019). The dotted lines in the plots resemble the ones in figure 5 from Wiegreffe and Pinter (2019). We can see that, with our adversarial setup, we have achieved comparable prediction TVDs to Wiegreffe and Pinter's on the Anemia, SST and IMDB datasets. However, on the Diabetes dataset, our prediction TVDs are significantly lower than Wiegreffe and Pinter's. Given that our adversarial setups are pretty similar, we believe that this is mainly due to our inability to properly reproduce their base LSTM model on the Diabetes dataset. We can observe this from the 0.042 drop in the F1 score of our model from what was reported in Wiegreffe and Pinter (2019).
Looking at Figure 2, we can see that the adversarial results that we have achieved are towards the higher-end of the attention JSDs reported by Wiegreffe and Pinter (2019). This is very close to the calculated upper-bound for JSD, which is 0.6931. Wiegreffe and Pinter used the hyperparameter λ in order to reduce the effect of the attention JSD in the value of their loss. With the removal of this hyperparameter in our setup (which is the equivalent of setting it to 1), the adversarial training leads the model to primarily maximize the attention JSD, as it is an easier objective than to minimize the prediction TVD. Therefore, we usually end up with an almost maxed-out attention JSD, and it is mainly the prediction TVD that determines the value of AtteFa. However, we argue that the JSD is not always fully maxed-out (see the plot for the SST dataset in Figure 2), and therefore, we cannot simply disregard it in the computation of AtteFa.
Limitations
There are certain limitations with the current work, in terms of both the methodology used to compute AtteFa, and the different factors affecting the attention faithfulness. In this section, we explore the ones that we believe are the most important:
The current methodology to compute AtteFa is scoped solely on binary text classification. In order to have AtteFa as a widely accepted metric in the NLP community, the methodology needs to be extended to other NLP tasks, such as multi-class classification, text retrieval, question answering, machine translation, etc.
In the current work, we have studied the effect of the frequency of informative tokens on the faithfulness of attention through running experiments on the Mock-* datasets, which are synthetic datasets for sentiment classification. The current selection of sentiment words and their positioning within the input texts were done in a random fashion. A more thorough experiment would explore the effect of the distribution of informative tokens across the input texts (centered towards the start/end/middle vs. scattered evenly), along with a more careful selection of the words to be used as the informative tokens (e.g. differentiating between words with strong vs. weak sentiments).
In terms of investigating the effect of encoder contextualization on the faithfulness of attention, we have explored using token-level MLP as a non-contextual encoder and LSTM as a contextual one. This can be extended to exploring other encoder architectures, such as CNNs (LeCun et al., 1999), GRUs (Cho et al., 2014), and transformers (Vaswani et al., 2017).
Another aspect in the current study which has room for exploration is the evaluation of the effect of softmax temperature on the faithfulness of attention. We believe that higher faithfulness may be achieved by using lower temperatures in the case of datasets with infrequent informative tokens, and higher temperature in the case of datasets with frequent informative tokens within their input.
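For reference, temperature scaling amounts to a one-line change in how the attention weights are computed (our sketch):

```python
import torch

def attention_weights(logits, temperature=1.0):
    # T < 1 sharpens the distribution (attention concentrates on fewer tokens);
    # T > 1 flattens it (attention spreads over more tokens).
    return torch.softmax(logits / temperature, dim=-1)
```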
Last, but not least, the experiments in this work are only focused on a specific type of single-head attention. We believe that the current approach does not transfer properly to multi-headed attentions, as we may still consider a multi-headed attention faithful if the only way for the adversarial model to come up with the same predictions as the base model is to change the order of the attention heads and not the attention weights computed by them. Due to the frequent use of multi-headed attentions in state-of-the-art NLP models, the extension of AtteFa to multi-headed attentions would play a big role in its widespread adoption by the NLP community.
Conclusion
In this paper, we presented an adversarial training approach for binary text classification tasks, which provides the metric AtteFa, a quantitative measure of the degree of faithfulness of the attention weights. We then measured the effect of contextualization, as well as the effect of the frequency of informative tokens, on the attention faithfulness. Finally, we computed and evaluated AtteFa for models trained on several real-life binary text classification datasets.
We hope that the presented approach can act as a motivation for researchers to further explore automatic approaches to quantitatively measure the degree of model explainability or its different aspects (e.g. faithfulness, plausibility, sufficiency, etc.).
As future directions, we plan to address the limitations specified in Section 7 to come up with a more reliable and more widely applicable metric to measure the faithfulness of attention. We also plan to measure attention faithfulness in other settings, e.g. the use of different types of attention such as multi-headed and scaled dot-product (Vaswani et al., 2017), the use of attention components in different layers of a model, etc.
1. Having predictions as similar as possible to the base model, and
2. Having attention weight distributions as different as possible from the base model.
Figure 1: Distribution of AtteFa across different models and real-life datasets.
Figure 2: Visual comparison of averaged per-instance test set JSD and TVD from the base model for each model variant, between our adversarial setup and the one from Wiegreffe and Pinter (2019). The • markers show results from Wiegreffe and Pinter (2019), and the × markers show results from our setup.
Table 1 shows the average number of tokens across samples, along with the distribution of the positive/negative samples for each dataset. Since all the synthetic datasets include the same number of samples, class distributions, and average number of tokens across samples, we have included the statistics for them under Mock-*.

Dataset    Train Size (neg/pos)  Train Avg Len (Tokens)  Test Size (neg/pos)  Test Avg Len (Tokens)
Mock-*     4000/4000             75                      500/500              75
Diabetes   6650/1416             1985                    1389/340             2385
Anemia     1742/2912             2368                    512/857              2396
IMDB       8673/8539             180                     2189/2174            176
SST        3310/3610             17                      912/909              17
AgNews     25508/25492           36                      1900/1900            36
20News     612/624               159                     192/195              206

Table 1: Summary statistics of the datasets.
Dataset    LSTM Reported  LSTM Reproduced  MLP Reported  MLP Reproduced
Mock-1     -              0.974            -             0.975
Mock-2     -              0.988            -             0.989
Mock-5     -              0.999            -             1.000
Mock-10    -              1.000            -             0.999
Mock-1q    -              1.000            -             1.000
Mock-2q    -              1.000            -             1.000
Mock-3q    -              1.000            -             1.000
Mock-4q    -              1.000            -             1.000
Diabetes   0.775          0.733            0.699         0.665
Anemia     0.938          0.935            0.920         0.915
IMDB       0.902          0.908            0.888         0.882
SST        0.831          0.830            0.817         0.816
AgNews     0.964          0.959            -             0.956
20News     0.942          0.935            -             0.878

Table 2: F1 scores by the base model achieved on the test datasets. The F1 scores reported by Wiegreffe and Pinter (2019) have been included under the Reported columns. The MLP setup is equivalent to the Trained MLP setup from Wiegreffe and Pinter (2019).
LSTM results:
Dataset    epoch   F1           TVD          JSD          AtteFa
Mock-1     12±5    0.947±0.020  0.015±0.009  0.693±0.000  0.0304±0.0176
Mock-2     9±7     0.977±0.010  0.016±0.005  0.693±0.000  0.0329±0.0096
Mock-5     7±5     1.000±0.001  0.002±0.000  0.693±0.000  0.0037±0.0008
Mock-10    14±8    1.000±0.000  0.001±0.000  0.693±0.000  0.0012±0.0000
Mock-1q    23±12   1.000±0.000  0.000±0.000  0.693±0.000  0.0006±0.0000
Mock-2q    35±32   1.000±0.001  0.000±0.000  0.678±0.004  0.0008±0.0010
Mock-3q    21±21   1.000±0.000  0.000±0.000  0.681±0.008  0.0003±0.0001
Mock-4q    8±4     1.000±0.000  0.000±0.000  0.680±0.004  0.0002±0.0003
Diabetes   22±5    0.729±0.003  0.018±0.001  0.693±0.000  0.0367±0.0020
Anemia     20±6    0.901±0.018  0.058±0.011  0.693±0.000  0.1164±0.0211
SST        21±6    0.823±0.002  0.034±0.002  0.626±0.006  0.0760±0.0034
IMDB       49±12   0.889±0.006  0.038±0.004  0.691±0.001  0.0769±0.0090
AgNews     49±18   0.958±0.001  0.007±0.001  0.683±0.002  0.0136±0.0015
20News     18±5    0.865±0.013  0.046±0.007  0.689±0.001  0.0931±0.0149

MLP results:
Dataset    epoch   F1           TVD          JSD          AtteFa
Mock-1     20±24   0.221±0.312  0.231±0.012  0.393±0.000  0.8153±0.0425
Mock-2     1±0     0.221±0.312  0.246±0.003  0.670±0.000  0.5086±0.0064
Mock-5     9±11    0.147±0.275  0.247±0.001  0.686±0.000  0.4987±0.0027
Mock-10    23±31   0.147±0.275  0.249±0.001  0.686±0.000  0.5028±0.0027
Mock-1q    37±31   0.465±0.480  0.135±0.120  0.689±0.001  0.2721±0.2407
Mock-2q    42±35   0.751±0.324  0.099±0.109  0.691±0.000  0.1983±0.2188
Mock-3q    13±13   0.999±0.001  0.001±0.001  0.691±0.000  0.0013±0.0011
Mock-4q    3±0     1.000±0.000  0.000±0.000  0.690±0.000  0.0002±0.0000
Diabetes   42±27   0.134±0.076  0.147±0.004  0.691±0.000  0.2945±0.0072
Anemia     23±10   0.832±0.007  0.093±0.004  0.692±0.000  0.1861±0.0083
SST        23±15   0.605±0.028  0.173±0.001  0.656±0.002  0.3645±0.0024
IMDB       21±14   0.158±0.056  0.190±0.001  0.689±0.000  0.3826±0.0019
AgNews     24±12   0.610±0.032  0.172±0.005  0.671±0.001  0.3558±0.0097
20News     24±18   0.340±0.149  0.208±0.004  0.650±0.008  0.4444±0.0133

Table 3: Average and standard deviation of the results from our adversarial setup. The results for every row are reported from 9 different runs with different random seed initializations. The epoch column gives the number of training epochs for each selected model.
Table 5: All results from our adversarial setup on the real-life datasets.
We picked words with positive and negative sentiment from the following gazetteers, respectively: https://ptrckprry.com/course/ssd/data/positive-words.txt and https://ptrckprry.com/course/ssd/data/negative-words.txt
Acknowledgments

We would like to express our gratitude to Sarah Wiegreffe, Yuval Pinter, Sarthak Jain, and Byron Wallace for the availability of their high quality code, which greatly helped us with the current work. We also thank the anonymous reviewers for their comments on an earlier version of this paper. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

A All Results on the Adversarial Setup

Tables 4 and 5 are the extended versions of Table 3, which include the results from the adversarial setup for each individual random seed that was used in the training of the adversarial models.
Malika Aubakirova and Mohit Bansal. 2016. Interpreting neural networks to improve politeness comprehension. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2035-2041, Austin, Texas, USA.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, California, USA.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1724-1734, Doha, Qatar.

Reza Ghaeini, Xiaoli Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 4952-4957, Brussels, Belgium.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 3543-3556, Minneapolis, Minnesota, USA.

Sweta Karlekar, Tong Niu, and Mohit Bansal. 2018. Detecting linguistic characteristics of Alzheimer's dementia by interpreting neural models. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 701-707, New Orleans, Louisiana.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations (ICLR 2015), San Diego, California, USA.

Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. 1999. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision, pages 319-345.

Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. 2017. Interactive visualization and manipulation of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017): System Demonstrations, pages 121-126, Copenhagen, Denmark.

Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), Toulon, France.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Workshop Proceedings of the International Conference on Learning Representations (ICLR 2013), Scottsdale, Arizona, USA.

James Mullenbach, Sarah Wiegreffe, Jon Duke, Jimeng Sun, and Jacob Eisenstein. 2018. Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), pages 1101-1111, New Orleans, Louisiana.

Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2016): Demonstrations, pages 97-101, San Diego, California, USA.

Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV 2017), pages 618-626, Venice, Italy.

Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), pages 2931-2951, Florence, Italy.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 2017 International Conference on Machine Learning (ICML 2017), pages 3319-3328, Sydney, Australia.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 2017 Conference on Advances in Neural Information Processing Systems (NIPS 2017), pages 5998-6008, Long Beach, California, USA.

Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 606-615, Austin, Texas, USA.

Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pages 11-20, Hong Kong, China.

Qizhe Xie, Xuezhe Ma, Zihang Dai, and Eduard Hovy. 2017. An interpretable knowledge transfer model for knowledge base completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 950-962, Vancouver, Canada.

Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, and Max Welling. 2017. Visualizing deep neural network decisions: Prediction difference analysis. In Proceedings of the 2017 International Conference on Learning Representations (ICLR 2017). |
256,461,387 | Named Entity Recognition as Structured Span Prediction | Named Entity Recognition (NER) is an important task in Natural Language Processing with applications in many domains. While the dominant paradigm of NER is sequence labelling, span-based approaches have become very popular in recent times but are less well understood. In this work, we study different aspects of span-based NER, namely the span representation, learning strategy, and decoding algorithms to avoid span overlap. We also propose an exact algorithm that efficiently finds the set of non-overlapping spans that maximizes a global score, given a list of candidate spans. We performed our study on three benchmark NER datasets from different domains. We make our code publicly available at https://github.com/urchade/span-structured-prediction. | [
5249216,
1222212,
2794372,
235266246,
52967399,
241583206,
218630027,
52118895,
6042994,
231698515,
52010710,
228084090
] | Named Entity Recognition as Structured Span Prediction
December 7, 2022
Urchade Zaratiana (urchade.zaratiana@fi-group.com)
Nadi Tomeh (tomeh@lipn.fr)
Pierre Holat (pierre.holah@fi-group.com)
Thierry Charnois (charnois@lipn.fr)
LIPN, UMR 7030, CNRS, France
⋆ FI Group
Named Entity Recognition as Structured Span Prediction
Proceedings of the Workshop on Unimodal and Multimodal Induction of Linguistic Structures (UM-IoS), December 7, 2022
Named Entity Recognition (NER) is an important task in Natural Language Processing with applications in many domains. While the dominant paradigm of NER is sequence labelling, span-based approaches have become very popular in recent times but are less well understood. In this work, we study different aspects of span-based NER, namely the span representation, learning strategy, and decoding algorithms to avoid span overlap. We also propose an exact algorithm that efficiently finds the set of non-overlapping spans that maximizes a global score, given a list of candidate spans. We performed our study on three benchmark NER datasets from different domains. We make our code publicly available at https://github.com/urchade/span-structured-prediction.
Introduction
Named Entity Recognition (NER) is an important task in natural language processing whose goal is to identify and extract salient entities such as persons, organizations and locations from texts. NER systems are typically designed as sequence labelling: token-level prediction utilizing the BIO scheme. While traditional approaches use hand-crafted features along with classical Machine Learning algorithms such as SVMs or decision trees (Carreras et al., 2002;Li et al., 2004), deep learning models learn features directly from the data using for example bi-directional LSTMs (Huang et al., 2015;Lample et al., 2016;Akbik et al., 2018) or more recently pre-trained language models such as BERT (Devlin et al., 2019;Yu et al., 2020).
Recently, span-based NER has gained in popularity. Unlike sequence tagging, which operates at the token level, span-based NER operates directly at the span level. The main idea is to enumerate all possible contiguous sequences of tokens of an input text and predict their identity (Lee et al., 2017).
One of the major advantages of span-based NER is that it can learn a rich representation of the span instead of only learning the representation of each token. In addition, a recent study by Fu et al. (2021) reveals that span-based NER models are better in contexts with more OOV words, and Li et al. (2021) showed that span-based NER models are much better than sequence labelling in settings with unlabelled entities (missing entities due to annotation errors).
However, unlike sequence labelling, unconstrained span-based approaches tend to produce overlapping entities, which is undesirable for flat, non-overlapping NER tasks. To avoid overlap in span-based NER, two main approaches have been adopted in the literature. The first is the Semi-Markov conditional random field (Sarawagi and Cohen, 2005) that trains a globally normalized model and then uses a Viterbi algorithm to produce the optimal segmentation without span overlap, we call this approach Semi-CRF. The second algorithm is the one employed by Li et al. (2021) for locally normalized span-based NER; it first eliminates all non-entity spans and deals with the overlap conflict by keeping the span with the highest prediction probability while eliminating the others. In this work, we call this approach greedy decoding.
In this paper, we analyze and compare two formulations of span-based NER. The first is the segmentation model of the Semi-CRF; the second is the two-step pipeline of span filtering and decoding. In addition to greedy decoding, we propose an exact algorithm based on the Maximum Weighted Independent Set (MWIS) (Hsiao et al., 1992; Pal and Bhattacharjee, 1996) on interval graphs. We build such graphs to encode the overlapping structure between spans. This formulation of the NER task is novel to the best of our knowledge. For completeness, we include in the comparison a token-based sequence labeling model with a linear-chain CRF.
In order to understand the effect of span representation, we explore different alternatives including max-pooling, convolution and endpoints (representing a span by its extreme tokens) and show that endpoints are effective across models and datasets.
Our contributions can be summarized as follows:
• We propose an exact decoding algorithm to eliminate span overlap on locally trained models that overcomes the myopic bias of the greedy approach (Li et al., 2021). We present a detailed comparison with global models.
• We investigate different span representations for span-based NER when using pretrained Transformer models. Our experiments confirm that the endpoint representation, the currently dominant representation strategy, is the most robust.
• We conduct a few-shot performance analysis for different modelling approaches. We find that classical sequence labeling models provide strong results for datasets with few entity types, while span-based approaches are better for larger type sets.
Our code for models and experiments is publicly available. 1
Span Representation
Given an input sequence x = [x 1 , . . . , x n ], a span (i, j) is the contiguous segment of tokens [x i , . . . , x j ]. The goal of representation is to compute an embedding vector for each span of an input text which can be used for downstream prediction tasks. We denote h i ∈ R d h the representation of the word at the position i and s ij ∈ R ds the representation of the span (i, j) with the width k = j −i+1; here d h , d s ∈ N + are respectively the embedding sizes for word and span representations. The token representations are computed using a BERT-based model (Devlin et al., 2019). However, since BERT-based tokenization divides the input words into subwords, we take the first subword to represent the whole word, which has proven to be very competitive for several token classification tasks (Beltagy et al., 2019). In the following, we present different approaches for representing the spans.
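As a small illustration of the span enumeration underlying all of the representations below (a sketch of ours; the max_width parameter corresponds to the maximum span length K used later for training):

```python
def enumerate_spans(n, max_width):
    """All spans (i, j), 0-indexed with inclusive endpoints, of a length-n
    sentence whose width j - i + 1 does not exceed max_width."""
    return [(i, j) for i in range(n) for j in range(i, min(i + max_width, n))]

# enumerate_spans(4, 2) -> [(0,0), (0,1), (1,1), (1,2), (2,2), (2,3), (3,3)]
```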
Endpoints This representation consists in representing a span using the representations of the tokens at its left and right extremities, in addition to a span width feature. Specifically, the representation of the span (i, j), $s_{ij}$, is computed as:

$s_{ij} := [h_i; h_j; w_k]$    (1)

where $w_k$ is a learned vector for the width $k$ and $[;]$ denotes the concatenation operation. Endpoints have been widely used in previous works for span prediction tasks such as NER and coreference resolution (Lee et al., 2017; Luan et al., 2019; Zhong and Chen, 2021).
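A minimal PyTorch sketch of Eq. (1); the names and dimensions are illustrative, not taken from the paper's released code:

```python
import torch
import torch.nn as nn

d_h, d_k, max_width = 768, 64, 16             # illustrative sizes
width_emb = nn.Embedding(max_width + 1, d_k)  # learned width vectors w_k

def endpoint_repr(h, i, j):
    """Eq. (1): h is a (seq_len, d_h) tensor of token embeddings;
    returns s_ij = [h_i; h_j; w_k] of size 2*d_h + d_k."""
    k = j - i + 1
    w_k = width_emb(torch.tensor(k))
    return torch.cat([h[i], h[j], w_k], dim=-1)
```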
Max-pooling Since spans consist of a contiguous segment of tokens, pooling operations are a fairly natural way to compute their representations. In this context, we apply an element-wise max-pooling operation to all tokens inside the span. Formally,

$s_{ij} := \mathrm{MAX}([h_i; h_{i+1}; \ldots; h_j])$    (2)
where MAX is the element-wise max pooling operation. Max-pooling has been previously used by Eberts and Ulges (2020) for joint entity and relation extraction.
Convolution Instead of simply applying the pooling operation, we explored aggregating tokens using learned filters via convolution. Specifically, representations of all spans of size k are computed simultaneously using a 1D convolution of kernel size k. To keep the number of parameters linear with respect to the maximum span width, we share the convolution weights across the different span widths. Lei et al. (2021) used this convolutional approach to represent spans for keyphrase extraction.
$s_{ij} := \mathrm{Conv1D}_k([h_i; h_{i+1}; \ldots; h_j])$    (3)
FirstToken For this representation, we only use the start token along with span width information:
$s_{ij} := W^{(k)} h_i$    (4)
where $W^{(k)} \in \mathbb{R}^{d_h \times d_h}$ is the weight matrix associated with width $k$. Note that the representations of all spans under this approach can be computed in parallel and in a single line of code using an einsum operation (Rogozhnikov, 2022). This representation was inspired by the synthetic attention of Tay et al. (2021), where the authors predict attention scores without pairwise interaction.
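For instance, under illustrative shapes of our choosing, the FirstToken representations for every start position and width can be obtained with a single einsum:

```python
import torch

batch, L, d_h, K = 2, 10, 8, 4           # illustrative shapes
h = torch.randn(batch, L, d_h)           # token embeddings
W = torch.randn(K, d_h, d_h)             # one weight matrix per width k

# s[b, i, k] = W^{(k)} h_i for every start position i and width k at once:
s = torch.einsum('kde,ble->blkd', W, h)  # (batch, L, K, d_h)
```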
Number of parameters
The number of parameters required for each span representation is shown in Table 1.
Span scores
We model the task of NER as assigning to each span (i, j) a label from a set of C different types that correspond to named-entity types plus a special null type indicating that the span does not correspond to an entity. Label assignment is constrained so that no pair of overlapping spans both have entity types (i.e., both different from null).
We present two models to solve this structured prediction problem: a locally normalized approach with a zero-order scoring function which does not take into consideration the interactions between label assignment ( §4); and a globally normalized approach with first-order scoring function which considers dependencies between pairs of consecutive spans ( §5).
Both formulations employ the following span scoring function. Given a span representation $s_{ij}$, the logits $\phi(i, j) \in \mathbb{R}^C$ for the $C$ different labels are computed using a non-linear activation function followed by an affine transformation:

$\phi(i, j) = W\,\mathrm{ReLU}(s_{ij}) + f$    (5)

where $W \in \mathbb{R}^{d_s \times C}$ is the final weight matrix, $f \in \mathbb{R}^C$ is the bias vector, and ReLU is the activation function. We denote by $\phi(i, j, l) \in \mathbb{R}$ the (unnormalized) score of the label $l$ for the span (i, j).
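A minimal sketch of the scoring head in Eq. (5), assuming the span representations are already computed (the class and variable names are ours):

```python
import torch.nn as nn

class SpanScorer(nn.Module):
    """Eq. (5): logits phi(i, j) = W ReLU(s_ij) + f over C classes."""
    def __init__(self, d_s, num_classes):
        super().__init__()
        self.proj = nn.Linear(d_s, num_classes)  # holds W and the bias f
        self.act = nn.ReLU()

    def forward(self, span_repr):                # (..., d_s)
        return self.proj(self.act(span_repr))    # (..., C)
```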
Locally Normalized Models
Under this approach, we perform span labeling in two steps, span classification followed by a decoding step.
Span Classification
Each span (i, j) is assigned its highest scoring label $\hat{l}_{ij} = \arg\max_l \phi(i, j, l)$, and we denote by $\hat{k}_{ij}$ the corresponding highest score. The set of spans classified as entities may contain overlapping spans; a decoding step is therefore required to select a subset with no overlaps.
We learn the parameters² of this classifier under a locally normalized setup. The training objective is to maximize the likelihood of every span label (up to a maximum length K) in the training data. The loss function is as follows:

$\mathcal{L} = -\sum_{(i,j,l) \in T} \log \frac{\exp\{\phi(i, j, l)\}}{\sum_{l'} \exp\{\phi(i, j, l')\}}$    (6)
which is the well-known cross-entropy loss.
Greedy Decoding
Let $S = \{(i, j) : \hat{l}_{ij} \neq \text{null}\}$ be the set of spans classified as entities. The goal of decoding is to find the subset of $S$ that maximizes a global score function:

$E^* = \arg\max_{E \subseteq S} \sum_{(i,j) \in E} \hat{k}_{ij}$    (7)

s.t. $\forall e, e' \in E : \neg\,\mathrm{overlap}(e, e')$ and $\forall u \notin E, \exists e \in E : \mathrm{overlap}(e, u)$

where $\mathrm{overlap}(e, e')$ is True if the spans $e$ and $e'$ overlap but are not equal. The first constraint in Eq. 7 ensures that the set $E$ is independent, i.e. it doesn't contain overlapping spans; the second constraint ensures that it is maximal, i.e. adding any other span breaks the no-overlap constraint. Greedy decoding constructs an approximation to $E^*$ by iteratively adding the highest-scoring entity not overlapping with any previously selected entity. This algorithm is efficient and has a complexity of $O(n \log n)$ with $n = |S|$.
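A straightforward sketch of this greedy procedure (our own; the naive compatibility check shown here is quadratic in the worst case, but sorting dominates in practice):

```python
def overlap(a, b):
    """True if spans a = (i, j) and b = (i', j') overlap but are not equal."""
    return a != b and a[0] <= b[1] and b[0] <= a[1]

def greedy_decode(spans):
    """spans: list of ((i, j), score) classified as entities. Iteratively
    keep the best-scoring span compatible with everything selected so far."""
    selected = []
    for span, score in sorted(spans, key=lambda x: -x[1]):
        if all(not overlap(span, kept) for kept, _ in selected):
            selected.append((span, score))
    return selected
```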
Exact Decoding with MWIS
We define an overlapping graph as the graph $G$ whose nodes are the elements of $S$ and which contains an edge between each pair of overlapping spans. Its adjacency matrix is defined as:

$A[e, e'] = 1$ if $\mathrm{overlap}(e, e')$, and $0$ otherwise.    (8)

We associate with each node a weight given by its label score $\phi(i, j, \hat{l}_{ij})$.
An exact solution to Eq. 7 is given by the Maximum Weight Independent Set (MWIS) of the overlapping graph. For general graphs, computing the MWIS is NP-Hard but since our graph can be seen as an interval graph (spans can be considered as intervals over their start and end positions), MWIS has a complexity of O(n log n) or O(n) if the spans are sorted by their endpoint (Hsiao et al., 1992).
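Concretely, because the overlapping graph is an interval graph, the MWIS of Eq. 7 reduces to classic weighted interval scheduling. Below is our own sketch of the resulting exact decoder (not the paper's released code):

```python
import bisect

def mwis_decode(spans):
    """Exact MWIS on the span-overlap (interval) graph via weighted interval
    scheduling; spans: list of ((start, end), score) with inclusive ends."""
    spans = sorted(spans, key=lambda x: x[0][1])      # sort by endpoint
    ends = [span[0][1] for span in spans]
    n = len(spans)
    # p[k]: number of earlier spans ending strictly before span k starts
    p = [bisect.bisect_left(ends, spans[k][0][0], 0, k) for k in range(n)]
    opt = [0.0] * (n + 1)                  # opt[k]: best score over first k spans
    for k in range(n):
        opt[k + 1] = max(opt[k], spans[k][1] + opt[p[k]])
    selected, k = [], n
    while k > 0:                           # backtrack to recover the set
        if spans[k - 1][1] + opt[p[k - 1]] >= opt[k - 1]:
            selected.append(spans[k - 1])
            k = p[k - 1]
        else:
            k -= 1
    return selected[::-1]
```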
Exhaustive Search Decoding
For efficient decoding, the scoring function in Eq. 7 decomposes as a sum over graph nodes. More complex scoring functions do not necessarily admit efficient decoding. Finding an optimal set under the mean scoring function, for instance, that is $\frac{1}{|E|} \sum_{(i,j) \in E} \hat{k}_{ij}$, requires enumerating all possible candidate subsets of $S$, which is NP-Hard (Johnson et al., 1988; Raman et al., 2007) but feasible for reasonably small interval graphs. In this paper, we experiment with this scoring function but leave more complex ones for future work.
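For completeness, a brute-force sketch of this exhaustive search under the mean scoring function (ours; it enumerates subsets and is only feasible for small candidate lists):

```python
from itertools import combinations

def _overlap(a, b):
    return a != b and a[0] <= b[1] and b[0] <= a[1]

def exhaustive_decode(spans):
    """Enumerate maximal independent sets and keep the one with the highest
    MEAN score; exponential in |spans|. spans: list of ((start, end), score)."""
    best, best_mean = [], float('-inf')
    for r in range(1, len(spans) + 1):
        for subset in combinations(spans, r):
            if any(_overlap(a[0], b[0]) for a, b in combinations(subset, 2)):
                continue                              # not independent
            outside = [u for u in spans if u not in subset]
            if any(all(not _overlap(u[0], e[0]) for e in subset) for u in outside):
                continue                              # not maximal
            mean = sum(score for _, score in subset) / r
            if mean > best_mean:
                best, best_mean = list(subset), mean
    return best
```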
Globally normalized model
Under this approach, NER is modeled using the semi-Markov segmentation CRF introduced by Sarawagi and Cohen (2005). The input sentence $x$ is segmented into a labeled sequence of spans $y$. Each segmentation is scored as:³

$\Omega(y) = \sum_{y_k = (i,j,l)} \left( \phi(i, j, l) + T_{l',l} \right)$    (9)

with $y_k = (i, j, l)$ being the labeled span at position $k$. Unlike the scoring function in Eq. 7, the score here contains the transition scores from the label $l'$ at position $k-1$ to the label $l$, stored in the learnable matrix $T$.
Training The parameters of the model are learned to maximize the conditional probability of the gold segmentation in the training data. The probability of a segmentation is computed by globally normalizing the score: $P(y|x) = \exp\{\Omega(y) - Z\}$, where $Z$ is the log partition function $\log \sum_{y \in Y(x)} \exp\{\Omega(y)\}$, which sums over all possible segmentations $Y(x)$. This normalization term can be computed in polynomial time using dynamic programming.

³ We drop the dependence on the input $x$ for simplicity.

Decoding algorithm        Time complexity
CRF                       $O(L|Y|^2)$
Semi-CRF                  $O(LK|Y|^2)$
Greedy decoding           $O(n \log n)$
MWIS                      $O(n \log n)$
Exhaustive Search (EXT)   $O(3^{n/3})$

Table 2: This table reports the complexity of the different decoding algorithms. L is the input length, K the maximum segment width, |Y| the number of classes, and n the number of spans after filtering non-entities, which is approximately equal to 0.15 × L empirically.
Following Sarawagi and Cohen (2005), we assume that segments have strictly positive lengths, that adjacent segments touch, and that non-entity spans have unit length.

Decoding Selecting the most probable segmentation $\hat{y} = \arg\max_{y \in Y(x)} \Omega(y)$ is efficiently performed using the segmental variant of the Viterbi algorithm (Sarawagi and Cohen, 2005).
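A simplified sketch of the segmental Viterbi recursion (our own illustration, assuming precomputed span scores phi and a transition matrix T; the unit-length null spans are folded into phi):

```python
import numpy as np

def semi_crf_viterbi(phi, T, K):
    """phi[i][j][l]: score of the segment covering tokens i..j (inclusive)
    with label l; T[l_prev][l]: transition scores; K: maximum segment width.
    Returns the best-scoring list of labeled segments (i, j, l)."""
    L, C = len(phi), T.shape[0]
    alpha = np.full((L + 1, C), -np.inf)
    alpha[0, :] = 0.0
    back = {}
    for j in range(1, L + 1):                  # prefix of length j
        for i in range(max(0, j - K), j):      # last segment covers tokens i..j-1
            for l in range(C):
                trans = T[:, l] if i > 0 else np.zeros(C)
                scores = alpha[i] + trans + phi[i][j - 1][l]
                lp = int(np.argmax(scores))
                if scores[lp] > alpha[j, l]:
                    alpha[j, l] = scores[lp]
                    back[(j, l)] = (i, lp)
    segments, j, l = [], L, int(np.argmax(alpha[L]))
    while j > 0:                               # backtrack
        i, lp = back[(j, l)]
        segments.append((i, j - 1, l))
        j, l = i, lp
    return segments[::-1]
```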
6 Experimental Setup
Datasets
We evaluated our model on three benchmark datasets for Named Entity Recognition: Conll-2003 (Tjong Kim Sang and De Meulder, 2003), OntoNotes 5.0 (Weischedel et al., 2013) and TDM (Hou et al., 2021).
Evaluation metrics
Our evaluation is based on the exact match between predicted and gold entities. We report the micro-averaged precision (P), recall (R) and F1-score (F) on the test set for models selected on the dev set.
Implementation Details
Backbones For span encoding, we used RoBERTa-base for models trained on Conll-2003 and OntoNotes 5.0 because they come from general domains and we employed SciBERT (Beltagy et al., 2019) for models trained on TDM, which is a scientific NER data set.
Baseline model
We compare the span-based approaches to a sequence labelling BERT-CRF (Beltagy et al., 2019), which we trained on our datasets.
Hyperparameters All models were trained using a single V100 GPU. We trained for up to 25 epochs using Adam (Kingma and Ba, 2017) as the optimizer with a learning rate of 1e-5. We opted for a batch size of 10 and used early stopping with a patience of 5 (on the F1-score) and keep the best model on the validation set for testing.
Libraries We implement our models with PyTorch (Paszke et al., 2019). The pre-trained transformer models were loaded from HuggingFace's Transformers (Wolf et al., 2020). We employed AllenNLP (Gardner et al., 2018) for data preprocessing and the seqeval library (Nakayama, 2018) for evaluating the baseline sequence labelling model. Our Semi-CRF implementation is based on pytorch-struct (Rush, 2020).
Results
Span Representation
In the following, we analyze the performance of the span representations for both the local model and the Semi-CRF model, as shown in Table 4.
Local models On local models, we find that FirstToken obtains a result one notch below the others. On both the Conll-2003 and TDM datasets, Convolution performed the best, yet Endpoints performed only slightly worse. However, on OntoNotes, the Maxpool representation outperforms all other approaches, while Endpoints and Convolution achieve very similar performance. Out of all the datasets, FirstToken had the lowest score.

Global models On Semi-CRF models, the Endpoints representation consistently achieves the best results across datasets. We also notice that the FirstToken representation obtains better results than Maxpool and Convolution on two datasets, Conll-2003 and TDM, in this setting.

The Endpoints representation is the most reliable overall, since it achieves robust performance regardless of the context in which it is used. However, for optimal performance, and given a sufficient amount of compute resources, the span representation is best tuned on a held-out set.

Comparison of Decoding for Local Models

Table 4 shows the performance of the different decoding algorithms under different settings. For the local models, we can see that applying decoding always improves the F1 score, by increasing precision and decreasing recall. However, there is no significant difference between greedy decoding and global decoding, since the models are already well trained and the overlap filtering therefore does not make much difference in terms of quantitative results. We provide more insight on decoding in Subsections 7.3 and 7.5.
Few-Shot Performance
We conducted a study to compare the performance of each model in a few-shot scenario. The evaluation was performed on the test set of each dataset, using from 100 training samples up to the full training set. For this study, we used the Endpoints representation for spans because it is widely used and has shown good performance across different training and decoding schemes. The results of our few-shot evaluation are presented in Table 6.
Semi-CRF is better than the local span-based approach when overlap filtering is not performed, but the local approach performs better than Semi-CRF when the amount of data becomes larger. Furthermore, while the difference between greedy decoding and MWIS decoding is narrow in the high data regime, we can see that MWIS outperforms greedy decoding in the low and very low data regimes. We also notice that the increase in performance from decoding is higher when a local model is trained on little data, while the difference becomes less significant when the amount of training data is large. We find that the baseline sequence labelling BERT-CRF approach is indeed competitive: most of the time it obtains better performance on the Conll-2003 and TDM datasets across all dataset sizes. However, the span-based approach is better on the OntoNotes 5.0 dataset. This can be explained by the fact that OntoNotes 5.0 contains 18 entity types; the labelling approach would therefore require 37 labels since it uses a BIO scheme, which makes the task much more difficult.
Analysis of Local Modeling
We previously found that decoding had little effect on our local model performance, especially for high resource datasets. We believe this is due to the fact that we were training with all negative samples (non-entity spans). As a result, the model was overconfident regarding non-entity spans (and not confident enough to predict entity spans) due to this unbalanced training. To resolve this issue, we propose three alternative training procedures to make the classifier leave more room for the decoder.
Negative sampling This approach randomly drops a percentage of the non-entity spans during training, but keeps all positive samples (entity spans). By training with fewer non-entity spans, we expect the model to be less confident and thus predict more entities. This negative sampling has been previously used by Li et al. (2021) to avoid training NER models with unlabeled (or missing) entities.
Down-weighting This method is similar to negative sampling, but instead of randomly eliminating negative samples, this approach retains all negative samples and down-weights their loss contribution while keeping the loss for entity spans intact.
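A minimal sketch of such a down-weighted loss (our own; null_weight is an illustrative hyperparameter, not a value from the paper):

```python
import torch
import torch.nn.functional as F

def downweighted_loss(logits, labels, null_id, null_weight=0.2):
    """Cross-entropy in which null (non-entity) spans contribute less.
    logits: (N, C) span logits; labels: (N,) gold span labels."""
    per_span = F.cross_entropy(logits, labels, reduction='none')
    weights = torch.where(labels == null_id,
                          torch.full_like(per_span, null_weight),
                          torch.ones_like(per_span))
    return (weights * per_span).sum() / weights.sum()
```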
Thresholding This approach separates the span classifier into two models: a filtering model that classifies whether a span is an entity or not, and an entity classification model that classifies the entity type. During training, both models are trained end-to-end by multi-task learning with equally weighted losses. For prediction, span filtering is first performed and the result is then passed to the entity classification layer. By default, a span is passed to the entity classification layer if its probability of being an entity is greater than 0.5; here, however, we adjust this threshold on the dev set and select the value with the best F1 score.
The results of this analysis are shown in Table 7. Overall, they show that the use of these regularization techniques leads to a significant improvement in decoding accuracy for most datasets. As the most striking example, on the TDM dataset the down-weighting approach, which initially had a precision score of 57.79, was able to increase this score by 13.77 thanks to decoding. Furthermore, the best approach according to these empirical results appears to be down-weighting. Under this method, the decoder was most "successful" on both the OntoNotes and TDM datasets, meaning that it brought the largest improvements relative to the performance of the local classifier before decoding.

Figure 1: Illustration of how overlapping conflicts are handled by the different decoding algorithms on local span-based NER models. We only include overlaps involving at least three entities, because otherwise all decodings produce the same result.
Qualitative Comparison of Decoding
We performed a qualitative analysis to compare the three decoding approaches for local models. This study is presented in Figure 1, which shows the input text (truncated), the raw prediction with overlap, and the results after applying greedy decoding and the global decoding (MWIS and EXT). We only include overlaps involving more than two spans, because when two spans overlap, all algorithms take the span with the highest score.
First, we can see that the greedy approach always retrieves the most probable entity, since it iteratively selects the best spans that do not overlap with previously selected spans. However, this algorithm tends to suffer from a myopic bias. Second, the MWIS approach, which maximizes the sum of span scores, tends to select as many spans as possible, which means that it favours shorter spans over longer ones. Also, MWIS decoding has a slightly higher recall score most of the time than the other decoding algorithms. Finally, EXT decoding, which selects the set of spans that maximizes the average score, tends to select the smallest number of spans, but the selected spans generally have a high score. In general, this decoding tends to favour precision over recall.
Related Works
Different approaches for NER NER is an important task in Natural Language Processing and is used in many downstream information extraction applications. Usually, NER tasks are designed as sequence labelling (Chiu and Nichols, 2016; Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Strubell et al., 2017; Rei, 2017; Akbik et al., 2018), where the goal is to predict BIO tags. Recently, different approaches have been proposed to perform NER tasks that go beyond traditional sequence labelling. One approach that has been widely adopted is the span-based approach (Liu et al., 2016; Luan et al., 2018, 2019; Fu et al., 2021; Li et al., 2021; Zaratiana et al., 2022; Corro, 2022), where the prediction is done at the span level instead of the token level. Li et al. (2020) have also approached NER as a question answering task in which named entities are extracted by retrieving answer spans. In addition, recent work such as (Cui et al., 2021) considers NER as template filling by fine-tuning a BART encoder-decoder model.
Decoding For the span-based approach, Semi-Markov models have been used previously (Sarawagi and Cohen, 2005; Liu et al., 2016; Kong et al., 2016; Sato et al., 2017); however, their use with BERT-type models has been little explored, which is something we do in this paper. The works of Fu et al. (2021) and Li et al. (2021) employed a heuristic decoding to avoid overlap in span-based NER. Their algorithm iteratively chooses the maximum-probability entity span that does not overlap with a previously chosen entity span. In this paper, we have proposed an exact version of this algorithm.
Conclusion
We investigated different span representations for NER and found that the endpoint representation is the most robust. Moreover, we have proposed a new formulation of NER using overlapping graphs for which an exact and efficient decoding algorithm exists. We used the formulation to eliminate span overlap on locally trained models. Finally, we conducted few-shot performance analysis for different modelling approaches and found that classical sequence labeling models provide strong results for datasets with few entity types, while span-based approaches are better for larger type sets.
For instance, a segmentation of the sentence "Michael Jordan eats an apple ." would be Y = [(0, 1, PER), (2, 2, null), (3, 3, null), (4, 4, null), (5, 5, null)].
Span representation     Num. params
Endpoints               $(2d_h + d_k)C$
Maxpool                 $d_h C$
Convolution             $\frac{1}{2} d_h^2 K(K+1) + d_h C$
Convolution (shared)    $d_h^2 K + d_h C$
FirstToken              $d_h^2 K + d_h C$

Table 1: Number of parameters for different representations, without including the word representation layer, which is the same for any approach. $d_h$, $K$ and $C$ are respectively the word embedding size, the maximum span width and the number of classes. Blue terms are parameters for computing span representations and red terms denote the number of parameters for the final layer.
Conll-2003 is a dataset from the news domain that was designed for extracting entities such as Person, Location and Organisation. OntoNotes 5.0 is a large corpus comprising various genres of text including newswire, broadcast news and telephone conversation. It contains in total 18 different entity types such as Person, Organization, Location, Product or Date. TDM is a NER dataset that was recently published and was designed for extracting Tasks, Datasets, and Metrics entities from Natural Language Processing papers.

Dataset         Entity types   Train / Dev / Test
Conll-2003      4              14987 / 3466 / 3684
OntoNotes 5.0   18             48788 / 7477 / 5013
TDM             3              1000 / 500 / 500

Table 3: Dataset statistics.
Table 4: This table reports the main results of our study. It shows the performance along different settings, including the datasets, the training, decoding and span representations. We report the average across three seeds. Bold numbers indicate the best model/decoding for a fixed representation and underlined numbers indicate the best representation for a fixed model/decoding.

Dataset         P      R      F
Conll-2003      91.24  90.68  90.96
OntoNotes 5.0   87.80  88.92  88.36
TDM             69.77  73.65  71.66

Table 5: Performance for the baseline sequence labelling approach, a BERT-CRF tagger, averaged over three seeds.

Table 6: Few-shot performance. We report the average F1-score across three different seeds in all datasets and different training set sizes.
Table 7: Results for the local model when changing the training/loss. The best results before decoding are in bold and the best results after decoding are underlined. For this experiment, we use MWIS as decoding. We report the average over three seeds.
² The parameters include all weight matrices from the span representation and scoring functions. We omit the parameters from the notation for simplicity.
Acknowledgments

This work is partially supported by a public grant overseen by the French National Research Agency (ANR) as part of the program Investissements d'Avenir (ANR-10-LABX-0083). This work was granted access to the HPC/AI resources of [CINES/IDRIS/TGCC] under the allocation 20XX-AD011013096 made by GENCI.
Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text.

Xavier Carreras, Lluís Màrquez, and Lluís Padró. 2002. Named entity extraction using AdaBoost. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).

Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.

Caio Corro. 2022. A dynamic programming algorithm for span-based nested named-entity recognition in O(n²). ArXiv, abs/2210.04738.

Leyang Cui, Yu Wu, Jian Liu, Sen Yang, and Yue Zhang. 2021. Template-based named entity recognition using BART.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Markus Eberts and Adrian Ulges. 2020. Span-based joint entity and relation extraction with transformer pre-training. ArXiv, abs/1909.07755.

Jinlan Fu, Xuanjing Huang, and Pengfei Liu. 2021. SpanNER: Named entity re-/recognition as span prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7183-7195, Online. Association for Computational Linguistics.

Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform.

Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, and Debasis Ganguly. 2021. TDMSci: A specialized corpus for scientific literature entity tagging of tasks datasets and metrics. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 707-714, Online. Association for Computational Linguistics.

Ju Yuan Hsiao, Chuan Yi Tang, and Ruay Shiung Chang. 1992. An efficient algorithm for finding a maximum weight 2-independent set on interval graphs. Information Processing Letters, 43(5):229-235.

Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging.

David S. Johnson, Mihalis Yannakakis, and Christos H. Papadimitriou. 1988. On generating all maximal independent sets. Information Processing Letters, 27(3):119-123.

Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.

Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Segmental recurrent neural networks. CoRR, abs/1511.06018.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.

Yanfei Lei, Chunming Hu, Guanghui Ma, and Richong Zhang. 2021. Keyphrase extraction with incomplete annotated training data. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 26-34, Online. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.

Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition.

Yangming Li, Lemao Liu, and Shuming Shi. 2021. Empirical analysis of unlabeled entity problem in named entity recognition. In International Conference on Learning Representations.

Yaoyong Li, Kalina Bontcheva, and Hamish Cunningham. 2004. SVM based learning system for information extraction. In Deterministic and Statistical Methods in Machine Learning.

Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI'16), pages 2880-2886. AAAI Press.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics.
A general framework for information extraction using dynamic span graphs. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, Hannaneh Hajishirzi, Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs.
End-to-end sequence labeling via bi-directional lstm-cnns-crf. Xuezhe Ma, Eduard Hovy, Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf.
2018. seqeval: A python framework for sequence labeling evaluation. Hiroki Nakayama, Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.
A sequential algorithm for finding a maximum weight kindependent set on interval graphs. Madhumangal Pal, Bhattacharjee, International Journal of Computer Mathematics. 603-4Madhumangal Pal and GP Bhattacharjee. 1996. A se- quential algorithm for finding a maximum weight k- independent set on interval graphs. International Jour- nal of Computer Mathematics, 60(3-4):205-214.
Alban Desmaison. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Andreas Köpf. Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith ChintalaPytorch: An imperative style, highperformance deep learning libraryAdam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zem- ing Lin, Natalia Gimelshein, Luca Antiga, Alban Des- maison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high- performance deep learning library.
Efficient exact algorithms through enumerating maximal independent sets and other techniques. Theory of Computing Systems. Saket Venkatesh Raman, Somnath Saurabh, Sikdar, 41Venkatesh Raman, Saket Saurabh, and Somnath Sikdar. 2007. Efficient exact algorithms through enumerating maximal independent sets and other techniques. Theory of Computing Systems, 41(3):563-587.
Semi-supervised multitask learning for sequence labeling. Marek Rei, arXiv:1704.07156arXiv preprintMarek Rei. 2017. Semi-supervised multitask learning for sequence labeling. arXiv preprint arXiv:1704.07156.
Einops: Clear and reliable tensor manipulations with einstein-like notation. Alex Rogozhnikov, International Conference on Learning Representations. Alex Rogozhnikov. 2022. Einops: Clear and reliable tensor manipulations with einstein-like notation. In Interna- tional Conference on Learning Representations.
Torch-struct: Deep structured prediction library. Alexander M Rush, Alexander M. Rush. 2020. Torch-struct: Deep structured prediction library.
Semimarkov conditional random fields for information extraction. Sunita Sarawagi, William W Cohen, Advances in Neural Information Processing Systems. MIT Press17Sunita Sarawagi and William W Cohen. 2005. Semi- markov conditional random fields for information ex- traction. In Advances in Neural Information Processing Systems, volume 17. MIT Press.
Segment-level neural conditional random fields for named entity recognition. Motoki Sato, Hiroyuki Shindo, Ikuya Yamada, Yuji Matsumoto, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingTaipei, Taiwan2Asian Federation of Natural Language ProcessingMotoki Sato, Hiroyuki Shindo, Ikuya Yamada, and Yuji Matsumoto. 2017. Segment-level neural conditional random fields for named entity recognition. In Pro- ceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Pa- pers), pages 97-102, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Fast and accurate sequence labeling with iterated dilated convolutions. Emma Strubell, Patrick Verga, David Belanger, Andrew Mccallum, Emma Strubell, Patrick Verga, David Belanger, and An- drew McCallum. 2017. Fast and accurate sequence labeling with iterated dilated convolutions.
Synthesizer: Rethinking self-attention in transformer models. Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, Che Zheng, abs/2005.00743ArXiv. Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. 2021. Synthesizer: Re- thinking self-attention in transformer models. ArXiv, abs/2005.00743.
Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition. Erik F Tjong, Kim Sang, Fien De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. the Seventh Conference on Natural Language Learning at HLT-NAACL 2003Erik F. Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the CoNLL-2003 shared task: Language- independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learn- ing at HLT-NAACL 2003, pages 142-147.
Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium. Philadelphia, PA23Ralph Weischedel, Martha Palmer, Mitchell Marcus, Ed- uard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguis- tic Data Consortium, Philadelphia, PA, 23.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Xu, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processingThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chau- mond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davi- son, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Syl- vain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface's transformers: State-of-the-art natural language processing.
Named entity recognition as dependency parsing. Juntao Yu, Bernd Bohnet, Massimo Poesio, ACL. Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In ACL.
GNNer: Reducing overlapping in spanbased NER using graph neural networks. Urchade Zaratiana, Nadi Tomeh, Pierre Holat, Thierry Charnois, 10.18653/v1/2022.acl-srw.9Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop. the 60th Annual Meeting of the Association for Computational Linguistics: Student Research WorkshopDublin, IrelandAssociation for Computational LinguisticsUrchade Zaratiana, Nadi Tomeh, Pierre Holat, and Thierry Charnois. 2022. GNNer: Reducing overlapping in span- based NER using graph neural networks. In Proceed- ings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 97-103, Dublin, Ireland. Association for Compu- tational Linguistics.
A frustratingly easy approach for entity and relation extraction. Zexuan Zhong, Danqi Chen, Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. |
227,905,304 | [] | Chinese Grammatical Error Diagnosis with Graph Convolution Network and Multi-task Learning
Chinese Grammatical Error Diagnosis with Graph Convolution Network and Multi-task Learning

Yikang Luo* (School of Software, Shanghai Jiao Tong University, Shanghai, China; luoyikang@sjtu.edu.cn)
Zuyi Bao (Alibaba Group)
Chen Li (Alibaba Group)
Rui Wang (Alibaba Group)

Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, Suzhou, China, December 4, 2020. Association for Computational Linguistics.
This paper describes our participating system for the Chinese Grammatical Error Diagnosis (CGED) 2020 shared task. For the detection subtask, we propose two BERT-based approaches: 1) enhancing the model with syntactic dependency trees, and 2) combining the sequence labeling and sequence-to-sequence (seq2seq) models under a multi-task learning framework. For the correction subtask, we utilize a masked language model, a seq2seq model, and a spelling check model to generate corrections based on the detection results. Finally, our system achieves the highest recall rate on the top-3 correction and the second-best F1 scores at the identification level and position level.

* This work was done when Yikang Luo was an intern at Alibaba Group.
Introduction
Chinese has become an influential language all over the world, and more and more people choose Chinese as a second/foreign language (CSL/CFL). Their writing usually contains grammatical errors, including spelling and collocation errors. For instance, a Japanese learner may write "我苹果喜欢" (I apple like), while the correct expression is "我喜欢苹果" (I like the apple). The inconsistency between Chinese and Japanese grammatical structures leads to a different expression order; more generally, the grammatical structure of Chinese differs from that of other languages and affects how learners express themselves.
Previous works spent considerable effort on feature engineering, including pretrained features and parsing features, to improve performance. In this paper, we enrich the representations from BERT with the syntactic dependency tree and propose multi-task learning of error detection and correction. We employ three BERT-based strategies to generate corrections based on the detection results. Experiments show that our system is effective at both the detection and correction levels. Our contributions are summarized as follows:
• We propose a graph-convolutional-network-based (GCN-based) approach to improve the baseline model's understanding of syntactic dependencies, and we introduce a sequence-to-sequence (seq2seq) model to improve the performance of the original sequence labeling task.
• We combine three approaches, including a masked language model, a seq2seq model, and Chinese spelling check, to correct erroneous sentences based on the detection results.
• We achieve the highest recall rate on the top-3 correction and the second-highest F1 scores at the identification level and position level of the detection.
This paper is organized as follows. Section 2 describes the CGED task. Section 3 describes our system for grammatical error detection and correction. Section 4 reports the experimental results of the proposed methods. Section 5 concludes this work.
Chinese Grammatical Error Diagnosis
The CGED shared task has been held since 2014. Several sets of training data written by CFL learners, containing many grammatical errors, have been released. For detection, the CGED defines four types of errors: (1) R (redundant words); (2) M (missing words); (3) W (word ordering errors); (4) S (word selection errors), as shown in Figure 1. The performance is measured at the detection level, identification level, and position level. For correction, systems are required to recommend at most 3 corrections for missing and selection errors.
System Description
BERT-CRF
Previous works regard the detection task as a sequence labeling problem solved by the LSTM-CRF model (Huang et al., 2015). We introduce the BERT model (Devlin et al., 2018) to replace the LSTM model. Among the available pretrained BERT models, we choose StructBERT (Wang et al., 2019) as our main backbone. One reason is that its Word Structural Objective pretraining strategy accepts sentences with wrong word order, which is similar to the word ordering errors in this task.
BERT-GCN-CRF
Previous works (Yang et al., 2017; Fu et al., 2018) spent a lot of effort on feature engineering, including pretrained features and parsing features. Part-of-speech (POS) tags and dependency information are the most important parsing features, which suggests that the task is closely associated with the syntactic dependency structure of the sentence. Specifically, the syntax trees of sentences with redundant and missing errors are very different from those of correct sentences, as Figure 2 shows.
To understand the dependency structure of an input sentence better, we introduce the Graph Convolution Network (GCN) (Kipf and Welling, 2016; Marcheggiani and Titov, 2017). Figure 3 shows our BERT-GCN-CRF model architecture. We will explain each part in detail.
Word Dependency We split the input sentence into words and obtain the dependency relation of each word. As BERT acts on the character level in Chinese, we add extra dependency edges from one word to all of the characters of that word.

Graph Convolution Network The multi-layer GCN network accepts the high-level character information obtained by the BERT model and the adjacency matrix of the dependency tree. The convolution operation adopted for each layer is:
\( f(A, H_l) = A H_l W_l^g \quad (1) \)
where \( W_l^g \in \mathbb{R}^{D \times D} \) is a trainable matrix for the l-th layer, \( A \) is the adjacency matrix of the dependency tree, and \( H_l = (h_1, h_2, \ldots, h_n) \) is the hidden state of the characters. Words use the same input representation in the network to indicate the dependency relations of the characters.
Accumulated Output After the graph convolution network, we concatenate the representation \( H_l \) of the l-th layer with the BERT hidden state and pass the result through a linear classifier, whose output serves as the input of the CRF layer:
\( V = \mathrm{Linear}(H_0 \oplus H_l) \quad (2) \)
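To make Equations (1) and (2) concrete, below is a minimal PyTorch sketch of the GCN module. The class, method, and variable names are our own illustration, not the authors' released code, and we assume ReLU as the layer nonlinearity since the paper does not specify one.

```python
import torch
import torch.nn as nn

class DependencyGCN(nn.Module):
    """Multi-layer graph convolution over the dependency tree (Eq. 1),
    followed by the accumulated output of Eq. 2."""

    def __init__(self, hidden_dim: int, num_labels: int,
                 num_layers: int = 2, dropout: float = 0.2):
        super().__init__()
        self.gcn_layers = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim, bias=False) for _ in range(num_layers)]
        )
        self.dropout = nn.Dropout(dropout)
        # Linear classifier over [H_0 ; H_L], producing the CRF emission scores V.
        self.emission = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, h0: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h0:  (batch, seq_len, hidden_dim) character states from BERT
        # adj: (batch, seq_len, seq_len) adjacency matrix of the dependency tree
        h = h0
        for layer in self.gcn_layers:
            # Eq. (1): f(A, H_l) = A H_l W_l^g
            h = self.dropout(torch.relu(layer(torch.bmm(adj, h))))
        # Eq. (2): V = Linear(H_0 concatenated with H_L)
        return self.emission(torch.cat([h0, h], dim=-1))
```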
Figure 4: The structure of the multi-task learning.

CRF Layer A CRF layer is introduced to predict the sequence tag for each token, where \( X \), \( Y \), and \( \hat{Y} \) denote the input sequence, the ground-truth tag sequence, and an arbitrary label sequence, \( V \) denotes the emission scores, and \( A \) is the transition score matrix of the CRF layer. The loss function is calculated as:
\( \mathrm{Score}(X, Y) = \sum_{i=0}^{n} A_{y_i, y_{i+1}} + \sum_{i=1}^{n} V_{i, y_i} \quad (3) \)

\( P(Y \mid X) = \dfrac{\exp(\mathrm{Score}(X, Y))}{\sum_{\hat{Y}} \exp(\mathrm{Score}(X, \hat{Y}))} \quad (4) \)

\( \mathrm{Loss}_{sl} = -\log P(Y \mid X) \quad (5) \)
We use Viterbi decoding (Huang et al., 2015) to infer the answers.
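As a worked illustration of the gold-path score in Equation (3), the following sketch sums the transition and emission scores for a single sentence. It omits batching, START/STOP transitions, and the partition function of Equation (4), which is normally computed with the forward algorithm; the function and tensor names are hypothetical.

```python
import torch

def gold_path_score(emissions: torch.Tensor, transitions: torch.Tensor,
                    tags: torch.Tensor) -> torch.Tensor:
    """Eq. (3) for one sentence: sum of emission scores V[i, y_i]
    and transition scores A[y_i, y_{i+1}] along the gold tag path.

    emissions:   (seq_len, num_labels) scores V from the linear classifier
    transitions: (num_labels, num_labels) CRF transition matrix A
    tags:        (seq_len,) gold label indices
    """
    emit = emissions[torch.arange(len(tags)), tags].sum()
    trans = transitions[tags[:-1], tags[1:]].sum()
    return emit + trans
```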
Multi-task
Most previous works trained their models with sequence tags only (Yang et al., 2017; Li and Qi, 2018; Fu et al., 2018). We utilize not only the tags but also the correct sentences during the training process. Correct sentences are important for providing better representations in the hidden state. Moreover, with the correct sentences, the model can have a better understanding of the original meaning of the input sentence. Therefore, we introduce a seq2seq task (Sutskever et al., 2014; Vaswani et al., 2017), treating the training process as multi-task learning. As shown in Figure 4, the sequence labeling model is the encoder in our structure, combined with transformer decoders to predict the ground-truth sentence. The sequence labeling loss and the seq2seq loss are combined by a hyper-parameter w:

\( \mathrm{Loss} = w \cdot \mathrm{Loss}_{sl} + (1 - w) \cdot \mathrm{Loss}_{seq2seq} \quad (6) \)

During the inference phase, we use the sequence labeling module to predict the answers.
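A minimal sketch of one training step under the combined objective of Equation (6). Here `encoder_crf` and `seq2seq_decoder` are hypothetical callables standing in for the two task heads (each assumed to return a scalar loss), and w = 0.9 follows the hyper-parameter reported in the experiments.

```python
import torch

def training_step(batch, encoder_crf, seq2seq_decoder, optimizer, w: float = 0.9):
    """One multi-task update: CRF sequence labeling loss plus seq2seq loss (Eq. 6)."""
    loss_sl = encoder_crf(batch["input_ids"], batch["tags"])
    loss_seq2seq = seq2seq_decoder(batch["input_ids"], batch["correct_ids"])
    loss = w * loss_sl + (1.0 - w) * loss_seq2seq
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```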
Ensemble Mechanism
To take advantage of the predictions from multiple error detection models, we employ a two-stage voting ensemble mechanism.
In the first stage, predictions from multiple models are utilized to distinguish the correct sentences from the sentences with grammatical errors. Specifically, we label a sentence as correct when fewer than θ_det models detect errors in it.

In the second stage, edit-level voting is applied to the predictions for the sentences with grammatical errors. We only keep edits that appear in the predictions of more than θ_edit models.

In the experiments, we use grid search to choose θ_det and θ_edit according to the performance on the validation data.
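The two-stage voting can be sketched as follows, assuming each model's prediction is represented as a set of edits (an empty set meaning the model detected no error); the edit encoding is our own illustration.

```python
from collections import Counter
from typing import List, Set, Tuple

# An edit is represented here as (start, end, error_type) — an illustrative encoding.
Edit = Tuple[int, int, str]

def ensemble(predictions: List[Set[Edit]], theta_det: int, theta_edit: int) -> Set[Edit]:
    # Stage 1: sentence-level vote — label the sentence correct when fewer
    # than theta_det models detected any error in it.
    num_detecting = sum(1 for edits in predictions if edits)
    if num_detecting < theta_det:
        return set()
    # Stage 2: edit-level vote — keep edits predicted by more than theta_edit models.
    votes = Counter(edit for edits in predictions for edit in edits)
    return {edit for edit, count in votes.items() if count > theta_edit}
```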
Correction
For the selection (S) and missing (M) errors, we introduce two methods to generate corrections.
In the first method, we insert mask tokens into the sentence and use BERT to generate the correction by replacing the mask tokens one by one in an autoregressive style. In the experiments, we insert 1 to 4 mask tokens to cover most of the cases and adopt the beam-search algorithm to reduce the search complexity.
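Below is a greedy (beam size 1) sketch of this mask-filling procedure using the HuggingFace Transformers API. The checkpoint name and the left-to-right filling order are assumptions for illustration, whereas the actual system applies beam search over 1 to 4 inserted masks.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

def fill_masks_greedy(text_with_masks: str) -> str:
    """Replace [MASK] tokens one at a time, left to right — a greedy variant of
    the autoregressive mask filling described above."""
    input_ids = tokenizer(text_with_masks, return_tensors="pt")["input_ids"]
    while (input_ids == tokenizer.mask_token_id).any():
        with torch.no_grad():
            logits = model(input_ids).logits
        pos = (input_ids == tokenizer.mask_token_id).nonzero()[0]  # first remaining mask
        input_ids[pos[0], pos[1]] = logits[pos[0], pos[1]].argmax()
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

# Example: insert one mask for a suspected missing word.
# print(fill_masks_greedy("我[MASK]喜欢苹果"))
```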
In the second method, we generate the candidates with a seq2seq model trained by mapping the erroneous sentences to the correct sentences. According to the detection result, we keep generating the next characters until the correct character appears within the beam search, and then replace the incorrect span.
Chinese Spelling Check
The Chinese Spelling Check (CSC) models are utilized to handle spelling errors. We combine the results from a rule-based checker and a BERT-based spelling checker learned from the CSC data (Bao et al., To appear). The rule-based checker is good at handling non-word errors. The BERT-based checker treats the CSC task as a sequence labeling problem and is good at handling real-word errors. The corrections are then segmented and aligned with the input sentences to obtain the edited results at the word level. As the CSC models show high precision on the validation data, we treat the spelling errors as word selection errors and directly merge the CSC results into the detection and correction results for our final submissions.
Experiments
Data and Experiment Settings
We trained our models on the CGED 2015, 2016, 2017, and 2018 training data and used pairs of erroneous sentences
and correct sentences for the seq2seq training, without extra data. We used the CGED 2018 test set as our validation set. We introduced the BIOES scheme (Ratinov and Roth, 2009) for tagging. The Language Technology Platform (LTP) (Che et al., 2010) was used to obtain the dependency trees. The hyper-parameters were selected according to the performance on the validation data under the official metrics. For the GCN model, the hidden vector size was 256 with 2 layers. The batch size, learning rate, and GCN dropout were set to 32, 1e-5, and 0.2, respectively. For the multi-task model, the batch size, learning rate, and w were set to 32, 3e-5, and 0.9, respectively.
Transformer decoder parameters are initialized from the BERT parameters as much as possible.
Validation Results
We use BERT-CRF (base) and StructBERT-CRF (large) as our baseline models. The results of the different methods are listed in Table 1. StructBERT-CRF (large) outperforms the BERT-CRF (base) model by obtaining a significantly better recall rate at all levels.

Both the GCN and multi-task approaches achieve improved performance over the baseline model at the identification level and position level. Thus, we select the StructBERT-GCN-CRF and StructBERT-CRF + multi-task models for the ensemble.
To obtain diverse single models for the ensemble, we trained 38 StructBERT-GCN-CRF models and 65 StructBERT-CRF + multi-task models with different random seeds and hyper-parameters. As shown in Table 1, the proposed ensemble mechanism achieves an obvious improvement over the single models. We also evaluated the contribution of the GCN network on the redundant and missing error types; the results in Table 3 show the effectiveness of the BERT-GCN-CRF model in resolving redundant and missing errors.
Testing Results
For the final submission, we submitted three results from different strategies: (1) single best model with correction;
(2) ensemble model with correction; (3) ensemble model with correction and CSC.
As shown in Table 2, our system achieves the second-highest F1 scores at the identification level and position level, with balanced precision and recall, and the highest recall rate on the top-3 correction. One reason for the detection gap is that an erroneous sentence can be modified in multiple ways, and the modification granularity is difficult to control.

Most of the sentences in our training data contain grammatical errors, and the ensemble mechanism is tuned based on the F1 score on the validation data. These factors hurt the precision at the detection level and raise the false positive rate.
Conclusion
This article describes our system for the CGED shared task. We proposed two approaches, the BERT-GCN-CRF model and multi-task learning, to improve the baseline model for detecting grammatical errors. We also designed three approaches, including a masked language model, a seq2seq model, and spelling check, to correct these errors. We achieved first place in the recall rate of the top-3 correction and the second-highest F1 scores at the identification level and position level.
Figure 1: A sample of the training data.

Figure 2: The different structures of the syntax tree for an erroneous sentence and the corresponding correct sentence.

Figure 3: The structure of the BERT-GCN-CRF model.
Table 1: The results of the single models and the ensemble model on the validation dataset.

Table 2: Final results on the official evaluation testing data. "Run #1" represents the ensemble model with correction, "Run #2" the single best model with correction, and "Run #3" the ensemble model with correction and CSC. "Top 1" reports the highest F1 score with its precision and recall at the different levels. Each level reports precision (Pre), recall (Rec), and F1.

Model   Detection          Identification     Position           Correction         Top-3 Correction
        Pre/Rec/F1         Pre/Rec/F1         Pre/Rec/F1         Pre/Rec/F1         Pre/Rec/F1
Run #1  92.8/84.4/88.4     72.2/61.2/66.3     43.7/33.7/38.1     13.6/11.0/12.1     7.7/18.4/10.8
Run #2  91.6/86.4/89.0     71.9/54.5/62.0     42.4/27.3/33.2     18.9/12.5/15.0     9.6/17.7/12.5
Run #3  92.5/86.0/89.1     72.3/62.9/67.3     44.3/36.1/39.8     17.8/15.3/16.5     9.3/22.8/13.3
Top 1   85.7/97.6/91.2     73.6/62.1/67.4     47.2/35.4/40.4     28.5/14.2/18.9     32.2/13.3/18.9
Table 3: The position-level performance of the BERT-CRF and BERT-GCN-CRF models on the validation data. "R" denotes the redundant error and "M" denotes the missing error.

Model         Type  Precision  Recall  F1
BERT-CRF      R     42.6       28.3    34.0
BERT-GCN-CRF  R     36.2       34.8    35.4
BERT-CRF      M     36.3       26.6    30.7
BERT-GCN-CRF  M     32.8       30.0    31.7
Zuyi Bao, Chen Li, and Rui Wang. To appear. Chunk-based Chinese spelling check with global optimization. In Proceedings of the EMNLP 2020.
Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. LTP: A Chinese language technology platform. In COLING 2010, 23rd International Conference on Computational Linguistics, Demonstrations Volume, 23-27 August 2010, Beijing, China.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.
Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, and Ting Liu. 2018. Chinese grammatical error diagnosis using statistical and prior knowledge driven features with probabilistic ensemble enhancement. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 52-59, Melbourne, Australia. Association for Computational Linguistics.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging.
Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks.
Changliang Li and Ji Qi. 2018. Chinese grammatical error diagnosis based on policy gradient LSTM model. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 77-82, Melbourne, Australia. Association for Computational Linguistics.
Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019. StructBERT: Incorporating language structures into pre-training for deep language understanding.
Yi Yang, Pengjun Xie, Jun Tao, Guangwei Xu, Linlin Li, and Luo Si. 2017. Alibaba at IJCNLP-2017 task 1: Embedding grammatical features into LSTMs for Chinese grammatical error diagnosis task. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 41-46, Taipei, Taiwan. Asian Federation of Natural Language Processing. |
||
248,779,951 | Named Entity Recognition for Cancer Immunology Research Using Distant Supervision | Cancer immunology research involves several important cell and protein factors. Extracting the information of such cells and proteins and the interactions between them from text are crucial in text mining for cancer immunology research. However, there are few available datasets for these entities, and the amount of annotated documents is not sufficient compared with other major named entity types. In this work, we introduce our automatically annotated dataset of key named entities, i.e., T-cells, cytokines, and transcription factors, which engages the recent cancer immunotherapy. The entities are annotated based on the UniProtKB knowledge base using dictionary matching. We build a neural named entity recognition (NER) model to be trained on this dataset and evaluate it on a manually-annotated data. Experimental results show that we can achieve a promising NER performance even though our data is automatically annotated. Our dataset also enhances the NER performance when combined with existing data, especially gaining improvement in yet investigated named entities such as cytokines and transcription factors. | [
1222212,
52967399,
6628106,
7985741,
52118895,
53080784,
6042994
] | Named Entity Recognition for Cancer Immunology Research Using Distant Supervision
May 26, 2022
Hai-Long Trieu
Artificial Intelligence Research Center (AIRC)
National Institute of Advanced Industrial Science and Technology (AIST)
Japan
National Centre for Text Mining
University of Manchester
United Kingdom
Makoto Miwa makoto-miwa@toyota-ti.ac.jp
Artificial Intelligence Research Center (AIRC)
National Institute of Advanced Industrial Science and Technology (AIST)
Japan
Toyota Technological Institute
Japan
Sophia Ananiadou sophia.ananiadou@manchester.ac.uk
National Centre for Text Mining
University of Manchester
United Kingdom
Named Entity Recognition for Cancer Immunology Research Using Distant Supervision
Proceedings of the BioNLP 2022 workshop
Dublin, Ireland, May 26, 2022.
Cancer immunology research involves several important cell and protein factors. Extracting the information of such cells and proteins and the interactions between them from text are crucial in text mining for cancer immunology research. However, there are few available datasets for these entities, and the amount of annotated documents is not sufficient compared with other major named entity types. In this work, we introduce our automatically annotated dataset of key named entities, i.e., T-cells, cytokines, and transcription factors, which engages the recent cancer immunotherapy. The entities are annotated based on the UniProtKB knowledge base using dictionary matching. We build a neural named entity recognition (NER) model to be trained on this dataset and evaluate it on a manually-annotated data. Experimental results show that we can achieve a promising NER performance even though our data is automatically annotated. Our dataset also enhances the NER performance when combined with existing data, especially gaining improvement in yet investigated named entities such as cytokines and transcription factors.
Introduction
Cancer immunology research has a central focus on T lymphocytes (T-cells), which engage the immune system in fighting against cancer (Luckheeram et al., 2012;Waldman et al., 2020;Kim et al., 2021). The development of T-cells can be guided by cytokines and transcription factors (Hosokawa and Rothenberg, 2018). Transcription factors (TF) are nuclear proteins that bind specific gene sequences and involved in decision-making processes during T-cell differentiation (Naito et al., 2011;Xia et al., 2019). Meanwhile, cytokines are signaling molecules secreted and sensed by immune and other cell types (Kveler et al., 2018). Extracting T-cell, cytokine, and TF entities and the interactions between them can be crucial for text mining in cancer immunology research.
However, there are few existing datasets containing these entities for training text mining models. At the core of text mining tasks, the NER task also lacks such datasets for training NER models to detect these named entities, which may limit the development of text mining systems in this field of cancer immunology research. There is an existing T-cell related named entity dataset called TCRE (Czech and Hammerbacher, 2019), but the amount of annotated data is limited to only 89 documents. Several knowledge bases related to the immune system have been proposed, such as immuneXpresso (Kveler et al., 2018) and DES-Tcell (AlSaieedi et al., 2021), which contain cell type and cytokine information, but they have not utilized or evaluated modern NER models on these named entities.
In this paper, as a step to fill these gaps and promote the development of text mining systems on these named entities in cancer immunology research articles, we present our automatically annotated dataset containing named entities of T-cell, cytokine, and TF, which are important for mining and understanding cancer immunology research articles. The entities in the dataset are automatically annotated using dictionary matching based on the UniProtKB (UniProt-Consortium, 2021), a knowledgebase of protein sequences with functional information. 1 From the annotations of cytokine and TF entries in UniProtKB, a dictionary is constructed to annotate cytokine and TF named entities in their referenced PubMed articles. Additionally, we utilized the existing JNLPBA corpus, which contains manually annotated protein named entities, to annotate cytokine and TF entities. We build a NER model based on the span-based model with pre-trained BERT. We trained the NER model on our automatically annotated dataset and evaluated the model on an existing manually annotated T-cell related named entity dataset, TCRE (Czech and Hammerbacher, 2019). We achieve a promising result: the NER model trained on our automatically annotated data attains a performance only slightly lower than a supervised NER model trained on manually annotated data. Furthermore, our data enhances NER performance when combined with the existing manually annotated data.

Table 1: UniProtKB entries and annotated data

Item                  cytokine  TF
# UniProtKB entries   1,001     3,418
# Dictionary size     6,859     20,055
# Collected articles  585       1,903
Approach
We present our datasets containing three named entity types: cell_type, cytokine, and transcription factor (TF). The datasets are automatically annotated using dictionary matching with the entries in the UniProtKB in two different ways.
UniProtKB
Cytokine and TF queries From the UniProtKB, we obtain entries by querying cytokine. We filtered the options to keep only Reviewed annotations (manually annotated, added by the expert biocuration team) and the Human organism. We did the same for transcription factor. These settings are equivalent to the following queries.
• cytokine AND reviewed:yes AND organism:"Homo sapiens (Human) [9606]".
• transcription factor AND reviewed:yes AND organism:"Homo sapiens (Human) [9606]".

From the returned entries, we build a dictionary of cytokine/TF protein and gene names and collect their referenced PubMed articles, keeping abstracts that contain a large number (≥ k) of cytokine/TF protein and gene names (we set k = 20, based on several preliminary experiments, to remove abstracts containing few annotations). We present the statistics of the UniProtKB entries and the related annotated data in Table 1.
Automatically Annotated Datasets
We constructed two automatically annotated datasets using the UniProtKB-dictionary. The statistics for automatically annotated datasets are presented in Table 2.
Knowledge-based Annotation (KB-T-cell)
Annotating cytokine and TF From the UniPro-tKB dictionary, we identify the position of each name in the collected articles by strict text matching to annotate cytokine and TF named entities.
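A minimal sketch of this strict matching step, assuming the dictionary maps each surface name to its entity type (the function and encoding are illustrative, not the authors' implementation):

```python
import re
from typing import Dict, List, Tuple

def annotate_by_dictionary(text: str, dictionary: Dict[str, str]) -> List[Tuple[int, int, str]]:
    """Strict text matching: return (start, end, type) spans for every dictionary
    name found in the text. `dictionary` maps a surface name to 'cytokine' or 'TF';
    names are matched longest-first on word boundaries."""
    spans = []
    for name in sorted(dictionary, key=len, reverse=True):
        for match in re.finditer(r"\b" + re.escape(name) + r"\b", text):
            spans.append((match.start(), match.end(), dictionary[name]))
    return spans
```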
Annotating cell_type We found that JNLPBA (Collier and Kim, 2004) is a large manually annotated dataset for NER, which contains named entities of cell_type, protein, etc. Therefore, we utilized the JNLPBA data to train a NER model to predict cell_type named entities in the collected articles. We build a neural-based NER method with a span-based approach and a pre-trained BERT model, which we present in §3. These cell_type entities are combined with the cytokine and TF named entities; we name the resulting dataset KB-T-cell.
Dictionary-based Re-annotation (Dic-T-cell)
Since the JNLPBA dataset contains protein entities, and cytokines and TFs are proteins, we utilized the annotated protein names in JNLPBA to annotate cytokine and TF entities. Specifically, if an annotated protein name in JNLPBA is included in the UniProtKB dictionary, we re-annotate it as cytokine or TF, correspondingly. We ignored documents that do not contain any matched cytokine/TF entity. We name this dataset Dic-T-cell.
NER model
We explain the NER model to be trained on the annotated datasets. We build a neural-based NER model using a span-based method (Lee et al., 2017;Luan et al., 2018) and finetuned pre-trained BERT (Devlin et al., 2019). Specifically, each sentence is split into sub-word sequences, which are passed through the BERT layer for contextual representations. Then, for each span (i.e., a sequence of continuous words in a sentence), its representation is calculated by concatenating the representations of the first, last, and averaged sub-words of the span, which follows (Sohrab and Miwa, 2018a; Trieu et al., 2020). Finally, each span representation is passed to classifiers to predict named entity types for each span.
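A minimal sketch of the span representation and classifier described above (tensor shapes and module names are our own assumptions):

```python
import torch
import torch.nn as nn
from typing import List, Tuple

def span_representation(h: torch.Tensor, start: int, end: int) -> torch.Tensor:
    """h: (seq_len, hidden) BERT sub-word states; the span covers tokens [start, end].
    Concatenates the first, last, and averaged sub-word vectors of the span."""
    first, last = h[start], h[end]
    avg = h[start:end + 1].mean(dim=0)
    return torch.cat([first, last, avg], dim=-1)  # (3 * hidden,)

class SpanClassifier(nn.Module):
    def __init__(self, hidden: int, num_types: int):
        super().__init__()
        # num_types includes a "non-entity" label for spans that are not mentions.
        self.out = nn.Linear(3 * hidden, num_types)

    def forward(self, h: torch.Tensor, spans: List[Tuple[int, int]]) -> torch.Tensor:
        reps = torch.stack([span_representation(h, s, e) for s, e in spans])
        return self.out(reps)  # logits over entity types, one row per span
```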
Experiments
Data
We used our datasets KB-T-cell and Dic-T-cell to train NER models using the NER model introduced in §3 and evaluated NER performance.
TCRE For evaluation data, we employed TCRE (Czech and Hammerbacher, 2019), an existing manually annotated dataset that contains 89 documents with cell_type, cytokine, and TF named entities. We utilized this data for training supervised NER models and for evaluation. The original TCRE dataset contains a mixture of abstract and full-text documents. For the scope of this paper, we aim at utilizing only abstracts, from both UniProtKB's references and the JNLPBA data. Therefore, we used only the abstract documents and the abstract sections of the full-text documents from the TCRE data. The data statistics of the datasets are presented in Table 2.
Settings
Cross validation We conducted k-fold cross validation evaluation on the TCRE dataset. Since the TCRE data size is quite small, we set k = 3 to ensure a reasonable amount of data in the test set. For each fold, we further randomly split the training set into train/development sets so that we can tune hyper-parameters to get the best models on the development set. Finally, all of our reported results are based on the TCRE test set in each fold.
NER training settings Our model was implemented in PyTorch (Paszke et al., 2017). We used the BERT model from the PyTorch Pretrained BERT repository 2 as our BERT layer. We employed the pre-trained SciBERT model (Beltagy et al., 2019), trained on large-scale biomedical texts. The model is trained on multiple GPUs on the AI Bridging Cloud Infrastructure (ABCI) 3. We train the model with the Adam optimizer (Kingma and Ba, 2015), gradient clipping, dropout, and L2 regularization. The model is trained with early stopping, and the training mini-batch size is set to 16.
Evaluation settings We compared the following NER models, which mostly differ in the training data settings.
1. Matching-NER: we created a baseline using dictionary matching. The dictionary is built from the entity's texts of the JNLPBA training data (for cell_type) and the UniProtKBdictionary for cytokine and TF.
2. Supervised-NER: we used the training set of the TCRE data to train the NER model.

3. KB-NER, Dic-NER, KB-Dic-NER: we train the NER models on our annotated datasets: KB-T-cell, Dic-T-cell, and the merged KB-T-cell and Dic-T-cell, respectively.

4. Enhanced-KB-NER, Enhanced-Dic-NER, Enhanced-KB-Dic-NER: we merge the training set of the TCRE with the KB-T-cell, Dic-T-cell, and the merged KB-T-cell and Dic-T-cell, respectively, to train NER models.

The results are reported based on the commonly used micro-averaged precision (P), recall (R), and F-score (F) metrics at the entity level.
Results
We compare the results of different NER models on each data fold in Table 3.
Enhancement Using our automatically annotated dataset, we achieve the best performance, with 2-5 point improvements in F-score (Enhanced-KB-NER) in comparison with the Supervised-NER in all of the data folds.
Supervised vs. unsupervised When training NER models on our automatically annotated datasets (KB-NER, Dic-NER, KB-Dic-NER), the performance is lower than that of the Supervised-NER, which is trained on time-consuming manually annotated data. The performance drop is about 5-7 points in F-score, which is acceptable considering that our datasets are automatically annotated. We can further improve the quality of our datasets in future work, for example by filtering noisy annotations.
Dictionary matching Since our automatically annotated data is based on the dictionary built from the UniProtKB and JNLPBA, one may ask whether using only the dictionary with the same vocabulary is already enough. The results of KB-NER and Dic-NER show that training on our automatically annotated data improves over the Matching-NER by 11-15 points.

KB vs Dic Table 3 also shows that the NER models based on the KB-T-cell (KB-NER, Enhanced-KB-NER) obtain higher performance than those based on the Dic-T-cell (Dic-NER, Enhanced-Dic-NER). When combining these two datasets, the performance decreased even though the data size of the Dic-T-cell is almost double that of the KB-T-cell, which indicates that we need to investigate a better combination. Another possible direction is filtering noisy annotations of the Dic-T-cell.
Analyses and Discussions
We further investigate the detailed performance on each entity type: cell_type, cytokine, and TF. The results from Table 4 show that the Enhanced-KB-NER achieves improvements on all entity types except for the TF entity type in Fold-3.
Comparing the performance on the entity types between the Supervised-NER and the enhanced models, the CT type gains improvement (3-5 points) in most cases. The reason may come from the quality of the CT annotations in the large manually annotated JNLPBA data. Meanwhile, the improvement for the CY type is 3-6 points, and the improvement for TF is 11-22 points. When training only on our automatically annotated datasets (KB-NER, Dic-NER), we still obtain high performance for the CT type, and we obtain reasonable performance on cytokines (lower than the Supervised-NER but much better than the Matching-NER).
Limitation The performance on CY and TF from KB-NER and Dic-NER is low in most cases: there are folds with no correct TF prediction at all. For CY, the performance is also low for Dic-NER (3% to 13% F-score) and only slightly better for KB-NER (18% to 31% F-score). These results show the challenge of extracting CY and TF entities based only on our automatically annotated corpus. This work is our first investigation into utilizing the UniProtKB and the existing JNLPBA corpus for our research goal of extracting T-cell related entities, and we accept this limitation in this first version. Further investigation and improvement, especially for the CY and TF types, are required in future work.
Future work
We would like to improve the performance of CY and TF. We also plan to conduct the evaluation not only on the TCRE task but other NER tasks such as JNLPBA (Collier and Kim, 2004), NCBI (Dogan et al., 2014), and BC5CDR (Li et al., 2016). Additionally, we intend to extend our corpus for other tasks such as relation and event extraction on these T-cell named entities.
Related Work
Distant supervision methods for NER have been investigated in several previous works. Shang et al. (2018) revised the LSTM-CRF NER model (Lample et al., 2016) and utilized the MeSH database for chemical and disease entities. Other methods have been proposed to reduce noisy annotations for Chinese NER (Yang et al., 2018) or the general-domain OntoNotes (Liang et al., 2020; Meng et al., 2021).
The span-based method is used to build our NER model in this work. The method was proposed and employed in previous work (Lee et al., 2017; Luan et al., 2018; Sohrab and Miwa, 2018b; Trieu et al., 2020), which has shown advantages in extracting nested or contiguous text sequences and has been successful in many sequence labeling tasks such as NER and coreference resolution.
Immunotherapy has achieved remarkable advances in recent years and can be an important cancer treatment in the future (Falzone et al., 2018; Kruger et al., 2019). However, there is little related work and there are few annotated datasets for text mining in this domain. immuneXpresso (Kveler et al., 2018) is a text mining engine related to the mammalian immune system, in which NER is evaluated on cells and cytokines using dictionary matching. DES-Tcell (AlSaieedi et al., 2021) is a knowledgebase containing concepts of T-cells and other types of drugs, diseases, genes, etc. in PubMed documents. However, it does not utilize novel text mining methods in the creation and evaluation of the extracted data, including for NER tasks.
For the datasets used in our work, TCRE is manually annotated by Czech and Hammerbacher (2019) and contains cell_type, cytokine, and TF entities, which are close to our goal, and we use it for our evaluation. A limitation of the TCRE is that it contains only 89 documents, which is insufficient to train powerful NER models. Therefore, our annotation method in this work can advance the task of extracting T-cell named entities. JNLPBA (Collier and Kim, 2004) contains manually annotated cell_type and protein entities. Meanwhile, UniProtKB (UniProt-Consortium, 2021) is a large and useful knowledgebase containing protein sequences annotated by experts, with corresponding PubMed references. The UniProtKB and JNLPBA are leveraged to build our corpus.
Conclusion
We introduce our automatically annotated dataset for NER containing cell_type, cytokine, and TF entities, which are important in cancer immunology research, using a distant supervision method. The dataset is automatically annotated based on the entries in the UniProtKB knowledge base. We built a dictionary of the protein and gene names of cytokines and TF from the UniProtKB annotations. We then collected referenced PubMed articles and annotated these names in the texts using text matching with the dictionary entries. Additionally, we utilized the large manually annotated JNLPBA dataset, which contains cell_type and protein named entities to build our dataset. We trained NER models on our automatically annotated dataset and evaluated them on a manually annotated T-cell corpus. The results show that our automatically annotated dataset helps to improve the NER performance by extracting more named entities of cytokines and TF accurately. For future work, we plan to improve and extend our dataset to extract interactions or events related to these entities for text mining in cancer immunology research.
Table 2: Statistics of the datasets (Docs: documents; CT: cell type, CY: cytokine, TF: transcription factor).

Table 3: Comparison of the NER results of the models (the best scores are in bold).

Table 4: Results on each entity type in F-score (%). Underlined scores are higher than the Supervised-NER's.

Model                Fold-1 (CT/CY/TF)   Fold-2 (CT/CY/TF)   Fold-3 (CT/CY/TF)
Matching-NER         65.18/1.45/15.07    66.42/6.00/18.44    65.96/6.86/5.97
Supervised-NER       71.22/56.64/41.18   76.36/56.36/32.14   76.15/65.45/57.78
KB-NER               69.57/31.46/52.38   73.70/18.95/0.00    70.79/20.95/8.00
Dic-NER              72.81/3.33/0.00     79.50/5.56/3.03     76.00/13.19/0.00
KB-Dic-NER           73.62/22.54/35.29   79.21/8.33/0.00     78.06/18.69/8.00
Enhanced-KB-NER      76.32/62.50/63.77   82.16/60.66/43.48   80.65/68.91/21.74
Enhanced-Dic-NER     77.49/55.32/18.18   81.61/36.51/39.44   79.21/60.34/27.03
Enhanced-KB-Dic-NER  77.55/64.08/30.77   81.33/41.44/37.68   80.06/63.64/15.79
https://github.com/huggingface/ pytorch-pretrained-BERT/tree/34cf67fd6c 3 https://abci.ai/
AcknowledgementsThis paper is supported by the Artificial Intelligence Research Center (AIRC, Japan) and BBSRC, Japan Partnering Award, BB/P025684/1.
Ahdab AlSaieedi, Adil Salhi, Faroug Tifratene, Arwa Bin Raies, Arnaud Hungler, Mahmut Uludag, Christophe Van Neste, Vladimir B Bajic, Takashi Gojobori, and Magbubah Essack. 2021. DES-Tcell is a knowledgebase for exploring immunology-related literature. Scientific Reports, 11(1):1-11.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606-3611, Hong Kong, China. Association for Computational Linguistics.
Nigel Collier and Jin-Dong Kim. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73-78, Geneva, Switzerland. COLING.
Eric Czech and Jeff Hammerbacher. 2019. Extracting T cell function and differentiation characteristics from the biomedical literature. bioRxiv, page 643767.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171-4186.
Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: a resource for disease name recognition and concept normalization. Journal of Biomedical Informatics, 47:1-10.
Luca Falzone, Salvatore Salomone, and Massimo Libra. 2018. Evolution of cancer pharmacological treatments at the turn of the third millennium. Frontiers in Pharmacology, page 1300.
Hiroyuki Hosokawa and Ellen V Rothenberg. 2018. Cytokines, transcription factors, and the initiation of T-cell development. Cold Spring Harbor Perspectives in Biology, 10(5):a028621.
Ji-Hae Kim, Kun-Joo Lee, and Seung-Woo Lee. 2021. Cancer immunotherapy with T-cell targeting cytokines: IL-2 and IL-7. BMB Reports, 54(1):21.
Diederik P Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).
Stephan Kruger, Matthias Ilmer, Sebastian Kobold, Bruno L Cadilha, Stefan Endres, Steffen Ormanns, Gesa Schuebbe, Bernhard W Renz, Jan G D'Haese, Hans Schloesser, et al. 2019. Advances in cancer immunotherapy 2019 - latest trends. Journal of Experimental & Clinical Cancer Research, 38(1):1-11.
Ksenya Kveler, Elina Starosvetsky, Amit Ziv-Kenet, Yuval Kalugny, Yuri Gorelik, Gali Shalev-Malul, Netta Aizenbud-Reshef, Tania Dubovik, Mayan Briller, John Campbell, et al. 2018. Immune-centric network of cytokines and cells in disease context identified by computational mining of PubMed. Nature Biotechnology, 36(7):651-659.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.
Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016.
Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, and Chao Zhang. 2020. BOND: BERT-assisted open-domain named entity recognition with distant supervision. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1054-1064.
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232.
CD4+ T cells: differentiation and functions. Clinical and developmental immunology. Rishi Vishal Luckheeram, Rui Zhou, Asha Devi Verma, Bing Xia, Rishi Vishal Luckheeram, Rui Zhou, Asha Devi Verma, and Bing Xia. 2012. CD4+ T cells: differentiation and functions. Clinical and developmental immunol- ogy, 2012.
Distantlysupervised named entity recognition with noiserobust learning and language model augmented selftraining. Yu Meng, Yunyi Zhang, Jiaxin Huang, Xuan Wang, Yu Zhang, Ji Heng, Jiawei Han, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingYu Meng, Yunyi Zhang, Jiaxin Huang, Xuan Wang, Yu Zhang, Heng Ji, and Jiawei Han. 2021. Distantly- supervised named entity recognition with noise- robust learning and language model augmented self- training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10367-10378.
Transcriptional control of T-cell development. Taku Naito, Hirokazu Tanaka, Yoshinori Naoe, Ichiro Taniuchi, International immunology. 2311Taku Naito, Hirokazu Tanaka, Yoshinori Naoe, and Ichiro Taniuchi. 2011. Transcriptional control of T-cell development. International immunology, 23(11):661-668.
Automatic differentiation in pytorch. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary Devito, Zeming Lin, Alban Desmaison, Luca Antiga, Adam Lerer, NIPS-W. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS- W.
Learning named entity tagger using domain-specific dictionary. Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, Jiawei Han, 10.18653/v1/D18-1230Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsJingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named en- tity tagger using domain-specific dictionary. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 2054- 2064, Brussels, Belgium. Association for Computa- tional Linguistics.
Deep exhaustive model for nested named entity recognition. Golam Mohammad, Makoto Sohrab, Miwa, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingMohammad Golam Sohrab and Makoto Miwa. 2018a. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 2843-2849.
Deep exhaustive model for nested named entity recognition. Golam Mohammad, Makoto Sohrab, Miwa, Proceedings of EMNLP. EMNLPACLMohammad Golam Sohrab and Makoto Miwa. 2018b. Deep exhaustive model for nested named entity recognition. In Proceedings of EMNLP, pages 2843- 2849. ACL.
Deepeventmine: end-to-end neural nested event extraction from biomedical texts. Hai-Long Trieu, Thy Thy Tran, N A Khoa, Anh Duong, Makoto Nguyen, Sophia Miwa, Ananiadou, Bioinformatics. 3619Hai-Long Trieu, Thy Thy Tran, Khoa NA Duong, Anh Nguyen, Makoto Miwa, and Sophia Ananiadou. 2020. Deepeventmine: end-to-end neural nested event extraction from biomedical texts. Bioinfor- matics, 36(19):4910-4917.
UniProt: the universal protein knowledgebase in 2021. Uniprot-Consortium, Nucleic acids research. 49D1UniProt-Consortium. 2021. UniProt: the universal pro- tein knowledgebase in 2021. Nucleic acids research, 49(D1):D480-D489.
A guide to cancer immunotherapy: from T cell basic science to clinical practice. Jill M Alex D Waldman, Michael J Fritz, Lenardo, Nature Reviews Immunology. 2011Alex D Waldman, Jill M Fritz, and Michael J Lenardo. 2020. A guide to cancer immunotherapy: from T cell basic science to clinical practice. Nature Reviews Immunology, 20(11):651-668.
T cell dysfunction in cancer immunity and immunotherapy. Anliang Xia, Yan Zhang, Jiang Xu, Tailang Yin, Xiao-Jie Lu, Frontiers in immunology. 101719Anliang Xia, Yan Zhang, Jiang Xu, Tailang Yin, and Xiao-Jie Lu. 2019. T cell dysfunction in cancer im- munity and immunotherapy. Frontiers in immunol- ogy, 10:1719.
Distantly supervised ner with partial annotation learning and reinforcement learning. Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, Min Zhang, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsYaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly su- pervised ner with partial annotation learning and re- inforcement learning. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 2159-2169.
Current status and future directions of cancer immunotherapy. Hongming Zhang, Jibei Chen, Journal of Cancer. 9101773Hongming Zhang and Jibei Chen. 2018. Current sta- tus and future directions of cancer immunotherapy. Journal of Cancer, 9(10):1773. |
15,243,221 | Indonesian Dependency Treebank: Annotation and Parsing | We introduce and describe ongoing work in our Indonesian dependency treebank. We describe characteristics of the source data as well as our annotation guidelines for creating the dependency structures. Reported within are the results from the start of the Indonesian dependency treebank. | [
7637262,
10756783,
1916754,
1204756,
252796
] | Indonesian Dependency Treebank: Annotation and Parsing
Nathan Green (green@ufal.mff.cuni.cz)
Septina Dian Larasati (larasati@ufal.mff.cuni.cz)
Zdeněk Žabokrtský
Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
SIA Tilde, Vienibas gatve 75a, Riga, LV-1004, Latvia
Indonesian Dependency Treebank: Annotation and Parsing
We introduce and describe ongoing work in our Indonesian dependency treebank. We describe characteristics of the source data as well as our annotation guidelines for creating the dependency structures. Reported within are the results from the start of the Indonesian dependency treebank.
We also show ensemble dependency parsing and self-training approaches applicable to under-resourced languages using our manually annotated dependency structures. We show that for an under-resourced language, the use of tuning data for a meta classifier is more effective than using it as additional training data for individual parsers. This meta classifier creates an ensemble dependency parser and increases the dependency accuracy by 4.92% on average and 1.99% over the best individual models on average. As the data sizes grow for the under-resourced language, a meta classifier can easily adapt. To the best of our knowledge this is the first full implementation of a dependency parser for Indonesian. Using self-training in combination with our Ensemble SVM Parser, we show additional improvement. Using this parsing model we plan on expanding the size of the corpus with a semi-supervised approach: applying the parser and correcting the errors, reducing the amount of annotation time needed.
Introduction
Treebanks have been a major source for the advancement of many tools in the NLP pipeline, from sentence alignment to dependency parsers to an end product, which is often machine translation. While useful for machine learning as well as for linguistic analysis, these treebanks typically only exist for a handful of resource-rich languages. Treebanks tend to come in two linguistic forms, dependency based and constituency based, each with their own pros and cons. Dependency treebanks have been made popular by treebanks such as the Prague Dependency Treebank (Hajic, 1998) and constituency treebanks by the Penn Treebank (Marcus et al., 1993). While some linguistic phenomena are better represented in one form instead of another, the two forms can generally be transformed into one another. While many of the world's 6,000+ languages could be considered under-resourced due to a limited number of native speakers and low overall population in their countries, Indonesia is the fourth most populous country in the world with over 23 million native and 215 million non-native Bahasa Indonesia speakers. The development of language resources, treebanks in particular, for Bahasa Indonesia will have an immediate effect for Indonesian NLP.
Further development of our Indonesian dependency treebank can affect part-of-speech taggers, named entity recognizers, and machine translation systems. All of these systems have technical benefits for the 238 million native and non-native Indonesian speakers, ranging from spell checkers and improved information retrieval to improved access to more of the Web due to better page translation.
Some other NLP resources exist for Bahasa Indonesia as described in Section 2. While these are a nice start to language resources for Indonesian, dependency relations can have a positive effect on word reordering, long range dependencies, as well as anaphora resolution. Dependency relations have also been shown to be integral to deep syntactic transfer machine translation systems (Žabokrtský et al., 2008).
Related Work
There has been research on developing a rule-based Indonesian constituency parser applying syntactic structure to Indonesian sentences. It uses a rule-based approach by defining the grammar using PC-PATR (Joice, 2002). There was also research that applied the above constituency parser to create a probabilistic parser (Gusmita and Manurung, 2008). To the best of our knowledge no dependency parser has been created and publicly released for Indonesian.
Semi-supervised annotation has been shown to be a useful means to increase the amount of annotated data in dependency parsing (Koo et al., 2008), however typically for languages which already have plentiful annotated data such as Czech and English. Self-training was also shown to be useful in constituent parsing as a means of seeing known tokens in new contexts (McClosky et al., 2008). Our work differs in that we examine the effect of collaborative ensemble models on the self-training loop, as well as in starting with a very reduced training set of 100 sentences. The use of model agreement features for our SVM classifier is useful in its approach since under-resourced languages will not need any additional analysis tools to create the classifier.
Ensemble learning (Dietterich, 2000) has been used for a variety of machine learning tasks and recently has been applied to dependency parsing in various ways and with different levels of success. (Surdeanu and Manning, 2010; Haffari et al., 2011) showed a successful combination of parse trees through a linear combination of trees with various weighting formulations. Parser combination with dependency trees has been examined in terms of accuracy (Sagae and Lavie, 2006; Sagae and Tsujii, 2007; Zeman and Žabokrtský, 2005). POS tags were used in parser combination in (Hall et al., 2007) for combining a set of Malt Parser models with an SVM classifier with success; however, we believe our work is novel in its use of an SVM classifier solely on model agreements.
Data Description
The treebank that we use in this work is a collection of manually annotated Indonesian dependency trees. It consists of 100 Indonesian sentences with 2705 tokens and a vocabulary size of 1015 unique tokens. The sentences are taken from the IDENTIC corpus (Larasati, 2012). The raw versions of the sentences were originally taken from the BPPT articles on economy from the PAN localization (PAN, 2010) project output. The treebank uses the Part-Of-Speech tags (POS tags) provided by MorphInd (Larasati et al., 2011). Since the MorphInd output is ambiguous, the tags are also disambiguated and corrected manually, including the unknown POS tag. The distribution of the POS tags can be seen in Table 1.
The annotation is done using the visual tree editor TrEd (Pajas, 2000) and stored in CoNLL format (Buchholz and Marsi, 2006) for compatibility with several dependency parsers and other NLP tools.
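To make the storage format concrete, the following is a minimal sketch of a reader for such a file, assuming the standard CoNLL-X layout of ten tab-separated columns per token with blank lines between sentences; the function name and the subset of columns kept are our own choices, not taken from the paper.

def read_conll(path):
    sentences, tokens = [], []
    for line in open(path, encoding="utf-8"):
        line = line.rstrip("\n")
        if not line:  # a blank line ends the current sentence
            if tokens:
                sentences.append(tokens)
                tokens = []
            continue
        # CoNLL-X columns: ID FORM LEMMA CPOSTAG POSTAG FEATS HEAD DEPREL (PHEAD PDEPREL);
        # in this unlabeled treebank, DEPREL may simply be an underscore
        idx, form, lemma, cpos, pos, feats, head, deprel = line.split("\t")[:8]
        tokens.append({"id": int(idx), "form": form, "lemma": lemma,
                       "pos": pos, "head": int(head), "deprel": deprel})
    if tokens:
        sentences.append(tokens)
    return sentences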
Annotation Description
Currently the annotation provided in this treebank is the unlabeled relationship between the head and its dependents. We follow general annotation guidelines as follows:
• The main head node of the sentence is attached to the ROOT node.
• Like the main head node, the sentence separator punctuation is also attached to the ROOT node.
• The Subordinate Conjunction (with POS tag 'S-') nodes are attached to their subordinating clause head nodes. The subordinating clause head nodes are attached to their main clause head nodes.
• The Coordination Conjunctions (with POS tag 'H-') nodes, that connect between two phrases (using the conjunction or commas), are attached to the first phrase head node. The second phrase head nodes are attached to the conjunction node. It follows this manner when there are more than two phrases.
• The Coordination Conjunctions (with POS tag 'H-') nodes, that connect between two clauses (using the conjunction or commas), are attached to the first clause head node. The second clause head nodes are attached to the conjunction node. It follows this manner when there are more than two clauses.
• The prepositions nodes with the POS tag 'R-' are the head of Prepositional Phrases (PP).
• In Quantitative Numeral Phrases such as "3 thousand", 'thousand' node will be the head and '3' node attached to 'thousand' node.
In general, the trees have the verb of the main clause as the head of the sentence, with the Subject and the Object attached to it. In most cases, the leftmost noun token is the noun phrase head, since most Indonesian noun phrases follow a Head-Modifier construction.
Figure 1: Dependency tree example for the sentence "He said that the rupiah stability protection is used so that there is no bad effect in economy."

5 Ensemble SVM Dependency Parsing

5.1 Methodology

5.1.1 Process Flow

When dealing with small data sizes it is often not enough to show a simple accuracy increase. This increase can be very reliant on the training/tuning/testing data splits as well as the sampling of those sets. For this reason our experiments are conducted over 18 training/tuning/testing data split configurations, which enumerate possible configurations for testing sizes of 5%, 10%, 20% and 30%. For each configuration we randomly sample without replacement the training/tuning/testing data and rerun the experiment 100 times, each time sampling new sets for training, tuning, and testing. These 1800 runs, each on a different sample, allow us to better show the overall effect on the accuracy metric as well as the statistically significant changes as described in Section 5.1.5. Figure 2 shows this process flow for one run of this experiment.
Parsers
Dependency parsing systems are often optimized for English or other major languages. This optimization, along with morphological complexities, leads other languages toward lower accuracy scores in many cases. The goal here is to show that while the corpus is not the same in size as most CoNLL data, a successful dependency parser can still be trained from the annotated data and provide semi-supervised annotation to help increase the corpus size.
Transition-based parsing creates a dependency structure that is parameterized over the transitions used to create a dependency tree. This is closely related to shift-reduce constituency parsing algorithms. The benefit of transition-based parsing is the use of greedy algorithms which have a linear time complexity. However, due to the greedy algorithms, longer arc parses can cause error propagation across each transition (Kübler et al., 2009). We make use of Malt Parser (Nivre et al., 2007), which in the CoNLL shared tasks was often tied with the best performing systems.
For the experiments in this paper we only use Malt Parser, but we use different training parameters to create various parsing models. For Malt Parser we use a total of 7 model variations as shown in Table 2.
Ensemble SVM System
We train our SVM classifier using only model agreement features. Using our tuning set, for each correctly predicted dependency edge, we create N² features, where N is the number of parsing models. We do this for each model which predicted the correct edge in the tuning data. So for N = 3 the first feature would be a 1 if model 1 and model 2 agreed, feature 2 would be a 1 if model 1 and model 3 agreed, and so on. This feature set is widely applicable to many languages since it does not use any additional linguistic tools. For each edge in the ensemble graph, we use our classifier to predict which model should be correct, by first creating the model agreement feature set for the current edge of the unknown test data. The SVM predicts which model should be correct, and this model then decides to which head the current node is attached. At the end of all the tokens in a sentence, the graph may not be connected and will likely have cycles. Using a Perl implementation of minimum spanning tree, in which each edge has a uniform weight, we obtain a minimum spanning forest, where each component is then connected and cycles are eliminated in order to achieve a well-formed dependency structure. Figure 3 gives a graphical representation of how the SVM decision and MST algorithm create a final Ensemble parse tree, which is similar to the construction used in (Green and Žabokrtský, 2012). Future iterations of this process could use a multi-label SVM or weighted edges based on the parser's accuracy on tuning data.

Figure 3: General flow to create an Ensemble parse tree.
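A minimal sketch of the procedure just described follows. scikit-learn and networkx's minimum spanning arborescence stand in for the paper's SVM and Perl MST implementation, and the unordered pairwise agreement features are our reading of the feature description (whose N² count suggests ordered pairs); none of this code is the authors'.

from itertools import combinations
import networkx as nx

def agreement_features(heads):
    # heads: the head each model predicts for one token, e.g. [3, 3, 0]
    return [int(heads[i] == heads[j]) for i, j in combinations(range(len(heads)), 2)]

def ensemble_parse(svm, model_heads, n_tokens):
    # svm: a classifier (e.g. sklearn.svm.SVC) fit on tuning-set agreement features;
    # model_heads[m][t] is the head predicted by model m for token t (0 = ROOT)
    graph = nx.DiGraph()
    graph.add_nodes_from(range(n_tokens + 1))
    for t in range(1, n_tokens + 1):
        graph.add_edge(0, t, weight=2)        # fallback ROOT attachment breaks cycles
    for t in range(1, n_tokens + 1):
        heads = [per_model[t] for per_model in model_heads]
        chosen = int(svm.predict([agreement_features(heads)])[0])
        graph.add_edge(heads[chosen], t, weight=1)  # uniform weight on chosen edges
    # repair any cycles or disconnections into a well-formed tree rooted at node 0
    return nx.minimum_spanning_arborescence(graph)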
Data Set Split Configurations
Since this is a relatively small treebank, and in order to confirm that our experiments are not heavily reliant on one particular sample of data, we try a variety of data splits. To test the effects of the training, tuning, and testing data we try 18 different data split configurations, each one being sampled 100 times. The data splits in Section 5.2 use the format training-tuning-testing. So 70-20-10 means we used 70% of the Indonesian Treebank for training, 20% for tuning the SVM classifier, and 10% for evaluation.
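As an illustration, one sampling round for a configuration such as 70-20-10 could look as follows; the function and variable names are ours, not the paper's.

import random

def sample_split(sentences, train_pct, tune_pct, test_pct, rng=random):
    assert train_pct + tune_pct + test_pct == 100
    shuffled = rng.sample(sentences, len(sentences))  # sampling without replacement
    n_train = len(sentences) * train_pct // 100
    n_tune = len(sentences) * tune_pct // 100
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_tune],
            shuffled[n_train + n_tune:])

# e.g., 100 resamplings of the 70-20-10 configuration:
# for run in range(100):
#     train, tune, test = sample_split(treebank, 70, 20, 10)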
Evaluation
Two standard metrics for comparing dependency parsing systems, made standard in the CoNLL shared task competitions, are typically used: labeled attachment score (LAS) and unlabeled attachment score (UAS). UAS studies the structure of a dependency tree and assesses how often the output has the correct head and dependency arcs. In addition to the structure score in UAS, LAS also measures the accuracy of the dependency labels on each arc (Buchholz and Marsi, 2006). Since we are mainly concerned with the structure of the ensemble parse, we report only UAS scores in this paper.
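A minimal sketch of the UAS computation and of the paired significance test described below; scipy is our choice of implementation, not necessarily the authors'.

from scipy.stats import wilcoxon

def uas(gold_heads, predicted_heads):
    # unlabeled attachment score: fraction of tokens receiving the correct head
    correct = sum(g == p for g, p in zip(gold_heads, predicted_heads))
    return correct / len(gold_heads)

# per-run UAS scores of two systems on identical samples form paired observations:
# svm_scores, best_model_scores = [...], [...]  # e.g., 100 paired runs
# statistic, p_value = wilcoxon(svm_scores, best_model_scores)
# significant = p_value < 0.01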
To test statistical significance we use the Wilcoxon paired signed-rank test. For each data split configuration we have 100 iterations of the experiment. Each model is compared against the same samples, so a paired test is appropriate in this case. We report statistical significance values for p < 0.01.

Results and Discussion

For each of the data splits, Table 3 shows the percent increase of our SVM system over both the average of the 7 individual models and over the best individual model. As Table 3 shows, we obtain above-average UAS scores in every data split. The increase is statistically significant in all data splits except one, the 90-5-5 split. This seems logical, since this data split has the least difference in training data between systems, with only 5% tuning data. Our highest average UAS score, 62.48%, was obtained with the 70-20-10 split. The use of 20% tuning data is of interest, since it was significantly better than models with 10%-25% more training data, as seen in Figure 4. This additional data spent on tuning appears to be worth the cost.

The selection of the test data seems to have caused a difference in our results. While all our ensemble SVM parsing systems have better UAS scores, the increase is lower when we only use 5% for testing, which in our treebank means we are only using 5 randomly selected sentences per experiment. This does not seem to be enough to judge the improvement.

6 Self-training

6.1 Methodology
The following methodology was run 12 independent times. Each time, new testing, tuning, and training datasets were randomly selected without replacement. In each iteration the SVM classifier and dependency models were retrained using self-training. Also for each of the 12 experiments, new random self-training datasets were selected from the larger corpus. The results in the next section are averaged over these 12 independent runs. Figure 5 shows this process flow for one run of this experiment. The data for self-training is also taken from IDENTIC and consists of 45,000 sentences. The data does not have any dependency relation information, but it is enriched with POS tags. It is processed with the same morphology tools as the training data described in Section 3, but without the manual disambiguation and correction. This data and its annotation information are available on the IDENTIC homepage (http://ufal.mff.cuni.cz/~larasati/identic/).
For self-training we present two scenarios. First, all parsing models are retrained with their own predicted output. Second, all parsing models are retrained with the output of our SVM ensemble parser. Self-training in both cases is done over 10 iterations of 20 sentences. Sentences are chosen at random from unannotated data. This allows us to examine self-training with additional data up to twice the size of the original set.
The next section examines the differences between these two approaches and the effect on the overall parse.
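A minimal sketch of one self-training run under either scenario; train_parser and the annotate callback are hypothetical stand-ins for retraining a Malt Parser model and for parsing with either the base model itself (scenario 1) or the SVM ensemble (scenario 2).

import random

def self_train(train_set, unannotated, annotate, train_parser,
               iterations=10, batch_size=20, rng=random):
    pool = list(unannotated)
    model = train_parser(train_set)
    for _ in range(iterations):
        batch = rng.sample(pool, batch_size)   # 20 random unannotated sentences
        for sentence in batch:
            pool.remove(sentence)
        # scenario 1: annotate parses with the base model; scenario 2: with the ensemble
        train_set = train_set + [annotate(model, sentence) for sentence in batch]
        model = train_parser(train_set)
    return model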
6.2 Results of Self-training

Figure 6: The self-trained Malt Parser 2Planar model that is trained with the ensemble output consistently outperforms the self-trained model that uses its own output. Results are graphed over the 10 self-training iterations.

As can be seen in Figure 6, the base models did better when trained with additional data that was parsed by our SVM ensemble system. The higher UAS accuracy seems to have had a better effect than receiving dependency structures of a similar nature to the current model. We show the 2Planar model in Figure 6, but this was the case for each of the 7 individual models. On an interesting note, the SVM system had the least improvement, 0.60%, when the component base models were trained on its own output. This seems warranted, as other parser combination papers have shown that ensemble systems prefer models which differ more, so that a clearer decision can be made (Green and Žabokrtský, 2012). The improvements when self-training on our SVM output over the individual parsers' output can be seen in Table 4. Again, these are averages over 12 runs of the system, each run containing 10 self-training loops of 20 additional sentences.
Model           % Improvement
2planar         1.10%
nivreeager      0.40%
nivrestandard   1.62%
planar          0.87%
stackeager      2.28%
stacklazy       2.20%
stackproj       1.95%
svm             0.60%

Table 4: The % improvement of all our parsing models including our ensemble SVM algorithm over 12 complete iterations of the experiment.
Conclusion
We have shown a successful implementation of self-training for dependency parsing on an under-resourced language. Self-training in order to improve our parsing accuracy can be used to help semi-supervised annotation of additional data. We show this for an initial data set of 100 sentences and an additional self-trained data set of 200 sentences. We introduce and show a collaborative SVM classifier that creates an ensemble parse tree from the predicted annotations and improves individual accuracy by 4.92% on average. This additional accuracy can relieve some of the burden on annotators for under-resourced language annotation who would use a dependency parser as a pre-annotation tool. These semi-supervised annotation techniques should be applicable to many languages since the SVM classifier is essentially blind to the language and only considers the models' agreement.
The treebank is the first of its kind for the Indonesian language. Additionally all sentences and annotations are being made available publicly online. We have described the beginnings of the Indonesian dependency treebank. Characteristics of the sentences and dependency structure have been described.
Figure 2: Process Flow for one run of our SVM Ensemble system. This process in its entirety was run 100 times for each of the 18 data set splits.

Figure 4: Surface plot of the UAS score for the tuning and training data split.

Figure 5: Process Flow for one run of our self-training system. There is one alternative scenario in which the system either does self-training with each N parser or with the ensemble SVM parser. These constitute two different experiments. For all experiments i=10 and N=7.
Table 1: The distribution of the Part-Of-Speech tag occurrence.
Table 2: The Malt Parser parameters used during training. Each entry represents one of the parsing algorithms used in our experiments. For more information see http://www.maltparser.org/options.html.
Table 3: Average increases and decreases in UAS score for different Training-Tuning-Test samples. The average was
Acknowledgments

The research leading to these results has received funding from the European Commission's 7th Framework Program under grant agreement n° 238405 (CLARA), by the grant LC536 Centrum Komputační Lingvistiky of the Czech Ministry of Education, and this work uses language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).
References

Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X '06, pages 149-164, Stroudsburg, PA, USA. Association for Computational Linguistics.

Thomas G. Dietterich. 2000. Ensemble methods in machine learning. In Proceedings of the First International Workshop on Multiple Classifier Systems, MCS '00, pages 1-15, London, UK. Springer-Verlag.

Nathan Green and Zdeněk Žabokrtský. 2012. Hybrid Combination of Constituency and Dependency Trees into an Ensemble Dependency Parser. In Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data, pages 19-26, Avignon, France, April. Association for Computational Linguistics.

R. H. Gusmita and R. Manurung. 2008. Some initial experiments with Indonesian probabilistic parsing. In Proceedings of the 2nd International MALINDO Workshop.

Gholamreza Haffari, Marzieh Razavi, and Anoop Sarkar. 2011. An ensemble model that combines syntactic and semantic clustering for discriminative dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 710-714, Portland, Oregon, USA, June. Association for Computational Linguistics.

Jan Hajic. 1998. Building a syntactically annotated corpus: The Prague Dependency Treebank. Issues of valency and meaning, pages 106-132.

Johan Hall, Jens Nilsson, Joakim Nivre, Gülsen Eryigit, Beáta Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single Malt or Blended? A Study in Multilingual Parser Optimization. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 933-939.

Joice. 2002. Pengembangan lanjut pengurai struktur kalimat bahasa indonesia yang menggunakan constraint-based formalism. Undergraduate thesis, Faculty of Computer Science, University of Indonesia.

Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT, pages 595-603, Columbus, Ohio, June. Association for Computational Linguistics.

Sandra Kübler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool, US.

Septina Dian Larasati, Vladislav Kuboň, and Dan Zeman. 2011. Indonesian morphology tool (MorphInd): Towards an Indonesian corpus. Systems and Frameworks for Computational Morphology, pages 119-129.

Septina Dian Larasati. 2012. IDENTIC corpus: Morphologically enriched Indonesian-English parallel corpus.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313-330, June.

David McClosky, Eugene Charniak, and Mark Johnson. 2008. When is self-training effective for parsing? In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 561-568, Manchester, UK, August. Coling 2008 Organizing Committee.

Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, Gulsen Eryigit, Sandra Kübler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.

Petr Pajas. 2000. Tree editor TrEd, Prague Dependency Treebank, Charles University, Prague. See URL http://ufal.mff.cuni.cz/~pajas/tred.

Localization Project PAN. 2010. PAN localization project.

Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 129-132, New York City, USA, June. Association for Computational Linguistics.

Kenji Sagae and Jun'ichi Tsujii. 2007. Dependency parsing and domain adaptation with LR models and parser ensembles. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1044-1050, Prague, Czech Republic, June. Association for Computational Linguistics.

Mihai Surdeanu and Christopher D. Manning. 2010. Ensemble models for dependency parsing: cheap and good? In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 649-652, Stroudsburg, PA, USA. Association for Computational Linguistics.

Zdeněk Žabokrtský, Jan Ptáček, and Petr Pajas. 2008. TectoMT: Highly Modular MT System with Tectogrammatics Used as Transfer Layer. In Proceedings of the 3rd Workshop on Statistical Machine Translation, ACL, pages 167-170.

Daniel Zeman and Zdeněk Žabokrtský. 2005. Improving parsing accuracy by combining diverse dependency parsers. In Proceedings of the 9th International Workshop on Parsing Technologies.
259,833,818 | Effect Graph: Effect Relation Extraction for Explanation Generation | Argumentation is an important means of communication. For describing especially arguments about consequences, the notion of effect relations has been introduced recently. We propose a method to extract effect relations from large text resources and apply it on encyclopedic and argumentative texts. By connecting the extracted relations, we generate a knowledge graph which we call effect graph. For evaluating the effect graph, we perform crowd and expert annotations and create a novel dataset. We demonstrate a possible use case of the effect graph by proposing a method for explaining arguments from consequences. | [
44146624,
6015236,
14068874,
12440940,
226262286,
236477425,
11902548,
53083029,
44145304,
236460348
] | Effect Graph: Effect Relation Extraction for Explanation Generation
Jonathan Kobbe¹, Ioana Hulpus², Heiner Stuckenschmidt¹
¹ University of Mannheim, Germany (heiner.stuckenschmidt@uni-mannheim.de)
² Utrecht University, Netherlands (i.r.karnstedt-hulpus@uu.nl)

Effect Graph: Effect Relation Extraction for Explanation Generation

Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2023), June 13, 2023
Argumentation is an important means of communication. For describing especially arguments about consequences, the notion of effect relations has been introduced recently. We propose a method to extract effect relations from large text resources and apply it on encyclopedic and argumentative texts. By connecting the extracted relations, we generate a knowledge graph which we call effect graph. For evaluating the effect graph, we perform crowd and expert annotations and create a novel dataset. We demonstrate a possible use case of the effect graph by proposing a method for explaining arguments from consequences.
Introduction
Argumentation is a challenging task because its goal is to convince an audience. One broadly used type of argument is the argument from consequences, which has been specifically addressed in recent literature (Reisert et al., 2018; Al-Khatib et al., 2020; Kobbe et al., 2020). The premise of an argument from consequences states that if A is brought about, good or bad consequences will plausibly occur, which leads to the conclusion that A should or should not be brought about (Walton et al., 2008). The following statement is such an argument in favor of legal abortions:
Legal abortions protect women.

Our main motivation is to further back up such premises by generating structured explanations. Table 1 shows some potential explanations.

Table 1: Potential explanations.
1 Abortions protect women from the harm caused by giving birth and being pregnant.
2 Abortions prevent long term damage caused by complications during the pregnancy and birth process.
3 Legal Abortions protect the women's right to self-determination.
4 Abortions protect women from the financial burden of raising a child.
5 Abortions can protect girls from becoming mothers too early.
1 Abortions protect women from the harm caused by giving birth and being pregnant. 2 Abortions prevent long term damage caused by complications during the pregnancy and birth process. 3 Legal Abortions protect the women's right to self-determination. 4 Abortions protect women from the financial burden of raising a child. 5 Abortions can protect girls from becoming mothers too early. First, we note that it is not possible to find the one and only explanation for why legal abortions protect women. As demonstrated, there exist multiple different explanations and, from merely reading the statement, we cannot know which of these explanations the author had in mind. Thus, our goal is not to reconstruct the original explanation, but to propose meaningful ones.
For automatically generating possible explanations, we propose an approach that is specific for explaining effect relations. Given A → B, we aim to find an instance C such that A → C → B. Because of the structure of such an explanation, we call it Effect-Effect-Explanation. Of course, this way, we cannot capture all the details in the explanations in table 1. But we can capture some key aspects and describe the explanations in a welldefined way that allows for further processing in downstream tasks. Table 2 shows possible formalized versions of explanations 1 to 4.
Effect-Effect-Explanations are, however, still very limited in their nature. While we cannot fully overcome this limitation, we show that it is possible to expand upon them, for instance by incorporating lexical knowledge: Given A → B, an explanation could also be (A → C, C instanceOf / hypernym / synonym B) or, vice versa, (A instanceOf / hypernym / synonym C, C → B). Analogously, we call these Effect-Lexical-Explanations. An example for explanation 5 in Table 1 would be Abortions −[+]→ girls −[hypernym]→ women.
The main challenge for both of the proposed explanation schemes is to get the additional information (i.e., C and its links to A and B). For the lexical relations, we use WordNet (Fellbaum, 2010). For the effect relations, we propose a simple, yet efficient, extraction method which we denote by EREx (Effect Relation Extractor). We then apply it on large text resources and connect the extracted relations in a graph which we refer to as effect graph. While we build the graph having explanation generation in mind, it might also be of value for other tasks as it contains a widely used type of knowledge.
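To illustrate, a minimal sketch of the Effect-Effect-Explanation search, assuming the effect graph is stored as a networkx MultiDiGraph whose edges carry the effect verb (as in the construction sketched in Section 3 below); the attribute names are our own assumptions.

def effect_effect_explanations(graph, a, b):
    # find every C with A -> C -> B and return the verb-labeled two-hop paths
    explanations = []
    for c in graph.successors(a):
        if not graph.has_edge(c, b):
            continue
        for edge_ac in graph[a][c].values():
            for edge_cb in graph[c][b].values():
                explanations.append((a, edge_ac["verb"], c, edge_cb["verb"], b))
    return explanations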
In the following, we discuss related work (section 2). In section 3, we describe the generation of the effect graph which we evaluate in section 4. Lastly, we showcase our envisioned explanation generation (section 5) and conclude with a discussion (section 6).
Related Work
Our method to extract effect relations is most similar to the one proposed by Kobbe et al. (2020). They extract effect relations in order to classify stances of arguments from consequences. Like ours, their extraction method is purely heuristic and relies on dependency parsing. The main differences we introduced are due to the following reasons: First, the method of Kobbe et al. (2020) relies on sentence-topic pairs to identify the effect relation's subject, instead of sentences only. Second, it requires the effect relation's object to have a sentiment in order to calculate the stance, which is not necessary for our task. Because of this and the first reason, the subjects and objects which are derived by detecting patterns in the dependency parse are no longer controlled for by either linking to the topic or a sentiment lexicon, so we pose other restrictions on both of them. Third, it is designed to extract an effect relation whenever possible, thus emphasizing recall, in order to enable the stance detection. In contrast, we want to focus rather on precision.
Al-Khatib et al. (2020) also extract effect relations from argumentative text and, like ourselves, use them to build a knowledge graph. Their graph is then used as background knowledge by Al Khatib et al. (2021), who use it to support neural argument generation, and by Yuan et al. (2021), who try to identify the correct response to an argument among five possible options. However, in terms of methodology, there are only few similarities to our approach. While EREx is completely unsupervised, Al-Khatib et al. (2020) divide the relation extraction task into several subtasks for which they train specific classifiers, with one exception: For identifying the effect relation's subject and object, they use the supervised OpenIE model of Stanovsky et al. (2018).
OpenIE (Open Information Extraction) is the task of extracting relationships between entities from text. In contrast to conventional information extraction, in OpenIE the relationships are not predefined (Etzioni et al., 2008). However, OpenIE can also be applied for relation extraction with domain-specific relations by performing Relation Mapping (Soderland et al., 2010). While Soderland et al. (2010) propose a supervised approach, in our case we consider it sufficient to filter and map the relations using an effect lexicon. Similarly to Corro and Gemulla (2013), Angeli et al. (2015), and Gashteovski et al. (2017), we base our relation extraction on dependency parsing. In comparison to these works, however, our effect relation extraction approach is much less sophisticated. Evolving around effect verbs specifically, we use only a small set of manually defined patterns, but are still able to gain comparable or even better results when compared to OpenIE with an effect lexicon based relation mapping.
Similar to our effect graph which we build from effect relations, Martinez-Rodriguez et al. (2018) use ClausIE (Corro and Gemulla, 2013) for extracting relations in order to build an OpenIE-based knowledge graph. Before applying OpenIE, they extract entities and link them to existing knowledge graphs. We experiment with both using only entities which we can link to Wikipedia pages and not requiring any linking. Further, they annotate noun phrases (NPs) and expand the extracted entities to encompass the complete NP. Similarly, in EREx we only consider NPs as entities.
Lastly, we want to mention another type of relation besides effect relations, namely causal relations (Davidson, 1967). Unlike in effect relations, if A and B are in a causal relation, A's effect on B is clearly defined as A being the cause of B. Girju and Moldovan (2002) and Girju (2003) introduced the task of automatically extracting causal relations from text, and it has been a matter of research since then (Yang et al., 2022).
Also for causal relations, there exists research on using them for building a knowledge graph. Heindorf et al. (2020) bootstrap dependency parse patterns to extract claimed causal relations from text. While their method to start with a small, very accurate seed set of patterns and to extend it consecutively is very appealing, we find it rather difficult to apply to our approach: Their patterns involve very concrete words that all trigger causal relations, while we chose to keep our patterns general in order to apply to a large set of different effect words. Also like us, Heindorf et al. (2020) do not fact-check their extractions, but emphasize that they merely collect claimed causal relations.
Effect Graph Generation
Our aim is to generate a graph where the nodes are entities such as global warming, CO2 emissions, solar panel. The edges represent the effect relations and indicate either a negative or positive effect from the source to the target node, e.g., (solar panel) −[−]→ (CO2 emissions). We also store the concrete word indicating the effect. In the previous example, this could be for instance reduce or prevent.
Effect Relation Extraction
We use a subset of the dependency parse patterns presented in Kobbe et al. (2020) in order to identify subject and object relations as well as negations. The patterns are presented in table 3.
Using these patterns, we look for triples (S, P, O) such that the predicate P has subject S and object O. In order for the triple to qualify as an effect relation, P has to express a positive or negative effect on its object. We identify such effects by applying the Connotation Frame lexicon (Rashkin et al., 2016), as done by Kobbe et al. (2020).

Pattern                            Interpretation
1  P −*→ O                         P has object O
3  P −⋄→ S                         P has subject S
5  NegP −pobj→ X                   X is negated
6  X → NegP ∧ ∄ NegP −pobj→ ·      X is negated
7  X −neg→ ·                       X is negated

* ∈ {dobj, cobj, nsubjpass, csubjpass}; ⋄ ∈ {nsubj, csubj}; NegP stands for negative preposition.

The effect relation's subject, which we denote by A, is then the statement's substring which is represented by the dependency parse's subtree whose root is S. Analogously, the object B is the statement's substring represented by the subtree whose root is O. Thereby, leading articles are ignored and A and B have to be non-stopwords and NPs. To ensure that they are meaningful entities in different contexts, we check whether A and B link to an entry in Wikipedia. Only if they both do, and if neither A nor B nor P is negated, do we consider A −P→ B to be an effect relation.
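For illustration, a minimal sketch of this extraction using spaCy (our choice of dependency parser; the paper does not prescribe one). EFFECT_LEXICON is a tiny placeholder for the Connotation Frame effect lexicon, and the sketch omits the Wikipedia-linking check as well as the preposition-based negation patterns 5 and 6.

import spacy

nlp = spacy.load("en_core_web_sm")

EFFECT_LEXICON = {"protect": "+", "promote": "+", "reduce": "-", "prevent": "-"}
SUBJ_DEPS = {"nsubj", "csubj"}                   # pattern 3
OBJ_DEPS = {"dobj", "nsubjpass", "csubjpass"}    # pattern 1 (subset)

def is_negated(token):
    return any(child.dep_ == "neg" for child in token.children)  # pattern 7

def phrase(token):
    words = list(token.subtree)
    while words and words[0].lower_ in {"a", "an", "the"}:  # ignore leading articles
        words = words[1:]
    return " ".join(w.text for w in words)

def extract_effect_relations(sentence):
    relations = []
    for pred in nlp(sentence):
        polarity = EFFECT_LEXICON.get(pred.lemma_)
        if polarity is None or is_negated(pred):
            continue
        for s in (c for c in pred.children if c.dep_ in SUBJ_DEPS):
            for o in (c for c in pred.children if c.dep_ in OBJ_DEPS):
                if not (is_negated(s) or is_negated(o)):
                    relations.append((phrase(s), polarity, pred.lemma_, phrase(o)))
    return relations

print(extract_effect_relations("Legal abortions protect women."))
# e.g., [('Legal abortions', '+', 'protect', 'women')]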
Graph Construction
For building the effect graph, we extract effect relations from the following three datasets:
Debatepedia Debatepedia was an online portal where users could add pro and contra arguments to a variety of topics. We use the featured debates which overall have high quality.
Debate.org As Debatepedia is rather small, we also use Debate.org (Durmus and Cardie, 2018, 2019) to extract effect relations from a large argumentative text basis. On Debate.org, two users engage in a debate about a certain topic and present their arguments and counter arguments over three rounds.
Simple Wiki Lastly, we use an encyclopedic text resource to also capture non-argumentative knowledge which can be relevant for explaining arguments. To save computational resources and increase the accuracy of the extraction process, we use the Wikipedia version in simple English. Both argumentative text resources mainly contain defeasible arguments. Thus, the effect relations which we extract from them and, consequently, the effect graph should not be treated as facts.
After extracting the effect relations from text, we remove duplicates. We only consider an effect relation to be a duplicate, if it was extracted from the same sentence in the same resources twice, which most often happens because of citations. We intentionally keep effect relations that are identical except for the sentence they were extracted from because this might indicate that the effect relation is especially relevant.
For building the effect graph, we connect the extracted effect relations as follows: The lemmas of the subjects S and the objects O become nodes. We add one edge between S and O for every respective effect relation we extracted. Since we do not collapse the edges to not lose any information, the resulting graph is expected to contain multi-edges.
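A minimal sketch of this construction, assuming relation tuples like those produced by the extraction sketch above plus the source sentence; networkx's MultiDiGraph and the trivial lowercasing in place of proper lemmatization are our stand-ins.

import networkx as nx

def build_effect_graph(relations):
    # relations: (subject, polarity, effect_verb, object, source_sentence) tuples
    graph = nx.MultiDiGraph()
    for subj, polarity, verb, obj, sentence in relations:
        s, o = subj.lower(), obj.lower()  # placeholder for proper lemmatization
        # one edge per extracted relation, so parallel edges are kept on purpose
        graph.add_edge(s, o, polarity=polarity, verb=verb, sentence=sentence)
    return graph

# g = build_effect_graph([("solar panels", "-", "reduce", "CO2 emissions",
#                          "Solar panels reduce CO2 emissions.")])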
Evaluation
We evaluate the effect graph as follows: In section 4.1, we evaluate the effect relation extraction process using the subtasks defined by Al-Khatib et al. (2020). Then, we evaluate the extracted graph itself. In section 4.2, we compare the graph statistics. Afterwards, we evaluate both precision (section 4.3) and recall (section 4.4). In this context, precision expresses the chance that a randomly selected edge of the graph is correct. We consider a statement to be correct if it is in accordance with the statement it was extracted from. Recall on the other hand is meant to measure the chance that a given effect relation is contained in the graph.
Baselines For the evaluation of the extraction subtasks defined by Al-Khatib et al. (2020), we use their models as a baseline, denoted by Al-Khatib.
For evaluating the effect graph as a whole, we build the effect graph as described in section 3.2, but using different extraction methods. We use the OpenIE implementation which is part of Stanford CoreNLP (Manning et al., 2014;Angeli et al., 2015) to extract subject-verb-object triples, applying a confidence threshold of 0.9. We accept such triples as effect relations where the verb is an effect word and the subject and object link to Wikipedia pages. Further, we use a version of EREx where we do not require the subject and object to link to Wikipedia, denoted by EREx*. We expect this version to have a higher recall, but also more noise.
Extraction Subtasks
Al-Khatib et al. (2020) propose several subtasks for effect relation extraction. These subtasks include:
• Relation Classification: Classify whether a statement does contain an effect relation;
• Relation Type Classification: Predict the effect relation's polarity;
• Identification of Concept 1: Identify the effect relation's subject;
• Identification of Concept 2: Identify the effect relation's object.
For the first two subtasks, Al-Khatib et al. (2020) propose a supervised model, while for the last two they rely on the OpenIE approach of Stanovsky et al. (2018). To make the comparison fair, we slightly adapt EREx such that it predicts a relation type and identifies concepts even if it does not detect an effect relation. For the evaluation, we use the dataset published by Al-Khatib et al. (2020), which contains crowd annotations for the different subtasks, and compare our results to the results reported in their paper. The results are presented in Table 4.
Concerning Relation Classification, EREx misses effect relations considerably more often than it wrongly predicts one (1582 vs. 174 instances), which fits our focus on precision rather than recall. When counting only such instances where EREx extracts a relation, it correctly detects its polarity in 85%, the subject in 80%, and the object in 41% of the instances. While both models' scores for identifying the object are low, this can be explained at least partly by the measure: the object is considered to be wrong if it is off by one word, even if that word is an article. In the dataset, it is inconsistent whether articles are part of the object or not.

Graph Statistics

Table 5 shows the number of edges, i.e., extracted effect relations, per dataset. Table 6 contains some basic statistics of the effect graph. The number of connected node pairs is included because of the high ratio of multi-edges. We consider (A,B) and (B,A) as the same node pair. Table 7 shows the number of overlapping nodes between the different effect graph versions. Overall, using OpenIE results in the largest graph and using EREx in the smallest. That OpenIE extracts fewer nodes than EREx* is likely due to the required linking to Wikipedia. For all three methods, there are considerably more positive than negative effect relations.
Precision
As the effect graph is generated by extraction from large text resources, we do not have a ground truth of whether or not a statement was extracted correctly. Thus, we evaluate precision a posteriori. For this purpose, we randomly select 250 edges per graph. For each, we annotate whether it was extracted correctly, given the original statement (yes, rather yes, unsure, rather no, no). We do both an expert annotation by one of the authors and crowd annotations via MTurk.
Instructions
We require the crowd workers to successfully pass an instruction phase before working on the task. The instruction consists of a short description of the task, two examples with comments, three instances which have to be annotated correctly, and an optional field where the workers can write comments. The description, the examples, and the first instance are provided in appendix A. Overall, the task should be as intuitive as possible. For this purpose, we did not show the concrete verb of the effect relation, but just the effect's polarity. Instead of explaining that we are not interested in modality, we framed the polarity as "(may) negatively affect". We mitigated the risk of confusion with sentiment by addressing it explicitly in the instructions: though most would likely agree that ending war is desirable, we highlight that the effect expressed on war is a negative one. The workers then have to correctly identify two further such effects, one as negative (coal power reducing CO2 emissions) and one as positive (current EU policy leading to a financial crisis). Similarly, we exemplify and control that the subject and object have to be identified correctly.
Annotation Process
We only accept workers who live in the US and have a HIT approval rate greater than 98% and more than 10,000 approved HITs in total. Additionally, they have to have passed the instructions with three correct answers out of three. As the cases in the instructions were not ambiguous, we count rather yes and rather no as wrong answers, as well as unsure. Overall, only 9 out of 50 workers passed the instructions.
We have a total of 750 instances to be annotated. Each instance is annotated by three crowd workers and one expert. Overall, seven of the nine qualified workers actually addressed the task. Of these seven workers, three annotated the vast majority of the instances (747, 739 and 650 instances, respectively).
Agreement
We treat the five labels either as polarities, mapping rather yes to yes and rather no to no, or as scalars as indicated in table 8. The mapping allows us to intuitively combine multiple labels by computing their mean. This is relevant later for generating the label from which we ultimately measure the precision, but it also enables us to measure the agreement between the combined label and the expert annotator (expert). Additionally, we compute the agreement among the crowd workers (crowd). For mapping back from numbers to labels, we always round positive values up and negative values down. This way, the labels yes and no are only produced if there are no opposing polarities, and the label unsure is given as rarely as possible.
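This aggregation scheme is easy to make concrete; the following sketch hard-codes the mapping from table 8 and reflects our reading of the rounding rule (function names are ours).

import math

LABEL_TO_SCORE = {"yes": 2, "rather yes": 1, "unsure": 0,
                  "rather no": -1, "no": -2}
SCORE_TO_LABEL = {v: k for k, v in LABEL_TO_SCORE.items()}


def aggregate_labels(labels):
    """Combine several categorial labels via their scalar mean."""
    mean = sum(LABEL_TO_SCORE[l] for l in labels) / len(labels)
    # round positive means up and negative means down, so that yes/no are
    # only produced when no opposing polarity was annotated and unsure is
    # given as rarely as possible
    score = math.ceil(mean) if mean > 0 else math.floor(mean)
    return SCORE_TO_LABEL[score]


# example: two yes and one rather no average to 1.0, i.e. "rather yes"
assert aggregate_labels(["yes", "yes", "rather no"]) == "rather yes"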
We use the following agreement scores:
• Fleiss' kappa for categorial agreement, respecting the label distribution;
• Randolph's kappa (Randolph, 2005) for categorial agreement, without respecting the label distribution;
• Krippendorff's alpha (Krippendorff, 2011) for scalar agreement, especially in the crowd setup, as it allows for multiple annotators;
• Pearson correlation for scalar agreement in the expert setup, using the mean as is;
• Spearman correlation for rank agreement in the expert setup, mapping the mean to labels.
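These scores are all available in standard libraries; the sketch below assumes the statsmodels, krippendorff, and scipy packages, with labels already coded as integers (one row per instance). The data layout and function names are our own.

import numpy as np
import krippendorff
from scipy.stats import pearsonr, spearmanr
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa


def categorial_kappas(rows):
    """rows: instances x raters matrix of integer-coded labels."""
    table, _ = aggregate_raters(np.asarray(rows))
    return (fleiss_kappa(table, method="fleiss"),
            fleiss_kappa(table, method="randolph"))


def scalar_agreement(crowd_scores, expert_scores):
    """crowd_scores: instances x raters matrix of values in {-2, ..., 2}."""
    # the krippendorff package expects a raters x units matrix, hence .T
    alpha = krippendorff.alpha(reliability_data=np.asarray(crowd_scores).T,
                               level_of_measurement="interval")
    crowd_mean = np.mean(crowd_scores, axis=1)
    return (alpha,
            pearsonr(crowd_mean, expert_scores)[0],
            spearmanr(crowd_mean, expert_scores)[0])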
The scores are presented in table 9. Overall, the agreement is rather weak. Concerning polarities, we note two things: First, there is a big difference between Fleiss and Randolph, which can be explained by the fact that the crowd workers tended to annotate yes or rather yes far more often than no or rather no. Second, for Fleiss, the involvement of the expert leads to higher scores, while for Randolph it is vice versa. This tendency might be explained by the fact that the expert annotated yes or rather yes even less often than no or rather no. The expert thus reduces the imbalance between these two labels, which in turn causes Fleiss and Randolph to approach each other.
For the scalar agreement, the scores are somewhat better, which makes sense, as only in this scenario are the labels' ranks considered properly. However, we still conclude that the agreement is weak, which we have to keep in mind when interpreting the results.
Results
The precision scores are calculated by dividing the number of correctly extracted effect relations by the sum of the numbers of correctly and incorrectly extracted ones. As for what counts as a correctly extracted effect relation, we again consider different settings to provide a full picture. For one, we use either the expert label or the aggregated crowd label. Further, we either consider only the labels we are confident about, namely yes and no (denoted by exclusive), or we again aggregate yes and rather yes as well as no and rather no (denoted by inclusive). We never consider the relatively few cases where the (aggregated) label is unsure. The results are shown in table 10.
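A small sketch of this computation on aggregated labels (the inclusive/exclusive distinction is handled by the label sets; the signature is ours):

def precision(labels, inclusive=True):
    """Precision over a sample of annotated edges."""
    pos = {"yes", "rather yes"} if inclusive else {"yes"}
    neg = {"no", "rather no"} if inclusive else {"no"}
    correct = sum(1 for l in labels if l in pos)
    incorrect = sum(1 for l in labels if l in neg)
    # "unsure" (and, in the exclusive setting, the "rather" labels)
    # is never counted
    return correct / (correct + incorrect)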
The expert's tendency to annotate yes considerably less often than the crowd workers is reflected by the overall lower precision scores. Despite this large difference in the scores, the tendency among the datasets is consistent for the crowd workers' and the expert's annotations: EREx and EREx* clearly outperform OpenIE, while EREx seems to be at least slightly better than EREx*. This was to be expected, as EREx is more restrictive in selecting subjects and objects than EREx*.
We conclude that EREx and EREx* are most likely more precise than the OpenIE baseline, but whether or not they are precise enough for our envisioned use case is yet to be shown.
Recall
For evaluating recall, we check whether the graph contains the effect relations we would expect it to contain. In order to do so, we build an evaluation dataset. We choose one random argumentative claim per topic from the Debatepedia dataset of arguments related to consequences (Kobbe et al., 2020). This results in 180 claims. From each claim, we manually extract all effect relations which we consider reasonable. This results in 308 effect relations. If there is more than one possible effect relation for a claim, we annotate whether they are equivalent to (≡), disjoint from (≢), or part of (⊃) the other ones. Table 11 shows some examples, which we briefly discuss.
In example 1, there exist three reasonable effect relations, which differ only in the concreteness of the object, a being the most concrete and c the least. Note that the effect verb eliminate is only correct when mentioning the ability of restaurants. Still, the statement indirectly also expresses that calorie counts negatively affect restaurants, which is why in effect relation c there is no effect verb annotated. Example 2 shows a case where two effect relations are roughly equivalent in terms of the information they contain. In contrast, example 3 contains two completely distinct effect relations, though the second one is rather implicit. Example 4 is a bit more complex: a is as concrete as possible, but it can be split into b and c, which together are equivalent to a.
For calculating recall, we use two straightforward formulas: We either divide the number of the ground truth effect relations which are contained in the effect graph by the total number of ground truth effect relations (total), or we divide the number of claims for which at least one ground truth effect relation is contained in the effect graph by the number of claims in the dataset (per statement). Further, we optionally exclude the effect relations which were extracted from Debatepedia from the effect graph (w/o DP). Though it is unclear what results one can expect this way, we consider it to be a purer way of calculating recall.
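Both variants are straightforward to express in code, assuming effect relations are represented as hashable tuples and graph_edges is the set of relations contained in the effect graph (representational choices are ours):

def recall_total(ground_truth, graph_edges):
    """Fraction of ground-truth effect relations found in the graph."""
    hits = sum(1 for relation in ground_truth if relation in graph_edges)
    return hits / len(ground_truth)


def recall_per_statement(claims, graph_edges):
    """Fraction of claims with at least one ground-truth relation found.

    claims maps each claim to its manually annotated effect relations.
    """
    covered = sum(1 for relations in claims.values()
                  if any(r in graph_edges for r in relations))
    return covered / len(claims)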
The results (see table 12) show a clear trend: EREx has lower recall than OpenIE, while EREx* has a significantly higher recall than OpenIE only when Debatepedia is included in the graph. Importantly, we note that EREx* is only better than EREx in the full graph setting. This fits our observation that the effect relations extracted by EREx* oftentimes tend to be overly specific, which is one reason why we proposed the linking to Wikipedia as an additional requirement.
As the recall is particularly low for the settings without Debatepedia, we take a brief look at the few successes in table 13: It is noticeable, though unsurprising, that the graphs generated with EREx and EREx* contain the exact same test instances. Further, two of them (7, 8) are not identified by OpenIE, which in turn contains seven instances that EREx and EREx* do not (9-15). One of the latter instances cannot be included in EREx or EREx* because it contains a non-noun-phrase as subject (14). But considering the unspecificity of instance 14, this restriction seems justifiable.
Explanation Generation
For generating explanations, we use the effect graph generated by EREx. As outlined in the introductory section, we envision two different types of explanations, which we describe separately in sections 5.1 and 5.2. Afterwards, we introduce a measure to rank the potential explanations (5.3).
Effect-Effect-Explanation
For an Effect-Effect-Explanation to be meaningful, the polarities have to fit the relation we aim to explain. Concretely, we explain a positive relation either by two positive or two negative relations, and a negative relation by combining a positive and a negative one. To generate explanation candidates, we use the effect graph in a straightforward way by querying for paths of length two between the instances of interest with appropriate edge polarities, as sketched below. As a result, we get a list of explanation candidates.
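A minimal sketch of this query, assuming the effect graph is stored as a networkx MultiDiGraph whose edges carry a polarity attribute encoded as +1 or -1 (the encoding and signature are ours):

import networkx as nx


def effect_effect_candidates(g, a, b, polarity):
    """Yield (c, p1, p2) with a -p1-> c -p2-> b and p1 * p2 == polarity.

    A positive relation (polarity +1) is thus explained by two positive
    or two negative relations, a negative one (-1) by a mixed pair.
    """
    for c in set(g.successors(a)) & set(g.predecessors(b)):
        for _, e1 in g[a][c].items():       # parallel edges a -> c
            for _, e2 in g[c][b].items():   # parallel edges c -> b
                if e1["polarity"] * e2["polarity"] == polarity:
                    yield c, e1["polarity"], e2["polarity"]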
For the example of explaining how abortions protect women, this list includes 370 explanation candidates, though many of them are similar to each other because of our loose definition of duplicates. Instead of listing all candidates, we list all the interim nodes C used within the explanation candidates: *, choice, country, fetus, god, man, nothing, order, people, person, pregnancy, right, sex, society, t, unwanted pregnancy, woman 's rights. One can easily imagine that some of the concepts mentioned are useful for explaining why abortions protect women, while others are nonsense.
Effect-Lexical-Explanation
Sometimes, we need additional lexical knowledge for explaining an effect relation. As mentioned previously, we use WordNet to incorporate some of the potentially relevant lexical knowledge. Concretely, this includes hyperonymy, meronymy, and synonymy.
To extract explanation candidates for A →± B, we again look for instances C, considering the following cases: A →± C →WN B and A →WN C →± B. The polarities have to be identical, and →WN indicates one of the lexical relations mentioned above.
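The lexical neighbours can be gathered with NLTK's WordNet interface. The sketch below reflects our reading of the relations involved (we include hyponyms as well, since the mother/woman example discussed next relies on them) and is not necessarily the exact expansion used.

from nltk.corpus import wordnet as wn


def wordnet_neighbours(word):
    """Lemma names reachable from `word` via synonymy, hyper-/hyponymy,
    and meronymy (coverage choices are ours)."""
    neighbours = set()
    for synset in wn.synsets(word):
        # synonyms: the other lemmas of the same synset
        neighbours.update(l.name() for l in synset.lemmas())
        related = (synset.hypernyms() + synset.hyponyms()
                   + synset.part_meronyms() + synset.member_meronyms())
        for s in related:
            neighbours.update(l.name() for l in s.lemmas())
    neighbours.discard(word)
    return neighbours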
For the example, we find 10 different explanation candidates. Half of them argue that abortions are good for mothers in some way, and mother is a hyponym of woman. While being trivial, we still think that there is a benefit in this explanation: it states correctly that the positive effect of the abortion is on the mother (and not on the fetus, for instance) and finds the relation between mother and woman. The other five explanation candidates use the interim nodes people, action, failure, and man, and none of these explanations seems useful to us.
Explanation Candidate Ranking
Since the proposed methods to generate explanations often result in a list of explanation candidates of varying quality, we further propose a simple means of ranking them which is inspired by tf-idf. The idea is to measure the importance of the interim node C based on its degree in the effect graph (denoted by deg_e), where we assume a lower degree to be better as it indicates specificity, and its degree in the subgraph connecting A and B (denoted by deg_s), where we consider a higher degree to indicate relevance. The core idea for measuring importance is the quotient of these two quantities. This quotient, however, does not respect the absolute quantities and would thus yield the same score for a C having degree 1 in both graphs and a C having degree 5 in both graphs, though we consider the latter to be considerably better. In order to account for that, we apply the idea of additive smoothing and increment the denominator by 1. Further, since we prefer a medium in- and out-degree over a high (low) in-degree combined with a low (high) out-degree, we calculate C's importance for Effect-Effect-Explanations as follows:
(indeg_s(C) / (indeg_e(C) + 1)) · (outdeg_s(C) / (outdeg_e(C) + 1))
Considering Effect-Lexical-Explanations, we are only interested in either C's out- or in-degree. For better comparability, we use the square of the relevant quotient to measure the importance.
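Putting both cases together, the importance of an interim node C can be computed from its degrees in the full effect graph and in the subgraph connecting A and B. The sketch assumes networkx-style degree accessors; the signature and the lexical flag are ours.

def importance(c, subgraph, graph, lexical=None):
    """Importance of interim node c; deg_s comes from the subgraph
    connecting A and B, deg_e from the full effect graph."""
    in_q = subgraph.in_degree(c) / (graph.in_degree(c) + 1)
    out_q = subgraph.out_degree(c) / (graph.out_degree(c) + 1)
    if lexical == "in":    # Effect-Lexical: only the in-degree is relevant
        return in_q ** 2
    if lexical == "out":   # Effect-Lexical: only the out-degree is relevant
        return out_q ** 2
    return in_q * out_q    # Effect-Effect-Explanations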
When applying the importance measure to the example, the five most important nodes are, in descending order: unwanted pregnancy, woman 's rights, mother, fetus, pregnancy. The corresponding explanation via unwanted pregnancy unfortunately does not make sense due to an extraction mistake, although the concept seems to be ranked that high for good reason. We already discussed the explanation via mother in section 5.2. The others suggest that abortions kill fetuses, which in turn harm, damage or endanger the woman; that abortions end pregnancies, which also harms the woman; and that abortions support women's rights, which in turn are good for women.
Conclusion
We propose a method to extract effect relations from text and use it to build an effect graph. We further propose a method to use the effect graph as background knowledge for automatically generating structured explanations, for example for arguments from consequences. However, the effect graph's precision remains unclear while its recall is low. The latter issue might be addressed by either improving the extraction method or, to a certain degree, by running the method on larger text resources. The effect graph can be seen as a valuable resource on its own, as it can potentially be used to also address other tasks than explanation generation, like identifying (counter-) arguments for a specific topic or extending common sense knowledge graphs such as ConceptNet (Speer et al., 2017).
Limitations
While the proposed methods are attractive due to their efficiency, their explainability, and their not needing training data, the limitations are also manifold: The pipeline nature propagates all errors that occur. For instance, the dependency parser in use performs rather poorly on informal texts such as tweets. Further, our definition of positive and negative effect relations is quite shallow and does not always live up to the real world's complexity. We only capture effect relations that are formulated explicitly within one sentence, and only one effect relation per sentence. Requiring the nodes to link to Wikipedia might be too restrictive while not even truly solving the problem of filtering non-sense nodes. Both the low inter-annotator agreement in our effect graph evaluation and the discrepancy between the crowd's and the expert's annotations make it hard to assess the correctness of the extracted effect relations. And lastly, while we showcase some generated explanations, we did not properly evaluate how reliable the approach is in finding reasonable explanations. Indeed, first results suggest that this approach to generating explanations works rather inconsistently, though the ranking helps to a certain degree.
What one might consider another limitation is that we do not check the effect relations for factual correctness, which ultimately leads to contradictions and inconsistencies in the effect graph. While fact checking is a difficult and controversial task, we also purposefully decided against any form of fact or consistency checking. Each edge in the effect graph is meant to represent one effect relation exactly as it was expressed. Including critical effect relations in the graph allows for identifying, analyzing, and potentially disproving them.
At the core of an argument from consequences is what Al-Khatib et al. (2020) call effect relation: A typically expresses either a positive or negative effect on an instance B, which we denote by A →+ B or A →− B. In the example, the effect relation is legal abortions →+ women because of the positive effect expressed by the verb protect.
Table 1: Some possible explanations.
Table 2: Formalized Effect-Effect-Explanations.
Table 3: Dependency graph patterns, adapted from Kobbe et al. (2020).
Table 4: Effect relation extraction evaluation.
Table 5: Effect relation extraction statistics.

                         EREx    EREx*    OpenIE
# Nodes                   53k     734k      129k
# Edges                  195k     872k     1474k
# Positive edges         157k     729k     1250k
# Negative edges          38k     142k      223k
# Connected node pairs   126k     733k      603k

Table 6: Effect graph statistics.
Table 7: Effect graph: Node overlap.
categorial label   value
yes                   2
rather yes            1
unsure                0
rather no            −1
no                   −2

Table 8: Mapping categorial answers to values.

                           crowd   expert
polarities  Fleiss          0.15     0.26
            Randolph        0.47     0.44
scalar      Krippendorff    0.20     0.34
            Pearson                  0.57
            Spearman                 0.56

Table 9: Agreement scores for effect relation evaluation.
Table 10: Effect graph precision.
Ex. 1  Calorie counts eliminate ability of restaurants to be spontaneous.
  a        (Calorie counts) [-eliminate] (ability of restaurants to be spontaneous)
  b   ⊃    (Calorie counts) [-eliminate] (ability of restaurants)
  c   ⊃    (Calorie counts) [-] (restaurants)
Ex. 2  Circumcision creates risk of infections in infants
  a        (Circumcision) [+creates] (risk of infections)
  b   ≡    (Circumcision) [+creates] (infections)
Ex. 3  Assassinations protect publics from terrorism; even while it's hard to measure
  a        (Assassinations) [+protect] (publics)
  b   ≢    (Assassinations) [-protect from] (terrorism)
Ex. 4  Network neutrality damages competition and niche suppliers
  a        (Network neutrality) [-damages] (competition and niche suppliers)
  b   ≡[   (Network neutrality) [-damages] (competition)
  c   ≢    (Network neutrality) [-damages] (niche suppliers)]

Table 11: Examples: Effect relation annotation for recall evaluation.

               total              per statement
         full      w/o DP      full      w/o DP
OpenIE   0.07      0.04        0.14      0.09
EREx     0.05      0.03        0.09      0.06
EREx*    0.14      0.03        0.28      0.06

Table 12: Effect graph recall.
Table 13: Effect graph recall (w/o DP): Successes.
The resources created for this paper are available at https://github.com/dwslab/Effect-Graph.
As the train-test-split used by Al-Khatib et al. (2020) is unknown to us, we use the full dataset for the evaluation. Thus, unfortunately, the results are not directly comparable.
Acknowledgments

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) within the project ExpLAIN, Grant Number STU 266/14-1, as part of the Priority Program "Robust Argumentation Machines (RATIO)" (SPP-1999).
Khalid Al-Khatib, Yufang Hou, Henning Wachsmuth, Charles Jochim, Francesca Bonin, and Benno Stein. 2020. End-to-end argumentation knowledge graph construction. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7367–7374.

Khalid Al Khatib, Lukas Trautner, Henning Wachsmuth, Yufang Hou, and Benno Stein. 2021. Employing argumentation knowledge graphs for neural argument generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4744–4754, Online. Association for Computational Linguistics.

Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics.

Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-based open information extraction. In Proceedings of the 22nd International Conference on World Wide Web, pages 355–366. ACM.

Donald Davidson. 1967. Causal relations. The Journal of Philosophy, 64(21):691.

Esin Durmus and Claire Cardie. 2018. Exploring the role of prior beliefs for argument persuasion. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1035–1045. Association for Computational Linguistics.

Esin Durmus and Claire Cardie. 2019. A corpus for modeling user and language effects in argumentation on online debating. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.

Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68–74.

Christiane Fellbaum. 2010. Princeton University: About WordNet.

Kiril Gashteovski, Rainer Gemulla, and Luciano Del Corro. 2017. MinIE: Minimizing facts in open information extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2630–2640, Copenhagen, Denmark. Association for Computational Linguistics.

Roxana Girju. 2003. Automatic detection of causal relations for question answering. In Proceedings of the ACL 2003 Workshop on Multilingual Summarization and Question Answering - Volume 12, MultiSumQA '03, pages 76–83, USA. Association for Computational Linguistics.

Roxana Girju and Dan Moldovan. 2002. Text mining for causal relations. In FLAIRS Conference, pages 360–364.

Stefan Heindorf, Yan Scholten, Henning Wachsmuth, Axel-Cyrille Ngonga Ngomo, and Martin Potthast. 2020. CauseNet: Towards a causality graph extracted from the web. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, CIKM '20, pages 3023–3030, New York, NY, USA. Association for Computing Machinery.

Jonathan Kobbe, Ioana Hulpuș, and Heiner Stuckenschmidt. 2020. Unsupervised stance detection for arguments from consequences. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 50–60, Online. Association for Computational Linguistics.

Klaus Krippendorff. 2011. Computing Krippendorff's alpha-reliability.

Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, Maryland. Association for Computational Linguistics.

Jose L. Martinez-Rodriguez, Ivan Lopez-Arevalo, and Ana B. Rios-Alvarado. 2018. OpenIE-based approach for knowledge graph construction from text. Expert Systems with Applications, 113:339–355.

Justus J. Randolph. 2005. Free-marginal multirater kappa (multirater K[free]): An alternative to Fleiss' fixed-marginal multirater kappa. Online submission.

Hannah Rashkin, Sameer Singh, and Yejin Choi. 2016. Connotation frames: A data-driven investigation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 311–321, Berlin, Germany. Association for Computational Linguistics.

Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, and Kentaro Inui. 2018. Feasible annotation scheme for capturing policy argument reasoning using argument templates. In Proceedings of the 5th Workshop on Argument Mining, pages 79–89, Brussels, Belgium. Association for Computational Linguistics.

Stephen Soderland, Brendan Roof, Bo Qin, Shi Xu, Mausam, and Oren Etzioni. 2010. Adapting open information extraction to domain-specific relations. AI Magazine, 31(3):93–102.

Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. ConceptNet 5.5: An open multilingual graph of general knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1):4444–4451.

Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 885–895, New Orleans, Louisiana. Association for Computational Linguistics.

Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation Schemes. Cambridge University Press.

Jie Yang, Soyeon Caren Han, and Josiah Poon. 2022. A survey on extraction of causal relations from natural language text. Knowledge and Information Systems, 64(5):1161–1186.

Jian Yuan, Zhongyu Wei, Donghua Zhao, Qi Zhang, and Changjian Jiang. 2021. Leveraging argumentation knowledge graph for interactive argument pair identification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2310–2319. Association for Computational Linguistics.
10,736,122 | Hyperedge Replacement and Nonprojective Dependency Structures | Synchronous Hyperedge ReplacementGraph Grammars (SHRG) can be used to translate between strings and graphs. In this paper, we study the capacity of these grammars to create non-projective dependency graphs. As an example, we use languages that contain cross serial dependencies.Lexicalized hyperedge replacement grammars can derive string languages (as path graphs) that contain an arbitrary number of these dependencies so that their derivation trees reflect the correct dependency graphs. We find that, in contrast, string-to-graph SHRG that derive dependency structures on the graph side are limited to derivations permitted by the string side. We show that, as a result, string-to-graph SHRG cannot capture languages with an unlimited degree of crossing dependencies. This observation has practical implications for the use of SHRG in semantic parsing. | [
7138313,
7771402,
5634542,
15179694,
16394809,
18592508,
3545055
] | Hyperedge Replacement and Nonprojective Dependency Structures
June 29 -July 1, 2016
Daniel Bauer bauer@cs.columbia.edu
Columbia University, New York, NY 10027, USA

Owen Rambow rambow@cs.columbia.edu
Columbia University, New York, NY 10027, USA
Hyperedge Replacement and Nonprojective Dependency Structures
Proceedings of the 12th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+12), Düsseldorf, Germany, June 29 – July 1, 2016
Synchronous Hyperedge Replacement Graph Grammars (SHRG) can be used to translate between strings and graphs. In this paper, we study the capacity of these grammars to create non-projective dependency graphs. As an example, we use languages that contain cross-serial dependencies. Lexicalized hyperedge replacement grammars can derive string languages (as path graphs) that contain an arbitrary number of these dependencies so that their derivation trees reflect the correct dependency graphs. We find that, in contrast, string-to-graph SHRG that derive dependency structures on the graph side are limited to derivations permitted by the string side. We show that, as a result, string-to-graph SHRG cannot capture languages with an unlimited degree of crossing dependencies. This observation has practical implications for the use of SHRG in semantic parsing.
Introduction
Hyperedge Replacement Grammars (HRG) are a type of context free graph grammar. Their derived objects are hypergraphs instead of strings. A synchronous extension, Synchronous Hyperedge Replacement Grammars (SHRG) can be used to translate between strings and graphs. To construct a graph for a sentence, one simply parses the input using the string side of the grammar and then interprets the derivations with the graph side to assemble a derived graph.
SHRG has recently drawn attention in Natural Language Processing as a tool for semantic construction. For example, Jones et al. (2012) propose to use SHRG for semantics based machine translation, and Peng et al. (2015) describe an approach to learning SHRG rules that translate sentences into Abstract Meaning Representation (Banarescu et al., 2013).
Not much work has been done, however, on understanding the limits of syntactic and semantic structures that can be modeled using HRG and SHRG. In this paper, we examine syntactic dependency structures generated by these formalisms, specifically whether they can create correct dependency trees for non-projective phenomena. We focus on non-projectivity caused by copy-language-like constructions, specifically cross-serial dependencies in Dutch. Figure 1 shows a (classical) example sentence containing such dependencies and a dependency graph.
This paper looks at dependency structures from two perspectives. We first review HRGs that derive string languages as path graphs. The set of these languages is known to be the same as the languages generated by linear context free rewriting systems (Weir, 1992). We consider HRG grammars of this type that are lexicalized (each rule contains exactly one terminal edge), so we can view their derivation trees as dependency structures. We provide an example string-generating HRG that can analyze the sentence in Figure 1 with the correct dependency structure and can generate strings with an unlimited number of crossing dependencies of the same type.
Under the second perspective, we view the derived graphs of synchronous string-to-HRG grammars as dependency structures. These grammars can generate labeled dependency graphs in a more flexible way, including labeled dependency edges, local reordering of dependencies (allowing a more semantically oriented analysis of prepositional phrases and conjunctions), structures with arbitrary node degree, and reentrancies. We present a grammar to analyze the string/graph pair in Fig Figure 1: Example sentence illustrating cross-serial dependencies in Dutch. English translation: "because Wim saw Jan help Marie teach the children to swim." ure 1, that derives the correct labeled dependency structure, but whose derivation does not resemble syntactic dependencies. Using this example, we observe an important limitation of string-to-graph SHRG: With nonterminal hyperedges of bounded type (number of incident vertices), we cannot analyze cross-serial dependencies with an unlimited number of crossing edges. Specifically, for a given dependency edge covering a span of words, the number of nodes outside the span that can have a dependent or parent inside the span is limited. This is because, on the input side, the grammar is a plain string CFG. In a string CFG derivation, each node must correspond to a connected subspan of the input. Because of this constraint on the derivation, the dependency subgraphs constructed by the HRG must maintain a reference to all words that have a long distance dependent elsewhere in the string. These references are passed on through the derivation in the external nodes of each graph rhs of the SHRG rules. External nodes are special vertices at which graph fragments are connected to the surrounding graph.
To avoid this problem, instead of a plain string CFG one can use other formalisms that produce context free derivation trees, such as the string-generating HRGs we discuss in this paper or LTAG.
Semantic representations, such as Abstract Meaning Representation, resemble dependency structures. Therefore, while we do not discuss semantic graphs to skirt the issue of reentrancy, non-projective linguistic phenomena that appear in syntactic dependency structure are also relevant when translating strings into semantic representations. We believe that our observations are not only of theoretical interest, but affect practical applications of SHRG in semantic parsing.
The paper proceeds as follows: Section 2 provides a formalization of Hyperedge Replacement Grammars and introduces necessary terminology.
In section 3, we discuss string-generating HRGs and illustrate how they can be used to correctly analyze cross-serial dependencies in an example. Section 4 examines string-to-graph SHRGs and observes their limitations in generating cross-serial dependencies. In section 5, we analyze this limitation in more detail, demonstrating a relationship between the order of a grammar (the maximum hyperedge type) and the maximum number of edges crossing another edge. Section 6 provides an overview of related work. Finally, we conclude and summarize our findings in section 7.
Hyperedge Replacement Graph Grammars
A directed, edge-labeled hypergraph is a tuple H = ⟨V, E, ℓ⟩, where V is a finite set of vertices, E ⊆ V⁺ is a finite set of hyperedges, each of which connects a number of vertices, and ℓ is a labeling function with domain E. The number of vertices connected by a hyperedge is called its type.
A hyperedge replacement grammar (HRG, Drewes et al. (1997)) is a tuple G = ⟨N, Σ, P, S⟩ where N is a ranked, finite set of nonterminal labels, Σ is a finite set of terminal labels such that Σ ∩ N = ∅, S ∈ N is the designated start symbol, and P is a finite set of rules. Each rule r ∈ P is of the form (A → R, X), where A ∈ N, R = ⟨V, E, ℓ⟩ is a hypergraph with ℓ : E → N ∪ Σ, and X ∈ V* is a list of external nodes. We call the number of vertices |V| in a rule rhs the width of the rule. The maximum type of any nonterminal hyperedge in the grammar is called the order of the grammar.¹

¹ [...] are used in the literature. We use the word rank to refer to the maximum number of nonterminals in a rule right hand side.

Figure 2: A 'string-generating' lexicalized hyperedge replacement grammar for Dutch cross serial dependencies. The grammar can derive the sentence in Figure 1. The derivation tree for this sentence represents the correct dependency structure. [Rule diagrams not reproduced.]
Given a partially derived graph H we can use a rule (A → R, X) to rewrite a hyperedge e = (v_1, ..., v_k) if e has label A and k = length(X). In this operation, e is removed from H, a copy of R is inserted into H, and the external nodes X = (u_1, ..., u_k) of the copy of R are fused with the nodes connected by e, such that u_i is identified with v_i for i = 1, ..., k.
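A small sketch may make the rewriting step concrete. The hypergraph encoding below (vertices as integers, hyperedges as label/vertex-tuple pairs) is our own ad-hoc choice, not a standard representation.

import itertools

_fresh = itertools.count(10**6)  # assumed not to clash with host vertices


def replace(host_edges, edge, rule_edges, externals):
    """Rewrite nonterminal `edge` = (A, (v_1, ..., v_k)) with a rule rhs.

    External node externals[i] of the rhs is fused with the i-th vertex
    connected by `edge`; all other rhs vertices get fresh identities.
    """
    _, attached = edge
    assert len(externals) == len(attached)
    mapping = dict(zip(externals, attached))
    rhs_vertices = {v for _, vs in rule_edges for v in vs}
    for v in rhs_vertices - set(externals):
        mapping[v] = next(_fresh)
    new_edges = [e for e in host_edges if e != edge]       # remove e
    new_edges += [(label, tuple(mapping[v] for v in vs))   # insert rhs copy
                  for label, vs in rule_edges]
    return new_edges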
When showing rules in diagrams, such as Figure 2, we draw external nodes as black circles and number them with an index to make their order explicit. Nonterminal hyperedges are drawn as undirected edges whose incident vertices are ordered left-to-right.
The relation H ⇒_G H′ holds if hypergraph H′ can be derived from hypergraph H in a single step using the rules in G. Similarly, H ⇒*_G H′ holds if H′ can be derived from H in a finite number of steps. The hypergraph language of a grammar G is the (possibly infinite) set of hypergraphs that can be derived from the start symbol S:

L(G) = ⋃_{(S → H, ε) ∈ P} { H′ | H ⇒*_G H′, H′ has only terminals }
We will show examples for HRG derivations below.
HRG derivations are context-free in the sense that the applicability of each production depends on the nonterminal label and type of the replaced edge only. We can therefore represent derivations as trees, as for other context free formalisms. Context freeness also allows us to extend the formalism to a synchronous formalism, for example to translate strings into trees, as we do in section 4. We can view the resulting string and graph languages as two interpretations of the same set of possible derivation trees described by a regular tree grammar (Koller and Kuhlmann, 2011).
HRG Derivations as Dependency Structures
We first discuss the case in which HRG is used to derive a sentence and examine the dependency structure induced by the derivation tree. Hyperedge Replacement Grammars can derive string languages as path graphs in which edges are labeled with tokens. For example, consider the path graph for the sentence in Figure 1.
Wim Jan Marie de kinderen zag helpen leren zwemmen

Engelfriet and Heyker (1991) show that the string languages generated by HRG in this way are equivalent to the output languages of Deterministic Tree Walking Transducers (DTWT). Weir (1992) shows that these languages are equivalent to the languages generated by linear context free rewriting systems (LCFRS) and that the LCFRS languages with fan-out k are the same as the HRG string languages with order 2k.
The analysis of cross-serial dependencies has been studied in a number of 'mildly context sensitive' grammar formalisms. For example, Rambow and Joshi (1997) show an analysis in LTAG. Because the string languages generated by these formalisms are equivalent to languages of LCFRS with fan-out 2, we know that we must be able to write an HRG of order 4 that can capture cross-serial dependencies. Figure 2 shows such a string-generating HRG that can derive the example in Figure 1. Each rule rhs consists of one or more internally connected spans of strings (paths of labeled edges). The external nodes of each rhs graph mark the beginning and end of each span. The nonterminal labels of other rules specify how these spans are combined and connected to the surrounding string. For illustration, consider the first two steps of a derivation in this grammar. Rule 1 introduces the verb 'zag' and its subject. Rule 2 inserts 'helpen' to the right of 'zag' and its subject and direct object to the right of the subject of 'zag'. This creates crossing dependencies between the subjects and their predicates in the derivation.
[Diagram: the partial derivation after applying rule R1 and then rule R2, showing the string spans for 'zag' and 'helpen' and the nonterminal hyperedges N2 and V4; not reproduced here.]
The partially derived graph now contains a span of nouns and a span of verbs. The nonterminal hyperedge labeled V4 indicates where to append new nouns and where to add new verbs. Note that rule 2 (or an identical rule for a different verb) can be re-applied to generate cross-serial dependencies with an arbitrary number of crossings. It is easy to see that grammars of this type correspond to LCFRS almost directly.
Using the grammar in Figure 2, there is a single derivation tree for the example sentence in Figure 1. This derivation tree represents the correct syntactic dependency structure for the sentence. This is not the case for all lexicalized 'mildly context sensitive' grammar formalisms, even if it is possible to write grammars for languages that contain cross-serial dependencies. In TAG, long distance dependencies are achieved using adjunction: both dependents are introduced by the same auxiliary tree, stretching the host tree apart. An LTAG derivation for the example sentence would start with an elementary tree for 'zwemmen' and then adjoin 'leren'. The resulting dependency structure is therefore inverted.
Deriving Dependency Graphs with Synchronous String-to-Graph Grammars
We now consider grammars whose derived graphs represent the dependency structure of a sentence. The goal is to write a synchronous context-free string-to-graph grammar that translates sentences into their dependency graphs. If the string side of the grammar is a plain string CFG, as we assume here, the derivation cannot reflect non-projective dependencies directly. Instead, we must use the graph side of the grammar to assemble a dependency structure. This approach has several potential advantages in applications. In the string-generating HRG discussed in the previous section, the degree of a node in the dependency structure is limited by the rank of the grammar. Using a graph grammar to derive the graph, we can add an arbitrary number of dependents to a node, even if the rules contributing these dependency edges are nested in the derivation. This is especially important for more semantically inspired representations where all semantic arguments should become direct dependents of a node (for example, deep subjects). We can also make the resulting graphs reentrant. In addition, because HRGs produce labeled graphs, we can add dependency labels. Finally, even though the example grammar in Figure 3 is lexicalized on the string side, lexicalization is no longer required to build a dependency structure. Unfortunately, 'decoupling' the derivation from the dependency structure in this way can be problematic, as we will see.

Figure 3: A synchronous string-to-graph grammar for Dutch cross-serial dependencies. The grammar can derive the sentence/dependency graph pair in Figure 1, but the derivation tree does not reflect syntactic dependencies.

Figure 3 shows a synchronous hyperedge replacement grammar that can translate the sentence from Figure 1 into its dependency graph. A synchronous hyperedge replacement grammar (SHRG) is a synchronous context free grammar in which at least one of the right hand sides uses hypergraph fragments. The two sides of the grammar are synchronized in a strong sense. Both rhs of each grammar rule contain exactly the same instances of nonterminals, and the instances are related by a bijective synchronization relation (in case of ambiguity we make the bijection explicit by indexing nonterminals when representing grammars). In a SHRG, each nonterminal label can only be used to label hyperedges of the same type. For example, V2 is only used for hyperedges of type 2. As a result, all derivations for the string side of the grammar are also valid derivations for graphs.
In the grammar in Figure 3, vertices represent nodes in the dependency structure (words). Because HRGs derive edge-labeled graphs but no vertex labels, we use a unary hyperedge (a hyperedge with one incident vertex) to label each node. For example, the only node in the rhs of rule 1 has the label 'zwemmen'.
Nonterminal hyperedges are used to 'pass on' vertices to which we need to attach a dependent at a later point in the derivation. External nodes define how these nodes are connected to the surrounding derived graph. To illustrate this, a derivation using the grammar in Figure 3 could start with rule 1, then replace the nonterminal V1 with the rhs of rule 2. We then substitute the new nonterminal V1 introduced by rule 2 with rule 3. At this point, the partially derived string is 'V2 helpen leren zwemmen', and the partially derived graph is the corresponding partial dependency structure over 'helpen', 'leren', and 'zwemmen' with an attached V2 nonterminal hyperedge (diagram not reproduced).
The nonterminal V2 passes on a reference to two nodes in the graph, one for 'helpen' and one for 'leren'. This allows subsequent rules in the derivation to attach subjects and objects to these nodes, as well as the parent node ('zag') to 'helpen'. To derive the string/graph pair in Figure 1, the rules of this grammar are simply applied in order (rule 1 ⇒ rule 2 ⇒ ··· ⇒ rule 9). Clearly, the resulting derivation is just a chain and bears no resemblance to the syntactic dependency structure.
While the grammar can derive our example sentence, it does not permit us to derive dependency structures with an arbitrary number of crossing dependencies. This is because the nonterminal edges need to keep track of all possible sites at which long distance dependents can be attached at a later point in the derivation. To add more crossing dependencies we therefore need to create special rules with nonterminal hyperedges of a larger type, as well as the corresponding rules with a larger number of external nodes. Because any grammar has a finite number of rules and a fixed order, we cannot use this type of SHRG grammar to model languages that permit an arbitrary degree of crossing edges in a graph. While the graph grammar can keep track of long-distance dependencies, the string grammar is still context free, so any nonlocal information needs to be encoded in the nonterminals. The penalty we pay for being able to remember a limited set of dependents through the derivation is that we need a refined alphabet of nonterminals (V1, V2, V3, ···; instead of just V).

Figure 4: Sketch of the derivation tree of a synchronous hyperedge replacement grammar, showing two dependency edges (u, w) and (v, x), with u < v < w < x. The graph fragment associated with the rule at node α needs to contain nodes u, w and v; v must be an external node.
Edge Degree and Hyperedge Type
In section 4 we saw that we need an ever-increasing hyperedge type if we want to model languages in which a dependency edge can be crossed by an arbitrary number of other dependency edges. So far, we have only illustrated this point with an example. In this section we will demonstrate that no such grammar can exist.
It is clear that the problem is not with generating the tree language itself. We could easily extend the string-generating grammar from section 3, whose derivation trees reflect the correct dependency structure, by adding a second graph rhs that derives an image of the derivation tree (potentially with dependency labels). Instead, the problem appears to be that we force grammar rules to be applied according to the string derivation.
Specifically, the partially derived string associated with each node in the derivation needs to be a contiguous subspan. This prevents us from assembling dependencies locally.
To make this intuition more formal, we demonstrate that there is a relationship between the number of crossing dependencies and the minimum hyperedge type required in the SHRG. We first look at a single pair of crossing dependency edges and then generalize the argument to multiple edges crossing into the span of an edge. For illustration, we provide a sketch of a SHRG derivation tree in Figure 4.
Assume we are given a sentence s = (w_0, w_1, ..., w_{n−1}) and a corresponding dependency graph G = ⟨V, E, ℓ⟩ where V = {0, 1, ..., n − 1}. We define the range of a dependency edge (u, v) to be the interval [u, v] if v > u and [v, u] otherwise. For each dependency edge (u, v), the number of crossing dependencies is the number of dependency nodes properly outside its range that share a dependency edge with any node properly inside its range. The degree of crossing dependencies of a dependency graph is the maximum number of crossing dependencies for any of its edges.
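These definitions translate directly into code; the sketch below computes the degree of crossing dependencies from an edge list of (u, v) integer pairs (the representation is ours).

def degree_of_crossing_dependencies(edges):
    """Maximum, over all edges, of the number of nodes properly outside
    the edge's range that share an edge with a node properly inside it."""
    def crossings(u, w):
        lo, hi = min(u, w), max(u, w)
        outside = set()
        for a, b in edges:
            if lo < a < hi and not lo <= b <= hi:
                outside.add(b)
            if lo < b < hi and not lo <= a <= hi:
                outside.add(a)
        return len(outside)
    return max(crossings(u, w) for u, w in edges)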
Given a SHRG derivation tree for s and G, each terminal dependency edge (u, w) ∈ E must be produced by the rule associated with some derivation node β (see Figure 4). Without loss of generality, assume that u < w. String token s[u] is produced by the rule associated with some derivation node τ_u and s[w] is produced by the rule of some derivation node τ_w. On the graph side, τ_u and τ_w must contain the nodes u and w because they generate the unary hyperedges labeling these vertices. There must be some common ancestor α of β, τ_u, and τ_w that contains both u and w. u and w must be connected in α by a nonterminal hyperedge, because otherwise there would be no way to generate the terminal edge (u, w) in β (note that it is possible that α and β are the same node, in which case the rule of this node does not contain a nonterminal edge). Now consider another pair of nodes v and x such that u < v < w < x and there is a dependency edge (v, x) ∈ E or (x, v) ∈ E. s[v] is generated by τ_v and s[x] is generated by τ_x. As before, there must be a common ancestor γ of τ_v and τ_x in which v and x are connected by a nonterminal hyperedge. Because u < v < w < x, either α is an ancestor of γ or γ is an ancestor of α. For illustration, we assume the second case. The case where α dominates γ is analogous.
Since the graph fragments of all derivation nodes on the path from γ to τ_v must contain a vertex that maps to v, α must contain such a vertex. This vertex needs to be an external node of the rule attached to α because otherwise v could not be introduced by γ.
We can extend the argument to an arbitrary number of crossing dependency edges. As before, let (u, w) be a dependency edge and α be the derivation node whose graph fragment first introduces the nonterminal edge between u and w. For all dependency edges (x, y) or (y, x) for which y is in the range of (u, w) and x is outside of the range of (u, w) (either x < u < y < w or u < y < w < x) there must be some path in the derivation tree that leads through α. All graph fragments on this path contain a vertex mapped to y. As a result, the graph fragment in α needs to contain one external node for each x that has a dependency edge to some node y inside the range (u, w). In other words, α needs to contain as many external nodes as there are nodes outside the range (u, w) that share a dependency edge with a node inside the range (u, w).
Because every HRG has a fixed order (the maximum type of any nonterminal hyperedge), no SHRG that generates languages with an arbitrary number of cross-serial dependencies can exist. It is known that the hypergraph languages HRL_k that can be generated by HRGs of order k form an infinite hierarchy, i.e. HRL_1 ⊊ HRL_2 ⊊ ··· (Drewes et al., 1997). Therefore, the string-to-graph grammars required to generate cross-serial dependencies up to edge degree k are strictly more expressive than those that can only generate edge degree k − 1.
Related Work
While the theory of graph grammars dates back to the 70s (Nagl, 1979; Drewes et al., 1997), their use in Natural Language Processing is more recent. Fischer (2003) uses string-generating HRG to model discontinuous constituents in German. Jones et al. (2012) introduce SHRG and demonstrate an application to construct intermediate semantic representations in machine translation. Peng et al. (2015) automatically extract SHRG rules from corpora annotated with graph based meaning representations (Abstract Meaning Representation), using Markov Chain Monte Carlo techniques. They report competitive results on string-to-graph parsing. Braune et al. (2014) empirically compare SHRG to cascades of tree transducers as devices to translate English strings into reentrant semantic graphs. In agreement with the result we show more formally in this paper, they observe that, to generate graphs that contain a larger number of long-distance dependencies, a larger grammar with more nonterminals is needed, because the derivations of the grammar are limited to string CFG derivations.
Synchronous context-free string-graph grammars have also been studied in the framework of Interpreted Regular Tree Grammars (Koller and Kuhlmann, 2011) using S-Graph algebras (Koller, 2015). In the TAG community, HRGs have been discussed by Pitsch (2000), who shows a construction to convert TAGs into HRGs. Finally, Joshi and Rambow (2003) discuss a version of TAG in which the derived trees are dependency trees, similar to the SHRG approach we present here.
To use string-generating HRGs in practice we need an HRG parser. Chiang et al. (2013) present an efficient graph parsing algorithm. However, their implementation assumes that graph fragments are connected, which is not true for the grammar in section 3. On the other hand, since string-generating HRGs are similar to LCFRS, any LCFRS parser could be used. The relationship between the two parsing problems merits further investigation. Seifert and Fischer (2004) describe a parsing algorithm specifically for string-generating HRGs.
Formal properties of dependency structures generated by lexicalized formalisms have been studied in detail by Kuhlmann (2010).
He proposes measures for different types of nonprojectivity in dependency structures, including edge degree (which is related to the degree of crossing dependencies we use in this paper) and block degree. A qualitative measure of dependency structures is well-nestedness, which indicates whether there is an overlap between subtrees that do not stand in a dominance relation to each other. In future work, we would like to investigate how these measures relate to dependency structures generated by HRG derivations and SHRG derived graphs.
Conclusion
In this paper we investigated the capability of hyperedge replacement graph grammars (HRGs) and synchronous string-to-graph grammars (SHRGs) to generate dependency structures for non-projective phenomena. Using Dutch cross-serial dependencies as an example, we compared two different approaches: string-generating HRGs, whose derivation trees can be interpreted as dependency structures, and string-to-graph SHRGs, which create dependency structures as their derived graphs.
We provided an example grammar for each case. The derivation tree of the HRG adequately reflected syntactic dependencies, and the example grammar could in principle generate an arbitrary number of crossing dependencies. However, these derivation trees are unlabeled and cannot be extended to represent deeper semantic relationships (e.g., semantic argument structure and coreference). For the string-to-graph SHRG, we saw that the derived graph of our grammar represented the correct dependencies for the example sentence, while the derivation tree did not.
The main observation of this paper is that, unlike the string-generating HRG, the string-to-graph SHRG was only able to generate a limited number of crossing dependencies. With each additional crossing edge in the example, we needed to add a new rule with a higher hyperedge type, increasing the order of the grammar. We argued that the reason for this is that the synchronous derivation for the input string and output graph is constrained to be a valid string CFG derivation. Analyzing this observation more formally, we showed a relationship between the order of the grammar and the maximum permitted number of edges crossing into the span of another edge.
An important conclusion is that, unless the correct syntactic dependencies are already local in the derivation, HRGs cannot derive dependency graphs with an arbitrary number of cross-serial dependencies. We take this to be a strong argument for using, in synchronous grammars for syntactic and semantic analysis, lexicalized formalisms that can process at least a limited degree of non-projectivity, such as LTAG.
In future work, we aim to develop a lexicalized, synchronous string-to-graph formalism of this kind. We would also like to relate our results to other measures of non-projectivity discussed in the literature. Finally, we hope to expand the results of this paper to other non-projective phenomena and to semantic graphs.
[Figure: the Dutch cross-serial dependency example "... omdat Wim Jan Marie de kinderen zag helpen leren zwemmen ." (gloss: "... because Wim Jan Marie the children saw help teach swim ."), annotated with the dependency edges ccomp, xcomp, xcomp, subj, nsubj, dobj, dobj, det and punc.]
We choose the term order instead of rank; both terms are used in the literature.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Linguistic Annotation Workshop.
Fabiene Braune, Daniel Bauer, and Kevin Knight. 2014. Mapping between English strings and reentrant semantic graphs. In Proceedings of LREC, Reykjavik, Iceland.
David Chiang, Jacob Andreas, Daniel Bauer, Karl-Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of ACL, Sofia, Bulgaria.
Frank Drewes, Annegret Habel, and Hans-Jörg Kreowski. 1997. Hyperedge replacement graph grammars. In Grzegorz Rozenberg, editor, Handbook of Graph Grammars and Computing by Graph Transformation, pages 95-162. World Scientific.
Joost Engelfriet and Linda Heyker. 1991. The string generating power of context-free hypergraph grammars. Journal of Computer and System Sciences, 43(2):328-360.
Ingrid Fischer. 2003. Modeling discontinuous constituents with hypergraph grammars. In International Workshop on Applications of Graph Transformations with Industrial Relevance (AGTIVE), pages 163-169.
Bevan Jones, Jacob Andreas, Daniel Bauer, Karl-Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proceedings of COLING, Mumbai, India. First authorship shared.
Aravind Joshi and Owen Rambow. 2003. A formalism for dependency grammar based on tree adjoining grammar. In Proceedings of the Conference on Meaning-Text Theory, pages 207-216.
Alexander Koller and Marco Kuhlmann. 2011. A generalized view on parsing and translation. In Proceedings of the 12th International Conference on Parsing Technologies, pages 2-13. Association for Computational Linguistics.
Alexander Koller. 2015. Semantic construction with graph grammars. In Proceedings of the 11th International Conference on Computational Semantics (IWCS), pages 228-238.
Marco Kuhlmann. 2010. Dependency Structures and Lexicalized Grammars: An Algebraic Approach, volume 6270. Springer.
Manfred Nagl. 1979. A tutorial and bibliographical survey on graph grammars. In Proceedings of the International Workshop on Graph-Grammars and Their Application to Computer Science and Biology, pages 70-126, London, UK. Springer-Verlag.
Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In Proceedings of CoNLL.
Gisela Pitsch. 2000. Hyperedge replacement and tree adjunction. In Anne Abeillé and Owen Rambow, editors, Tree Adjoining Grammars. CSLI.
Owen Rambow and Aravind Joshi. 1997. A formal look at dependency grammars and phrase-structure grammars, with special consideration of word-order phenomena. In Leo Wanner, editor, Recent Trends in Meaning-Text Theory, pages 167-190. John Benjamins, Amsterdam and Philadelphia.
Sebastian Seifert and Ingrid Fischer. 2004. Parsing string generating hypergraph grammars. In International Conference on Graph Transformations (ICGT), pages 352-367.
David J. Weir. 1992. Linear context-free rewriting systems and deterministic tree-walking transducers. In Proceedings of ACL, pages 136-143, Newark, Delaware, USA, June. Association for Computational Linguistics. |
1,353,004 | Effective Morphological Feature Selection with MaltOptimizer at the SPMRL 2013 Shared Task | The inclusion of morphological features provides very useful information that helps to enhance the results when parsing morphologically rich languages. MaltOptimizer is a tool that, given a data set, searches for the optimal parameters, parsing algorithm and feature set, achieving the best results that it can find for parsers trained with MaltParser. In this paper, we present an extension of MaltOptimizer that explores, one by one and in combination, the features that are geared towards morphology. From our experiments in the context of the Shared Task on Parsing Morphologically Rich Languages, we extract an in-depth study that shows which features are actually useful for transition-based parsing, and we provide competitive results in a fast and simple way. | [
14147937,
1585700,
9168100,
2252070,
16122410,
18352846,
6214640,
9117271,
2373017,
15541882,
1708411,
1743005,
15430366
] | Effective Morphological Feature Selection with MaltOptimizer at the SPMRL 2013 Shared Task
Association for Computational Linguistics, 18 October 2013
Miguel Ballesteros miguel.ballesteros@upf.edu
Natural Language Processing Group, Pompeu Fabra University, Barcelona, Spain
Effective Morphological Feature Selection with MaltOptimizer at the SPMRL 2013 Shared Task
Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically Rich Languages, Seattle, Washington, USA. Association for Computational Linguistics, 18 October 2013.
The inclusion of morphological features provides very useful information that helps to enhance the results when parsing morphologically rich languages. MaltOptimizer is a tool that, given a data set, searches for the optimal parameters, parsing algorithm and feature set, achieving the best results that it can find for parsers trained with MaltParser. In this paper, we present an extension of MaltOptimizer that explores, one by one and in combination, the features that are geared towards morphology. From our experiments in the context of the Shared Task on Parsing Morphologically Rich Languages, we extract an in-depth study that shows which features are actually useful for transition-based parsing, and we provide competitive results in a fast and simple way.
Introduction
Since the CoNLL Shared Tasks on syntactic dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007), the number of treebanks and new parsing methods has increased considerably. Thanks to that, it has been observed that parsing morphologically rich languages (henceforth, MRLs) is a challenge, because these languages include multiple levels of information that are difficult to classify and, therefore, to parse. This is why there has been recent research in this direction, with, for instance, a Special Issue of Computational Linguistics (Tsarfaty et al., 2012b).
MaltOptimizer (Ballesteros and Nivre, 2012b; Ballesteros and Nivre, 2012a) is a system that is capable of providing optimal settings for training models with MaltParser (Nivre et al., 2006a), a freely available transition-based parser generator. MaltOptimizer, among other things, performs an in-depth feature selection, selecting the attributes that help to achieve better parsing results. In this paper - and in this participation in the Shared Task on Parsing Morphologically Rich Languages (Seddah et al., 2013) - we present an extension of MaltOptimizer that performs a deeper search over the morphological features, which are somewhat one of the keys to parsing MRLs. Instead of lumping all morphosyntactic features together, we define a different field for each individual feature (case, number, gender, etc.). Hence, we are able to extract a study that shows which features are actually useful for parsing MRLs with MaltParser.
The new SPMRL-MaltOptimizer implementation is available for download at http://nil.fdi.ucm.es/maltoptimizer/spmrl.html. It is worth noting that it can be applied to any treebank in CoNLL data format (http://ilk.uvt.nl/conll/#dataformat). The rest of the paper is organized as follows. Section 2 describes MaltOptimizer. Section 3 shows how we modified MaltOptimizer to make it able to perform a more complete morphological feature selection. Section 4 describes the experiments that we carried out with the data sets of the Shared Task on Parsing Morphologically Rich Languages. Section 5 reports the results of the experiments and the conclusions that we can extract. Section 6 discusses related work on MaltOptimizer and parsing morphologically rich languages. Finally, Section 7 concludes.
MaltOptimizer
MaltOptimizer is a system written in Java that implements a full optimization procedure for MaltParser based on the experience acquired from previous experiments (Hall et al., 2007; Nivre and Hall, 2010). MaltOptimizer attempts to find the best model that it can, but it does not guarantee that the outcome is the best model possible, because of the difficulty of exploring all the possibilities provided by the parameters, parsing algorithms and different feature windows. The optimization procedure is divided into three phases, as follows:
1. Data analysis and initial optimization.
2. Parsing algorithm selection.
3. Feature selection and LIBLINEAR optimization.
MaltOptimizer divides the treebank into a training set and a held-out test set for evaluation. In the first phase, MaltOptimizer analyzes the treebank in order to set up the rest of the optimization, and it attempts the optimization with some general parameters, such as the way of handling covered roots (a covered root is a root node covered by a dependency arc). After that, it tests the parsing algorithms that are available in MaltParser, selecting the one that provides the best results in default settings. In the third phase, it explores a wide range of features that are based on previous parsing steps and/or the information annotated in the treebanks. Finally, it also explores the single hyper-parameter (c) of the LIBLINEAR classifier. A schematic sketch of this optimization loop is given below.
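The following is a minimal sketch of the three-phase loop just described, under our own simplifying assumptions; all names are hypothetical, and evaluate() stands in for training MaltParser with the given settings and scoring labeled attachment on the held-out split (the real system shells out to MaltParser instead).

```python
def evaluate(settings):
    # Placeholder: train MaltParser with `settings`, parse the held-out
    # split and return its labeled attachment score.
    return 0.0

def optimize(defaults, algorithms, candidate_features):
    best, best_score = dict(defaults), evaluate(defaults)   # phase 1

    for algo in algorithms:                                 # phase 2
        trial = dict(best, algorithm=algo)
        score = evaluate(trial)
        if score > best_score:
            best, best_score = trial, score

    selected = []                                           # phase 3
    for feat in candidate_features:
        trial = dict(best, features=selected + [feat])
        score = evaluate(trial)
        if score > best_score:        # keep a feature only if it helps
            selected.append(feat)
            best, best_score = trial, score

    return best, best_score
```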
In the next section, we present how we updated MaltOptimizer for our participation in the Shared Task on Parsing MRLs.
Morphological Feature Exploration
The CoNLL data format contains several columns of information that help to perform the dependency parsing of a sentence. One of them is the FEATS column, which normally contains a set of morphological features, typically of the format a=x|b=y|c=z. At the time of writing, the available version of MaltOptimizer explores the features included in this column as a single feature, either by lumping all morphosyntactic features together in the MaltParser classifier, or by splitting the information but including all of the features at the same time without making any distinctions. This is what MaltParser allows when using the standard CoNLL format.

However, MaltParser also provides the option of parsing new data formats that are derived from the original CoNLL format. Therefore, there is the possibility to add new columns that may contain useful information for parsing. The new MaltOptimizer implementation automatically generates a new data format and a new data set, creating new columns that each contain the information of a single feature from the FEATS column (a minimal sketch of this splitting is shown below). Figure 1 shows two versions of a sentence annotated in the French treebank from the Shared Task: the one shown above is in the standard CoNLL format, and the one shown below is the extended format generated by MaltOptimizer, in which the FEATS column has been divided into 10 different columns.
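As an illustration, here is a minimal sketch (hypothetical code, not the actual SPMRL-MaltOptimizer implementation) of splitting the FEATS column of a CoNLL line into one extra column per feature, with "_" for features that are absent on a given token:

```python
def split_feats(conll_line, feature_names):
    cols = conll_line.rstrip("\n").split("\t")
    feats = cols[5]                       # FEATS is the 6th CoNLL column
    values = dict(kv.split("=", 1) for kv in feats.split("|") if "=" in kv)
    extra = [values.get(name, "_") for name in feature_names]
    return "\t".join(cols + extra)

line = "5\t-il\til\tCL\tCLS\tg=m|n=s|p=3\t4\tsuj\t_\t_"
print(split_feats(line, ["g", "n", "p", "c"]))
# ...the original columns followed by the new ones: m  s  3  _
```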
Figure 1: A sentence from the French treebank in the standard (above) and complex (below) formats. The projective columns have been removed for simplicity. [Figure content: the CoNLL-format rows of the French sentence "... tout cas est-il plus nuancé .", shown in both formats.]
Experiments
With the intention of both assessing the usefulness of the new MaltOptimizer implementation and testing which features are useful for each targeted language, we carried out a series of experiments over the data sets from the Shared Task on Parsing MRLs (Seddah et al., 2013). We ran the new MaltOptimizer implementation for all the data sets provided by the Shared Task organizers and ran MaltParser with the model suggested. Therefore, we had 36 different runs, four for each language (gold and predicted scenarios with the 5k treebanks, and gold and predicted scenarios with the full treebanks). In order to have a comparable set of results, we performed all the optimization processes with the smaller versions of the treebanks (5k), and carried out both optimization and training with both the small and the larger versions for all languages. Each MaltOptimizer run took approximately 3-4 hours for optimization (the running time also depends on the size of the set of morphological features and on other parameters, such as the number of dependency relations), and it took around 20 extra minutes to get the final model with MaltParser. These estimates were obtained on an Intel Xeon server with 8 cores at 2.8 GHz and a heap space of at least 8 GB.

Table 1 shows the results for the gold-standard input, while Table 2 shows the results for the provided predicted input, reporting the best model that the new MaltOptimizer implementation can find (Dev-5k, Dev, Test-5k and Test) and a baseline, MaltParser in default settings (Malt-5k and Malt), on the test sets. The first conclusion to draw is that the difference between gold and predicted inputs is normally around 2 points; however, for some languages, such as French, the drop reaches 6 points. It is also evident that, as shown by Ballesteros and Nivre (2012a), some languages benefit more from the feature selection phase, while others achieve higher improvements by selecting a different parsing algorithm.
Results and Discussion
In general terms, almost all languages benefit from having an accurate stemmed version of the word in the LEMMA column, which provides very substantial improvements when this feature is selected accurately. Another key feature, for almost all languages, is grammatical CASE, which definitely enhances performance; we can therefore conclude that it is essential for MRLs. Both aspects evidence the lexical challenge of parsing MRLs without this information.
Compared with the MaltParser baseline, there is a positive average difference of 4.0 points when training over the full treebanks in the predicted scenario and of 5.6 points when training over the full treebanks in the gold scenario (the short script below reproduces these averages from the Malt and Test columns of Tables 1 and 2). It is therefore evident how useful MaltOptimizer is when it can perform an in-depth morphological feature exploration. In the following subsections we explain the results for each targeted language, giving special emphasis to the ones that turn out to be more meaningful.
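As a quick sanity check (ours, not part of the paper's tooling), the quoted averages can be recomputed from the Malt and Test columns for the full treebanks in Tables 1 and 2:

```python
# language: (Malt, Test), full treebanks
gold = {"Arabic": (82.28, 87.03), "Basque": (69.19, 82.07),
        "French": (79.86, 85.71), "German": (79.98, 86.96),
        "Hebrew": (76.61, 80.03), "Hungarian": (72.34, 83.14),
        "Korean": (88.43, 89.39), "Polish": (77.70, 80.49),
        "Swedish": (75.73, 77.67)}
pred = {"Arabic": (80.36, 81.90), "Basque": (70.11, 78.58),
        "French": (77.98, 79.00), "German": (77.81, 82.75),
        "Hebrew": (69.97, 73.01), "Hungarian": (70.15, 79.63),
        "Korean": (82.06, 82.65), "Polish": (75.63, 80.49),
        "Swedish": (73.21, 75.82)}

def avg_gain(table):
    return sum(test - malt for malt, test in table.values()) / len(table)

print(round(avg_gain(gold), 2))   # 5.6
print(round(avg_gain(pred), 2))   # 4.06, roughly the 4.0 points quoted
```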
Arabic
For Arabic, we used the shared task Arabic data set, originally provided by the LDC (Maamouri et al., 2004), specifically its SPMRL 2013 dependency instance, derived from the Columbia Catib Treebank (Habash and Roth, 2009; Habash et al., 2009) and extended according to the SPMRL 2013 extension scheme (Seddah et al., 2013). For the gold input, the most useful feature is, by far, DASHTAG, with an improvement of 2 points (DASHTAG comes from the original constituent data: when a DASHTAG was present in a head node label, this feature was kept in the Catib corpus). CASE is also very useful, as it is for most of the languages, with 0.67 points. Moreover, SUBCAT (0.159) and CAT (0.129) provide improvements as well.
In the predicted scenario there is no DASHTAG, which allows other features to rise: for instance, CASE (0.66), CPOSTAG (0.12), GENDER (0.08), SUBCAT (0.07) and CAT (0.06) provide improvements. Finally, it is worth noting that the TED accuracy (Tsarfaty et al., 2011) for the lattices is 0.8674 with the full treebanks and 0.8563 with the 5k treebanks, which exceeds the baseline by more than 0.06 points; this shows that MaltOptimizer is also useful under TED evaluation constraints.
Basque
The improvement provided by the feature selection for Basque (Aduriz et al., 2003) is really high: almost 13 points with the gold input and around 8 points with the predicted input. The results in the gold scenario are actually a record if we also consider the experiments performed over the treebanks of the CoNLL Shared Tasks (Ballesteros and Nivre, 2012a). One of the reasons is the treatment of covered roots, which is optimized during the first phase of optimization. This corpus has multiple root labels, ROOT being the most common one and the one selected by MaltOptimizer as default.
For the gold input, the CPOSTAG and LEMMA columns turn out to be very useful, providing improvements of 2.5 points and slightly less than 1 point, respectively; MaltOptimizer selects them over the more central tokens of the stack and the buffer. The Basque treebank contains a very big set of possible features in the FEATS column, but only some of them provide significant improvements, which evidences the usefulness of selecting them one by one. The most useful feature, by a huge margin, is KASE (or CASE), which provides 5.9 points by itself; MaltOptimizer fills all the available positions of the stack and the buffer with this feature. Another useful feature is ERL [type of subordinated sentence], providing almost 0.8 points. Moreover, NUMBER (0.3), NORK2 (0.15), ASP [aspect] (0.09), NOR1 (0.08) and NMG (0.06) provide slighter, but significant, improvements as well (NORK2, NOR1 and NMG are auxiliary case markers).

Surprisingly, the predicted input provides better results in the first two phases, which means that for some reason MaltParser is able to parse better using just the predicted POS column; however, the improvement achieved by MaltOptimizer during Phase 3 is (just) a bit more than 7 points. In this case, the CPOSTAG column is less useful, providing only 0.13 points, while the LEMMA (1.2) is still very useful. CASE provides 4.5 points, while NUM (0.17), ASP (0.13) and ADM (0.11) provide improvements as well.
French
For French (Abeillé et al., 2003) there is a huge difference between the results with gold input and the results with predicted input: with gold input, the feature selection provides a bit less than 8 points, while with predicted input there is an improvement of only around 2 points. In this case, the lack of quality in the predicted features is evident. It is also interesting that the lexical column, FORM, provides a quite substantial improvement when MaltOptimizer attempts modifications over it, which is something that does not happen for the rest of the languages.
For the gold input, apart from LEMMA, which provides around 0.7 points, the most useful feature is MWEHEAD [head of a multiword expression, if it exists], which does not exist in the predicted scenario. MWEHEAD provides more than 4 points; this fact invites us to think that a predicted version of this feature would be very useful for French, if possible. PRED [automatically predicted] (0.8), G [gender] (0.6), N [number] (0.2) and S [subcat] (0.14) are also useful.
In the predicted scenario, the CPOSTAG column provides some improvement (around 0.1), while the LEMMA is less useful than in the gold scenario (0.2). The morphological features that are useful are S [subcat] (0.3) and G [gender] (0.3).
German
For German (Brants et al., 2002) the results are more or less average. For the gold input, LEMMA is the best feature, providing around 0.8 points; among the morphological features the most useful one is, as expected, CASE, with 0.58 points. GENDER (0.16) and NUMBER (0.16) are also useful.
In the predicted scenario, CASE is again very useful (0.67). Other features, such as NUMBER (0.10) and PERSON (0.10), provide improvements as well, though somewhat smaller than in the gold scenario.
Hebrew
For the Hebrew treebank (Sima'an et al., 2001; Tsarfaty, 2013), unfortunately, we did not see large improvements from adding the morphological features. For the gold input, only CPOSTAG (0.08) shows some improvement, while the predicted scenario shows improvements for NUM (0.08) and PER (0.08). It is worth noting that the TED accuracy (Tsarfaty et al., 2011) for the lattices is 0.8305, which ranked second.
This outcome is different from the one obtained by Goldberg and Elhadad (2010), but it might well turn out differently with a different parsing algorithm, because two parsers may need different features, as shown by Zhang and Nivre (2012). This is why it would be very interesting to perform new experiments with MaltOptimizer, testing the different parsing algorithms included in MaltParser on the Hebrew treebank.
Hungarian
The Hungarian (Vincze et al., 2010) results are also very consistent. During the feature selection phase, MaltOptimizer achieves an improvement of 10 points through the inclusion of morphological features. This also happened in the initial experiments performed with MaltOptimizer (Ballesteros and Nivre, 2012a), using the Hungarian treebank of the CoNLL 2007 Shared Task. The current Hungarian treebank presents covered roots and multiple root labels, which is why we also get substantial improvements during Phase 1.
For the gold input, as expected, the LEMMA column is very useful, providing more than 1.4 points, and MaltOptimizer selects it all over the available feature windows. The best morphological feature is again CASE, providing an improvement of 5.7 points just by itself, in a similar way as in the experiments with Basque. In this case, the SUBPOS [grammatical subcategory] feature included in the FEATS column is also very useful, providing around 1.2 points. Other features that are useful are NUMP [number of the head] (0.2), NUM [number of the current token] (0.16), DEF [definiteness] (0.11) and DEG [degree] (0.09).

In the predicted scenario, we can observe a broadly similar behavior. MOOD provides 0.4 points, while it does not provide improvements in the gold scenario. The results of the SUBPOS feature are a bit lower in this case (0.5 points), which evidences the quality loss incurred by using predicted inputs.
Korean
Since Korean (Choi, 2013) is the language for which our submission provided the best results compared to other submissions, it is worth dedicating a section to its results. For the 5k input, our model provides the best results of the Shared Task, while the model trained over the full treebank ranked second.
For the gold input, the most useful feature is CPOSTAG, providing around 0.6 points. Looking into the morphological features, CASE, as usual, is the best feature, with 0.24 points; AUX-Type (0.11) and FNOUN-Type (0.08) are also useful.
In the predicted scenario, MaltOptimizer performs similarly, with CPOSTAG (0.35) and CASE (0.32) as the most useful features. ADJ-Type (0.11) and PUNCT-Type (0.06) are also useful. The results of the features are a bit lower with the predicted input, with the exception of CASE, which is better.
Polish
Polish (Świdziński and Woliński, 2010) is one of the two languages (together with Swedish) for which our model produces its worst results.
In the gold scenario, only the LEMMA (0.76) shows a substantial improvement during the optimization process; unfortunately, the morphological features that are extracted when MaltOptimizer generates the new complex data format did not fire.
For the predicted input, LEMMA (0.66) is again the most useful feature, but, as happened in the gold scenario, the rest of the features did not fire during the feature selection.
Swedish
As happened with Polish, the results for Swedish (Nivre et al., 2006b) are not as good as we might expect; however, we believe that the information shown in this paper is still useful, because MaltOptimizer detects which features are able to outperform both the best model found so far and the model trained with MaltParser in default settings: by a bit less than 2 points in the predicted scenario and by more than 2 points in the gold scenario.
For the gold scenario, only two features are actually useful according to MaltOptimizer, which shows improvements by adding GENDER (0.22) and PERFECTFORM (0.05).
For the predicted input, MaltOptimizer shows improvements by adding DEGREE (0.09), GENDER (0.08) and ABBRV (0.06). However, as we can see, the improvements for Swedish are lower than for the rest of the languages.
Related Work
There has been some recent research making use of MaltOptimizer. For instance, Seraji et al. (2012) used MaltOptimizer to obtain optimal models for parsing Persian. Tsarfaty et al. (2012a) worked with MaltOptimizer and Hebrew, including the optimization in their presentation of new ways of evaluating statistical parsers. Mambrini and Passarotti (2012), Agirre et al. (2012), Padró et al. (2013) and Ballesteros et al. (2013) applied MaltOptimizer to test different features of Ancient Greek, Basque and Spanish (the last two), respectively; however, at that time MaltOptimizer did not allow the FEATS column to be divided. Finally, Ballesteros et al. (2012) applied MaltOptimizer to different parsing algorithms that are not included in the downloadable version, showing that it is also possible to optimize different parsing algorithms.
Conclusions
This new MaltOptimizer implementation helps developers adapt MaltParser models to new languages with a rich set of features. It shows which features are able to make a difference in the parsing results and which are not; in this way, it is possible to focus annotation effort for the purpose of parsing. We clearly observe that MaltOptimizer very substantially outperforms the baseline, MaltParser in default settings, and it is also nice to see that the improvements provided by MaltOptimizer for the morphological features are actually very high compared to the ones obtained by MaltOptimizer for the corpora of the CoNLL shared tasks (Ballesteros and Nivre, 2012a).
It is worth noting that the experiments with MaltOptimizer do not take long. The time needed to perform the optimization is actually very short compared to the effort needed to achieve results in the same range of accuracy by careful manual optimization. The MaltOptimizer process was sped up following heuristics derived from deep proven experience (Nivre and Hall, 2010), which means that several combinations remain untested; however, it is worth noting that these heuristics resulted in performance similar to a more exhaustive search for a big set of languages (Ballesteros, 2013).
We expect the feature study shown in Section 5 to be useful for people doing parsing research who are interested in parsing MRLs. Finally, comparing our submission with the results of other teams, we believe that we provide a fast and effective parser optimization for parsing MRLs, with competitive results for most of the languages.
Table 1: Labeled attachment score per phase compared to default settings for all training sets from the Shared Task on PMRLs in the gold scenario, on the held-out test set for optimization. The first columns show results per phase (the procedure of each phase is briefly described in Section 2) on the held-out sets for evaluation. The Dev-5k and Dev columns report labeled attachment score on the development sets. The Malt-5k and Malt columns report results of MaltParser in default settings on the test sets. The Test-5k and Test columns report results for the best model found by SPMRL-MaltOptimizer on the test sets.

Language  | Default | Phase 1 | Phase 2 | Phase 3 | Diff  | Dev-5k | Dev   | Malt-5k | Malt  | Test-5k | Test
Arabic    | 83.48   | 83.49   | 83.49   | 87.95   |  4.47 | 85.98  | 87.60 | 80.36   | 82.28 | 85.30   | 87.03
Basque    | 67.05   | 67.33   | 67.45   | 79.89   | 13.30 | 80.35  | 81.65 | 67.13   | 69.19 | 81.40   | 82.07
French    | 77.96   | 77.96   | 78.27   | 85.24   |  7.28 | 85.19  | 86.30 | 78.16   | 79.86 | 84.93   | 85.71
German    | 79.90   | 81.09   | 84.85   | 87.70   |  7.80 | 87.32  | 90.40 | 76.64   | 79.98 | 83.59   | 86.96
Hebrew    | 76.78   | 76.80   | 79.37   | 80.17   |  3.39 | 79.83  | 79.83 | 76.61   | 76.61 | 80.03   | 80.03
Hungarian | 70.37   | 71.11   | 71.98   | 81.91   | 11.54 | 80.69  | 80.74 | 71.27   | 72.34 | 82.37   | 83.14
Korean    | 87.22   | 87.22   | 87.22   | 88.94   |  1.72 | 86.52  | 90.20 | 81.69   | 88.43 | 83.74   | 89.39
Polish    | 75.52   | 75.58   | 79.28   | 80.27   |  4.75 | 81.58  | 81.91 | 76.64   | 77.70 | 79.79   | 80.49
Swedish   | 76.75   | 76.75   | 78.91   | 79.76   |  3.01 | 74.85  | 74.85 | 75.73   | 75.73 | 77.67   | 77.67
Table 2: Labeled attachment score per phase compared to default settings for all training sets from the Shared Task on PMRLs in the predicted scenario, on the held-out test set for optimization. The columns report the results in the same way as Table 1, but using predicted inputs.

Language  | Default | Phase 1 | Phase 2 | Phase 3 | Diff  | Dev-5k | Dev   | Malt-5k | Malt  | Test-5k | Test
Arabic    | 83.20   | 83.21   | 83.21   | 85.68   |  2.48 | 80.35  | 82.28 | 78.30   | 80.36 | 79.64   | 81.90
Basque    | 68.80   | 69.33   | 69.89   | 77.24   |  8.44 | 78.12  | 79.46 | 68.12   | 70.11 | 77.59   | 78.58
French    | 77.43   | 77.43   | 77.63   | 79.42   |  1.99 | 77.65  | 79.33 | 76.54   | 77.98 | 77.56   | 79.00
German    | 78.69   | 79.87   | 82.58   | 83.97   |  5.28 | 83.39  | 86.63 | 74.81   | 77.81 | 79.22   | 82.75
Hebrew    | 76.29   | 76.31   | 79.01   | 79.67   |  3.38 | 73.40  | 73.40 | 69.97   | 69.97 | 73.01   | 73.01
Hungarian | 68.26   | 69.12   | 69.96   | 78.71   | 10.45 | 76.82  | 77.62 | 69.08   | 70.15 | 79.00   | 79.63
Korean    | 80.08   | 80.08   | 80.08   | 81.63   |  1.55 | 77.96  | 83.02 | 74.87   | 82.06 | 75.90   | 82.65
Polish    | 74.43   | 74.49   | 76.93   | 78.41   |  3.98 | 80.61  | 80.83 | 75.29   | 75.63 | 79.50   | 80.49
Swedish   | 74.53   | 74.53   | 76.51   | 77.66   |  3.13 | 72.90  | 72.90 | 73.21   | 73.21 | 75.82   | 75.82
Acknowledgments

I would like to thank Koldo Gojenola, who initially gave me the idea presented in this paper. I am also very thankful to Joakim Nivre for his constant help and support. Finally, special thanks to the organizers Djamé Seddah, Reut Tsarfaty and Sandra Kübler.
Anne Abeillé, Lionel Clément, and François Toussenel. 2003. Building a treebank for French. In Anne Abeillé, editor, Treebanks. Kluwer, Dordrecht.
I. Aduriz, M. J. Aranzabe, J. M. Arriola, A. Atutxa, A. Díaz de Ilarraza, A. Garmendia, and M. Oronoz. 2003. Construction of a Basque dependency treebank. In Proceedings of the 2nd Workshop on Treebanks and Linguistic Theories (TLT), pages 201-204.
Eneko Agirre, Aitziber Atutxa, and Kepa Sarasola. 2012. Contribution of complex lexical information to solve syntactic ambiguity in Basque. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), Mumbai, India.
Miguel Ballesteros and Joakim Nivre. 2012a. MaltOptimizer: A System for MaltParser Optimization. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012).
Miguel Ballesteros and Joakim Nivre. 2012b. MaltOptimizer: An Optimization Tool for MaltParser. In Proceedings of the System Demonstration Session of the Thirteenth Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012).
Miguel Ballesteros, Carlos Gómez-Rodríguez, and Joakim Nivre. 2012. Optimizing Planar and 2-Planar Parsers with MaltOptimizer. Procesamiento del Lenguaje Natural, 49.
Miguel Ballesteros, Simon Mille, and Alicia Burga. 2013. Exploring Morphosyntactic Annotation Over a Spanish Corpus for Dependency Parsing. In Proceedings of the Second International Conference on Dependency Linguistics (DEPLING 2013).
Miguel Ballesteros. 2013. Exploring Automatic Feature Selection for Transition-Based Dependency Parsing. Procesamiento del Lenguaje Natural, 51.
Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Erhard Hinrichs and Kiril Simov, editors, Proceedings of the First Workshop on Treebanks and Linguistic Theories (TLT 2002), pages 24-41, Sozopol, Bulgaria.
Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL), pages 149-164.
Jinho D. Choi. 2013. Preparing Korean Data for the Shared Task on Parsing Morphologically Rich Languages. ArXiv e-prints, September.
Yoav Goldberg and Michael Elhadad. 2010. Easy first dependency parsing of Modern Hebrew. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages (SPMRL '10), pages 103-107, Stroudsburg, PA, USA. Association for Computational Linguistics.
Nizar Habash and Ryan Roth. 2009. CATiB: The Columbia Arabic Treebank. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 221-224, Suntec, Singapore, August. Association for Computational Linguistics.
Nizar Habash, Reem Faraj, and Ryan Roth. 2009. Syntactic Annotation in the Columbia Arabic Treebank. In Proceedings of MEDAR International Conference on Arabic Language Resources and Tools, Cairo, Egypt.
Johan Hall, Jens Nilsson, Joakim Nivre, Gülsen Eryigit, Beáta Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single malt or blended? A study in multilingual parser optimization. In Proceedings of the CoNLL Shared Task of EMNLP-CoNLL 2007, pages 933-939.
Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank: Building a Large-Scale Annotated Arabic Corpus. In NEMLAR Conference on Arabic Language Resources and Tools.
Francesco Mambrini and Marco Carlo Passarotti. 2012. Will a Parser Overtake Achilles? First experiments on parsing the Ancient Greek Dependency Treebank. In Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories (TLT11).
Joakim Nivre and Johan Hall. 2010. A quick guide to MaltParser optimization. Technical report, maltparser.org.
Joakim Nivre, Johan Hall, and Jens Nilsson. 2006a. MaltParser: A data-driven parser-generator for dependency parsing. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), pages 2216-2219.
Joakim Nivre, Jens Nilsson, and Johan Hall. 2006b. Talbanken05: A Swedish treebank with phrase structure and dependency annotation. In Proceedings of LREC, pages 1392-1395, Genoa, Italy.
Joakim Nivre, Johan Hall, Sandra Kübler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task of EMNLP-CoNLL 2007, pages 915-932.
Muntsa Padró, Miguel Ballesteros, Hector Martínez, and Bernd Bohnet. 2013. Finding dependency parsing limits over a large Spanish corpus. In IJCNLP, Nagoya, Japan. Association for Computational Linguistics.
Djamé Seddah, Reut Tsarfaty, Sandra Kübler, Marie Candito, Jinho Choi, Richárd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiorkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woliński, and Alina Wróblewska. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the 4th Workshop on Statistical Parsing of Morphologically Rich Languages: Shared Task, Seattle, WA.
Mojgan Seraji, Beáta Megyesi, and Joakim Nivre. 2012. Dependency parsers for Persian. In Proceedings of the 10th Workshop on Asian Language Resources, at the 24th International Conference on Computational Linguistics (COLING 2012). ACL Anthology.
Khalil Sima'an, Alon Itai, Yoad Winter, Alon Altman, and Noa Nativ. 2001. Building a Tree-Bank for Modern Hebrew Text. Traitement Automatique des Langues.
Marek Świdziński and Marcin Woliński. 2010. Towards a bank of constituent parse trees for Polish. In Text, Speech and Dialogue: 13th International Conference (TSD), Lecture Notes in Artificial Intelligence, pages 197-204, Brno, Czech Republic. Springer.
Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2011. Evaluating dependency parsing: Robust and heuristics-free cross-annotation evaluation. In EMNLP, pages 385-396, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.
Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2012a. Cross-framework evaluation for statistical parsing. In EACL, pages 44-54.
Reut Tsarfaty, Djamé Seddah, Sandra Kübler, and Joakim Nivre. 2012b. Parsing Morphologically Rich Languages: Introduction to the Special Issue. Computational Linguistics, November.
Reut Tsarfaty. 2013. A Unified Morpho-Syntactic Scheme of Stanford Dependencies. In Proceedings of ACL.
Veronika Vincze, Dóra Szauter, Attila Almási, György Móra, Zoltán Alexin, and János Csirik. 2010. Hungarian dependency treebank. In LREC.
Yue Zhang and Joakim Nivre. 2012. Analyzing the effect of global learning and beam-search on transition-based dependency parsing. In COLING, pages 1391-1400. |
352,962 | A Principle-based Korean/Japanese Machine Translation System: NARA | This paper presents methodological and theoretical principles for constructing a machine translation system between Korean and Japanese. We focus our discussion on the real-time computing problem of the machine translation system. This problem is characterized by the time and space complexity of the translation process. The NARA system has a real-time computing algorithm based on a mathematical model integrating the linguistic competence and the linguistic performance of both languages, with the consequence that the NARA system also has a distinctive functional characteristic: a two-way translation mechanism. | [] | A Principle-based Korean/Japanese Machine Translation System: NARA
Hee Sung Chung
Electronics and Telecommunications Research Institute
Chung-Nam
Dae-Dog Danji, P.O. Box 8, Republic of Korea
A Principle-based Korean/Japanese Machine Translation System: NARA
This paper presents methodological and theoretical principles for constructing a machine translation system between Korean and Japanese. We focus our discussion on the real-time computing problem of the machine translation system. This problem is characterized by the time and space complexity of the translation process. The NARA system has a real-time computing algorithm based on a mathematical model integrating the linguistic competence and the linguistic performance of both languages, with the consequence that the NARA system also has a distinctive functional characteristic: a two-way translation mechanism.
Introduction
We are developing a two-way (bidirectional) simultaneous Korean/Japanese machine translation system: NARA [7]. The NARA system is designed around a specific computing model, a mathematical model based on the methodological and theoretical principles that formalize the two-way simultaneity. The most significant characteristic of using a formal description for the NARA system is that the descriptive content of the representative algorithm does not depend upon the conventional approaches to machine translation. Those approaches offer only methodological and theoretical arguments, such as the transfer method and the pivot method, and adopt a linguistic theory as the language model in an ad hoc manner.
In other words, current approaches to machine translation usually focus on engineering feasibility; they therefore explain only what kinds of data structures of the language they employ, how these are analyzed and how they are translated. They do not give details of the capability and limitations of their machine translation methodology. The aim of developing a machine translation system is to translate an enormous amount of information written in a foreign language. For this purpose, it is said that translation is changing from an art into a technique. If we accept the argument that translation is a technique, it is natural to view translation as merely the physical transference of the contents of one language into another. To realize the idea of mechanizing the translation process, we need to formalize the translation mechanism. In this paper, we propose a methodology for improving the quality and quantity of a machine translation system.
The general principles of the NARA system
In the NARA system we take methodological principles into consideration in both their general and specific aspects. The former is the hypothesis for constructing the machine translation system; the latter is the computational model that applies the general principles to the NARA system. The computational model for the NARA system is constructed on principles of computing theory that produce a vital link between what and how: What means the linguistic knowledge needed to construct the machine translation system, and How means the procedure that maps the collection of inputs to the desired outputs.
Our approach is intuitively motivated by Chomsky's hypothesis [4]: homogeneous communication by the same linguistic performance is possible among those who have the same linguistic competence. Linguistic performance means the real-time processing of the language, and linguistic competence means the knowledge of a language. A performance theory cannot be developed without a competence theory.
This hypothesis suggests that mutual communication is possible among different human language systems. Thus we may represent the above concept as follows:
the description of mutual communication environment = the description of linguistic competence + the description of linguistic performance.
And we may analogously represent the concept of two-way simultaneous translation as follows:

the description of two-way simultaneous translation = the description of the knowledge of both languages + the description of the performance knowledge of both languages.

This schema can be expanded a step further to:

a two-way simultaneous translation algorithm = the model of the corresponding data structures of the source language and the target language + the model of real-time processing.
A key point of contact between the theory of grammar and the translation control is the natural link between the theory of knowledge representation and the theory of knowledge processing for the machine translation system.
We define the knowledge representation and the knowledge processing for the machine translation system as a competence model and a performance model, respectively. The competence model consists of various kinds of linguistic knowledge: morphology, syntax and semantics for the NARA system. The performance model consists of several subareas: the first is concerned with which knowledge representations are constructed during simultaneous translation; the second with how the representations are utilized during translation; and the third with the measure of computational complexity during translation. We presume that these three components constitute a complete computational model for the machine translation mechanism: a knowledge representation, an algorithm and a complexity measure.
We summarize the following items as the subjects of the general principle for our computational approach of the NARA system.
(1) The theory of common grammar:
we require a common grammar suitable for the description of both languages. The common grammar is similar in significance to the way modern linguistic theory interprets the theory of universal grammar (UG) as part of a theory of language acquisition [3]. Accordingly, we adopted a unification-based grammar formalism, the K-J (J-K) grammar, as a common grammar based on the correspondence existing between the two languages.
(2) The notion of direct realization of translation:
as mentioned above, in order to guarantee two-way simultaneous translation, we identify the rules of the grammar with the manipulative units of translation in a one-to-one fashion. This notion is based on the grammatical covering, type transparency, grammar modification and invariants of formal language [10].
(3) The notion of complexity measure: the complexity of the algorithm, that is, the direct association between the time cost and the sequential operations during translation, should be measured.
(4) The notion of translation results:
we compare our translation results to Thom's hypothesis: a principle of isomorphism [12].
The specific principles of the NARA system
The NARA system adopts several specific and theoretical principles and they are described in the following.
(1) Equivalence of grammars: two grammars are weakly equivalent if and only if they generate the same set of surface sentences. In addition, if the two grammars generate the same language by means of the same tree structure (here, by a one-to-one correspondence of rule steps), the two grammars are strongly equivalent [10]. Paraphrasing it, grammars G_1 and G_2 are weakly equivalent if the string language generated by G_1, L(G_1), is identical to that of G_2, L(G_2). If G_1 and G_2 are strongly equivalent, G_1 and G_2 assign the same structural description to each word in L(G_1) and L(G_2). We apply this notion to the correspondence between Korean and Japanese.
(2) Grammar covering and grammar modification:
intuitively, a grammar is said to cover another if the first grammar can be used to easily recover all the parse structures that the second grammar assigns to an input sentence. In other words, grammar covering means that the first grammar can be used instead of the second grammar to parse a sentence of the language generated by the second grammar. This grammatical covering relation is easy to understand from the mere fact that we use a first language to study a second language. More importantly, one of the two grammars can serve as the true competence grammar for a language because it generates the proper structural descriptions. The reason for using this principle is that the covering grammar may be more suitable for efficient processing in terms of time and space, and, if one grammar covers another, the semantic rules for translation between the two languages can be used to pair exactly the same input string with its meaning.
(3) Type transparency: in our view, type transparency is the relationship between a covering grammar and the operational units of translation. Following the usual linguistic claim that a more compact grammar is more easily processed, we impose the condition that the logical organization of the rules and the structure incorporated in a grammar be mirrored exactly in the mechanism of translation.
According to our theoretical and specific principles, we can represent the structural description of translation processing, and then apply a simple mapping to the translation mechanism. This mapping is from a parse tree to a parse tree.
The competence model of the NARA system
In this section, we focus our attention on the concrete linguistic knowledge used, that is, which kinds of linguistic description are employed. In order to investigate the correspondence between the two languages, we partition a grammar into components: segmented words, word order, morphology, syntax and semantics. The hierarchical separation of a grammar constitutes an important step in the modularization of a translation subsystem.
Morphology
Morphology, the study of the structure of words, occupies an important place within the competence model, sandwiched as it is between phonology and syntax. Morphemes may be partitioned into lexical and grammatical classes. In both languages, lexical morphemes are generally free, while many of the grammatical morphemes are bound.
In a given Korean-Japanese/Japanese-Korean dictionary, let D_k be the set of morphological words of Korean and D_j be the set of morphological words of Japanese. Consider the Cartesian product D_k × D_j of the two sets. A mapping between the sets may be defined as follows:
I(D_k) = D_j,
implying that the image of D_k is D_j; taking the inverse mapping,
I^{-1}(D_j) = D_k.
By generalizing the relation and the mapping between the two sets, we may consider the word set of the source language to be the domain, and the target-language word set to be the range. Assuming the same cardinality for both the domain and the range, D_k and D_j may be partitioned as shown below. Here we suppose
(k_1, k_2, k_3, ..., k_n) ∈ D_k, (j_1, j_2, j_3, ..., j_n) ∈ D_j,
with the possible correspondences: (a) one-to-one, (b) one-to-many, (c) many-to-many.
Obviously, a one-to-one correspondence is isomorphic. Thus our attention will be focused on one-to-many and many-to-many relations. The translation of these relations depends on various factors: allomorphs, synonyms and homonyms of both languages. As an elementary strategy for the translation of such correspondences, we adopted a normalization procedure which ensures the decomposition of one-to-many and many-to-many correspondences into one-to-one correspondences. For translations which depend on synonyms or homonyms, we specify the canonical form and the semantic feature, respectively. In reality, there are some linguistic representations (words) which exist in Korean but do not exist in Japanese (and the converse is also true); therefore, new words need to be created.
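To make the normalization strategy concrete, the following Python sketch decomposes one-to-many dictionary correspondences into one-to-one entries by attaching a semantic feature to each sense, as described above. All the data, feature labels and function names are invented for illustration; this is not the NARA system's actual dictionary code.

from typing import Dict, List, Tuple

# A raw one-to-many mapping: one Korean morphological word, several Japanese ones.
raw_mapping: Dict[str, List[str]] = {
    "mannada": ["au"],                 # one-to-one: isomorphic, kept as is
    "bae": ["fune", "hara", "nashi"],  # one-to-many: 'ship', 'belly', 'pear'
}

# Illustrative semantic features distinguishing the senses of ambiguous words.
sense_features: Dict[Tuple[str, str], str] = {
    ("bae", "fune"): "+vehicle",
    ("bae", "hara"): "+body_part",
    ("bae", "nashi"): "+fruit",
}

def normalize(mapping: Dict[str, List[str]]) -> Dict[Tuple[str, str], str]:
    """Decompose one-to-many entries into one-to-one entries keyed by
    (canonical form, semantic feature)."""
    normalized = {}
    for k_word, j_words in mapping.items():
        for j_word in j_words:
            feature = sense_features.get((k_word, j_word), "")
            normalized[(k_word, feature)] = j_word
    return normalized

if __name__ == "__main__":
    table = normalize(raw_mapping)
    # Each (word, feature) pair now maps to exactly one target word.
    print(table[("bae", "+vehicle")])  # -> fune

Pairing each ambiguous word with a feature is one simple way to realise the paper's "decomposition into one-to-one correspondence"; the actual feature inventory would come from the K-J lexicon.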
Word order in the segmented words
Korean and Japanese share some common properties, such as an agglutinative language structure and the same word order (SOV) [5]. In this subsection, we examine the word order within a segmented word. There are corresponding properties in the word order of the segmented words between the two languages, as follows:
[property 1] correspondence.
[property 2] inversion.
[property 3] abbreviation.
Among the properties, property 3 depends upon Korean pragmatic information.
Korean and Japanese have a remarkable characteristic; namely, the structure of segmented words. The segmented words are an important language structure as an utterance unit, and play an important role in the analysis of both languages.
The production form of the segmented words can be described in the form of a regular grammar:
S → uB (S, B ∈ N, the nonterminal symbols)
S → u (u ∈ T, the terminal symbols)
Both rules are right linear. Denoting the language defined by such a regular grammar by L = L(G) leads to the existence of a finite state automaton M such that L(G) = T(M) = {w | M accepts w}; and if L(G) = L(G'), there is a sequence equivalence S(G) = S(G'). In other words, for each symbol a in the vocabulary of some regular set R, let R_b be a particular regular set. Suppose that we replace each word a_1, a_2, a_3, ..., a_n in R by the set of words of the form w_1, w_2, w_3, ..., w_n, where w_i is an arbitrary word in R_b. Then the result is always a regular set. More formally, a substitution f is a mapping from vocabulary A to subsets (a language family) of vocabulary B. The mapping f is extended to strings as follows:
1) f(ξ) = ξ, 2) f(xa) = f(x)f(a).
The mapping f is extended to languages by defining
f(L) = ∪_{x∈L} f(x).
A type of substitution that is of special interest is a homomorphism. A homomorphism h is a substitution such that h(a) contains a single string for each a. We generally take h(a) to be the string itself, rather than the set containing that string. It is useful to define the inverse homomorphic image of a language L to be
h^{-1}(L) = {x | h(x) is in L}.
We also use, for a string w:
h^{-1}(w) = {x | h(x) is in w}.
Consequently, the translation between Korean and Japanese is closed under substitution among the constituents called segmented words.
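The following small Python sketch illustrates the definitions just given: a finite substitution f extended to strings by f(xa) = f(x)f(a), and a brute-force inverse homomorphic image h^{-1}(w). The alphabets and mappings are invented for illustration only.

from itertools import product
from typing import Dict, Set

def substitute(f: Dict[str, Set[str]], word: str) -> Set[str]:
    """Extend a finite substitution f to a string: f(xa) = f(x)f(a)."""
    images = {""}                      # f of the empty string is the empty string
    for symbol in word:
        images = {x + a for x, a in product(images, f[symbol])}
    return images

def inverse_hom(h: Dict[str, str], w: str, alphabet: str, max_len: int) -> Set[str]:
    """Brute-force h^{-1}(w) = {x | h(x) = w}, searched up to length max_len."""
    result = set()
    candidates = [""]
    for _ in range(max_len):
        candidates = [x + a for x in candidates for a in alphabet]
        result |= {x for x in candidates
                   if "".join(h[c] for c in x) == w}
    return result

if __name__ == "__main__":
    f = {"a": {"0", "01"}, "b": {"1"}}
    print(substitute(f, "ab"))             # {'01', '011'}
    h = {"a": "0", "b": "01"}
    print(inverse_hom(h, "001", "ab", 3))  # {'ab'}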
Syntax
It is seen intuitively from the correspondence of the segmented words and the word order that Korean and Japanese have similar language structures [6]. Let us compare the two parse trees of the actual example sentences (Fig. 1). It is obvious that the parse trees correspond to each other in a one-to-one fashion, but the lexical categories do not coincide with each other. This means that the two languages do not generate the same set of sentential forms: S(G) = {w ∈ (N ∪ T)+ | S ⇒* w}. Furthermore, there is no algorithm for deciding whether or not two given context-free grammars generate the same sentential forms [9]. This proposition reveals the reason why we adopt the covering grammar and the grammar modification principle.
Semantics
If a sentence is syntactically ambiguous, it has more than one canonical derivation; it is semantically ambiguous if, for a given canonical derivation, it has more than one translation. Derivations are not related directly to a language but to a grammar that generates it. In the translation between Korean and Japanese, there exist several kinds of inherently ambiguous sentences which are generated only by ambiguous grammars of the two languages.
In the NARA system, semantic knowledge is used to eliminate ambiguity in syntax-based translation, but its role is kept to the minimum essential, because semantic theories of natural language (for example, situation semantics) are underdeveloped and are not a necessary and sufficient condition for a Korean-Japanese translation system. However, for words that involve ambiguity in the translation process, we specify lexical semantic features and introduce the individual semantic features into the syntactic feature system. In consequence, the lexical semantic features of the constituents are kept in the phrase structure and are applied to semantics-based translation. That is, the constraints for semantics-sensitive translation are described in the partial phrase structure and play the role of adjusting the semantics-sensitive translation.
K-J Grammar
In this section, we design a K-J (or J-K) grammar which eliminates syntactic or semantic ambiguity of both languages. This grammar corresponds to the communicative competence model for the translation system between Korean and Japanese. The grammar is motivated by grammar modification and the covering grammar: the original grammar is often not suitable for a particular parsing technique but can be modified into an equivalent form which is.
ALGORITHM: removal or adjustment of irregular categories and insertion of semantic features.
Input: a 5-tuple phrase structure grammar G = (N, T_k, T_j, P, S) for the translation.
Output: an equivalent 5-tuple phrase structure grammar G' = (N', T_k[sem-k], T_j, P', S).
Method: empirical and heuristic.
Here N and N' are nonterminals, T_k and T_j are terminals, sem-k denotes semantic features, P and P' are production rules, and S is the start symbol. The J-K grammar is designed by a method analogous to that of the K-J grammar. In the unification-based grammar framework, the semantic features are accepted by a special phrase structure rule, a linking rule (unification), which causes the relevant information about the phrase to be passed down the tree as a feature on the syntactic nodes. Therefore, the translation procedure is constructed by a succinct algorithm founded on the K-J (J-K) grammar.
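Since the method is described only as empirical and heuristic, the following Python sketch merely illustrates the shape of the G → G' transformation: attaching semantic features to Korean terminals in the production rules. The grammar, the feature inventory and all names are invented; the actual NARA rules are not published here.

from dataclasses import dataclass
from typing import Dict, List, Set, Tuple

@dataclass
class Grammar:
    N: Set[str]                         # nonterminals
    Tk: Set[str]                        # Korean terminals
    Tj: Set[str]                        # Japanese terminals
    P: List[Tuple[str, List[str]]]      # production rules (lhs, rhs)
    S: str                              # start symbol

def insert_semantic_features(g: Grammar, sem: Dict[str, str]) -> Grammar:
    """Return G' in which each Korean terminal k becomes k[sem-k]."""
    decorate = lambda s: f"{s}[{sem[s]}]" if s in sem else s
    new_P = [(lhs, [decorate(s) for s in rhs]) for lhs, rhs in g.P]
    new_Tk = {decorate(t) for t in g.Tk}
    return Grammar(set(g.N), new_Tk, set(g.Tj), new_P, g.S)

if __name__ == "__main__":
    g = Grammar(N={"S", "NP"}, Tk={"bae"}, Tj={"fune"},
                P=[("S", ["NP"]), ("NP", ["bae"])], S="S")
    g2 = insert_semantic_features(g, {"bae": "+vehicle"})
    print(g2.P)   # [('S', ['NP']), ('NP', ['bae[+vehicle]'])]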
The performance model of the NARA system
System architecture of the NARA system
Before describing the performance model of the NARA system, we briefly describe the NARA system below:
Analysis -Morphological analysis
As preprocessing for morphological-level translation, a segmented-word analysis is carried out on each word using the lexicon information.
-Syntactic analysis
The structure of a sentence is analyzed at the phrase-structure level. A tree structure which serves as an intermediate structure for the translation is constructed.
K-J(J-K) system
We formulate the internal interface for the translation. This interface corresponds to the transducer of the translation. We can define the K-J (J-K) system as a 3-tuple grammar G = (w_j, w_k, k (or j)), where w_k and w_j are Korean words and Japanese words, respectively, and k (j): w_j → w_k (w_k → w_j) is the homomorphism. The K-J (J-K) system G defines the following sequence, preserving the word order:
w_k^1 = k(w_j^1), w_k^1 w_k^2 = k(w_j^1) k(w_j^2), ...
It also defines the following language
L(G) = {k^i(w_j) | i > 0}.
As mentioned above, the K-J system constitutes a simple device for translation. The language defined by the K-J (J-K) system corresponds to the target language. Inversely, the mapping j of w_k into w_j is such that the inverse homomorphism
j(w_k) = {w_j | k(w_j) = w_k}, j = k^{-1}
exists. Thus, we define the two-way simultaneous translation system NARA by:
j(L_k) = k^{-1}(L_k) = {w_j | k(w_j) ∈ L_k}.
We can define the NARA system using an extended notion; the inverse homomorphism can be replaced by the direct operation of a finite substitution as follows. Consider a grammar (e.g. Korean) G_k = (N_k, T_k, P_k, S_k) and let j be a finite substitution, defined on the vocabulary (N_k ∪ T_k)*, such that j(a) is a finite (possibly empty) set of words for each word a. We denote
j(N_k) = N_j, j(T_k) = T_j, P_j ⊃ j(P_k), S_j ⊃ j(S_k).
Then the grammar (e.g. Japanese) G_j = (N_j, T_j, P_j, S_j) is the translation of G_k. If I(G_k), I(G_j) are the sets of all translations of G_k and G_j, respectively, then I(G_k) = I(G_j), and I is an invariant for G_k and G_j.
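A minimal Python sketch of the K-J system as an order-preserving word-by-word mapping follows; the homomorphism k and its inverse j = k^{-1} are realised as a tiny two-entry lexicon invented for illustration.

from typing import Dict, List

k: Dict[str, str] = {"tomodachi-ni": "chingu-ege", "atta": "mannatta"}
j: Dict[str, str] = {v: key for key, v in k.items()}   # inverse homomorphism

def translate(seq: List[str], mapping: Dict[str, str]) -> List[str]:
    """Apply the homomorphism to each segmented word, preserving order."""
    return [mapping[w] for w in seq]

if __name__ == "__main__":
    ja = ["tomodachi-ni", "atta"]
    ko = translate(ja, k)
    print(ko)                       # ['chingu-ege', 'mannatta']
    print(translate(ko, j) == ja)   # True: j(k(w)) = w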
Synthesis
The morphology dependent on the target language is generated; with the aid of the phonological form file, the correct phonological form of the target language is produced and subsequently output.
Dictionary
The dictionary consists of the K-J (J-K) grammar and the lexicon. A dictionary compiler is used to transform a visual dictionary into a system dictionary implemented in the form of a B-tree. The modularity of the grammar and the ease with which the dictionary can be updated serve as major factors in the system.
Complexity of system NARA
In this section, we present how we can predict the time, memory space or sequential operations needed to perform the computing model of the NARA system, and how the translation process can be specified clearly and unambiguously. The complexity of an algorithm is usually measured by the growth rate of its time or space requirements, as a function of the size of the input (or the length of the input string) to which the algorithm is applied. We shall now define the time characteristics of the translation process. There are some kinds of syntactic relations in which structural distance is naturally involved in simultaneous translation. Consider the following example: [[tomotachi]ni] [[[kino] [[hisashiburi]ni]] atta]]. This sentence can be translated into Korean as follows: 1) [chingu ege [[oje orenman e] mannatta]]; 2) [chingu lul [[oje orenman e] mannatta]]. Such ambiguity arises in translation due to a one-to-many relation at the morphological level; sentence 2) is the well-formed translation. The reason is a co-occurrence relation; namely, the Japanese verb a-u (meet) co-occurs with the postposition ni (dative case), whereas the Korean verb manna-da co-occurs with the postposition lul (accusative case). If the postposition precedes the verb, then simultaneous translation is impossible. In this case, a delay time p > 1 for complete translation is required before the two words bind, and one more operation is required to rescan the translated sentence. We refer to this case as quasi-real-time translation. We formalize the time complexity of translation. An utterance string of the source language L is the sequence string S_t(L). S_t(L) = (k_1, k_2, ..., k_t) is a partial utterance string up to time t, and K-J(S_t(L)) is a translation sequence string up to time t. Also T_t(L) = (j_1, j_2, ..., j_t) is the target-language string generated by the K-J system. When the translation operates in real time, the delay time is 0; therefore K-J(S_t(L)) = T_t(L), where S_t = T_t. When the translation operates in quasi-real time, the delay time is p > 1; therefore K-J(S_t(L)) = T_t(L), where T_t - S_t > 1. However, the nature of on-line translation is unchangeable.
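The delay-time idea can be sketched in Python as follows. Translation is emitted word by word with delay 0 until a postposition whose Korean case depends on the upcoming verb appears; output is then buffered until the verb arrives (delay p ≥ 1). The lexicon and the co-occurrence table are invented, and only the single ni → lul case from the example is handled.

from typing import Dict, List, Tuple

word_map: Dict[str, str] = {"tomodachi": "chingu", "atta": "mannatta"}
# The case of the postposition is resolved by the governing verb (co-occurrence).
case_by_verb: Dict[Tuple[str, str], str] = {("ni", "atta"): "lul"}

def quasi_real_time(source: List[str]) -> List[str]:
    output, pending = [], []            # pending holds delayed postpositions
    for w in source:
        if w == "ni":                   # case is not yet decidable: delay
            pending.append(w)
        else:
            verb = w if w in ("atta",) else None
            while pending and verb:     # rescan once the verb is known
                output.append(case_by_verb[(pending.pop(0), verb)])
            output.append(word_map.get(w, w))
    return output                       # sketch: assumes the verb eventually arrives

if __name__ == "__main__":
    print(quasi_real_time(["tomodachi", "ni", "atta"]))
    # ['chingu', 'lul', 'mannatta']  (delay p = 1 while 'ni' is unresolved)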
We compare our translation results to René Thom's hypothesis: the principle of isomorphism concerning linguistic universality. Let T_1 be a text of language L_1, and T_2 be a text translated from T_1 into language L_2. Suppose {Q_i^1} and {Q_j^2} are the phrase elements of decompositions of T_1 and T_2, respectively; then the following principle of isomorphism holds:
[Principle of Isomorphism] A one-to-one correspondence exists between {Q_i^1} and {Q_j^2} which conserves each signification. Moreover, this correspondence nearly preserves the order of the phrase elements; in other words, if the i-th element Q_i^1 of T_1 corresponds to the j-th element Q_j^2 of T_2, then |j - i| < 4.
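As a simple illustration, the principle can be checked mechanically: given a one-to-one alignment between phrase elements of T_1 and T_2, verify |j - i| < 4 for every aligned pair. The Python sketch and the alignment data below are invented for illustration.

from typing import List, Tuple

def is_near_order_preserving(alignment: List[Tuple[int, int]],
                             bound: int = 4) -> bool:
    """alignment pairs (i, j): element Q1_i of T1 corresponds to Q2_j of T2."""
    return all(abs(j - i) < bound for i, j in alignment)

if __name__ == "__main__":
    # e.g. a mild local reordering between the two texts
    print(is_near_order_preserving([(0, 1), (1, 0), (2, 2), (3, 3)]))  # True
    print(is_near_order_preserving([(0, 5)]))                          # False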
Concluding remarks
Our approach to constructing the NARA system included a logical study and an experimental study; the former was given by the mathematical formalization, the latter by the correspondence of the two languages. From the viewpoint of computational linguistics, we separated the mechanism of the two-way simultaneous translation system into the levels of abstract theory, algorithm and implementation, to carve out the results at each level in a more independent fashion. In order to do so, we specified four important levels of description: the lowest level is the morphology, the second level is the segmented words, the third level comprises the syntax and the semantics, and the top level controls the computing model of each level. Hence, we could determine the range of correspondence between the internal representations of both grammars, and the basic architecture of the machinery actually instantiates the algorithm. Consequently, our model gains extra power from the proposed theory, with multiple levels of representation and systematic mappings between the corresponding levels of the two languages, because translation efficiency requires both a functional and a mathematical argument. From the viewpoint of software engineering, by going through each level of abstraction we expect to produce an elegant program which satisfies the requirements of a machine translation system: simplicity, reliability, adaptability and modularity. Nevertheless, complete pragmatic translation remains quite obscure.
Fig. 1: syntactic trees of "(I) thought (somebody) went (somewhere)"
Acknowledgement
The author is deeply grateful to Dr. Gil Rok Oh and Dr. Min Ho Kang for their encouragement.
[1] Arden, B., ed., What Can Be Automated?, M.I.T. Press, Reading, 1980.
[2] Berwick, R. and Weinberg, A., The Grammatical Basis of Linguistic Performance: Language Use and Acquisition, M.I.T. Press, Reading, 1984.
[3] Berwick, R., The Acquisition of Syntactic Knowledge, M.I.T. Press, Reading, 1985.
[4] Chomsky, N., Aspects of the Theory of Syntax, M.I.T. Press, Reading, 1963.
[5] Chung, H., Current Korean: Elementary Sentence Patterns and Structures, Komasholin, Reading, 1982 (in Japanese).
[6] Chung, H., Korean Language Information Processing, Ph.D. dissertation, Tokyo Univ., 1986.
[7] Chung, H. and Kunii, T., A Two-way Simultaneous Interpretation System between Korean and Japanese: NARA, Proceedings of COLING'86, 1986.
[8] Culik, K. and Salomaa, A., On the decidability of homomorphism equivalence for languages, Journal of Computer and System Science, 18, 1978.
[9] Harrison, M., Introduction to Formal Language Theory, Addison-Wesley, Reading, 1978.
[10] Nijholt, A., Context-free Grammars: Covers, Normal Forms and Parsing, Springer, 1980.
[11] Salomaa, A., Jewels of Formal Language Theory, Computer Science Press, Reading, 1981.
[12] Thom, R., Topologie et Linguistique, in Essays on Topology and Related Topics, Springer, 1970. |
1,409,410 | UCD-PN: Selecting General Paraphrases Using Conditional Probability | We describe a system which ranks human-provided paraphrases of noun compounds, where the frequency with which a given paraphrase was provided by human volunteers is the gold standard for ranking. Our system assigns a score to a paraphrase of a given compound according to the number of times it has co-occurred with other paraphrases in the rest of the dataset. We use these co-occurrence statistics to compute conditional probabilities to estimate a sub-typing or Is-A relation between paraphrases. This method clusters together paraphrases which have similar meanings and also favours frequent, general paraphrases rather than infrequent paraphrases with more specific meanings. | [
682154,
219303534
] | UCD-PN: Selecting General Paraphrases Using Conditional Probability
Copyright Association for Computational Linguistics, July 2010
Paul Nulty paul.nulty@ucd.ie, University College Dublin, Dublin, Ireland
Fintan Costello fintan.costello@ucd.ie, University College Dublin, Dublin, Ireland
UCD-PN: Selecting General Paraphrases Using Conditional Probability
Proceedings of the 5th International Workshop on Semantic Evaluation, ACL 2010, Uppsala, Sweden, July 2010. Association for Computational Linguistics
We describe a system which ranks human-provided paraphrases of noun compounds, where the frequency with which a given paraphrase was provided by human volunteers is the gold standard for ranking. Our system assigns a score to a paraphrase of a given compound according to the number of times it has co-occurred with other paraphrases in the rest of the dataset. We use these co-occurrence statistics to compute conditional probabilities to estimate a sub-typing or Is-A relation between paraphrases. This method clusters together paraphrases which have similar meanings and also favours frequent, general paraphrases rather than infrequent paraphrases with more specific meanings.
Introduction
SemEval 2010 Task 9, "Noun Compound Interpretation Using Paraphrasing Verbs", requires systems to rank paraphrases of noun compounds according to which paraphrases were most frequently produced for each compound by human annotators (Butnariu et al., 2010). This paper describes a system which ranks a paraphrase for a given compound by computing the probability of the paraphrase occurring given that we have previously observed that paraphrase co-occurring with other paraphrases in the candidate paraphrase list. These co-occurrence statistics can be built using either the compounds from the test set or the training set, with no significant difference in results.
The model is informed by two observations: people tend to use general, semantically light paraphrases more often than detailed, semantically heavy ones, and most paraphrases provided for a specific compound indicate the same interpretation of that compound, varying mainly according to level of semantic detail.
Given these two properties of the data, the objective of our system was to test the theory that conditional probabilities can be used to estimate a sub-typing or Is-A relation between paraphrases. No information about the compounds was used, nor were the frequencies provided in the training set used.
Motivation
Most research on the disambiguation of noun compounds involves automatically categorizing the compound into one of a pre-defined list of semantic relations. Paraphrasing compounds is an alternative approach to the disambiguation task which has been explored by (Lauer, 1995) and (Nakov, 2008). Paraphrases of semantic relations may be verbs, prepositions, or "prepositional verbs" like found in and caused by. (Lauer, 1995) categorized compounds using only prepositions. (Nakov, 2008) and the current task use only verbs and prepositional verbs, however, many of the paraphrases in the task data are effectively just prepositions with a copula, e.g. be in, be for, be of.
The paraphrasing approach may be easier to integrate into applications such as translation, query-expansion and question-answering -its output is a set of natural language phrases rather than an abstract relation category. Also, most sets of pre-defined semantic relations have only one or maybe two levels of granularity. This can often lead to semantically converse relations falling under the same abstract category, for example a headache tablet is a tablet for preventing headaches, while headache weather is weather that induces headaches -but both compounds would be assigned the same relation (perhaps instrumental or causal) in many taxonomies of semantic relations. Paraphrases of compounds using verbs or verb-preposition combinations can provide as much or as little detail as is required to adequately disambiguate the compound.
General paraphrases are frequent
The object of SemEval 2010 Task 9 is to rank paraphrases for noun compounds given by 50-100 human annotators. When deciding on a model we took into account several observations about the data. Firstly, the model does not need to produce plausible paraphrases for noun compounds, it simply needs to rank paraphrases that have been provided. Given that all of the paraphrases in the training and test sets have been produced by people, we presume that all of them will have at least some plausible interpretation, and most paraphrases for a given compound will indicate generally the same interpretation of that compound. This will not always be the case; some compounds are genuinely ambiguous rather than vague. For example a stone bowl could be a bowl for holding stones or a bowl made of stone. However, the mere fact that a compound has occurred in text is evidence that the speaker who produced the text believed that the compound was unambiguous, at least in the given context.
Given that most of the compounds in the dataset have one clear plausible meaning to readers, when asked to paraphrase a compound people tend to observe the Gricean maxim of brevity (Grice, 1975) by using simple, frequent terms rather than detailed, semantically weighty paraphrases. For the compound alligator leather in the training data, the two most popular paraphrases were be made from and come from. Also provided as paraphrases for this compound were hide of and be skinned from. These are more detailed, specific, and more useful than the most popular paraphrases, but they were only produced once each, while be made from and come from were provided by 28 and 20 annotators respectively. This trend is noticeable in most of the compounds in the training data - the most specific and detailed paraphrases are not the most frequently produced.
According to the lesser-known of Zipf's lawsthe law of meaning (Zipf, 1945) -words that are more frequent overall in a language tend to have more sub-senses. Frequent terms have a shorter lexical access time (Broadbent, 1967), so to minimize the effort required to communicate meaning of a compound, speakers should tend to use the most common words -which tend to be semantically general and have many possible subsenses. This seems to hold for paraphrasing verbs and prepositions; terms that have a high overall frequency in English such as be in, have and be of are vague -there are many more specific paraphrases which could be considered sub-senses of these common terms.
Using conditional probability to detect subtypes
Our model uses conditional probabilities to detect this sub-typing structure based on the theory that observing a specific, detailed paraphrase is good evidence that a more general parent sense of that paraphrase would be acceptable in the same context. The reverse is not true -observing a frequently occurring, semantically light paraphrase is not strong evidence that any sub-sense of that paraphrase would be acceptable in the same context. For example, consider the spatial and temporal sub-senses of the paraphrase be in. A possible spatial sub-sense of this paraphrase is be located in, while a possible temporal sub-sense would be occur during. The fact that occur during is provided as a paraphrase for a compound almost always means that be in is also a plausible paraphrase. However, observing be in as a paraphrase does not provide such strong evidence for occur during also being plausible, as we do not know which sub-sense of in is intended. If this is correct, then we would expect that the conditional probability of a paraphrase B occurring given that we have observed another paraphrase A in the same context is a measure of the extent to which B is a more general type (parent sense) of A.
System Description
The first step in our model is to generate a conditional probability table by going over all the compounds in the data and calculating the probability of each paraphrase occurring given that we observed another given paraphrase co-occurring for the same compound. We compute the conditional probability of every paraphrase with all other paraphrases individually. We could use either the training or the test set to collect these co-occurrence statistics, as the frequencies with which the paraphrases are ranked are not used - we simply note how many times each paraphrase co-occurred as a possible paraphrase for the same compound with each other paraphrase. For the submitted system we used the test data, but subsequently we confirmed that using only the training data for this step is not detrimental to the system's performance.
For each paraphrase in the data, the conditional probability of that paraphrase is computed with respect to all other paraphrases in the data. For any two paraphrases B and A:
P(B|A) = P(A ∧ B) / P(A)
As described in the previous section, we anticipate that more general, less specific paraphrases will be produced more often than their more detailed sub-senses. Therefore, we score each paraphrase by summing its conditional probability with each other paraphrase provided for the same compound.
For a list of paraphrases A provided for a given compound, we score a paraphrase b in that list by summing its conditional probability individually with every other paraphrase in the list.
score(b) = Σ_{a∈A} P(b|a)
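The following Python sketch is a re-implementation of this scoring from the description above (not the authors' code): co-occurrence counts are gathered over per-compound paraphrase lists, and each candidate b is scored by summing P(b|a) over the other candidates a in the same list.

from collections import Counter
from itertools import combinations
from typing import Dict, List

def build_counts(lists: List[List[str]]):
    """Count single occurrences and pairwise co-occurrences per compound."""
    single, pair = Counter(), Counter()
    for paraphrases in lists:
        unique = set(paraphrases)
        single.update(unique)
        pair.update(frozenset(p) for p in combinations(sorted(unique), 2))
    return single, pair

def score(candidates: List[str], single, pair) -> Dict[str, float]:
    """score(b) = sum over other candidates a of P(b|a) = P(a and b) / P(a)."""
    scores = {}
    for b in candidates:
        total = 0.0
        for a in candidates:
            if a == b:
                continue
            total += pair[frozenset((a, b))] / single[a] if single[a] else 0.0
        scores[b] = total
    return scores

if __name__ == "__main__":
    corpus = [["be made from", "come from", "be skinned from"],
              ["be made from", "come from"],
              ["be made from", "be in"]]
    single, pair = build_counts(corpus)
    print(score(["be made from", "come from", "be skinned from"],
                single, pair))
    # The general paraphrases outscore the rarer, more specific one.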
This gives the more general, broad-coverage paraphrases a higher score, and also has a clustering effect whereby paraphrases that have not co-occurred very often with the other paraphrases in the list for other compounds are given a lower score - they are unusual in the context of this paraphrase list.
Results and Analysis
Task results
Table 1 shows the results of the top 3 systems in the task. Our system achieved the second highest correlation according to the official evaluation measure, Spearman's rank correlation coefficient. Results were also provided using Pearson's correlation coefficient and the cosine of the vector of scores for the gold standard and submitted predictions. Our system performed best using the cosine measure, which measures how closely the predicted scores match the gold standard frequencies, rather than the rank correlation. This could be helpful as the scores provide a scale of acceptability.
As mentioned in the system description, we collected the co-occurrence statistics for our submitted prediction from the test set of paraphrases alone. Since our model does not use the frequencies provided in the training set, we chose to use the test set as it was larger and had more annotators. This could be perceived as an unfair use of the test data, as we are using all of the test compounds and their paraphrases to calculate the position of a given paraphrase relative to other paraphrases. This is a kind of clustering which would not be possible if only a few test cases were provided. To check that our system did not need to collect cooccurrence probabilities on exactly the same data as it made predictions on, we submitted a second set of predictions for the test based on the probabilities from the training compounds alone. 1 These predictions actually achieved a slightly better score for the official evaluation measure, with a Spearman rho of 0.444, and a cosine of 0.631. This suggests that the model does not need to collect co-occurrence statistics from the same compounds as it makes predictions on, as long as sufficient data is available.
Error Analysis
The most significant drawback of this system is that it cannot generate paraphrases for noun compounds -it is designed to rank paraphrases that have already been provided.
Using the conditional probability to rank paraphrases has two effects. Firstly there is a clustering effect which favours paraphrases that are more similar to the other paraphrases in a list for a given compound. Secondly, paraphrases which are more frequent overall receive a higher score, as frequent verbs and prepositions may co-occur with a wide variety of more specific terms. These effects lead to two possible drawbacks. Firstly, the system would not perform well if detailed, specific paraphrases of compounds were needed. Although less frequent, more specific paraphrases may be more useful for some applications, these are not the kind of paraphrases that people seem to produce spontaneously. Also, because of the clustering effect, this system would not work well for compounds that are genuinely ambiguous e.g. stone bowl (bowl made of stone vs bowl contains stones). Most examples are not this ambiguous, and therefore almost all of the provided paraphrases for a given compound are plausible, and indicate the same relation. They vary mainly in how specific/detailed their explanation of the relation is.
The three compounds which our system produced the worst rank correlation for were diesel engine, midnight train, and bathing suit. Without access to the gold-standard scores for these compounds it is difficult to explain the poor performance, but examining the list of possible paraphrases for the first two of these suggests that the annotators identified two distinct senses for each: diesel engine is paraphrased by verbs of containment (e.g. be in) and verbs of function (e.g. runs on), while midnight train is paraphrased by verbs of location (e.g. be found in, be located in) and verbs of movement (e.g. run in, arrive at). Our model works by separating paraphrases according to granularity, and cannot disambiguate these distinct senses. The list of possible paraphrases for bathing suit suggests that our model is not robust if implausible paraphrases are in the candidate list -the model ranked be in, be found in and emerge from among the top 8 paraphrases for this compound, even though they are barely comprehensible as plausible paraphrases. The difficulty here is that even if only one annotator suggests a paraphrase, it is deemed to have co-occurred with other paraphrases in that list, since we do not use the frequencies from the training set.
The compounds for which the highest correlations were achieved were wilderness areas, consonant systems and fiber optics. The candidate paraphrases for the first two of these seem to be fairly homogeneous in semantic intent. Fiber optics is probably a lexicalised compound which hardly needs paraphrasing. This would lead people to use short and semantically general paraphrases.
Conclusion
We have described a system which uses a simple statistical method, conditional probability, to estimate a sub-typing relationship between possible paraphrases of noun compounds. From a list of candidate paraphrases for each noun compound, those which were judged by this method to be good "parent senses" of other paraphrases in the list were scored highly in the rankings.
The system does require a large dataset of compounds with associated plausible paraphrases, but it does not require a training set of human provided rankings and does not use any information about the noun compound itself, aside from the list of plausible paraphrases that were provided by the human annotators.
Given the simplicity of our model and its performance compared to other systems which used more intensive approaches, we believe that our initial observations on the data are valid: people tend to produce general, semantically light paraphrases more often than specific or detailed paraphrases, and most of the paraphrases provided for a given compound indicate a similar interpretation, varying instead mainly in level of semantic weight or detail.
We have also shown that conditional probability is an effective way to compute the sub-typing relation between paraphrases.
Thanks to Diarmuid Ó Séaghdha for pointing this out and scoring the second set of predictions.
Donald E. Broadbent. 1967. Word-frequency effect and response bias. Psychological Review, 74.
Cristina Butnariu, Su Nam Kim, Preslav Nakov, Diarmuid Ó Séaghdha, Stan Szpakowicz and Tony Veale. 2010. SemEval-2 Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions. Proceedings of the 5th SIGLEX Workshop on Semantic Evaluation, Uppsala, Sweden.
Paul Grice. 1975. Studies in the Way of Words. Harvard University Press, Cambridge, Mass.
Mark Lauer. 1995. Designing statistical language learners: experiments on noun compounds. PhD Thesis, Macquarie University, Australia.
Preslav Nakov and Marti Hearst. 2008. Solving Relational Similarity Problems using the Web as a Corpus. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), Columbus, OH.
George Kingsley Zipf. 1945. The Meaning-Frequency Relationship of Words. Journal of General Psychology, 33. |
13,974,751 | Multilingual Resources, Technologies and Evaluation for Central and Eastern European Languages | This paper discusses the building of the first Bulgarian-Polish-Lithuanian (for short, BG-PL-LT) experimental corpus. The BG-PL-LT corpus (currently under development only for research) contains more than 3 million words and comprises two corpora: parallel and comparable. The BG-PL-LT parallel corpus contains more than 1 million words. A small part of the parallel corpus comprises original texts in one of the three languages with translations in two others, and texts of official documents of the European Union available through the Internet. The texts (fiction) in other languages translated into Bulgarian, Polish, and Lithuanian form the main part of the parallel corpus. The comparable BG-PL-LT corpus includes: (1) texts in Bulgarian, Polish and Lithuanian with the text sizes being comparable across the three languages, mainly fiction, and (2) excerpts from E-media newspapers, distributed via Internet and with the same thematic content. Some of the texts have been annotated at paragraph level. This allows texts in all three languages and in pairs BG-PL, PL-LT, BG-LT, and vice versa to be aligned at paragraph level in order to produces aligned three-and bilingual corpora. The authors focused their attention on the morphosyntactic annotation of the parallel trilingual corpus, according to the Corpus Encoding Standard (CES). The tagsets for corpora annotation are briefly discussed from the point of view of possible unification in future. Some examples are presented. | [] | Multilingual Resources, Technologies and Evaluation for Central and Eastern European Languages
2009
Ludmila Dimitrova ludmila@cc.bas.bg, IMI-BAS, Acad. G. Bonchev St., bl., 1113 Sofia, Bulgaria
Violetta Koseska, ISS-PAS, ul. Bartoszewicza 1B m. 17, 00-337 Warsaw
Danuta Roszko danuta.roszko@ispan.waw.pl, ISS-PAS, ul. Bartoszewicza 1B m. 17, 00-337 Warsaw
Roman Roszko roman.roszko@ispan.waw.pl, ISS-PAS, ul. Bartoszewicza 1B m. 17, 00-337 Warsaw
Multilingual Resources, Technologies and Evaluation for Central and Eastern European Languages
Bulgaria, 2009
Keywords: bilingual and multilingual corpora; parallel and comparable corpora; corpus annotation; lexical database; bilingual dictionaries
This paper discusses the building of the first Bulgarian-Polish-Lithuanian (for short, BG-PL-LT) experimental corpus. The BG-PL-LT corpus (currently under development only for research) contains more than 3 million words and comprises two corpora: parallel and comparable. The BG-PL-LT parallel corpus contains more than 1 million words. A small part of the parallel corpus comprises original texts in one of the three languages with translations in the two others, and texts of official documents of the European Union available through the Internet. The texts (fiction) in other languages translated into Bulgarian, Polish, and Lithuanian form the main part of the parallel corpus. The comparable BG-PL-LT corpus includes: (1) texts in Bulgarian, Polish and Lithuanian with the text sizes being comparable across the three languages, mainly fiction, and (2) excerpts from E-media newspapers, distributed via the Internet and with the same thematic content. Some of the texts have been annotated at paragraph level. This allows texts in all three languages and in the pairs BG-PL, PL-LT, BG-LT, and vice versa, to be aligned at paragraph level in order to produce aligned trilingual and bilingual corpora. The authors focused their attention on the morphosyntactic annotation of the parallel trilingual corpus, according to the Corpus Encoding Standard (CES). The tagsets for corpora annotation are briefly discussed from the point of view of possible unification in the future. Some examples are presented.
Introduction
Due to the recent development of information and communication technologies and the increased mobility of people around the globe, the number of electronic dictionaries has increased extraordinarily. This concerns, in particular, bilingual dictionaries in which one of the languages is English. An Internet search shows that no electronic dictionaries exist at all for pairs of languages such as Bulgarian-Polish or Bulgarian-Lithuanian. Traditional printed paper dictionaries are either an antiquarian rarity (the most recent Bulgarian-Polish and Polish-Bulgarian dictionaries were published more than 20 years ago) or have never been published at all (Bulgarian-Lithuanian). It cannot be expected, however, that all people know English well enough to communicate with each other, especially if their native languages (Bulgarian and Polish) belong to the same language family. For the creation of bilingual electronic or online dictionaries for Bulgarian, Polish and Lithuanian, an electronic corpus is necessary, which will provide the material for the lexical database supporting the dictionary and its subsequent expansion and update. In recent decades many multilingual corpora have been created in the field of corpus linguistics, such as the MULTEXT corpus [6], one of the largest EU projects in the domain of language technologies; the MULTEXT-East corpus (MTE for short; annotated, parallel and comparable), an extension of the MULTEXT project to Central and Eastern European (CEE) languages [2]; the Hong Kong bilingual parallel English-Chinese corpus of legal and documentary texts [5]; etc.
From Bilingual to Trilingual corpus
The MTE project has developed a multilingual corpus in which three of the languages (Bulgarian, Czech and Slovene) belong to the Slavic group. The MTE model is being used in the design of the first Bulgarian-Polish corpus, currently under development in the framework of the joint research project "Semantics and Contrastive Linguistics with a Focus on a Bilingual Electronic Dictionary" between the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, and the Institute of Slavic Studies, Polish Academy of Sciences, coordinated by L. Dimitrova and V. Koseska. This bilingual corpus supports the lexical database (LDB) of the first experimental online Bulgarian-Polish dictionary [3].
Bulgarian-Polish corpus
The Bulgarian-Polish corpus consists of two parts: a parallel and a comparable corpus [4]. All texts in the corpus are texts published in and distributed over the Internet. Some texts in the ongoing version of the corpus are annotated at paragraph level. The Bulgarian-Polish parallel corpus includes two parallel sub-corpora: 1) a pure Bulgarian-Polish corpus, consisting of original texts in Polish (literary works by Polish writers) and their translations into Bulgarian, and original texts in Bulgarian (short stories by Bulgarian writers) and their translations into Polish;
2) a translated Bulgarian-Polish corpus, consisting of Bulgarian and Polish texts of brochures of the EC and of documents of the EU and the EU Parliament published on the Internet, as well as Bulgarian and Polish translations of literary works in a third language (mainly English).
The Bulgarian-Polish comparable corpus includes texts in Bulgarian and Polish: excerpts from newspapers and textual documents available on the Internet, and excerpts from several original works of fiction, novels or short stories, with the text sizes being comparable across the two languages. Some of the Bulgarian texts are annotated at the "paragraph" and "sentence" levels, according to CES [7].
Bulgarian-Polish-Lithuanian corpus
The first Bulgarian-Polish-Lithuanian (for short, BG-PL-LT) corpus (currently under development only for research) contains more than 3 million words and comprises two corpora: parallel and comparable. The BG-PL-LT parallel corpus contains more than 1 million words. A small part of the parallel corpus comprises original texts in one of the three languages with translations in two others, and texts of official documents of the European Union available through the Internet. The texts (fiction) in other languages translated into Bulgarian, Polish, and Lithuanian form the main part of the parallel corpus.
It turned out that it is extremely difficult to find electronic texts of translations from Bulgarian to Lithuanian or vice versa: the two languages are spoken by small nations in comparison to other languages of the EU and are distributed in remote areas of Europe. It can be assumed (provisionally, of course) that the Polish language 'builds a bridge' between them: for the language pairs Bulgarian-Polish and Polish-Lithuanian one can find freely available translations on the Internet.
The comparable BG-PL-LT corpus includes: (1) texts in Bulgarian, Polish and Lithuanian with the text sizes being comparable across the three languages, mainly fiction, and (2) excerpts from E-media newspapers, distributed on the Internet and with the same thematic content.
Some of the texts have been annotated at paragraph level. This allows texts in all three languages and in the pairs BG-PL, PL-LT, BG-LT, and vice versa, to be aligned at paragraph level in order to produce aligned trilingual and bilingual corpora. "Alignment" means the process of relating pairs of words, phrases, sentences or paragraphs in texts in different languages which are translation equivalents. One may say that "alignment" is a type of annotation performed over parallel corpora. Excerpts of texts from the trilingual parallel corpus, marked at paragraph level, follow:
Bulgarian
The corpus will provide a sample of the vocabulary which is to be included in initial experimental versions of the BG-LT digital dictionary.
We attempt to perform a comparison of the morphosyntactic characteristics of the words of parallel texts across the three languages from the point of view of a possible future unification.
Corpus annotation
Corpus annotation is the process of adding linguistic information in an electronic form to a text corpus [7], [8]. We would like to mention the following two most common types of corpus annotation: morphosyntactic annotation (also called grammatical tagging or part-of-speech (POS) tagging) and lemma annotation (where each word in the text is associated with the corresponding lemma). Lemma annotation is closely related to morphosyntactic annotation. Morphosyntactic annotation (POS tagging, where each word in the text is associated with its grammatical classification) is the task of labeling each word in a sequence of words with its appropriate part of speech. Words are often ambiguous with respect to their POS; for example, in Bulgarian the neuter singular forms of most adjectives serve double duty as adverbs, e.g. BG внимателно (EN: 'attentive/careful' (neuter) or 'attentively/carefully'):
(1) внимателно → POS specifications: adjective, Gender: neuter, Number: singular, Definiteness: no. The MTE MorphoSyntactic Descriptor (MSD) for this adjective is A--ns-n.
(2) внимателно → POS: adverb, Type: adjectival. The MTE MSD for this adverb is Ra. The set of POS tags is called a tagset. The size and choice of the tagsets vary across languages. The classical POS tagging system is based on a set of parts of speech including noun, adjective, numeral, pronoun, verb, participle, adverb, preposition, conjunction, interjection, particle, and often (depending on the language) article, etc. Of course, morphologically rich languages need more detailed tagsets that reflect the various inflectional categories.
The applications of the morphosyntactic annotation include lexicography, parsing, language models in speech recognition, disambiguation clues for ambiguous words (machine translation), information retrieval, spelling correction, etc.
Problems related to POS classification
The POS classification varies across different languages. Often there is more than one possible POS classification for a given language.
Here we would like to show that one cannot simply make direct, formal use of the morphosyntactic annotation of a multilingual corpus. An in-depth contrastive study of specific phenomena in the respective languages is necessary. Next we will briefly review the POS classification of the participle (one of the important verbal forms) in the three languages, in comparison to another POS, the adjective.
Functions of the participle
The classification of the participle, not only as a verb form, is an important problem: the role of the participle varies significantly across languages, because its properties and functions differ. In contrast to English, for instance, where participles are invariant, in the Slavic languages the forms of the participles are inflected and contain information about the aspect and tense of the verbal form. As is well known, information about aspect is important for the Slavic languages, but does not exist in English. Bulgarian, Polish and Lithuanian distinguish between the following functions of the participle form: the predicative function, the attributive function, and the adverbial (or semipredicative) function, which are illustrated by the following examples:
Participle and verb
It is important to emphasize that participles preserve some properties of the main form of the verb, such as voice, tense and aspect. In Bulgarian, Polish and Lithuanian there are active and passive participles. Polish has a more modest stock of verbal forms with temporal meaning than Bulgarian or Lithuanian. In any case, when the lexical means modifying the temporal meanings, the participles, and the verbal nouns are taken into account, it is clear that Polish can also express the same temporal meanings.
Features of the adjective
Adjectives in Polish and Lithuanian can be declined for gender, number and case (in Bulgarian only for gender and number), but do not express a temporal or aspectual relation on their own, unlike the participle. These arguments show that participles deserve a treatment separate from adjectives.
Towards development of annotated trilingual electronic resources
Morphosyntactic descriptions for Bulgarian have been developed in several projects, the first of which were for the purposes of corpus processing at the morpho-lexical level in the MTE project of the EC. The MTE consortium developed morphosyntactic specifications and word-form lexical lists (so-called lexicons) covering at least the words appearing in the MTE corpus. For each of the six MTE languages, a lexical list containing at least 15,000 lemmata was developed for use with the morphological analyzer. Each lexicon entry includes information about the inflected form, lemma, POS, and morphosyntactic specifications. A mapping from the morphosyntactic information contained in the lexicon to a set of corpus tags (used by the POS disambiguator) was also provided, according to the MULTEXT tagging model. The structure of the lexicon entry is the following:
word-form ‹TAB› lemma ‹TAB› MSD ‹TAB› comments
where word-form represents an inflected form of the lemma, characterised by a combination of feature values encoded by an MSD code (MSD: MorphoSyntactic Description); the fourth (optional) column, comments, is currently ignored and may contain either comments or information processable by other tools. Here is an excerpt from the Bulgarian lexicon (обяснение 'explanation'):
обяснение = Ncns-n
обяснението обяснение Ncns-y
обяснения обяснение Ncnp-n
обясненията обяснение Ncnp-y
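A small Python sketch of reading such lexicon entries and decoding the positional MSD encoding described below follows. The attribute tables are simplified, illustrative subsets assumed here only for the two categories shown; the full MTE specifications define more positions and values per category.

from typing import Dict, List, Tuple

# position -> attribute name, per part of speech (illustrative subset only)
ATTRS: Dict[str, List[str]] = {
    "N": ["Type", "Gender", "Number", "Case", "Definiteness"],
    "A": ["Type", "Degree", "Gender", "Number", "Case", "Definiteness"],
}

def parse_entry(line: str) -> Tuple[str, str, str]:
    """Split a 'word-form <TAB> lemma <TAB> MSD' line."""
    word_form, lemma, msd = line.split("\t")[:3]
    if lemma == "=":                    # '=' means the form is the lemma itself
        lemma = word_form
    return word_form, lemma, msd

def decode_msd(msd: str) -> Dict[str, str]:
    """Decode a positional MSD code such as 'Ncns-y' or 'A--ns-n'."""
    pos, values = msd[0], msd[1:]
    decoded = {"POS": pos}
    for attr, value in zip(ATTRS.get(pos, []), values):
        if value != "-":                # '-' marks a non-applicable attribute
            decoded[attr] = value
    return decoded

if __name__ == "__main__":
    wf, lemma, msd = parse_entry("обяснението\tобяснение\tNcns-y")
    print(wf, lemma, decode_msd(msd))
    # -> обяснението обяснение {'POS': 'N', 'Type': 'c', 'Gender': 'n',
    #                            'Number': 's', 'Definiteness': 'y'}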
The MSDs are provided as strings, using a linear encoding: an efficient and compact way of representing flat attribute-value matrices. In this notation, the position in a string of characters corresponds to an attribute, and specific characters in each position indicate the value of the corresponding attribute. That is, the positions in a string of characters are numbered 0, 1, 2, etc., and are used in the following way: the character at position 0 encodes the part of speech; each character at position 1, 2, ..., n encodes the value of one attribute (person, gender, number, etc.), using a one-character code; if an attribute does not apply, the corresponding position in the string contains the special marker "-" (hyphen). By convention, trailing hyphens are not included in the MSDs. Such specifications provide a simple and compact encoding, and are similar to the feature-structure encoding used in unification-based grammar formalisms. When the word form is the lemma itself, the equals sign ("=") is written in the lemma field of the entry.
For Bulgarian, the morphosyntactic descriptions were designed on the basis of the traditional POS classification according to traditional Bulgarian grammar (Bulgarian Grammar 1993). Each word form is assigned a label encoding the major category (POS), the type where applicable (e.g., proper versus common noun), and the inflectional features. Punctuation is also included, as are abbreviations, numbers written in digits, and unidentified objects (residuals). A further non-standard category contains markers of degrees of comparison. These are formed in Bulgarian with the particles по (comparative) and най (superlative), preposed to the adjective or adverb but separated from it by a hyphen (лек 'light', по-лек 'lighter', най-лек 'lightest'; леко 'easy', по-леко 'more easily', най-леко 'most easily'). These particles are annotated as separate words: по → POS: Particle, Type: comparative, Formation: simple; най → POS: Particle, Type: superlative, Formation: simple.
The morphosyntactic descriptions for Polish: the description of Polish by Saloni [15] serves as a basis for the morphosyntactic descriptions of Polish and has been adapted to a large degree to the MTE MSD format in [14]. The system of morphosyntactic tags developed for Polish at the Institute of Computer Science, Polish Academy of Sciences (IPI PAN) is based on a sound methodological foundation comprising linguistic work by authors such as J. S. Bień, Z. Saloni and M. Świdziński. It is thanks to this foundation that the IPI PAN tagset goes beyond the fossilised traditional framework dating back to Aristotle. On the other hand, the MTE tagset, which serves as a point of reference here, is based on the traditional subdivision into parts of speech (this is why, among other things, pronouns have been singled out as a part of speech). Consequently, the aim of our work is neither to revise the good and highly refined IPI PAN tagset nor to replace it with a new tagset for Polish. The issue in question is what kind of compromise should be sought when developing a joint tagset to be used for the simultaneous description of the three languages in the BG-PL-LT parallel corpus. For some reasons, the MTE tagset (developed previously for many languages) has been selected as the leading one for this corpus. Therefore, the aim of our work is to provide a theoretical study of various categories of Polish (and Lithuanian), to set priorities (e.g. morphological, semantic, syntactic) in identifying various meanings, and to provide a classification of morphosyntactic phenomena which does not contradict the MTE standard and does not deviate too strongly from the IPI PAN tagset. It cannot be excluded that, due to the obvious difficulties in achieving inter-tagset consistency, the BG-PL-LT corpus will use the IPI PAN tagset for Polish and a modification of it for Lithuanian. This solution would certainly necessitate a list of more or less close equivalents between the two tagsets: a tagset for Bulgarian on the one hand, and the IPI PAN tagset on the other (for Polish, with an extended version for Lithuanian). It is important to emphasise that only a coherent tagset for a parallel multilingual corpus 1) allows complete linguistic confrontation, 2) enables the identification of linguistic facts, and 3) enables searches based on pre-defined unambiguous morphosyntactic characteristics.
The morphosyntactic descriptions for Lithuanian: the Academic Grammar of the Lithuanian language [10] and the Functional Grammar of Lithuanian [16] serve as the basis for the morphosyntactic descriptions of Lithuanian. A tool for the morphosyntactic annotation of Lithuanian, MorfoLema, was created by Vytautas Zinkevičius at the Centre of Computational Linguistics of Vytautas Magnus University (Lithuania) [18]. The program MorfoLema can perform a morphosyntactic analysis and generate forms of Lithuanian words based on the user's morphosyntactic characteristics. For disambiguation, MorfoLema uses the "two-level morphology" method of Kimmo Koskenniemi [9]. The next step in the development of a system for morphological annotation (Morfologinis anotatorius [20]) was realised by Vidas Daudaravičius and Erika Rimkutė; Vidas Daudaravičius created the disambiguation tools for the Morfologinis anotatorius. More information about the Morfologinis anotatorius and the tagset used can be found at http://donelaitis.vdu.lt/main.php?id=4&nr=7_1 (in Lithuanian). (The names of the tags are in Lithuanian, because the authors of the Morfologinis anotatorius did not use English terms.) It is possible to perform a morphosyntactic analysis online through the web page http://donelaitis.vdu.lt/main.php?id=4&nr=7_2. The results are visualized on the screen, and it is possible to receive the result as a file. The tag list for Polish and Lithuanian, based on [11], [12], [13], [17], and used in the example below, follows:
subst - noun
adj - adjective
verb - verb
particle - particle
prep - preposition
praes - present
nonpraet - non-praeteritum
ter - 3rd person
bezosobnik - non-personal form of the verb
perf - perfective
imperf - imperfective
nwok - non-vocalic
sg - singular
pl - plural
nom - nominative
gen - genitive
acc - accusative
loc - locative
m - masculine
f - feminine
-hum - non-human
-ani - non-animate

A significant feature is the analytic character of Bulgarian, and the synthetic character of Lithuanian (with some analytic traits, like word order in absolute constructions) and Polish. Bulgarian exhibits several linguistic innovations in comparison to the other Slavic languages (a rich system of verbal forms, a definite article), and has a grammatical structure closer to English, Modern Greek, or the Neo-Latin languages than Polish. The definite article in Bulgarian is postpositive, whereas in Lithuanian a similar function is served by qualitative adjectives and adjectival participial forms, both with pronominal declension. Bulgarian preserves some vestiges of case forms in the pronoun system. Polish and Lithuanian exhibit all the features of synthetic languages (a very rich case paradigm for nouns). Although Lithuanian has lost the neuter gender of nouns, its case system is richer than the Polish one. Bulgarian and Lithuanian have a high number of verbal forms, but Polish has reduced most of the forms for the past tense. Both Polish and Bulgarian have a strongly developed category of verbal aspect. In Lithuanian the verb can have more than one aspect depending on the use of a base stem for the present, past and future tense.
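The positional MSD encoding described above can likewise be decoded mechanically. The sketch below illustrates the idea for Bulgarian noun codes such as Ncfp-n; the attribute table in it is a simplified assumption for illustration, not the official MTE specification:

# Illustrative sketch of decoding a positional MSD string (e.g. "Ncfp-n").
# Position 0 is the part of speech; positions 1..n are attribute values,
# "-" marks a non-applicable attribute, and trailing hyphens are omitted.
# The attribute table covers Bulgarian nouns only and is an assumption.
NOUN_ATTRS = ["Type", "Gender", "Number", "Case", "Definiteness"]
NOUN_VALUES = {
    "Type": {"c": "common", "p": "proper"},
    "Gender": {"m": "masculine", "f": "feminine", "n": "neuter"},
    "Number": {"s": "singular", "p": "plural"},
    "Case": {},                       # not applicable to Bulgarian nouns
    "Definiteness": {"y": "yes", "n": "no"},
}

def decode_msd(msd):
    if msd[0] != "N":
        raise ValueError("this toy decoder only handles nouns")
    features = {"POS": "Noun"}
    for attr, code in zip(NOUN_ATTRS, msd[1:]):
        if code != "-":               # "-" = attribute does not apply
            features[attr] = NOUN_VALUES[attr].get(code, code)
    return features

print(decode_msd("Ncfp-n"))
# {'POS': 'Noun', 'Type': 'common', 'Gender': 'feminine',
#  'Number': 'plural', 'Definiteness': 'no'}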
Conclusion
One of the main problems in human communication is the presence of a huge variety of written and spoken languages in the world. Finding ways to support the connection of people from different ethnic parts of the world is becoming more and more important. The advantage of processing a trilingual parallel corpus is to obtain context-specific information about syntactic and semantic structures and the usage of words in the given language(s). The parallel BG-PL-LT corpus will enrich and uncover some unstudied features of the three languages. Furthermore, a trilingual corpus can find applications in the design and development of LDBs for future bilingual dictionaries, for example, of an LDB supporting a BG-LT dictionary, based on the LDB that supports the BG-PL online dictionary. Finally, we note that the trilingual corpus can be used in education, in schools as well as universities; it will be useful to students, instructors, and linguist-researchers alike.
(1) Examples of the predicative function of the participle: BG: украсен // PL: ozdobiony // LT: papuošta [neuter], papuoštas [masculine] // EN: decorated //:
BG: Коридорът е хубаво украсен. PL: Korytarz jest ładnie ozdobiony. LT: Koridorius gerai papuošta. / Koridorius gerai papuoštas. EN: The corridor is beautifully decorated.

(2) Examples of the attributive function of the participle: BG: пишещ // PL: piszący // LT: rašantis // EN: one who wrote //, in the sentences:
BG: Пишещият тези писма старец беше осемдесетгодишен. PL: Piszący te listy starzec był osiemdziesięciolatkiem. LT: Rašantis tuos laiškus senelis buvo aštuoniasdešimtmetis. EN: The old man who wrote these letters was eighty years old.

(3) Examples of the semi-predicative function: BG: пишейки // PL: pisząc // LT: rašydamas // EN: while writing //, in the sentences:
BG: Пишейки, гледах през прозореца. PL: Pisząc patrzyłem w okno. LT: Rašydamas žiūrėjau per langą. EN: While writing, I was looking out of the window.
a) Present active participle: BG: говорещ // PL: mówiący // LT: kalbąs / kalbantis // EN: speaking // (preserved active voice).
b) Present passive participle: BG: любим 1 // PL: kochany // LT: mylimas // EN: beloved // (preserved passive voice with information about the present tense).
c) Past passive participle: BG: написан // PL: napisany // LT: parašytas // EN: written // (preserved passive voice with information about past tense and the perfect aspect of the verbal form).
An interesting fact is that participles preserve the valency properties of the respective verbal form, for instance in Polish and Lithuanian: PL: Ten mężczyzna zajmuje się drobnym handlem. - Zajmujący się drobnym handlem mężczyzna. // LT: Tas vyras užsiima mažmenine prekyba. - Mažmenine prekyba užsiimantis vyras. // EN: This man deals in retail. - A man dealing in retail.
Bulgarian: <p>Вместо отговор Гандалф гръмогласно подвикна на коня си:</p> <p>-Напред, Сенкогрив! Трябва да бързаме. Няма време. Виж! Сигналните клади на Гондор горят, зоват за помощ. Войната е избухнала. Виж, огън бушува над Амон Дин, пламък покрива Ейленах, сигналът бърза на запад: Нардол, Ерелас, Мин-Римон, Каленхад и Халифириен на роханската граница.</p>
Polish: <p>Zamiast odpowiedzieć hobbitowi, Gandalf krzyknął głośno do swego wierzchowca:</p> <p>-Naprzód, Gryfie! Trzeba się spieszyć. Czas nagli. Patrz! W Gondorze zapalono wojenne sygnały, wzywają pomocy. Wojna już wybuchła. Patrz, płoną ogniska na Amon Din, na Eilenach, zapalają się coraz dalej na zachodzie! Rozbłyska Nardol, Erelas, Min-Rimmon, Kalenhad, a także Halifirien na granicy Rohanu.</p>
Lithuanian: <p>Užuot atsakęs Gendalfas garsiai riktelėjo žirgui:</p> <p>-Pirmyn, Žvaigždiki! Reikia skubėti. Laiko nebeliko. Žiūrėk! Jau dega Gondoro laužai, prašo pagalbos. Karo kibirkštis įžiebta. Matai, ant Amon Dino dega ugnis, liepsnoja ir Eilenachas, dar toliau vakaruose - Nardolas, Erelasas, Minas Rimonas, Kalenhadas ir Halifirienas prie Rohano sienos.</p>
// EN: For answer Gandalf cried aloud to his horse. 'On, Shadowfax! We must hasten. Time is short. See! The beacons of Gondor are alight, calling for aid. War is kindled. See, there is the fire on Amon Dîn, and flame on Eilenach; and there they go speeding west: Nardol, Erelas, Min-Rimmon, Calenhad, and the Halifirien on the borders of Rohan.' (Part 3, Book 5, The Return of the King, of Tolkien's The Lord of the Rings) //
The phrase 'deals in what? / dealing in what?' requires the instrumental case in Polish and Lithuanian 2. The valence of the Polish and Lithuanian participle is the same as the valence of the finite verb form. A comparison of the three languages shows that in Bulgarian a subordinate clause in the past perfect tense corresponds to a participle construction in Polish and Lithuanian: BG: След като си беше написал домашното, той започна да чете книга. // PL: Odrobiwszy lekcje zaczął czytać książkę. // LT: Paruošęs pamokas pradėjo skaityti knygą. // EN: Having written his homework, he started reading a book.

1 Colloquial Bulgarian has lost this grammatical category. Such forms occur mostly in scientific writing, being literary loans from Russian or Church Slavonic. Because of their grammatical unproductiveness, they are classified as adjectives, corresponding to the Latin-derived adjectives in -able/-ible in English: (не)допустим - (in)admissible, недосегаем - intangible, съвместим - compatible, etc.
A parallel corpus of two Slavic languages and one Baltic language is of great interest from the viewpoint of describing the similarities and differences of the formal means of these three languages. Bulgarian belongs to the South subgroup, Polish to the West subgroup of the Slavic languages. Lithuanian belongs to the Eastern Baltic group. All three languages preserve the special features of the corresponding group.

A comparison between experimental annotations of the following sentence of the parallel corpus was performed: "The beacons of Gondor are alight, calling for aid." 3

BG: Сигналните клади на Гондор горят, зоват за помощ.
PL: W Gondorze zapalono wojenne sygnały, wzywają pomocy.
LT: Jau dega Gondoro laužai, prašo pagalbos.
The annotation of the Bulgarian text is done with MTE MSDs, and the ISSCO TAGGER [19] is used for disambiguation. For the manual annotation of the Polish and Lithuanian texts the above-mentioned descriptors are used, because these languages lack developed MTE language specifications. Establishing a 1-1 correspondence between the tags used and the MTE tagset does not present an insurmountable difficulty. The result follows:
Bulgarian (MTE annotation)

<cesAna version="1.0" type="lex disamb">
<chunkList>
<chunk type="s">
<tok type=WORD>
<orth>Сигналните</orth>
<disamb><base>сигнален</base><ctag>AP</ctag></disamb>
<lex><base>сигнален</base><msd>A---p-y</msd><ctag>AP</ctag></lex>
</tok>
<tok type=WORD>
<orth>клади</orth>
<disamb><base>клада</base><ctag>NCFP-N</ctag></disamb>
<lex><base>клада</base><msd>Ncfp-n</msd><ctag>NCFP-N</ctag></lex>
</tok>
<tok type=WORD>
<orth>на</orth>
<disamb><base>на</base><ctag>SP</ctag></disamb>
<lex><base>на</base><msd>Qgs</msd><ctag>QG</ctag></lex>
<lex><base>на</base><msd>Sp</msd><ctag>SP</ctag></lex>
</tok>
<tok type=WORD>
<orth>Гондор</orth>
<disamb><base>Гондор</base><ctag>NPMS-N</ctag></disamb>
<lex><base>Гондор</base><msd>Npms-n</msd><ctag>NPMS-N</ctag></lex>
</tok>
<tok type=WORD>
<orth>горят</orth>
<disamb><base>горя</base><ctag>VMIP3P</ctag></disamb>
<lex><base>горя</base><msd>Vmia3p</msd><ctag>VMIA3P</ctag></lex>
<lex><base>горя</base><msd>Vmip3p</msd><ctag>VMIP3P</ctag></lex>
</tok>
<tok type=PUNCT>
<orth>,</orth>
<ctag>COMMA</ctag>
</tok>
<tok type=WORD>
<orth>зоват</orth>
<disamb><base>зова</base><ctag>VMIP3P</ctag></disamb>
<lex><base>зова</base><msd>Vmia3p</msd><ctag>VMIA3P</ctag></lex>
<lex><base>зова</base><msd>Vmip3p</msd><ctag>VMIP3P</ctag></lex>
</tok>
<tok type=WORD>
<orth>за</orth>
<disamb><base>за</base><ctag>SP</ctag></disamb>
<lex><base>за</base><msd>Sp</msd><ctag>SP</ctag></lex>
</tok>
<tok type=WORD>
<orth>помощ</orth>
<disamb><base>помощ</base><ctag>NCFS-N</ctag></disamb>
<lex><base>помощ</base><msd>Ncfs-n</msd><ctag>NCFS-N</ctag></lex>
</tok>
<tok type=PUNCT>
<orth>.</orth>
<ctag>PERIOD</ctag>
</tok>
</chunk>
</chunkList>
</cesAna>
Polish [11]

<cesAna version="1.0" type="lex disamb">
<chunkList>
<chunk type="s">
<tok>
<orth>W</orth>
<lex><base>w</base><ctag>prep:loc:nwok</ctag></lex>
</tok>
<tok>
<orth>Gondorze</orth>
<lex><base>Gondora</base><ctag>subst:sg:loc:f</ctag></lex>
</tok>
<tok>
<orth>zapalono</orth>
<lex><base>zapalić</base><ctag>verb:bezosobnik:perf</ctag></lex>
</tok>
<tok>
<orth>wojenne</orth>
<lex><base>wojenny</base><ctag>adj:pl:acc:-hum</ctag></lex>
</tok>
<tok>
<orth>sygnały</orth>
<lex><base>sygnał</base><ctag>subst:pl:acc:-hum</ctag></lex>
</tok>
<ns/>
<tok>
<orth>,</orth>
<lex disamb="1"><base>,</base><ctag>interp</ctag></lex>
</tok>
<tok>
<orth>wzywają</orth>
<lex disamb="1"><base>wzywać</base><ctag>verb:nonpraet:pl:ter:imperf</ctag></lex>
</tok>
<tok>
<orth>pomocy</orth>
<lex><base>pomoc</base><ctag>subst:sg:gen:f</ctag></lex>
</tok>
<ns/>
<tok>
<orth>.</orth>
<lex disamb="1"><base>.</base><ctag>interp</ctag></lex>
</tok>
</chunk>
</chunkList>
</cesAna>
Lithuanian

<cesAna version="1.0" type="lex disamb">
<chunkList>
<chunk type="s">
<tok>
<orth>Jau</orth>
<lex><base>jau</base><ctag>particle</ctag></lex>
</tok>
<tok>
<orth>dega</orth>
<lex><base>degti</base><ctag>verb:praes.ter</ctag></lex>
</tok>
<tok>
<orth>Gondoro</orth>
<lex><base>Gondoras</base><ctag>subst:sg:gen:m</ctag></lex>
</tok>
<tok>
<orth>laužai</orth>
<lex><base>laužas</base><ctag>subst:pl:nom:m</ctag></lex>
</tok>
<ns/>
<tok>
<orth>,</orth>
<lex disamb="1"><base>,</base><ctag>interp</ctag></lex>
</tok>
<tok>
<orth>prašo</orth>
<lex disamb="1"><base>prašyti</base><ctag>verb:praes.ter</ctag></lex>
</tok>
<tok>
<orth>pagalbos</orth>
<lex><base>pagalba</base><ctag>subst:sg:gen:f</ctag></lex>
</tok>
<ns/>
<tok>
<orth>.</orth>
<lex disamb="1"><base>.</base><ctag>interp</ctag></lex>
</tok>
</chunk>
</chunkList>
</cesAna>
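Annotations in this cesAna format can be processed with standard XML tooling. The sketch below extracts (surface form, lemma, ctag) triples; note that the excerpts printed above omit quotes around some attribute values (e.g. type=WORD), so the embedded sample is a quoted, shortened version:

import xml.etree.ElementTree as ET

SAMPLE = """<cesAna version="1.0" type="lex disamb"><chunkList><chunk type="s">
<tok><orth>pomocy</orth>
  <lex><base>pomoc</base><ctag>subst:sg:gen:f</ctag></lex></tok>
<tok><orth>.</orth>
  <lex disamb="1"><base>.</base><ctag>interp</ctag></lex></tok>
</chunk></chunkList></cesAna>"""

root = ET.fromstring(SAMPLE)
for tok in root.iter("tok"):
    orth = tok.findtext("orth")
    lex = tok.find("lex")
    base, ctag = lex.findtext("base"), lex.findtext("ctag")
    # IPI-PAN-style ctags are colon-separated feature lists:
    print(orth, base, ctag.split(":"))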
6. Annotation of the parallel corpus - problems and progress
2 This does not apply to Bulgarian, which lacks a case paradigm for nouns.
3 Tolkien, J.R.R. The Lord of the Rings. Boston: Houghton Mifflin, 1994, p. 731.
Bulgarian Grammar. (1993). Граматика на съвременния български книжовен език. Том 2 / Морфология. Главна редакция Д. Тилков, Ст. Стоянов, К. Попов. Издателство на БАН, София. (In Bulgarian).
Dimitrova, L., Erjavec, T., Ide, N., Kaalep, H.-J., Petkevic, V., and Tufis, D. (1998). Multext-East: Parallel and Comparable Corpora and Lexicons for Six Central and Eastern European Languages. In: Proceedings of COLING-ACL '98, Montréal, Québec, Canada, pp. 315-319.
Dimitrova, L., Panova, R., Dutsova, R. (2009). Lexical Database of the Experimental Bulgarian-Polish online Dictionary. In: Metalanguage and Encoding Scheme Design for Digital Lexicography. Proceedings of the MONDILEX Third Open Workshop, Bratislava, Slovak Republic, 15-16 April 2009, pp. 36-47. ISBN 978-5-9900813-6-9.
Dimitrova, L., Koseska-Toszewa, V. (2008). Some Problems in Multilingual Digital Dictionaries. In: International Journal Études Cognitives, 8, SOW, pp. 237-254.
Fan, May, Xu Xunfeng. (2002). An evaluation of an online bilingual corpus for the self-learning of legal English. http://langbank.engl.polyu.edu.hk/corpus/bili_legal.html
Ide, N., and Véronis, J. (1994). Multext (multilingual tools and corpora). In: COLING'94, Kyoto, Japan, pp. 90-96.
Ide, N. (1998). Corpus Encoding Standard: SGML Guidelines for Encoding Linguistic Corpora. In: Proceedings of the First International Language Resources and Evaluation Conference, Granada, Spain, pp. 463-470.
Leech, Geoffrey. (2004). Developing Linguistic Corpora: a Guide to Good Practice. Adding Linguistic Annotation. http://ahds.ac.uk/guides/linguistic-corpora/chapter2.htm
Koskenniemi, Kimmo. (1983). Two-level morphology: a general computational model for word-form recognition and production. Publication No. 11. Helsinki: University of Helsinki, Department of General Linguistics.
Lithuanian Grammar. (1997). Ed. Vytautas Ambrazas. Baltos lankos, Vilnius, 802 pp.
Piasecki, M. (2007). Polish Tagger TaKIPI: Rule Based Construction and Optimisation. Task Quarterly, 11, pp. 151-167.
Przepiórkowski, A. (2003). Składniowe uwarunkowania znakowania morfosyntaktycznego w korpusie IPI PAN. Polonica, XXII-XXIII, pp. 57-76. (In Polish).
Przepiórkowski, A. (2008). Powierzchniowe przetwarzanie języka polskiego. Warszawa: Akademicka Oficyna Wydawnicza EXIT. (In Polish).
Roszko, R. (2009). Morphosyntactic Specifications for Polish. Theoretical Foundations. In: Metalanguage and Encoding Scheme Design for Digital Lexicography. Proceedings of the MONDILEX Third Open Workshop, 15-16 April 2009, Bratislava, pp. 140-150. ISBN 978-80-7399-745-8.
Saloni, Z., Gruszczyński, W., Woliński, M., Wołosz, R. (2007). Słownik gramatyczny języka polskiego. Wiedza Powszechna, Warszawa, CD + 177 pp. (In Polish).
Valeckienė, A. (1998). Funkcinė lietuvių kalbos gramatika. Mokslo ir enciklopedijų leidybos institutas, Vilnius, 415 pp. (In Lithuanian).
Woliński, M. (2003). System znaczników morfosyntaktycznych w korpusie IPI PAN. Polonica, XXII-XXIII, pp. 39-55. (In Polish).
Zinkevičius, V. (2000). Lemuoklis - morfologinei analizei. Darbai ir dienos, 24, Vytauto Didžiojo universitetas, pp. 245-274. (In Lithuanian).
ISSCO TAGGER: http://www.issco.unige.ch/staff/robert/tatoo/tagger.html#design
29,978,426 | On the geolinguistic change in Northern France between 1300 and 1900: a dialectometrical inquiry | Drawing on 8 closely interpreted dialectometrical maps, this paper analyses the change of the geolinguistic deep structures in Northern France (Domaine d'Oïl) between 1300 and 1900. The results show, with one exception, the great stability of these deep structures. | [] | On the geolinguistic change in Northern France between 1300 and 1900: a dialectometrical inquiry
Hans Goebl hans.goebl@sbg.ac.at
Department of Romance Philology, Salzburg University, Akademiestrasse 24, A-5020 Salzburg

Proceedings of the Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology, Prague, June 2007. © 2007 Association for Computational Linguistics
Drawing on 8 closely interpreted dialectometrical maps, this paper analyses the change of the geolinguistic deep structures in Northern France (Domaine d'Oïl) between 1300 and 1900. The results show, with one exception, the great stability of these deep structures.
Introduction to the issue
Through the comparison of two data sets, one from 1300 and one from 1900, the present contribution discusses whether, and in which way, the basic geolinguistic structure of Northern France (Domaine d'Oïl) changed in the course of this period. In this investigation, a number of different methods of dialectometry (DM) will be applied. DM is a subdiscipline of quantitative linguistics which concentrates on the exploration of the actual deep geolinguistic structures of a given space, using linguistic atlases or similarly structured data collections (consisting of N inquiry points and p atlas or working maps) as data source. Of course, it has to be assumed that these deep structures were generated by a genuine, specific activity of man (i.e., of the homo loquens), that is to say: the «linguistic (or dialectal) management of space by the homo loquens». Since man obviously has many other ways of managing a given natural space besides the linguistic one, many opportunities arise for interdisciplinary cooperation with DM.
The Salzburg-based DM (Goebl 2006a) pursues the genuine principles of traditional (Romance) linguistic geography with quantitative means. Its main aim is therefore to strengthen the diagnostic power of traditional linguistic geography by introducing global or synthetic (quantitative) methods.
Data basis
It consists of two machine-readable data matrices, the first covering the period around 1300, the other the period around 1900.
Corpus 1300 (drawn from Dees 1980)
The medieval corpus was borrowed from the scripta-atlas (1980) of the Amsterdam Romance linguist A. Dees. This atlas is based on the comprehensive interpretation of 3300 original charters of Northern France from the second half of the 13th century, which were analysed according to a list of ca. 300 written (or scripta-) attributes. These scripta-attributes are mainly of phonetic relevance, most of them referring to vocalism (189 attributes), but also to consonantism (87 attributes), and some of them even to morphology (22 attributes). As a result, the data matrix holds 298 attributes and 85 «inquiry points». The latter actually correspond to scripta-centres (scriptoria, chanceries) which are distributed as evenly as possible all over the Domaine d'Oïl. For the measuring of the graphic variation in the 3300 charters, A. Dees developed a specific method. As a result, he was able to determine, for each single attribute, its relative occurrence (in percentage) in the charters of the 85 scripta-centres. The values of the data matrix therefore lie on a metrical scale.
In the nineties, A. Dees and his collaborator Piet van Reenen handed this data matrix over to me, providing a basis for many dialectometrical experiments. Its only disadvantage is that the machine-readable matrix holds fewer attributes (268) than the printed atlas (298). Nevertheless, by applying the «Average Euclidean Metric» (AEM), the «Average Manhattan Metric» (AMM) and the «Bravais-Pearson correlation coefficient» [r(BP)], the dialectometrical results are very profitable (see Goebl 2006b). The scripta-atlas published by Dees in 1980 shows quantitative visualisations of the spatial distribution of the 298 attributes, but does not offer any global data interpretation with dialectometrical (or similar) methods.
The Dees-data: one illustrative example
In his scripta-atlas (1980: carte 87, p. 93), A. Dees also investigated the regional variation in the spelling of the French possessive pronoun: leur, leurs, leurz etc., all derived from the Latin etymon ILLÓRU. Most probably they were created under the influence of a specific regional dialect pronunciation. At the end of the 13th century, the geographic contrast between these eu-spellings and the older equivalent forms lor, lors etc. was quite sharp in the Domaine d'Oïl. Hence, Dees checked the number of all occurrences of eu-spellings (belonging to the possessive pronoun) in the 3300 above-mentioned charters and listed, for each of the 85 scripta-centres of his atlas, the percentages of those charters which show at least one occurrence of the spelling -eu-. As a result, 81 out of the 93 charters of the scripta-region 26 «Somme, Pas-de-Calais» (located in the medieval Artois: see the top of Figures 1, 3, 5 and 7) showed a considerable amount of eu-spellings, unlike the remaining 12 charters. In the 105 charters of the scripta-region 1 «Charente, Charente-Maritime» (South-western corner of the Domaine d'Oïl), no occurrences of the eu-spellings were found; obviously, the different spellings of the possessive pronoun in that region were still on -o-. Thus, Dees registered the value 87% (= 81 : 93) for the scripta-region 26 in the North and the value 0% for the scripta-region 1 in the South-west.
As Dees analysed 298 scripta-features in the same way, he succeeded in covering the whole range of the stressed and unstressed vocalism and consonantism of Old French.
Corpus 1900 (drawn from ALF)
The second corpus, referring to 1900, was drawn from the data of the French linguistic atlas ALF, more precisely from a data matrix which had been established in the process of dialectometrizing the full ALF grid. The dimensions of this data matrix are: N = 641 inquiry points (distributed all over France), p = 1687 working maps, 1117 referring to phonetics (612 to vocalism and 505 to consonantism), 417 referring to vocabulary, and 99 to morphology. 347 of the 641 inquiry points on the full ALF grid are located in Northern France: they therefore represent the Domaine d'Oïl. Among these 347 inquiry points, 85 points were selected in geographic correspondence to the 85 scripta-centres of the Dees atlas and subsequently combined into a new grid (see the right halves of Maps 1-8).
Among the 1687 working maps mentioned above, we took into consideration only those of phonetic relevance, thus 1117 maps. They were derived from 247 original maps of the ALF by phonetic typization, which is a common procedure in Romance linguistics. The units of this ALF data matrix lie on the nominal scale. With the supply of the «Weighted Identity Value (with the weight 1)» [WIV(1)], the dialectometrical interpretation of this data matrix proved to be very successful (see Goebl 1984, I: 83-86, and 2006a: 418-419).
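To make the two similarity indexes concrete, here is a hedged Python sketch of one plausible reading of AEM and WIV(1); the exact Salzburg definitions should be taken from the works cited above, so the formulas below are illustrative assumptions:

import math

def aem_similarity(x_j, x_k):
    """x_j, x_k: equal-length vectors of metrical attribute values (0..100).
    Assumed reading of AEM: a similarity derived from the Euclidean distance
    averaged over the p attributes."""
    p = len(x_j)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_j, x_k)) / p)
    return 100.0 - dist          # larger = more similar

def wiv1_similarity(t_j, t_k):
    """t_j, t_k: equal-length vectors of nominal taxates (one per working map).
    Assumed reading of WIV(1): the unweighted special case, i.e. the
    percentage of working maps on which j and k show the same taxate."""
    matches = sum(a == b for a, b in zip(t_j, t_k))
    return 100.0 * matches / len(t_j)

print(aem_similarity([87, 0, 40], [80, 5, 35]))
print(wiv1_similarity(["i", "é", "š"], ["é", "é", "š"]))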
The ALF-data: two illustrative examples
An example of two characteristic phonetic features is given in Map 812 of the ALF, le marché "the market". The 85 occurrences in the Domaine d'Oïl all derive from the Latin etymon MERCÁTU. The different dialectal reflexes of the stressed Á considered here show the following results: a) pronunciation with -i (19 ALF-points), b) with (closed) -é (60 ALF-points), c) with (open) -è (1 ALF-point), d) with -ö (4 ALF-points), e) with (neutral) -e (1 ALF-point). From the metrological point of view, these five phonetic types represent what are called «(nominal) multistate characters». As the corresponding working map contains five different (phonetic) types (or «taxates» in Salzburg terminology), it is also called a «5-nymic working map».

Nevertheless, the data of the same ALF map can also be analysed according to consonantal principles, by listing the dialectal results of the postconsonantal C before stressed Á in MERCÁTU. The results are as follows: a) š (72 ALF-points), b) šy (2), c) ts (1), d) tšy (3), e) k (2), f) tš (3), g) ky (1), h) ty (1). On the map, these eight consonantal types show a geographic distribution which is far from similar to that of the five vocalic types. Actually, this observation also holds for the great majority of our ALF working maps.
The reduced data matrix drawn from the full ALF grid (with 1687 working maps) consists of 914 working maps: it ranges from 2-nymic up to 23-nymic maps, embracing a total of 4263 phonetic types or «taxates».
Establishment of the dialectometrical maps
DM is a map-based discipline: it visualises all its results systematically, using previously defined cartographic standards and a very handy computer program called VDM («Visual DialectoMetry»), which supports these visualisations perfectly. With VDM, choropleth maps and isarithmic maps, as well as trees, can be generated. The results are always mapped in colours that are ranged according to the solar (or rainbow) spectrum, with the warm colours lying above the arithmetic mean of the respective frequency distribution and the cold colours below it. The trees are all «spatialized» in principle, which means that their structural information is projected directly from the tree onto the map.
The comparison between the medieval and the modern data is basically visual, a methodically correct procedure, as the two corresponding iconic patterns are established according to the same cartographic norms. Further, the respective frequency distributions may also be correlated in order to obtain a correlation map. For reasons of space, this procedure will not be demonstrated in this paper.
All the maps shown in section 4 are taken from two square similarity matrices (N x N) consisting of 85 items (N = 85), calculated by means of special similarity indexes, AEM and WIV(1), on the basis of two data matrices (N = 85; p_1300 = 268 metrical attributes, p_1900 = 1117 nominal attributes). Hence, this demonstration includes two similarity maps, two parameter maps, two interpoint maps and two trees (with the respective spatializations). These four comparison planes are of special relevance, as they allow a comparison of the medieval and the modern data that is both global and precise to the last detail.
Four comparison planes between 1300 and 1900

4.1 Comparison plane 1: two similarity maps
The most important instrument of DM is the similarity map. Each similarity map consists of a reference point and N-1 similarity values distributed in space, values which decrease proportionally with their geographical distance from the reference point. The geographic pattern of the progressive drop of these measurement values is clearly shown with the cartographic means of DM. In Maps 1 and 2, the reference point is located in the Poitou (South-west). The visual comparison of the two choropleth profiles shows their great similarity. The same effect also occurs for the remaining 84 reference points. This means that the linguistic management of the Domaine d'Oïl was very similar in the Middle Ages (through the linguistic activity of the scribes) and in modern times (through the linguistic activity of the dialect speakers). It must be added that, generally speaking, medieval non-Latin charters of (mainly) the 13th and 14th centuries had a strong dialectal colouring, a phenomenon noted not only in France; they therefore show a great number of local and/or regional written attributes. Already in the 19th century, it was assumed that this graph(et)ic variation was generated, or at least partly caused, by the oral variation of the different medieval dialects. In Northern France, this regional colouring of the charters decreases rapidly after ca. 1400 and vanishes after 1450.
4.2 Comparison plane 2: two parameter maps: synopsis of the skewness values

Maps 3 and 4 address an entirely different question. The synopsis (or combination) of the N skewness values of a given similarity matrix indicates the degree of variation between different regions with regard to the so-called «linguistic compromise or exchange». This phenomenon is defined as the degree of intermixing of geolinguistic attributes with regionally varying extension and/or intensity. Our DM classification therefore distinguishes zones of high linguistic compromise (here: light shadings) and zones of weak linguistic compromise (here: dark shadings). Where this linguistic exchange is high, a strong linguistic intermixing prevails. Where it is weak, the linguistic interaction is also low: these areas kept a strong linguistic autonomy and were not yet seized by the general intermixing.
In Map 3 (left), the zones of high linguistic compromise or exchange form a kind of cross: they are located in the centre of the Domaine d'Oïl, whereas on its peripheral borders the areas of the different historical provinces (such as Normandy, Picardy, Lorraine, etc.) are found. In Map 4 (right), the lightly shaded zone now occupies the main part of the grid of the Domaine d'Oïl: in comparison with the left map it has virtually «exploded» (note the black circle), as a consequence of the continuous expansion of the language type of the Ile-de-France, which was strongly supported by the French kings and, after 1789, also by the Republic. Only on the Eastern peripheral borders could some provinces (Picardy, the Walloon area, Lorraine, etc.) elude the general language compromise and thus the general linguistic intermixing.
Both maps consist of 85 skewness values each, obtained from 85 similarity distributions. It has been well known for almost 20 years that the skewness value is an excellent instrument for measuring language compromise or exchange; evidence of this fact has been given many times with different data sets (see Goebl 1984, I: 150-153, and 2006a: 419-420).
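As an illustration of how such a parameter map is computed, the sketch below derives one skewness value per reference point from an N x N similarity matrix, assuming the standard Fisher moment coefficient of skewness:

def skewness(values):
    # Fisher moment coefficient: third central moment over m2^(3/2)
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / (m2 ** 1.5)

def skewness_map(sim_matrix):
    """sim_matrix: N x N similarity matrix; returns one skewness per point."""
    out = []
    for j, row in enumerate(sim_matrix):
        others = [s for k, s in enumerate(row) if k != j]  # drop self-similarity
        out.append(skewness(others))
    return out

sim = [[100, 80, 60], [80, 100, 50], [60, 50, 100]]
print(skewness_map(sim))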
4.3 Comparison plane 3: two interpoint or honeycomb maps

Maps 5 and 6 represent two honeycomb maps, each of them consisting of 225 polygon sides which vary in thickness and darkness. Each of these polygon sides lies between (= inter) contiguous inquiry points (hence the name interpoint map) and indicates the relative dialectal differences. Instead of the linguistic similarities (sim), the potential linguistic differences or distances (dist) were mapped. In quantitative terms, they are interrelated according to the formula dist + sim = 100. Thus, the distance-related counterpart of the above-mentioned similarity index WIV(1) is the WDV(1) («Weighted Distance Value (with the weight 1)»).
The cartographic message of the two maps largely corresponds to the evidence of the traditional isogloss syntheses which were commonly established during the 20th century in Romance, German and English linguistics. The thick (and dark) polygon sides represent the so-called «linguistic boundaries», a linguistic term which is rather colloquial and imprecise. One clearly recognizes in Map 5 (left) that in the North (Picardy) and the South-west (Poitou, Saintonge) there are very prominent and distinct «boundaries». But Map 6 (right) also shows very clearly that in the period between 1300 and 1900 these «boundaries» were moved to the North (and East), as well as to the utmost borders of the South, by an «invisible force», and that a zone with only very weak interpunctual demarcations emerged in the middle of the Domaine d'Oïl. Our knowledge of the history of the French language allows us to identify this «invisible force»: it is the irradiation of the linguistic type of the Ile-de-France, driven by politics.
4.4 Comparison plane 4: two dendrographic analyses (following Ward's method)
Moreover, the two similarity matrices can first be processed by dendrographic methods; in a next step, the two trees are compared. In this procedure, one has to pay attention to those bifurcations of the tree which are located near the trunk (or root). Among the relevant «hierarchical agglomerative methods» applied for the generation of trees, Ward's method has proved to be the most appropriate. In Maps 7 and 8, the tree and the map were drawn and visualised, isolating three distinct cartographic clusters in each case. These clusters are called «dendremes» in the tree, and their correspondences on the map «choremes». The heuristic comparison of Maps 7 and 8 concentrates on the position of the dendremes in the tree and simultaneously on the position of the choremes on the map. First, the perfect spatial coherence of all choremes is striking. Further, it clearly emerges that the three dendremes (No. 1-3) at the top seize the East, the North and the Centre (including the West) of the Domaine d'Oïl, though in such a way that the central dendreme-choreme (No. 1) expanded in the course of the six centuries between 1300 and 1900 at the expense of the Eastern (No. 3) and the Northern (No. 2) dendreme-choreme. Again, this is a consequence of the irradiation of the dialect of the Ile-de-France, supported by the French royal dynasty and the Republic.
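The dendrographic step can be reproduced with standard hierarchical clustering tools. The sketch below assumes the complementarity dist = 100 - sim introduced in section 4.3 and uses SciPy's Ward linkage to extract three choremes; it is an illustration, not the VDM implementation:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ward_choremes(sim_matrix, n_clusters=3):
    sim = np.asarray(sim_matrix, dtype=float)
    dist = 100.0 - sim                          # dist + sim = 100
    np.fill_diagonal(dist, 0.0)                 # self-distance must be zero
    condensed = squareform(dist, checks=False)  # linkage wants condensed form
    tree = linkage(condensed, method="ward")
    return fcluster(tree, t=n_clusters, criterion="maxclust")

sim = [[100, 90, 20, 25],
       [90, 100, 30, 20],
       [20, 30, 100, 85],
       [25, 20, 85, 100]]
print(ward_choremes(sim, n_clusters=2))        # e.g. [1 1 2 2]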
Final remarks
By the visual comparison of four pairs of maps established with dialectometrical methods, evidence was given that the geolinguistic deep structures of the Domaine d'Oïl (Northern France) maintained a large stability in the period between 1300 and 1900, that is to say, remained mostly identical with regard to their phonetics. Hence the question arises of determining the chronological development and elaboration of these phonetic deep structures before 1300. Nevertheless, the present investigation revealed the actual expansion, between 1300 and 1900, of the linguistic type of the Ile-de-France, which represents the typological basis for standard French. The dialectometrical techniques applied again in this contribution have proven their great diagnostic value many times over the last three decades.
Used abbreviations (also in the legends of Figures 1-8):
AEM: Average Euclidean Metric (see chapter 2.1)
ALF: Atlas linguistique de la France (see also the References)
AMM: Average Manhattan Metric (see chapter 2.1)
Figure 1: A similarity profile of the medieval Domaine d'Oïl: similarity map to the scripta-region 5 (Deux-Sèvres). Similarity index: AEM_5,k. Corpus: 268 quantitative maps (from Dees 1980). Algorithm of visualization: MINMWMAX (6-tuple).
Figure 2: A similarity profile of the modern Domaine d'Oïl: similarity map to the ALF-point 510 (Echiré, ...).
Figure 3: Choropleth map of the medieval Domaine d'Oïl: the synopsis of the skewness values of 85 similarity distributions. Similarity index: AEM_jk. Corpus: 268 quantitative maps (from Dees 1980). Algorithm of visualization: MINMWMAX (2-tuple).
Figure 4: Choropleth map of the modern Domaine d'Oïl: the synopsis of the skewness values of 85 similarity distributions. Similarity index: WIV(1)_jk. Corpus: 914 phonetic working maps (from ALF). Algorithm of visualization: MINMWMAX (2-tuple).
Figure 5: Honeycomb map of the medieval Domaine d'Oïl showing a synopsis of 225 interpoint distance values. Distance index: AEM_jk. Corpus: 268 quantitative maps (from Dees 1980). Algorithm of visualization: MEDMW (6-tuple).
Figure 6: Honeycomb map of the modern Domaine d'Oïl showing a synopsis of 225 interpoint distance values. Distance index: WDV(1)_jk. Corpus: 914 phonetic working maps (from ALF). Algorithm of visualization: MEDMW (6-tuple).
Figure 7: Dendrographic classification (and corresponding spatialization) of the medieval Domaine d'Oïl (85 scripta-regions according to Dees 1980). Similarity index: AEM_jk. Dendrographic algorithm: hierarchical grouping method of Ward. Number of marked dendremes resp. choremes: 3.
Figure 8: Dendrographic classification (and corresponding spatialization) of the modern Domaine d'Oïl (85 ALF-points). Similarity index: WIV(1)_jk. Dendrographic algorithm: hierarchical grouping method of Ward. Number of marked dendremes resp. choremes: 3.
ALF: Jules Gilliéron and Edmond Edmont. 1902-1910. Atlas linguistique de la France, 10 vol., Paris, Champion.
5,584,560 | Bayesian Inference for Zodiac and Other Homophonic Ciphers | We introduce a novel Bayesian approach for deciphering complex substitution ciphers. Our method uses a decipherment model which combines information from letter n-gram language models as well as word dictionaries. Bayesian inference is performed on our model using an efficient sampling technique. We evaluate the quality of the Bayesian decipherment output on simple and homophonic letter substitution ciphers and show that unlike a previous approach, our method consistently produces almost 100% accurate decipherments. The new method can be applied on more complex substitution ciphers and we demonstrate its utility by cracking the famous Zodiac-408 cipher in a fully automated fashion, which has never been done before. | [ 11020320, 10093992, 5673033, 586636, 14749549, 10977241 ] | Bayesian Inference for Zodiac and Other Homophonic Ciphers
Sujith Ravi sravi@isi.edu
Kevin Knight knight@isi.edu
University of Southern California, Information Sciences Institute, Marina del Rey, California 90292

Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon, June 19-24, 2011. © 2011 Association for Computational Linguistics
We introduce a novel Bayesian approach for deciphering complex substitution ciphers. Our method uses a decipherment model which combines information from letter n-gram language models as well as word dictionaries. Bayesian inference is performed on our model using an efficient sampling technique. We evaluate the quality of the Bayesian decipherment output on simple and homophonic letter substitution ciphers and show that unlike a previous approach, our method consistently produces almost 100% accurate decipherments. The new method can be applied on more complex substitution ciphers and we demonstrate its utility by cracking the famous Zodiac-408 cipher in a fully automated fashion, which has never been done before.
Introduction
Substitution ciphers have been used widely in the past to encrypt secrets behind messages. These ciphers replace (English) plaintext letters with cipher symbols in order to generate the ciphertext sequence.
There exist many published works on automatic decipherment methods for solving simple letter-substitution ciphers. Many existing methods use dictionary-based attacks employing huge word dictionaries to find plaintext patterns within the ciphertext (Peleg and Rosenfeld, 1979; Ganesan and Sherman, 1993; Jakobsen, 1995; Olson, 2007). Most of these methods are heuristic in nature and search for the best deterministic key during decipherment. Others follow a probabilistic decipherment approach. Knight et al. (2006) use the Expectation Maximization (EM) algorithm (Dempster et al., 1977) to search for the best probabilistic key using letter n-gram models. Ravi and Knight (2008) formulate decipherment as an integer programming problem and provide an exact method to solve simple substitution ciphers by using letter n-gram models along with deterministic key constraints. Corlett and Penn (2010) work with large ciphertexts containing thousands of characters and provide another exact decipherment method using an A* search algorithm. Diaconis (2008) presents an analysis of Markov Chain Monte Carlo (MCMC) sampling algorithms and shows an example application for solving simple substitution ciphers.
Most work in this area has focused on solving simple substitution ciphers. But there are variants of substitution ciphers, such as homophonic ciphers, which display increasing levels of difficulty and present significant challenges for decipherment. The famous Zodiac serial killer used one such cipher system for communication. In 1969, the killer sent a three-part cipher message to newspapers claiming credit for recent shootings and crimes committed near the San Francisco area. The 408-character message (Zodiac-408) was manually decoded by hand in the 1960's. Oranchak (2008) presents a method for solving the Zodiac-408 cipher automatically with a dictionary-based attack using a genetic algorithm. However, his method relies on using plaintext words from the known solution to solve the cipher, which departs from a strict decipherment scenario.
In this paper, we introduce a novel method for solving substitution ciphers using Bayesian learning. Our novel contributions are as follows:
• We present a new probabilistic decipherment approach using Bayesian inference with sparse priors, which can be used to solve different types of substitution ciphers.
• Our new method combines information from word dictionaries along with letter n-gram models, providing a robust decipherment model which offsets the disadvantages faced by previous approaches.
• We evaluate the Bayesian decipherment output on three different types of substitution ciphers and show that unlike a previous approach, our new method solves all the ciphers completely.
• Using the Bayesian decipherment, we show for the first time a truly automated system that successfully solves the Zodiac-408 cipher.
Letter Substitution Ciphers
We use natural language processing techniques to attack letter substitution ciphers. In a letter substitution cipher, every letter p in the natural language (plaintext) sequence is replaced by a cipher token c, according to some substitution key.
For example, an English plaintext
"H E L L O W O R L D ..."
may be enciphered as:
"N O E E I T I M E L ..."
according to the key:
p: ABCDEFGHIJKLMNOPQRSTUVWXYZ c: XYZLOHANBCDEFGIJKMPQRSTUVW
where, " " represents the space character (word boundary) in the English and ciphertext messages.
If the recipients of the ciphertext message have the substitution key, they can use it (in reverse) to recover the original plaintext. The sender can encrypt the message using one of many different cipher systems. The particular type of cipher system chosen determines the properties of the key. For example, the substitution key can be deterministic in both the encipherment and decipherment directions as shown in the above example-i.e., there is a 1-to-1 correspondence between the plaintext letters and ciphertext symbols. Other types of keys exhibit nondeterminism either in the encipherment (or decipherment) or both directions.
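As an illustration of the deterministic case, the sketch below implements the example key shown above; both directions are simple table lookups:

import string

PLAIN = string.ascii_uppercase
CIPHER = "XYZLOHANBCDEFGIJKMPQRSTUVW"          # the key shown above
ENC = {p: c for p, c in zip(PLAIN, CIPHER)}
DEC = {c: p for p, c in ENC.items()}           # deterministic in both directions

def encipher(text):
    return "".join(ENC.get(ch, ch) for ch in text)   # non-letters pass through

def decipher(text):
    return "".join(DEC.get(ch, ch) for ch in text)

ct = encipher("HELLO WORLD")
print(ct)                    # NOEEI TIMEL
print(decipher(ct))          # HELLO WORLD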
Simple Substitution Ciphers
The key used in a simple substitution cipher is deterministic in both the encipherment and decipherment directions, i.e., there is a 1-to-1 mapping between plaintext letters and ciphertext symbols. The example shown earlier depicts how a simple substitution cipher works.
Data: In our experiments, we work with a 414-letter simple substitution cipher. We encrypt an original English plaintext message using a randomly generated simple substitution key to create the ciphertext. During the encipherment process, we preserve spaces between words and use this information for decipherment, i.e., the plaintext character " " maps to the ciphertext character " ". Figure 1 (top) shows a portion of the ciphertext along with the original plaintext used to create the cipher.
Homophonic Ciphers
A homophonic cipher uses a substitution key that maps a plaintext letter to more than one cipher symbol.
For example, the English plaintext:

"H E L L O W O R L D ..."

may be enciphered as a sequence of numeric cipher symbols. Here, " " represents the space character in both English and ciphertext. Notice the non-determinism involved in the enciphering direction: the English letter "L" is substituted using different symbols (51, 84) at different positions in the ciphertext.
These ciphers are more complex than simple substitution ciphers. Homophonic ciphers are generated via a non-deterministic encipherment process-the key is 1-to-many in the enciphering direction. The number of potential cipher symbol substitutes for a particular plaintext letter is often proportional to the frequency of that letter in the plaintext languagefor example, the English letter "E" is assigned more cipher symbols than "Z". The objective of this is to flatten out the frequency distribution of ciphertext symbols, making a frequency-based cryptanalysis attack difficult.
The substitution key is, however, deterministic in the decipherment direction-each ciphertext symbol maps to a single plaintext letter. Since the ciphertext can contain more than 26 types, we need a larger alphabet system-we use a numeric substitution alphabet in our experiments.
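A homophonic key can be sketched as a 1-to-many table in the enciphering direction whose inversion is a deterministic function. The symbol lists below are hypothetical (the key used in the experiments comes from the URL cited in the Data paragraph below), apart from the fact noted above that "L" can surface as 51 or 84:

import random

HOMOPHONES = {                      # hypothetical key fragment
    "H": [65], "E": [17, 82, 99], "L": [51, 84],
    "O": [5, 60], "W": [73], "R": [42], "D": [11], " ": [" "],
}
REVERSE = {c: p for p, subs in HOMOPHONES.items() for c in subs}

def encipher(text):
    # 1-to-many: pick a random homophone for each plaintext letter
    return [random.choice(HOMOPHONES[ch]) for ch in text]

def decipher(symbols):
    # deterministic: each cipher symbol maps back to a single letter
    return "".join(REVERSE[s] for s in symbols)

ct = encipher("HELLO WORLD")
print(ct)                           # e.g. [65, 99, 51, 84, 5, ' ', 73, ...]
print(decipher(ct))                 # HELLO WORLD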
Data: For our decipherment experiments on homophonic ciphers, we use the same 414-letter English plaintext used in Section 2.1.
We encrypt this message using a homophonic substitution key (available from http://www.simonsingh.net/The Black Chamber/homophoniccipher.htm).
As before, we preserve spaces between words in the ciphertext. Figure 1 (middle) displays a section of the homophonic cipher (with spaces) and the original plaintext message used in our experiments.
Homophonic Ciphers without spaces (Zodiac-408 cipher)
In the previous two cipher systems, the word-boundary information was preserved in the cipher. We now consider a more difficult homophonic cipher, obtained by removing the space characters from the original plaintext. The English plaintext from the previous example now looks like this: "HELLOWORLD...". Without the word boundary information, typical dictionary-based decipherment attacks fail on such ciphers.
Zodiac-408 cipher: Homophonic ciphers without spaces have been used extensively in the past to encrypt secret messages. One of the most famous homophonic ciphers in history was used by the infamous Zodiac serial killer in the 1960's. The killer sent a series of encrypted messages to newspapers and claimed that solving the ciphers would reveal clues to his identity. The identity of the Zodiac killer remains unknown to date. However, the mystery surrounding this has sparked much interest among cryptanalysis experts and amateur enthusiasts.
The Zodiac messages include two interesting ciphers: (1) a 408-symbol homophonic cipher without spaces (which was solved manually by hand), and (2) a similar looking 340-symbol cipher that has yet to be solved.
Here is a sample of the Zodiac-408 cipher message:
... and the corresponding section from the original English plaintext message:
I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I S M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O S T D A N G E R O U E A N A M A L O F A L L T O K I L L S O M E T H I N G G I ...
Besides the difficulty with missing word boundaries and the non-determinism associated with the key, the Zodiac-408 cipher poses several additional challenges which make it harder to solve than any standard homophonic cipher. There are spelling mistakes in the original message (for example, the English word "PARADISE" is misspelt as "PARADICE") which can divert a dictionary-based attack. Also, the last 18 characters of the plaintext message do not seem to make any sense ("EBEORIETEMETHHPITI").
Data: Figure 1 (bottom) displays the Zodiac-408 cipher (consisting of 408 tokens, 54 symbol types) along with the original plaintext message. We run the new decipherment method (described in Section 3.1) and show that our approach can successfully solve the Zodiac-408 cipher.
Decipherment
Given a ciphertext message c_1...c_n, the goal of decipherment is to uncover the hidden plaintext message p_1...p_n. The size of the keyspace (i.e., the number of possible key mappings) that we have to navigate during decipherment is huge: a simple substitution cipher has a keyspace size of 26!, whereas a homophonic cipher such as the Zodiac-408 cipher has 26^54 possible key mappings.
Next, we describe a new Bayesian decipherment approach for tackling substitution ciphers.
Bayesian Decipherment
Bayesian inference methods have become popular in natural language processing (Goldwater and Griffiths, 2007;Finkel et al., 2005;Blunsom et al., 2009;Chiang et al., 2010). Snyder et al. (2010) proposed a Bayesian approach in an archaeological decipherment scenario. These methods are attractive for their ability to manage uncertainty about model parameters and allow one to incorporate prior knowledge during inference. A common phenomenon observed while modeling natural language problems is sparsity. For simple letter substitution ciphers, the original substitution key exhibits a 1-to-1 correspondence between the plaintext letters and cipher types. It is not easy to model such information using conventional methods like EM. But we can easily specify priors that favor sparse distributions within the Bayesian framework.
Here, we propose a novel approach for deciphering substitution ciphers using Bayesian inference. Rather than enumerating all possible keys (26! for a simple substitution cipher), our Bayesian framework requires us to sample only a small number of keys during the decipherment process.
Probabilistic Decipherment: Our decipherment method follows a noisy-channel approach. We are faced with a ciphertext sequence c = c_1...c_n and we want to find the (English) letter sequence p = p_1...p_n that maximizes the probability P(p|c).
We first formulate a generative story to model the process by which the ciphertext sequence is generated.
1. Generate an English plaintext sequence p = p 1 ...p n , with probability P (p).
2. Substitute each plaintext letter p i with a ciphertext token c i , with probability P (c i |p i ) in order to generate the ciphertext sequence c = c 1 ...c n .
We build a statistical English language model (LM) for the plaintext source model P (p), which assigns a probability to any English letter sequence. Our goal is to estimate the channel model parameters θ in order to maximize the probability of the observed ciphertext c:
We build a statistical English language model (LM) for the plaintext source model P(p), which assigns a probability to any English letter sequence. Our goal is to estimate the channel model parameters θ in order to maximize the probability of the observed ciphertext c:

$$\arg\max_{\theta} P_{\theta}(c) = \arg\max_{\theta} \sum_{p} P_{\theta}(p, c) \qquad (1)$$
$$= \arg\max_{\theta} \sum_{p} P(p) \cdot P_{\theta}(c \mid p) \qquad (2)$$
$$= \arg\max_{\theta} \sum_{p} P(p) \cdot \prod_{i=1}^{n} P_{\theta}(c_i \mid p_i) \qquad (3)$$
We estimate the parameters θ using Bayesian learning. In our decipherment framework, a Chinese Restaurant Process formulation is used to model both the source and channel. The detailed generative story using CRPs is shown below:
1. i ← 1
2. Generate the English plaintext letter p_1, with probability P_0(p_1)
3. Substitute p_1 with cipher token c_1, with probability P_0(c_1 | p_1)
4. i ← i + 1
5. Generate English plaintext letter p_i, with probability
α · P 0 (p i |p i−1 ) + C i−1 1 (p i−1 , p i ) α + C i−1 1 (p i−1 )
Plaintext: D E C I P H E R M E N T
I S T H E A N A L Y S I S O F D O C U M E N T S W R I T T E N I N A N C I E N T L A N G U A G E S W H E R E
T H E ... Ciphertext: i n g c m p n q s n w f c v f p n o w o k t v c v h u i h g z s n w f v r q c f f n w c w o w g c n w f k o w a z o a n v r p n q n f p n ...
Bayesian solution: D E C I P H E R M E N T I S T H E A N A L Y S I S O F D O C U M E N T S W R I T T E N I N A N C I E N T L A N G U A G E S W H E R E T H E ...
Plaintext: D E C I P H E R M E N T I S T H E A N A L Y S I S
Ciphertext:
Plaintext:
Bayesian solution (final decoding): 6. Substitute p i with cipher token c i , with probability
I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I A M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O A T D A N G E R T U E A N A M A L O F A L L ... (with spaces shown): I L I K E K I L L I N G P E O P L E B E C A U S E I T I S S O M U C H F U N I T I
A M O R E F U N T H A N K I L L I N G W I L D G A M E I N T H E F O R R E S T B E C A U S E M A N I S T H E M O A T D A N G E R T U E A N A M A L O F A L L ...β · P 0 (c i |p i ) + C i−1 1 (p i , c i ) β + C i−1 1 (p i )
7. With probability P quit , quit; else go to Step 4.
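The cache probabilities in steps 5 and 6 can be computed with a small helper; the following sketch is illustrative (the class and method names are ours, not the authors'), with `base_prob` standing for P_0 and `concentration` for α or β:

```python
from collections import Counter

class CRPModel:
    """Cache model: P(event | context) mixes a base distribution with counts
    of previous decisions, as in steps 5 and 6 of the generative story."""

    def __init__(self, base_prob, concentration):
        self.base_prob = base_prob      # function (context, event) -> P0
        self.alpha = concentration      # Dirichlet prior hyperparameter
        self.pair_counts = Counter()    # cache counts C(context, event)
        self.context_counts = Counter() # cache counts C(context)

    def prob(self, context, event):
        num = self.alpha * self.base_prob(context, event) \
              + self.pair_counts[(context, event)]
        den = self.alpha + self.context_counts[context]
        return num / den

    def observe(self, context, event):
        self.pair_counts[(context, event)] += 1
        self.context_counts[context] += 1
```

Instantiating one such model over letter bigrams (the source) and one over letter-to-symbol substitutions (the channel) reproduces the two fractions above.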
This defines the probability of any given derivation, i.e., any plaintext hypothesis corresponding to the given ciphertext sequence. The base distribution P_0 represents prior knowledge about the model parameter distributions. For the plaintext source model we use probabilities from an English language model, and for the channel model we specify a uniform distribution (i.e., a plaintext letter can be substituted with any given cipher type with equal probability). C_1^{i-1} represents the count of events occurring before plaintext letter p_i in the derivation (we call this the "cache"). α and β represent Dirichlet prior hyperparameters over the source and channel models respectively. A large prior value implies that characters are generated from the base distribution P_0, whereas a smaller value biases characters to be generated with reference to previous decisions inside the cache (favoring sparser distributions).
Efficient inference via type sampling: We use a Gibbs sampling (Geman and Geman, 1984) method for performing inference on our model. We could follow a point-wise sampling strategy, where we sample plaintext letter choices for every cipher token, one at a time. But we already know that the substitution ciphers described here exhibit determinism in the deciphering direction,1 i.e., although we have no idea about the key mappings themselves, we do know that there exists only a single plaintext letter mapping for every cipher symbol type in the true key. Sampling plaintext choices for every cipher token separately is therefore not an efficient strategy: our sampler may spend too much time exploring invalid keys (which map the same cipher symbol to different plaintext letters).
Instead, we use a type sampling technique similar to the one proposed by Liang et al. (2010). Under this scheme, we sample plaintext letter choices for each cipher symbol type. In every step, we sample a new plaintext letter for a cipher type and update the entire plaintext hypothesis (i.e., plaintext letters at all corresponding positions) to reflect this change. For example, if we sample a new choice p_new for a cipher symbol which occurs at positions 4, 10, 18, then we update plaintext letters p_4, p_10 and p_18 with the new choice p_new.
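A sketch of one such type-sampling sweep (hypothetical names; `derivation_logprob` would score the full derivation implied by the current key, e.g. using the cache model above):

```python
import math
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def type_sampling_sweep(cipher_symbols, key, derivation_logprob, rng=random):
    """One Gibbs pass over cipher symbol *types* (not tokens).

    cipher_symbols: iterable of distinct cipher symbol types
    key: current dict cipher_symbol -> plaintext letter (updated in place)
    derivation_logprob: function (key) -> log probability of the derivation;
        changing one type's mapping changes the plaintext at all positions.
    """
    for symbol in cipher_symbols:
        logps = []
        for letter in ALPHABET:
            key[symbol] = letter          # update all positions at once
            logps.append(derivation_logprob(key))
        # sample a new letter proportionally to exp(logp)
        m = max(logps)
        weights = [math.exp(lp - m) for lp in logps]
        r, acc = rng.random() * sum(weights), 0.0
        for letter, w in zip(ALPHABET, weights):
            acc += w
            if r <= acc:
                key[symbol] = letter
                break
    return key
```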
Using the property of exchangeability, we derive an incremental formula for re-scoring the probability of a new derivation based on the probability of the old derivation: when sampling at position i, we pretend that the affected area (within a context window around i) in the current plaintext hypothesis occurs at the end of the corpus, so that both the old and new derivations share the same cache.2 While we may make corpus-wide changes to a derivation in every sampling step, exchangeability allows us to perform the scoring efficiently.
Combining letter n-gram language models with word dictionaries: Many existing probabilistic approaches use statistical letter n-gram language models of English to assign P (p) probabilities to plaintext hypotheses during decipherment. Other decryption techniques rely on word dictionaries (using words from an English dictionary) for attacking substitution ciphers.
Unlike previous approaches, our decipherment method combines information from both sources: letter n-grams and word dictionaries. We build an interpolated word+n-gram LM and use it to assign P(p) probabilities to any plaintext letter sequence p_1...p_n.3 The advantage is that it helps direct the sampler towards plaintext hypotheses that resemble natural language: high-probability letter sequences which form valid words such as "H E L L O" instead of sequences like "T X H R T". In addition, using letter n-gram information makes our model robust against variations in the original plaintext (for example, unseen words or misspellings, as in the case of the Zodiac-408 cipher), which can easily throw off dictionary-based attacks. Also, it is hard for a point-wise (or type) sampler to "find words" starting from a random initial sample, but easier to "find n-grams".
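One simple way to realize such an interpolated LM is sketched below; the 0.9/0.1 weights follow the setting reported in footnote 3, while interpolating at the whole-sequence level is our simplifying assumption:

```python
import math

def interpolated_logprob(letter_seq, word_lm_prob, ngram_lm_prob,
                         word_weight=0.9, ngram_weight=0.1):
    """Interpolated word + letter-n-gram LM score of a letter sequence.

    word_lm_prob / ngram_lm_prob: functions mapping a letter sequence
    (including space characters) to a probability under each model.
    """
    p = (word_weight * word_lm_prob(letter_seq)
         + ngram_weight * ngram_lm_prob(letter_seq))
    return math.log(p) if p > 0 else float("-inf")
```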
Sampling for ciphers without spaces: For ciphers without spaces, dictionaries are hard to use because we do not know where words start and end. We introduce a new sampling operator which counters this problem and allows us to perform inference using the same decipherment model described earlier. In a first sampling pass, we sample from 26 plaintext letter choices (e.g., "A", "B", "C", ...) for every cipher symbol type as before. We then run a second pass using a new sampling operator that iterates over adjacent plaintext letter pairs p i−1 , p i in the current hypothesis and samples from two choices-(1) add a word boundary (space character " ") between p i−1 and p i , or (2) remove an existing space character between p i−1 and p i .
For example, given the English plaintext hypothesis "... A B O Y ...", there are two sampling choices for the letter pair A,B in this second pass. If we decide to add a word boundary between A and B, the new plaintext hypothesis becomes "... A _ B O Y ..." (with "_" marking the inserted space, which splits the sequence into the words "A" and "BOY").
We compute the derivation probability of the new sample using the same efficient scoring procedure described earlier. The new strategy allows us to apply Bayesian decipherment even to ciphers without spaces. As a result, we now have a new decipherment method that consistently works for a range of different types of substitution ciphers.
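The following sketch shows how such a boundary-sampling pass might look (illustrative; the hypothesis representation and helper names are assumptions, not the authors' code):

```python
import math
import random

def sample_word_boundaries(plaintext, seq_logprob, rng=random):
    """Second sampling pass for ciphers without spaces.

    plaintext: current hypothesis as a string, possibly containing spaces
    seq_logprob: function string -> log P(string) under the word+n-gram LM
    For every gap between adjacent letters we sample between the two
    variants 'with a space' and 'without a space'.
    """
    letters = [ch for ch in plaintext if ch != " "]
    boundaries = set()          # letter indices preceded by a space
    pos = 0
    for ch in plaintext:
        if ch == " ":
            boundaries.add(pos)
        else:
            pos += 1

    def render(bounds):
        out = []
        for i, ch in enumerate(letters):
            if i in bounds and i > 0:
                out.append(" ")
            out.append(ch)
        return "".join(out)

    for gap in range(1, len(letters)):
        lp_with = seq_logprob(render(boundaries | {gap}))
        lp_without = seq_logprob(render(boundaries - {gap}))
        m = max(lp_with, lp_without)
        p_with = math.exp(lp_with - m)
        p_without = math.exp(lp_without - m)
        if rng.random() < p_with / (p_with + p_without):
            boundaries.add(gap)
        else:
            boundaries.discard(gap)
    return render(boundaries)
```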
Decoding the ciphertext: After the sampling run has finished, 4 we choose the final sample as our English plaintext decipherment output.
Experiments and Results
We run decipherment experiments on different types of letter substitution ciphers (described in Section 2). In particular, we work with the following three ciphers:
(a) 414-letter Simple Substitution Cipher
(b) 414-letter Homophonic Cipher (with spaces)
(c) Zodiac-408 Cipher

Methods: For each cipher, we run and compare the output from two different decipherment approaches:
1. EM method using letter n-gram LMs, following the approach of Knight et al. (2006). They use the EM algorithm to estimate the channel parameters θ during decipherment training. The given ciphertext c is then decoded by using the Viterbi algorithm to choose the plaintext decoding p that maximizes P(p) · P_θ(c|p)^3 (stretching the channel probabilities).
2. Bayesian Decipherment method using word+n-gram LMs (novel approach described in Section 3.1).
Evaluation:
We evaluate the quality of a particular decipherment as the percentage of cipher tokens that are decoded correctly.
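This metric is straightforward to compute; a minimal sketch:

```python
def decipherment_accuracy(gold_plaintext, predicted_plaintext):
    """Percentage of cipher tokens decoded correctly (position-wise match)."""
    pairs = list(zip(gold_plaintext, predicted_plaintext))
    correct = sum(1 for g, p in pairs if g == p)
    return 100.0 * correct / len(pairs) if pairs else 0.0
```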
Results: Figure 2 compares the decipherment performance for the EM method with Bayesian decipherment (using type sampling and sparse priors) on three different types of substitution ciphers. Results show that our new approach (Bayesian) outperforms the EM method on all three ciphers, solving them completely. Even with a 3-gram letter LM, our method yields a +63% improvement in decipherment accuracy over EM on the homophonic cipher with spaces. We observe that the word+3-gram LM proves highly effective when tackling more complex ciphers and cracks the Zodiac-408 cipher. Figure 1 shows samples from the Bayesian decipherment output for all three ciphers. For ciphers without spaces, our method automatically guesses the word boundaries for the plaintext hypothesis. For the Zodiac-408 cipher, we compare the performance achieved by Bayesian decipherment under different settings:
• Letter n-gram versus word+n-gram LMs: Figure 2 shows that using a word+3-gram LM instead of a 3-gram LM results in a +75% improvement in decipherment accuracy.
• Sparse versus non-sparse priors: We find that using a sparse prior for the channel model (β = 0.01 versus 1.0) helps for such problems and produces better decipherment results (97.8% versus 24.0% accuracy).
• Type versus point-wise sampling: Unlike point-wise sampling, type sampling quickly converges to better decipherment solutions. After 5000 sampling passes over the entire data (both runs seeded with the same random initial sample), decipherment output from type sampling scores 97.8% accuracy, compared to 14.5% for the point-wise sampling run.

We also perform experiments on shorter substitution ciphers. On a 98-letter simple substitution cipher, EM using a 3-gram LM achieves 41% accuracy, whereas the method from Ravi and Knight (2009) scores 84% accuracy. Our Bayesian method performs best in this case, achieving 100% accuracy with the word+3-gram LM.
Conclusion
In this work, we presented a novel Bayesian decipherment approach that can effectively solve a variety of substitution ciphers. Unlike previous approaches, our method combines information from letter n-gram language models and word dictionaries and provides a robust decipherment model. We empirically evaluated the method on different substitution ciphers and achieved perfect decipherments on all of them. Using Bayesian decipherment, we can successfully solve the Zodiac-408 cipher, the first time this has been achieved by a fully automatic method in a strict decipherment scenario.
For future work, there are other interesting decipherment tasks where our method can be applied. One challenge is to crack the unsolved Zodiac-340 cipher, which presents a much harder problem than the solved version.
"
HELLOWORLD ..." and the corresponding ciphertext is: "65 82 51 84 05 60 54 42 51 45 ..."
Figure 1: Samples from the ciphertext sequence, corresponding English plaintext message and output from Bayesian decipherment (using word+3-gram LM) for three different ciphers: (a) Simple Substitution Cipher (top), (b) Homophonic Substitution Cipher with spaces (middle), and (c) Zodiac-408 Cipher (bottom).
Figure 2: Comparison of decipherment accuracies for EM versus the Bayesian method when using different language models of English on the three substitution ciphers: (a) 414-letter Simple Substitution Cipher, (b) 414-letter Homophonic Substitution Cipher (with spaces), and (c) the famous Zodiac-408 Cipher.
This assumption does not strictly apply to the Zodiac-408 cipher where a few cipher symbols exhibit non-determinism in the decipherment direction as well.
The relevant context window that is affected when sampling at position i is determined by the word boundaries to the left and right of i.
3 We set the interpolation weights for the word and n-gram LM to (0.9, 0.1). The word-based LM is constructed from a dictionary consisting of 9,881 frequently occurring words collected from Wikipedia articles. We train the letter n-gram LM on 50 million words of English text available from the Linguistic Data Consortium.
For letter substitution decipherment we want to keep the language model probabilities fixed during training, and hence we set the prior on that model to be high (α = 10^4). We use a sparse prior for the channel (β = 0.01). We instantiate a key which matches frequently occurring plaintext letters to frequent cipher symbols, use this to generate an initial sample for the given ciphertext, and run the sampler for 5000 iterations. We use a linear annealing schedule during sampling, decreasing the temperature from 10 to 1.
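A sketch of the linear annealing schedule described in this footnote (illustrative; exactly how the temperature enters the sampling weights is our assumption):

```python
import math

def linear_temperature(iteration, total_iters=5000, t_start=10.0, t_end=1.0):
    """Linearly anneal the sampling temperature from 10 down to 1."""
    frac = min(iteration / float(total_iters), 1.0)
    return t_start + frac * (t_end - t_start)

def anneal_weights(logps, temperature):
    """Sampling weights at a given temperature: a high temperature flattens
    the distribution (more exploration early in the run)."""
    m = max(logps)
    return [math.exp((lp - m) / temperature) for lp in logps]
```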
Acknowledgements

The authors would like to thank the reviewers for their comments. This research was supported by NSF grant IIS-0904684.

References
Eric Corlett and Gerald Penn. 2010. An exact A* method for deciphering letter-substitution ciphers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1040-1047.
Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.
Persi Diaconis. 2008. The Markov Chain Monte Carlo revolution. Bulletin of the American Mathematical Society, 46(2):179-205.
Jenny Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 363-370.
Ravi Ganesan and Alan T. Sherman. 1993. Statistical techniques for language recognition: An introduction and guide for cryptanalysts. Cryptologia, 17(4):321-366.
Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741.
Sharon Goldwater and Thomas Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744-751.
Thomas Jakobsen. 1995. A fast method for cryptanalysis of substitution ciphers. Cryptologia, 19(3):265-274.
Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics, pages 499-506.
Percy Liang, Michael I. Jordan, and Dan Klein. 2010. Type-based MCMC. In Proceedings of the Conference on Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 573-581.
Edwin Olson. 2007. Robust dictionary attack of short simple substitution ciphers. Cryptologia, 31(4):332-342.
David Oranchak. 2008. Evolutionary algorithm for decryption of monoalphabetic homophonic substitution ciphers encoded as constraint satisfaction problems. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, pages 1717-1718.
Shmuel Peleg and Azriel Rosenfeld. 1979. Breaking substitution ciphers using a relaxation algorithm. Communications of the ACM, 22(11):598-605.
Sujith Ravi and Kevin Knight. 2008. Attacking decipherment problems optimally with low-order n-gram models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 812-819.
Sujith Ravi and Kevin Knight. 2009. Probabilistic methods for a Japanese syllable cipher. In Proceedings of the International Conference on the Computer Processing of Oriental Languages (ICCPOL), pages 270-281.
Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048-1057. |
15,196,995 | End-to-End Evaluation of Machine Interpretation Systems: A Graphical Evaluation Tool | VERBMOBIL as a long-term project of the Federal Ministry of Education, Science, Research and Technology aims at developing a mobile translation system for spontaneous speech. The source-language input consists of human speech (English, German or Japanese), the translation (bidirectional English-German and Japanese-German) and target-language output is effected by the VERBMOBIL system. As to the innovative character of the project new methods for end-to-end evaluation had to be developed by a subproject which has been established especially for this purpose. In this paper we present criteria for the evaluation of speech-tospeech translation systems and a tool for judging the translation quality which is called Graphical Evaluation Tool (GET) 2 . | [] | End-to-End Evaluation of Machine Interpretation Systems: A Graphical Evaluation Tool
Susanne J Jekat
Lorenzo Tessiore
Computer Science Department
University of Hamburg
Vogt-Kölln Str. 30
End-to-End Evaluation of Machine Interpretation Systems: A Graphical Evaluation Tool
The responsibility for the contents of this study lies with the authors.
2 To simplify the presentation of this paper, we only refer to the language pair German-English.
VERBMOBIL, as a long-term project of the Federal Ministry of Education, Science, Research and Technology, aims at developing a mobile translation system for spontaneous speech. The source-language input consists of human speech (English, German or Japanese); the translation (bidirectional English-German and Japanese-German) and the target-language output are effected by the VERBMOBIL system. Owing to the innovative character of the project, new methods for end-to-end evaluation had to be developed by a subproject established especially for this purpose. In this paper we present criteria for the evaluation of speech-to-speech translation systems and a tool for judging translation quality, called the Graphical Evaluation Tool (GET).2
Introduction
The way an evaluation is performed is very often driven by the characteristics of the system that has to be judged (Andelfinger, 1994). For the Verbmobil project, the evaluation should meet three aspects:
• the needs of the developers,
• the needs of the user,
• the constraints on the evaluation of translation quality in general.

In our concept and performance of evaluation we tried to combine these three aspects, but one should keep in mind that the constraints on translation quality in general were originally meant to describe human translation, with all its varieties and specific stylistic features. Since machine interpretation can still only transfer texts from limited domains, it seems legitimate in our view to simplify some of the procedures applied to the evaluation of human translation. An evaluation method based on any well-known standard (EAGLES, 1995; Sparck Jones and Galliers, 1996; Manzi, 1996) could not have integrated these three aspects, as traditional evaluation methods are intended for comparative evaluations rather than for the investigation of a system during its development; therefore, to meet our requirements, we developed an integrated methodology and a tool for speech-to-speech quality evaluation which also allows easy access to the data.
Translation Quality
Evaluation of translation quality is a complicated matter per se. One reason may be the diversification of the field, another the different manifestations of translation (from simultaneous interpretation to written translations). In this paper we refer to dialogue interpretation, that is, the transfer of spoken language 1 to spoken language 2. In our view, interpretation differs clearly from written translation:

1. The process of writing a translation can take as much time as necessary (within a reasonable time-frame), but interpretation has to be fast because the ongoing communication should not be disturbed.
2. Written translation has to be very clear in order to avoid misunderstandings. The receiver of a written translation, in contrast to the receiver of an interpretation, has no contact with the translator and no access to situational or pragmatic cues to resolve ambiguity. On the other hand, written translation offers the opportunity to use footnotes and explanations, which cannot be used in interpretation because of time pressure.
3. The input for the translation of written texts is complete and well-formed. Deviations from the well-known and well-documented standard of written texts are in most cases not mistakes but motivated by special intentions of the author. In contrast, the input for interpretation is incomplete and sometimes ill-formed compared to the structures of written language. Additionally, specific characteristics of a speaker or of the actual situation may influence the appearance of the input.

For these reasons, the claim that translation should preserve form and function of the original text is only applicable to written translations. An imagined 'perfect translation' should contain every piece of information of the source-language text. A deviation from this constraint should only be triggered by differences between source and target languages themselves (e.g. black grapes in English have to be referred to as blue grapes in German).
We are still far from an objective measurement for a 'perfect translation', and perhaps this goal cannot be reached, because form and function are not independent (e.g. maintaining the source-language function in the target-language text can interfere with the generation of an optimal target-language form) and researchers, like human beings in general, make different judgements on the importance of form or function. As far as speech-to-speech interpretation is concerned, the preservation of the text function is in our view more important, although function is not a fixed notion but has to be adapted to the concrete purpose of the actual communication. Speech-to-speech translation appears to be a classic case of covert translation, where the texts to be translated are designed for consumption rather than edification, and changes to form and content are possible in the interest of maintaining the function of the source-language text (House, 2000). We have outlined some principal constraints on the evaluation of translation quality and some characteristics of interpretation. The special needs of a user of a machine interpretation system are discussed in the following section.
User-Oriented Evaluation of Translation Quality
VERBMOBIL is dedicated to facilitating communication between speakers of different mother tongues by generating an adequate translation. In a dialogue, a speaker has two important tasks:

• to receive all important messages from the other speaker,
• to reach the communicative goal of the conversation.

As mentioned above, translation in speech-to-speech communication should support this by generating a target-language text which preserves all important messages and communicative functions of the source-language turn and which enables the ongoing conversation. Except in some rare cases, a word-for-word translation is no adequate strategy here.
• to receive all important messages from the other speaker, • to reach the communicative goal of the conversation. As mentioned above, translation in speech-to-speech communication should support this by generating a targetlanguage text which preserves all important messages and communicative functions of the source-language turn and which is enabling the ongoing conversation. Except in some rare cases, a word for word translation is no adequate strategy here.
Input and Output Quality
Above the criterion of simply continuing the communication, and beyond the struggle for a perfect translation, the user of a machine interpretation system should feel comfortable with the linguistic features of the system's output. In machine interpretation, where no pragmatic cues can be processed, the output depends on the input quality, so input and output are analysed in our evaluation according to three linguistic criteria:

1. syntactic correctness of input/output,
2. semantic correctness of input/output,
3. possibility of misunderstanding of input/output.

The design of this evaluation phase ignores the translation relation: input and output have to be judged separately, even though we suppose that there is a relation between input and output (i.e. translation) quality. By this procedure we try to avoid circularity of analysis.
Quality of Machine Interpretation
As to the translation quality of our system, the first step of analysis is the detection of a possible translation mismatch, which consists of a loss or change of information in the translation process. This judgement is effected by a simple yes/no decision, which is verified by a second step analysing the translation quality. In this second step, information elements of the input are compared to those of the output from the user's point of view (a small sketch of these counts follows below):

1. count of all information elements in the input,
2. count of essential information elements in the input,
3. count of information elements lost during the translation process (comparison between the number of all input elements and the number of those preserved in the output),
4. count of all information elements in the output,
5. count of additional information elements in the output as compared to the input.

For example, the turn 'when will we meet?' in our domain (cooperative negotiation dialogues in the travel planning domain) consists of five information elements:

1. when: wh-question referring to time,
2. will: tense marker,
3. we: both speakers will do something together,
4. meet: central goal of the communication,
5. dialogue act (request-suggest): the whole question motivates the hearer to suggest a time for the meeting.

The different steps of evaluation are combined into a complete analysis of input, output and translation quality, which has to be confirmed by a final judgement:

a) a translation is judged as 'Good' if it is correct and does not contain any mismatches,
b) a translation is judged as 'Intermediate' if it contains mistakes or mismatches but communication is successful,
c) a translation is judged as 'Bad' if it contains mistakes and/or mismatches and communication is interrupted.
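A minimal Python sketch of the five counts from the second evaluation step (illustrative only; representing information elements as string labels is an assumption, not the actual evaluation format):

```python
def translation_element_counts(input_elems, output_elems, essential_elems):
    """The five counts used in the second evaluation step.

    input_elems / output_elems: sets of information elements identified by
    the evaluator in the source turn and in the translation.
    essential_elems: subset of input_elems judged essential.
    """
    return {
        "input_all": len(input_elems),
        "input_essential": len(essential_elems & input_elems),
        "lost": len(input_elems - output_elems),
        "output_all": len(output_elems),
        "inserted": len(output_elems - input_elems),
    }

# Example for the turn 'when will we meet?' (labels are hypothetical):
input_elems = {"wh-time", "tense:future", "subject:we", "meet", "da:request-suggest"}
print(translation_element_counts(input_elems, input_elems - {"tense:future"}, input_elems))
```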
Developer's Needs
End-to-end evaluation has to focus on the interaction between user and system in order to deliver reliable results on the system's performance under realistic application conditions. Besides the comparison of different systems, the central goal of any evaluation is to enable developers to improve parts of or the whole system. As no systems with an architecture comparable to VERBMOBIL exist, our evaluation focuses on the latter effect. In the following, we will describe those features of VERBMOBIL that determine the structure of the evaluation.
Characteristics of the System
The VERBMOBIL system is completely speech-driven and can be used either in face-to-face communication or via telephone. The spoken input is processed by different speech recognizers (which are themselves compared in a separate acoustic evaluation) and then passed to the several different models of translation used in VERBMOBIL. The output always consists of only one of the translations, which is selected by the actual system configuration. In the framework of this paper it would lead too far to describe all translation models; we only present two examples here:

1. dialogue-act based translation: the generation of the target-language turn is based on the central function (the dialogue act) of the input and some domain-specific information elements,
2. translation based on a deep analysis of every lexical item and of the syntactic structure of the source-language turn.
As evaluators should judge from the user's point of view and should not be influenced by a preference for one of the translation models, we test different configurations of the system but show them only one result at a time.
Examples for configurations are first wins (the first result that is generated is taken into consideration) and waiting for deep, where the result of the deep analysis gets a very high priority. During one evaluation phase (which normally lasts two weeks) the different configurations are tested in comparable numbers of test suites.
Feedback to Developers during Evaluation
Within one evaluation phase, more than one version of the system is tested, as changes or updates are possible. The test results are continuously transmitted to the developers, who are informed immediately if problems arise with a certain configuration. During the tests, evaluators and developers try to find the weak points of a module or a module combination; changes are implemented and installed whenever possible throughout the evaluation phase.
Feedback to Developers after Evaluation
Once an end-to-end evaluation phase is finished, all results are analysed under different aspects and presented to the project members through the Graphical Evaluation Tool GET (see figure 1 above); using the same tool both for the evaluation procedure and for the presentation of the results makes the evaluation method completely transparent and highly effective. Ongoing improvement of the system is then based on the latest evaluation results.
The Design of Experiments and the Evaluation Tool GET
During the end-to-end evaluation the VERBMOBIL system is tested by English and German native speakers, and the translations of the system are judged on the basis of the criteria mentioned in the section 'Translation Quality' (see above). Evaluators use the Graphical Evaluation Tool, GET, which facilitates judgements according to these criteria (see also Jekat et al., 1999).
Methodology of Experiments
Each test session consists of a dialogue between an English and a German speaker. The communicative goal of the conversation is to plan a joint business trip. Neither speaker needs to know the other language; VERBMOBIL functions as a machine interpreter. As visual cues are not processed by the system and therefore should not influence the ongoing communication, subjects are isolated in different rooms and do not see each other. They can only communicate with the help of the system. Both speakers are equipped with timetables and hotel lists, and for every topic to be discussed they indicate on a special sheet their impression of having reached an agreement or not. A supervisor attends the tests; he does not interact with the speakers and writes a protocol of every test session. The dialogues are recorded and transcribed. The transcriptions and the log files of the system serve as input for the evaluation tool GET. Bilingual speakers or human interpreters then judge all relevant linguistic aspects of the dialogues.
The Graphical Evaluation Tool (GET)
The end-to-end evaluation of bilingual dialogues interpreted by the VERBMOBIL system is performed with the help of the GET, which was specially developed for this purpose. Four different turns are displayed at the same time. The first represents the input from one of the dialogue partners (in our example the American English speaker); the box to the right of it displays the German translation by VERBMOBIL that is currently being analysed. The two boxes below show how the conversation continues: the answer of the German speaker in the left box and the English VERBMOBIL translation in the right one. With the help of the GET the translation is then evaluated according to all criteria described in the section 'Quality of Machine Interpretation' (see above). As the tool is almost completely mouse-driven, it is very easy to use and it facilitates the processing of a large database (during the second phase of the VERBMOBIL project more than 300 dialogues had to be evaluated). The possibility of displaying a sequence of four turns at the same time allows the evaluators to also consider the dynamic features of the dialogues and to get an idea of the ongoing communication as a process. Separate windows of the GET show the main internal interfaces of the system. The first window shows all the translations produced by the different translation modules of the system and highlights the segments which compose the final output; evaluators can decide whether the chosen segments are the best option or not. The second window shows the output of the speech recognizer for the current turn and highlights the wrongly recognized words. The tool can produce statistics for speech recognition, providing the word accuracy, the complete list of the words used for both languages, the occurrence and recognition rate for each uttered word, and the sentence context for the words which were not recognized. Two further windows support statistical computation: one window allows the user to put constraints on the statistics computation by choosing a pattern that a turn should match in order to be considered in the analysis; the other window shows the statistical results. For example, it is possible to see what percentage of 'good' translations is based on syntactically incorrect input, or to measure the percentage of 'intermediate' translations for which there is a translation mismatch and which are syntactically incorrect. This procedure allows the investigation of the relations between different evaluation criteria.
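A sketch of the kind of constrained statistics GET supports (illustrative; field names such as `quality` and `input_syntax_ok` are assumptions, not GET's actual data model):

```python
def conditional_quality_rates(turns, condition):
    """Percentage of 'good'/'intermediate'/'bad' translations among the
    turns matching a user-chosen constraint, as in GET's statistics windows.

    turns: list of dicts holding the evaluator judgements for each turn
    condition: predicate selecting the turns to consider
    """
    selected = [t for t in turns if condition(t)]
    if not selected:
        return {}
    rates = {}
    for quality in ("good", "intermediate", "bad"):
        n = sum(1 for t in selected if t["quality"] == quality)
        rates[quality] = 100.0 * n / len(selected)
    return rates

# e.g., quality distribution for syntactically incorrect input:
# conditional_quality_rates(turns, lambda t: not t["input_syntax_ok"])
```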
Results
With the help of the GET many dialogues could be analysed, and relevant linguistic characteristics of the translation quality have been isolated. We also obtained information on the relation between the criteria related to the features of the system and the criteria related to the quality perceived by the users; we present here the results related to this topic. The graphics show that translation quality is much more sensitive to the insertion of information elements than to their deletion. Figure 2 demonstrates that when at least 50% of the information elements are preserved in the translation, the rate of 'good' or 'intermediate' translations is greater than the rate of 'bad' translations. On the contrary, figure 3 shows that an insertion of more than 20% of information elements results in the predominance of negative judgements on translation quality. Figure 4 shows the relation between translation quality and the occurrence of translation mismatches. Some translations which contain mismatches are even judged as good translations, whereas in general the absence of a translation mismatch seems to be the central criterion for a judgement as a good translation. The translation mismatch is a sufficient criterion for a negative translation quality, but it is not a necessary one: 40% of turns have an intermediate translation quality although they contain a translation mismatch. This means that these translations still help to continue the conversation. Figure 6 shows the quality of the output related to the translation quality. The value axis shows the percentage of turns with different translation qualities and linguistic judgements. There is no significant correlation between semantic correctness or the absence of possible misunderstandings and a high translation quality, but we suppose that the former are more important than syntactic correctness. This becomes evident in figures 7 and 8, where the value axis shows the percentage of turns for different translation and output qualities.
The correlation between quality of the output, translation mismatch and translation quality is shown in figure 9, where all the analysed turns contain a translation mismatch.
Conclusions
The GET tool allows a visual representation of dialogues that is useful both in the evaluation phase and for the improvement of the system, because a large database can be processed. As to the translation quality of machine interpretation, our evaluation method reveals that preserving the ongoing communication as a qualitative criterion is more important than a possible loss of information caused by a translation mismatch. When the output quality is related to the translation quality, semantic correctness is preferred to syntactic correctness. Despite individual differences between human evaluators, all of them follow the same course of evaluation with the help of the GET. The statistical analysis of the results, as well as the quantity of evaluated turns and the number of different evaluators, lead in our view to a relatively objective evaluation of the translation quality achieved by VERBMOBIL.
Figure 1: Main window of the GET.
Figure 2: Effect of deletion of information elements on user-oriented judgement. Figure 3: Effect of insertion of information elements on user-oriented judgement.

In figure 2 the relation between the quality of translation and the number of translated information elements is shown. The same relation is shown in figure 3 concerning inserted information elements. The value axis shows the percentage of turns with different translation qualities; the category axis shows the minimal percentage of translated information elements (figure 2) and the maximal percentage of inserted information elements (figure 3) in one turn.
Figure 4: Effect of translation mismatch on user-oriented judgement.
Figure 5: Cumulative distribution of translation mismatch for the minimum percentage of translated information elements. Figure 6: Quality of the output related to translation quality.

Figure 5 presents the relation between translation mismatch and a loss of information elements. The category axis represents the minimal percentage of translated information elements; the value axis represents the percentage of turns that contain a translation mismatch. The more information elements are lost, the more translation mismatches occur. Even if this is obvious, it is interesting to notice that the loss of even a very low number of information elements can easily cause a translation mismatch.
Figure 7: Translation quality for positive output quality.
Figure 8: Translation quality for negative output quality.
Figure 9: Distribution of output quality when a translation mismatch occurs, for different translation qualities.
Andelfinger, U. (1994). Some remarks about the validation of information systems development. In R. Keil-Slawik (ed.), Interdisciplinary Foundation of Systems Design and Evaluation, Seminar Report 97 of the International Conference and Research Center for Computer Science, Schloss Dagstuhl, University of Saarland, Saarbrücken.
EAGLES - Expert Advisory Group on Language Engineering Standards (1995). Evaluation of Natural Language Processing Systems, Final Report. Document EAG-EWG-PR.2. Obtainable from ISSCO, University of Geneva.
House, J. (2000). Übersetzungsäquivalenz: ein Schlüsselbegriff in der Übersetzungswissenschaft? In Knapp, K. and Knapp-Potthoff, A. (eds.), Sprachmitteln und interkulturelle Kommunikation. München: iudicium.
International Standard ISO/IEC 9126 (1991). Information Technology - Software product evaluation - Quality characteristics and guidelines for their use. International Organization for Standardization, International Electrotechnical Commission.
Jekat, S.J., Tessiore, L. and Lause, B. (1999). Das Graphical Evaluation Tool für die End-to-End-Evaluation des Verbmobil Systems. Verbmobil-Techdoc Nr. 73, Universität Hamburg.
King, M. (1996). Evaluating natural language processing systems. Special edition of Communications of the ACM on Natural Language Processing, 39(1), pp. 73-79.
Lehman, M.M. (1980). Programs, life cycles, and laws of software evolution. Proceedings of the IEEE, 68(9), pp. 1060-1076.
Manzi, S., King, M. and Douglas, S. (1996). Working towards user-oriented evaluation. In Proceedings of the International Conference NLP+IA/TAL+AI, Moncton, Canada, pp. 155-160.
Sparck Jones, K. and Galliers, J.R. (1996). Evaluating Natural Language Processing Systems. Berlin: Springer-Verlag. |
169,309,636 | [] | Combiner analyse superficielle et profonde : bilan et perspectives Mots-clefs : Analyse syntaxique, analyse superficielle, analyse profonde
TALN 2005, Dourdan, 6-10 juin 2005
Philippe Blache pb@lpl.univ-aix.fr
Laboratoire Parole et Langage CNRS
Université de Provence
Combining shallow and deep parsing: assessment and perspectives
TALN 2005, Dourdan, 6-10 June 2005. Keywords: parsing, shallow and deep parsing.
Abstract: Parsing remains a complex problem, to the point that many applications resort only to shallow parsers. In this article we review the notions of shallow and deep parsing, proposing a first characterization of the notion of operational complexity for automatic parsing, which makes it possible to distinguish objects and relations that are more or less difficult to identify. On this basis, we survey the different techniques for characterizing and combining shallow and deep parsing, and propose a new approach that makes it possible to use the output of a shallow parser as the input of a deep one.
Introduction
The parsing problem remains a complex question from both a theoretical and a computational point of view. The solution generally adopted for processing large volumes of data or non-standard input is to resort to shallow parsers, which are robust and efficient but build only partial information. A number of studies propose combining shallow and deep parsing techniques, either to improve the efficiency of deep parsers by giving them better control over the process, or to offer an approach in which the desired type of analysis can be chosen according to need. This article surveys these different techniques and characterizes the notions of deep and shallow parsing. These characteristics are given not only from an operational point of view, but also by introducing the notion of complexity of the syntactic phenomena to be analyzed. This is a first attempt at a classification distinguishing phenomena that are easy to parse from more complex ones.
Shallow and deep parsing are generally distinguished according to the precision of the linguistic information built by a parser. The techniques used are usually different: probabilistic techniques are found mostly on the shallow-parsing side, while deep parsers tend to use symbolic approaches. This characterization must be completed by taking into account the purpose of the application using the parser. One must identify precisely the morphosyntactic or semantic needs in order to determine the required level of analysis (chunks for speech synthesis systems, identification of nominal objects for information retrieval applications, etc.). However, some applications require, even if only occasionally, more detailed information concerning the syntactic relations or the meaning effects of a given construction. We thus have, on the one hand, a distinction in terms of efficiency (shallow parsers are faster and more robust than deep parsers) and, on the other hand, a distinction in terms of purpose.
The question of determinism must be considered separately. While shallow parsers are deterministic, deep parsers handle ambiguity: all possibilities are taken into account during parsing, and the system provides several solutions when the ambiguity cannot be resolved. One way of reducing the complexity of a deep parser without turning it into a shallow one is to make it deterministic. At a first level, the input itself can be determinized by using a disambiguating tagger. Determinizing the analysis then consists in eliminating constructions while parsing is in progress. The pruning criteria used can be of very different kinds: probabilistic (e.g. syntactic information associated with weights), topological (formal properties of the structures built, e.g. tree depth, constituent size, etc.), or cognitive (categorization preferences, attachment preferences, etc.); a minimal sketch of such a probabilistic cut-off is given after the list below. These techniques make it possible to take decisions incrementally during parsing. They can be combined with delaying techniques, which consist in postponing certain choices and maintaining several solutions in parallel, for example by factorizing them. At this stage we can therefore give a few criteria distinguishing the two approaches:
• shallow parser: fast and robust; it provides a simple structuring in terms of non-recursive units, together with relations over these units;
• deep parser: provides a covering description of the constructions of the language, indicating the syntactic or syntactico-semantic relations between their constituents.
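As a rough illustration of the probabilistic cut-offs mentioned above, the following sketch (illustrative only; `score`, `beam_size` and `threshold` are assumed names, not from the paper) keeps only the most promising partial analyses:

```python
def prune_parses(partial_parses, score, beam_size=10, threshold=None):
    """Probabilistic cut-off used to determinize parsing: keep only the
    best-scoring partial analyses, optionally discarding any whose score
    falls below a fixed threshold."""
    ranked = sorted(partial_parses, key=score, reverse=True)
    kept = ranked[:beam_size]
    if threshold is not None:
        kept = [p for p in kept if score(p) >= threshold]
    return kept
```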
Several approaches make it possible to combine these two kinds of analysis; the next section presents them. We then return to a characterization of the complexity of the phenomena to be analyzed, before describing, in the last part, a hybrid technique allowing a deep parser to take advantage of a shallow analysis.
Approaches combining shallow and deep parsing
A number of works propose using shallow and deep parsing techniques simultaneously. A workshop was recently devoted to this problem (cf. [Hinrichs04]) and provided an overview of the situation. In most cases, the technique consists in using shallow analysis as a pre-processing step for deep parsing. By shallow analysis one mainly means here formatting the input, aiming at disambiguating the morphosyntactic tagging and handling unknown words, possibly going as far as analyzing entire units such as named entities, for example by means of local grammars. This kind of approach can prove very efficient and offers the advantage of reusing, or even adapting, different components: [Grover01], for example, describes the reuse of tools originally designed for GPSG parsing. In this type of approach, the deep analysis is thus controlled by limiting the parser's search space through a reduction of the number of tags to be taken into account. Moreover, entire parts can be pre-analyzed, which further reduces the number of structures to be built. The major interest of this kind of approach lies in the fact that the parser does not have to be modified: the same system can therefore process raw or pre-processed input.
A second, relatively uncommon type of approach consists in using the results of a shallow syntactic parser. The input of the deep parser is the output of the shallow parser, which requires adapting the deep parser. A first technique consists in modifying (the term lifting is also used) the information built by the shallow parser. This concerns lexical units as well as syntactic groups. In the first case, a simple unit is transformed into an enriched structure suited to the format of the deep parser, for example by means of matching patterns. Such an approach relies on a decentralized representation of information in the form of constraints. The granularity of the analysis is tuned by varying the parser's tolerance through a threshold of constraints that may be relaxed. The choice of the structure built as output is made by specifying the type of constraints to be satisfied.
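A minimal sketch of such a lifting step (illustrative; the chunk format and the naive last-word head choice are our assumptions, not the paper's actual mapping):

```python
def lift_chunk(chunk):
    """'Lift' a shallow-parser chunk into the richer feature structure a
    deep parser expects (pattern-mapping sketch).

    chunk: dict like {"cat": "GN", "words": ["the", "old", "man"]}
    """
    return {
        "cat": chunk["cat"],
        "head": chunk["words"][-1],          # naive assumption: last word
        "daughters": [{"form": w} for w in chunk["words"]],
        "agreement": {},                     # to be filled from the lexicon
    }
```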
Syntactic difficulties
A precise analysis of the difficulties encountered by syntactic parsers, apart from purely computational problems, has yet to be established. It would indeed be very useful to distinguish phenomena that are easy to analyze from those that are not, and to explain why. This is a difficult exercise, since the problem does not always coincide with the notion of linguistic complexity: some constructions may be easy to interpret yet present implementation difficulties. This is the case, for example, of extraction phenomena, which show little interpretation ambiguity but which systems have trouble analyzing. Conversely, phrase embeddings may be complex but do not in themselves present difficulties for a parser. It is interesting to propose a set of constructions or phenomena that a parser should be able to handle. The survey article [Abeillé00] provides a first such list, established quite empirically on the basis of an analysis of the capabilities of the parsers existing at the time the article was written (five years ago; the field has of course evolved considerably since):
• local dependencies: agreement, predicate subcategorization, semi-frozen expressions, modifier-modified restrictions, clitics, etc.
• medium-range dependencies: pronominalization, control of infinitives, negative association, floating quantifiers, etc.
• long-distance dependencies: questions, relatives, dislocated constructions, etc.
• syntactic alternations: passive, impersonal, causatives, etc.
• coordination and comparison phenomena

This list contains varied phenomena whose processing complexity depends on the fineness of the intended analysis and on the type of information representation chosen. Agreement phenomena, for example, are generally easy to handle for French or English, provided the grammar or lexicon contains an explicit encoding of the information. For long-distance dependencies, very different situations can be observed. Relatives are among the constructions that are often easy to process, including at the level of semantic structure. Dislocated constructions, on the other hand, raise more problems. They are fairly easy to spot, but the semantic relation between the dislocated element and the rest of the utterance is rather complex to handle, even in the presence of a resumptive pronoun. In this case one must first identify this pronoun, which can appear in very diverse positions, then check the morphosyntactic compatibilities between the antecedent and the pronoun, but also the semantic compatibilities between the antecedent and the structure governing the pronoun. Cleft constructions present the same type of problem: in French they are easy to identify, but locating their attachment site is generally complex. It should be stressed that this processing complexity does not translate into interpretation difficulty for a human: clefts are, on the contrary, very easy to interpret in most cases (this type of problem is pointed out in [Puver04]). For a given construction, some pieces of information are therefore more complex to obtain than others. If the criterion of ease of analysis is used to characterize a shallow parser, one can then say that the presence of a long-distance dependency can be identified easily, notably thanks to strong morphological marks.
Moreover, one must distinguish the structure itself (the hierarchy of objects) from the relations existing between these objects. In a phrase-structure approach, for example, a parser must produce a tree, but it must also indicate the syntactic or semantic relations existing between the constituents. A number of proposals have been made for this in the framework of syntactic parser evaluation. It is therefore not possible, as we saw in the first part, to distinguish a shallow parser from a deep parser on simple efficiency criteria alone. But it does not seem relevant either to distinguish the two approaches on the basis of the type of information built as output. Bracketing an utterance is, overall, an easy task if one is content with non-recursive constituents. It becomes clearly more difficult when trying to describe the propositional level or complex devices. Likewise, as we have just seen, some syntactic relations can be easier to identify than others. It is therefore interesting to introduce two new criteria for distinguishing types of analysis: the operational level (a deep parser is non-deterministic) and the formal level (a shallow parser only builds simple information). The criteria of determinism and information type can of course be combined. One may for instance find deterministic parsers able to build complex information; in this case one can speak of intermediate parsers.
A hybrid parsing strategy
As we have just seen, deep parsing frequently resorts to shallow-analysis techniques, in particular through a disambiguation of the input. Moreover, the results obtained when building a shallow analysis with a shallow parser and with a deep parser are not very different, whatever the form of the input. The comparison of the results obtained by two parsers over the same set of corpora within the Easy campaign (these results will be presented at the Easy workshop associated with TALN) indeed shows a strong convergence, for the processing of written-language as well as spoken-language corpora.
We therefore have a number of arguments in favor of mixed systems able to deliver, depending on the needs, either a shallow or a deep analysis. More precisely, we propose a two-level architecture that reuses a shallow analysis as the input of a deep parser. The point is not to modify the shallow structure that has been built (unlike the approach proposed by [Johnson02]), but rather to build a richer representation that uses the objects built by the shallow analysis to construct more complex objects. This requires defining the "shallow" objects so that they can serve as constituents for the detailed analysis. A classical bracketing in the form of chunks would not be appropriate in this approach, since a chunk cannot be a constitutive unit of a higher-level syntactic group.
The goal of such an approach is first of all to combine different tools, possibly triggering a detailed analysis only when needed. But it also makes it possible to use the shallow analysis as a device for controlling the detailed analysis. In that case, all the information built by the shallow parser can be used by the deep parser. This information is of two kinds: on the one hand word groups (that is, bracketing information), and on the other relations between forms or groups. We must therefore propose the construction of groups that are both relevant for a shallow analysis and usable by a detailed parser. These groups are necessarily first-level (i.e. without embedded constituents): they contain only lexical elements. The aim is to define very simple, weakly ambiguous groupings; a core-group grammar of this type is given further below. Each relation is then characterized by properties that can be extracted from the list of groups built in this way. In what follows we use the notation GX+ to indicate that group GX is already part of a relation, and X to denote an arbitrary sequence of objects:
Sujet : (GN ≺ X ≺ GV) ∧ (∄ GN+ ∈ X)
Aux   : (Aux ≺ X ≺ V[ppas]) ∧ (∄ V ∈ X)
Objet : (GV ≺ X ≺ GN) ∧ (∄ GN+ ∈ X)
Conj  : (GX ≺ X ≺ Conj ≺ GX) ∧ (∄ GX ∈ X)
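To make the intended semantics concrete, the sketch below (Python; the function and group encodings are illustrative, and the reading of the ∄ constraints as "no already-attached group inside X" is an assumption) checks the Sujet and Objet patterns over a flat sequence of groups:

def find_relations(groups):
    """groups: list of dicts {"label": ..., "used": ...} in linear order.
    Returns (relation, governor_index, dependent_index) triples."""
    relations = []
    for i, gi in enumerate(groups):
        for j in range(i + 1, len(groups)):
            gj = groups[j]
            gap = groups[i + 1:j]          # the sequence X between the two groups
            no_used_gn = not any(g["label"] == "GN" and g["used"] for g in gap)
            # Sujet: GN precedes GV, no already-attached GN (GN+) in X
            if gi["label"] == "GN" and gj["label"] == "GV" and no_used_gn:
                relations.append(("Sujet", j, i))
            # Objet: GV precedes GN, same constraint on X
            if gi["label"] == "GV" and gj["label"] == "GN" and no_used_gn:
                relations.append(("Objet", i, j))
    return relations

groups = [{"label": "GN", "used": False},
          {"label": "GV", "used": False},
          {"label": "GN", "used": False}]
print(find_relations(groups))   # [('Sujet', 1, 0), ('Objet', 1, 2)]

In a fuller system, the groups matched by a relation would be marked as used (GN+) so that later pattern checks can test the ∄ constraints against them.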
The relations as defined here capture only part of the relevant relations. For instance, the subject relation does not take inversions into account, and likewise the coordination relation only covers simple coordinations. The essential problem with this type of relation is overgeneration. It is however possible to add a specific control level, in particular on the type of relation possible for a given category, or by resorting to lexical information. An intermediate or detailed analysis exploiting a shallow analysis of this type thus aims at building higher-level syntactic groups as well as complex relations. The principle consists in taking as input the objects built by the shallow parser: the constituents of the detailed syntactic units are either groups or lexical categories. In both cases, the shallow parser can attach probability estimates to these groups and categories, thereby controlling the deep-parsing process by reducing the parser's search space. It is thus possible to define the basis of an intermediate analysis. The rules of the corresponding grammar use the groups and relations built by the shallow parser, which makes it possible to admit or exclude certain constituents according to their syntactic properties; these relational constraints are written between angle brackets, and the resulting rules (SN, SV, Rel) are given further below. It is therefore possible to obtain, at low cost, an intermediate parser building objects whose constituents are groups provided by the shallow analysis. Figure 4 presents an example description of NP clefting in the Property Grammar formalism (cf. [Blache05]). This analysis relies on two constructions: the first describing the extraction site, and the second the relations with the site from which the element was extracted. As with the intermediate parser, groups and relations are used as an elementary information source, complemented with information specific to the chosen formalism.
Conclusion
Parsing is a problem whose complexity factors need to be made precise. This requires carefully distinguishing the different types of analysis (shallow or deep) before proposing a characterization of the phenomena that affect this complexity.
In this article we propose a first approach to this problem, which opens the way to a cooperation between these different approaches. The proposed technique lets the detailed analysis rely on the results of the shallow one, which reduces the search space by providing as input not atomic objects but complex information.
This evaluation paradigm proposes to inventory a set of relations serving as a basis for comparing and evaluating syntactic parsers. The set of relations (adapted to the needs of French from the proposal of [Carroll01]) is described in figure 2 and is organized as a hierarchy. The table that follows proposes a split between relations that are easy and relations that are difficult to identify. This judgment is established here on an empirical basis; we try to give some arguments justifying this classification, but a more systematic description would be needed.
Figure 4: Description of the clefted NP in PG. Constituents: Pro[ce], GV[être], ProR[qu-], GN1. Linearity: Pro ≺ GV, GV ≺ GN1, GN1 ≺ ProR. Requirement: Pro ⇒ GV. Uniqueness: Uniq =

A simple algorithm (cf. figure 3, given below) consists, for each group, in checking whether it can belong to a phrase. For this we use a stack of built phrases (noted pile_synt) and two pointers: one pointing at the top of the stack (noted top), the other at the current phrase (noted en_cours). GC denotes the current group. We assume a function ajouter_constituant(Const, SX) that adds Const to the list of constituents of SX, a function fermer(SX) that closes the constituent list of SX, and a boolean function ouvert(SX) indicating whether SX is open or closed.
(cf. [Blache95]). The chunks or the syntactic units built can also be transformed by means of adapted rules, here again using patterns. Such an approach is described in [Marimon02], which in this way transforms lexical units and chunks into HPSG-style attribute-value structures, as described in the following example:

rule(
    SYNSEM|LOCAL
        MORPH
            LEMME    [2]
            MORPHEME [1]
            AGR      fem, sing
        CAT|HEAD
            NCLASS   common
  , [Pos='Ncfs-', Lemma=[2]], [1] ).

In the same perspective, one also finds a technique consisting in directly enriching the built structure using dedicated techniques. This is the case of [Johnson02], which describes how to create complex trees with empty nodes starting from simple syntactic trees. It is in this case an adjunction operation relying on tree schemata that specify the places where these nodes can be inserted and the value of the arguments they must take. The process is controlled through a ranking of these schemata. A third type of approach, advocated in [Uszkoreit02], proposes using a shallow and a deep analysis in parallel. This approach (cf. [Crysmann02] or [Frank03]) consists in exploiting the information of the shallow parser to control the deep parser and reduce its search space. The control information provided by the shallow parser bears, in this example, on the topological structure of the German sentence: the idea is to locate the topological fields by various techniques (cf. [Neumann00]) and thereby guide the construction of the structure by the deep parser. The last type of approach relies on the possibility of tuning the granularity of the analysis according to the objectives. Two cases can be distinguished, depending on whether or not the resources used are identical. A first technique simply consists in varying the input grammar: using a simple, weakly ambiguous grammar relying only on coarse-grained constituents yields a rough analysis of an utterance. In this case we can speak of making a deep parser shallow through the use of a shallow grammar (cf. [Puver04]). But it is also possible to propose techniques exploiting identical resources in terms of lexicon and grammar. This type of approach requires that the parser be able to build partial structures, limited to a certain type of constituent (for instance NPs in the case of information-retrieval systems). Likewise, this type of system must be able to build a segmentation of the input (for instance in the form of chunks). But the same parser must also, at the other end of the chain, be able to build a detailed structure. An example of this type of approach is described in [Blache02].

Type | Characteristics | Example
Pre-processing | Disambiguating tagger, local grammars | [Grover01]
Pre-parsing | The shallow analysis is the input of the deep parser | [Marimon02], [Johnson02]
Control | The deep parser is guided by the shallow parser | [Crysmann02], [Frank03]
Variable granularity | Same parser, the output type is an option | [Blache02]

Figure 1: Different techniques for combining parsers
Name | Description
dependance | Generic dependency relation between a head and a dependent
mod | Relation between a head and its modifier; the type is the word introducing the dependency
ncmod | Lexical (non-clausal) modifier
cmod | Clausal modifiers
detmod | Determiner/noun relation
arg-mod | Head/argument relation, the argument being realized as a modifier (for instance a PP complement of the verb)
arg | Generic head/argument relation (complement-like)
subj | Predicate/subject relation
ncsubj | Lexical (non-clausal) subject
csubj | Clausal subjects (for instance an infinitival subject clause)
comp | Head/complement relation
obj | Head/object relation
dobj | Predicate/direct-object relation (first non-clausal complement)
iobj | Predicate relation to a non-clausal complement introduced by a preposition
clausal | Head/clausal-complement relation
xcomp | The complement clause has no realized subject
ccomp | The complement clause has a realized subject

Figure 2: Description of the relations
Easy relations:
Ncmod | the adj/n, sp/n, sp/v, sa/n relations are juxtaposed
Cmod (rel) | morphological marking and, generally, adjacency
Detmod | linearity, adjacency
Ncsubj | order, morphological marking (agreement)
Xcomp | morphological marks and order (e.g. attribute, preposed infinitive, etc.)
Dobj | form of the complement (non-clausal), order (first) and adjacency with the verb
Aux | morphological marking, adjacency

Difficult relations:
Ncmod (n/sp) | separated by other elements
Cmod | attachment ambiguity (e.g. sp/V vs. sp/n)
Arg-mod | must take into account the verb form and the semantics of the modifier
Csubj | locating the clause, potential complexity, identifying the head verb of the subject clause and the head of the sentence
Ccomp | locating the subordinate clause, identifying the verbal heads
Iobj | attachment ambiguity
Conj | difficulty of distinguishing simple conjunctions (conjuncts of the same type) from the others
Contrôle | not expressed directly but by doubling the subj relation; depends on the type of the verb and of the complement
This quick overview lets us identify some elements characterizing the operational (rather than theoretical) complexity of parsing. Generally speaking, the relations that are easiest to analyze are those benefiting from a conjunction of several stable information sources: a low ambiguity rate for the constituents involved in the relation (for instance, only the Detmod relation can link a determiner and a noun), regular morphological marking (relative pronoun, conjunctions, the "c'est ... que" construction, etc.), strict linear order, and so on. Moreover, the level of the relation in the hierarchy also affects its complexity: a generic relation will be easier to detect than one of its subtypes (for instance, the Comp relation is easier to assign than Ccomp).

Conversely, the complex relations are those requiring access to specific local information (for instance lexical features), depending on the form of the non-lexical constituents being related (for instance the type of the verb or of the preposition in the VP or the PP), or relying on semantic restriction phenomena. The semantic level shows similar characteristics: it is, for instance, easier to determine the role of a modifier than the scope of quantification.
The core-group grammar mentioned earlier can be sketched as follows:

GV   ::= [Adv[neg]] (Clit) [Aux] (Adv) V
GN   ::= Det [Adv] [Adj] N[c] | [Det] N[p] | [Det] [Adj] N[p] | Pro[p]
GP   ::= Prep Det [Adv] [Adj] N[c] | Prep N[p] | Prep V[ppres] | Prep V[inf]
GA   ::= [Adv] Adj | [Adv] V[ppas]
Gadv ::= Adv*
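As an illustration of how such a grammar can be operationalized, here is a minimal sketch (Python) that encodes the rules as longest-match regular expressions over a space-separated tag string; the tag spellings (Nc, Np, Vppas, Advneg, ...) are assumed flattenings of the N[c], V[ppas], Adv[neg] notation above, and token-boundary subtleties are ignored (GA is tried before GV to sidestep the common Vppas case):

import re

GROUP_PATTERNS = [
    ("GP", r"Prep Det (Adv )?(Adj )?Nc|Prep Np|Prep Vppres|Prep Vinf"),
    ("GN", r"Det (Adv )?(Adj )?Nc|(Det )?(Adj )?Np|Prop"),
    ("GA", r"(Adv )?Adj|(Adv )?Vppas"),
    ("GV", r"(Advneg )?(Clit )?(Aux )?(Adv )?V"),
]

def chunk(tags):
    """tags: list of POS tags; returns a list of (group, tag span) pairs."""
    out, i = [], 0
    while i < len(tags):
        for label, pat in GROUP_PATTERNS:
            m = re.match(pat, " ".join(tags[i:]))
            if m:
                n = len(m.group(0).split())
                out.append((label, tags[i:i + n]))
                i += n
                break
        else:                          # no group matched: emit a bare token
            out.append(("O", [tags[i]]))
            i += 1
    return out

print(chunk(["Det", "Adj", "Nc", "Clit", "V", "Prep", "Np"]))
# [('GN', ['Det', 'Adj', 'Nc']), ('GV', ['Clit', 'V']), ('GP', ['Prep', 'Np'])]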
Of course, this grammar is largely incomplete and many categories are not taken into account, which does not disturb the behavior of a shallow parser. Likewise, other rules completing the description of what can be considered core phrases can be added. Finally, a phrase-structure representation does not prejudge the chosen formalism either: the same information can, for instance, be described in terms of dependencies or constraints. Note that a grammar of this type was used in the Easy evaluation campaign (cf. [Vilnat04]). As for the relations that a shallow parser can establish, they are relatively general and determined on the basis of simple information, in particular linear order; the relation definitions given earlier (Sujet, Aux, Objet, Conj) state their operational semantics.
The intermediate-grammar rules announced earlier take the following form:

SN  ::= GN [GA] [Rel] [GP]      <∄ mod(GP,GV)>
SV  ::= GV [GN] [GP]            <∄ mod(GN,GV)>
Rel ::= Pro[rel] [GN1] GV [GN2] <∄ suj(GN2,GV)>

Figure 3 (group-attachment algorithm):

SX ← pile_synt[en_cours]
if GC ∈ SX then
    ajouter(GC, SX)
else
    repeat
        fermer(SX)
        en_cours ← en_cours − 1
        SX ← pile_synt[en_cours]
    while ((GC ∉ SX) and ouvert(SX) and (en_cours ≥ 0))
    if en_cours = 0 then
        ajouter(GC, pile_synt[en_cours])
    endif
    pile_synt[top++] ← GC
endif
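A literal Python transcription of this algorithm might look as follows; can_belong() abstracts the membership test GC ∈ SX, a phrase is represented as a dict with a constituent list and an open flag, and the reading of the garbled loop condition given above is an assumption:

def attach(gc, pile_synt, can_belong):
    """pile_synt: stack (list) of phrases, each {"constituents": [...],
    "open": bool}. can_belong(gc, sx) abstracts the test GC ∈ SX."""
    en_cours = len(pile_synt) - 1        # current phrase = top of stack
    sx = pile_synt[en_cours]
    if can_belong(gc, sx):
        sx["constituents"].append(gc)    # ajouter(GC, SX)
        return
    # close phrases and walk down the stack until gc fits (or bottom)
    while True:
        sx["open"] = False               # fermer(SX)
        en_cours -= 1
        if en_cours < 0:
            break
        sx = pile_synt[en_cours]
        if can_belong(gc, sx) or not sx["open"]:
            break
    if en_cours == 0:
        pile_synt[0]["constituents"].append(gc)
    pile_synt.append({"constituents": [gc], "open": True})   # new top phrase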
Abeillé A. & P. Blache (2000) "Grammaires et analyseurs syntaxiques", Traité IC2, Volume Ingénierie des langues, Hermès.
Blache P. & M. Delpui (1995) "Outil d'intégration de bases de connaissances lexicales aux analyseurs syntaxiques", in actes des Journées "Lexicomatique et Dictionnairique".
Blache P., J.-M. Balfourier & T. van Rullen (2002) "From Shallow to Deep Parsing Using Constraint Satisfaction", in proceedings of COLING-2002.
Blache P. (2005) "Property Grammars: A Fully Constraint-Based Theory", in Constraint Satisfaction and Language Processing, H. Christiansen & al. (eds), Springer-Verlag LNAI 3438.
Briscoe E., J. Carroll, J. Graham & A. Copestake (2002) "Relational evaluation schemes", in proceedings of the Beyond PARSEVAL Workshop, LREC-02.
Carroll J. & T. Briscoe (2001) "High Precision Extraction of Grammatical Relations", in proceedings of IWPT-01.
Carroll J., G. Minnen & T. Briscoe (2003) "Parser Evaluation Using a Grammatical Relation Annotation Scheme", in A. Abeillé (ed) Treebanks: Building and Using Syntactically Annotated Corpora, Kluwer.
Crysmann B., A. Frank, B. Kiefer, S. Müller, G. Neumann, J. Piskorski, U. Schäfer, M. Siegel, H. Uszkoreit, F. Xu, M. Becker & H. Krieger (2002) "An Integrated Architecture for Shallow and Deep Processing", in proceedings of ACL-02.
Frank A., M. Becker, B. Crysmann, B. Kiefer & U. Schäfer (2003) "Integrated Shallow and Deep Parsing: TopP meets HPSG", in proceedings of ACL-03.
Grover C. & A. Lascarides (2001) "XML-Based Data Preparation for Robust Deep Parsing", in proceedings of ACL/EACL-01.
Hinrichs E. & K. Simov (eds) (2004) Proceedings of the Workshop "Combining Shallow and Deep Processing for NLP", ESSLLI-04.
Johnson M. (2002) "A Simple Pattern-Matching Algorithm for Recovering Empty Nodes and their Antecedents", in proceedings of ACL-02.
Marimon M. (2002) "Integrating Shallow Linguistic Processing into a Unification-Based Spanish Grammar", in proceedings of COLING-02.
Neumann G., C. Braun & J. Piskorski (2000) "A Divide and Conquer Strategy for Shallow Parsing of German Free Texts", in proceedings of ANLP-00.
Puver M. & R. Kempson (2004) "Incremental Parsing or Incremental Grammar?", in proceedings of the workshop Incremental Parsing: Bringing Engineering and Cognition Together, ACL-04.
Uszkoreit H. (2002) "New Chances for Deep Linguistic Processing", in proceedings of COLING-02.
Vilnat A., L. Monceaux, P. Paroubek, I. Robba, V. Gendner, G. Illouz & M. Jardino (2004) "Annoter en constituants pour évaluer des analyseurs syntaxiques", in actes de TALN-04.
||
219,307,734 | [] | The Association for Computational Linguistics and Chinese Language Processing
Computational Linguistics and Chinese Language Processing, vol. 8, no. 2, August 2003. The Association for Computational Linguistics and Chinese Language Processing.
WordNet provides plenty of lexical meaning; therefore, it is very helpful in natural language processing research. Each lexical meaning in Princeton WordNet is presented in English. In this work, we attempt to use a bilingual dictionary as the backbone to automatically map English WordNet to a Chinese form. However, we encounter many barriers between the two languages when we observe the preliminary result of the linkage between English WordNet and the bilingual dictionary. The mapping causes the Chinese translation of an English synonym collection (synset) to correspond to unstructured Chinese compound words, phrases, and even long strings or sentences instead of independent Chinese lexical words. This phenomenon violates the aim of a Chinese WordNet, which takes the lexical word as its basic component. Therefore, this research performs further processing to study this phenomenon.
The objectives of this paper are as follows: First, we will discover core lexical words and characteristic words from Chinese compound words. Next, those lexical words will be expressed by means of conceptual representations. For the core lexical words, we use grammar structure analysis to locate such words. For characteristic words, we use sememes in HowNet to represent their lexical meanings. Certainly, there exists a problem of ambiguity when Chinese lexical words are translated into their lexical meanings. To resolve this problem, we use lexical parts-of-speech and hypernyms of WordNet to reduce the lexical ambiguity.
We experimented on nouns, and the experimental results show that sense disambiguation could achieve a 93.8% applicability rate and a 93.5% correct rate.
1.
GUM, CYC, ONTOS, MICROKOSMOS, EDR; WordNet [Gomez, 1998] (lexical relations, conceptual relations); WordNet [Miller, 1990; Fellbaum, 1998]; [Farreres, Rigau and Rodriguez, 1998]; WordNet [Gonzalo et al., 1998; Mandala, Tokunaga and Tanaka, 1998]; [Knight and Luk, 1994]; [Jing, 1998]; [Aslandogam et al., 1997]; EuroWordNet [Atserias et al., 1997; Farreres, Rigau and Rodriguez, 1998]; WordNet

2.
[Gale, Church and Yarowsky, 1992; Yarowsky, 1992, 1995; Resnik, 1993; Dagan and Itai, 1994; Luk, 1995; Ng and Lee, 1996; Riloff and Jones, 1999]; [Guthrie et al., 1991; Slator, 1991; Li, Szpakowicz and Matwin, 1995; Chang, 1998; Yang and Ker, 2002]; [Chang, Ker and Chen, 1998; Chen and Chang, 1998; Dorr et al., 2000; Carpuat et al., 2002; Wang, 2002]; WordNet; HowNet [Dorr et al., 2000; Carpuat et al., 2002]; [Chang, Ker and Chen, 1998; Chen and Chang, 1998]

3.
WordNet

4.
Dice [Dice, 1945]: the similarity between two words W_1 and W_2 is

$$\mathrm{Sim}(W_1, W_2) = \frac{2 \times |S(W_1) \cap S(W_2)|}{|S(W_1)| + |S(W_2)|}$$

5.
5.1 WordNet

For a WordNet synset S containing the words W_1, W_2, ..., W_n, where each word W_i has k dictionary translations with definitions D_{i1}, D_{i2}, ..., D_{ik}, and E_{ij} denotes the English definition set associated with translation D_{ij} of W_i:

$$\mathrm{CDEF}(S) = \arg\max_{i,j} |S \cap E_{i,j}|, \qquad \forall i = 1, \ldots, n,\; j = 1, \ldots, k$$
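Read over sets of words, both formulas can be computed directly; the sketch below (Python) is a minimal rendering, with the sets S(w) and the definition sets E_{i,j} assumed to be precomputed:

def dice(s1, s2):
    """Dice coefficient between the word sets S(W1) and S(W2)."""
    if not s1 and not s2:
        return 0.0
    return 2.0 * len(s1 & s2) / (len(s1) + len(s2))

def cdef(synset, defs):
    """CDEF(S): the (i, j) whose definition set E_ij overlaps S most.
    synset: set of words; defs: dict W_i -> list of sets E_i1..E_ik."""
    best, best_overlap = None, -1
    for i, word in enumerate(defs):
        for j, e_ij in enumerate(defs[word]):
            overlap = len(synset & e_ij)
            if overlap > best_overlap:
                best, best_overlap = (i, j), overlap
    return best

print(dice({"car", "auto"}, {"auto", "vehicle"}))   # 0.5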
WordNet, NSC 90-2213-E-031-005.

Aslandogam, Y. A., C. Their, C. T. Yu, J. Zou and N. Rishe, "Using Semantic Contents and WordNet in Image Retrieval," In Proceedings of the 20th Annual ACM SIGIR Conference on Research and Development in Information Retrieval, Philadelphia, 1997, pp. 286-295.
Atserias, J., S. Climent, X. Farreres, G. Rigau and H. Rodriguez, "Combining Multiple Methods for the Automatic Construction of Multilingual WordNets," In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP'97), Tzigov Chark, Bulgaria, 1997.
Carpuat, M., G. Ngai, P. Fung and K. W. Church, "Creating a Bilingual Ontology: A Corpus-Based Approach for Aligning WordNet and HowNet," In Proceedings of the 1st International Conference on Global WordNet, Mysore, India, 2002.
Chang, J. S., S. J. Ker and M. H. Chen, "Taxonomy and Lexical Semantics: from the Perspective of Machine Readable Dictionary," In Proceedings of the 3rd Conference of the Association for Machine Translation in the Americas (AMTA'98), 1998, pp. 199-212.
Chen, J. N. and J. S. Chang, "TopSense: A Topical Sense Clustering Method based on Information Retrieval Techniques on Machine Readable Resources," Special Issue on Word Sense Disambiguation, Computational Linguistics, 24(1), 1998, pp. 61-95.
Chen, Hsin-Hsi, Chi-Ching Lin and Wen-Cheng Lin, "Construction of a Chinese-English WordNet and Its Application to CLIR," In Proceedings of the 5th International Workshop on Information Retrieval with Asian Languages, Hong Kong, 2000, pp. 189-196.
Chen, Hsin-Hsi and Chi-Ching Lin, "Sense-Tagging Chinese Corpus," In Proceedings of the 2nd Chinese Language Processing Workshop, Hong Kong, 2000, pp. 7-14.
Dagan, I. and A. Itai, "Word Sense Disambiguation Using a Second Language Monolingual Corpus," Computational Linguistics, 20(4), 1994, pp. 563-596.
Dice, L. R., "Measure of the Amount of Ecologic Association between Species," Journal of Ecology, 26, 1945, pp. 297-302.
Dorr, B. J., G-A Levow, D. Lin and S. Thomas, "Chinese-English Semantic Resource Construction," In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000), Athens, Greece, 2000, pp. 757-760.
Farreres, X., G. Rigau and H. Rodriguez, "Using WordNet for Building WordNets," In Proceedings of the Workshop on Usage of WordNet in NLPS, COLING-ACL'98, 1998, pp. 65-72.
Fellbaum, C. ed., WordNet: An Electronic Lexical Database, MIT Press, May 1998.
Gale, W. A., K. W. Church and D. Yarowsky, "Using Bilingual Materials to Develop Word Sense Disambiguation Methods," In Proceedings of the 4th International Conference on Theoretical and Methodological Issues in Machine Translation, 1992, pp. 101-112.
Gomez, F., "Linking WordNet Verb Classes to Semantic Interpretation," In Proceedings of the Workshop on Usage of WordNet in NLPS, COLING-ACL'98, 1998, pp. 58-64.
Gonzalo, J., F. Verdejo, I. Chugur and J. Cigarran, "Indexing with WordNet Synsets can Improve Text Retrieval," In Proceedings of the Workshop on Usage of WordNet in NLPS, COLING-ACL'98, 1998, pp. 38-44.
Guthrie, J., L. Guthrie, Y. Wilks and H. Aidinejad, "Subject-Dependent Co-Occurrence and Word Sense Disambiguation," In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, 1991, pp. 146-152.
Jing, H., "Usage of WordNet in Natural Language Generation," In Proceedings of the Workshop on Usage of WordNet in NLPS, COLING-ACL'98, 1998, pp. 128-134.
Knight, K. and S. K. Luk, "Building a Large-scale Knowledge Base for Machine Translation," In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1994, pp. 773-778.
Lee, C., G. Lee and S. J. Yun, "Automatic WordNet Mapping using Word Sense Disambiguation," In Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 2000, pp. 142-147.
Li, X., S. Szpakowicz and S. Matwin, "A WordNet-Based Algorithm for Word Semantic Sense Disambiguation," In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), Montreal, Canada, 1995.
Luk, A. K., "Statistical Sense Disambiguation with Relatively Small Corpora using Dictionary Definitions," In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 1995, pp. 181-188.
Mandala, R., T. Tokunaga and H. Tanaka, "The Use of WordNet in Information Retrieval," In Proceedings of the Workshop on Usage of WordNet in NLPS, COLING-ACL'98, 1998, pp. 31-37.
Miller, G. A., "Five papers on WordNet," International Journal of Lexicography, 3(4), 1990.
Ng, H. T. and H. B. Lee, "Integrating Multiple Knowledge Sources to Disambiguate Word Sense: an Exemplar-Based Approach," In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, 1996, pp. 40-47.
Resnik, P., "Selection and Information: A Class-Based Approach to Lexical Relationships," Doctoral Dissertation, Department of Computer and Information Science, University of Pennsylvania, 1993.
Riloff, E. and R. Jones, "Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping," In Proceedings of the 16th National Conference on Artificial Intelligence, 1999, pp. 474-479.
Slator, B., "Using Context for Sense Preference," In Zernik (ed.) Lexical Acquisition: Exploiting On-line Resources to Build a Lexicon, Lawrence Erlbaum, Hillsdale, NJ, 1991.
Wang, Chi-Yung, "Knowledge-based Sense Pruning using the HowNet: An Alternative to Word Sense Disambiguation," Thesis, Hong Kong University of Science and Technology, Computer Science, 2002.
Yang, C. and S. J. Ker, "Considerations of Linking WordNet with MRD," In Proceedings of the 19th International Conference on Computational Linguistics, 2002, pp. 1121-1127.
Yarowsky, D., "Unsupervised Word Sense Disambiguation Rivalling Supervised Methods," In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, 1995, pp. 189-196.
Yarowsky, D., "Word-Sense Disambiguation using Statistical Models of Roget's Categories Trained on Large Corpora," In Proceedings of the 14th International Conference on Computational Linguistics, Nantes, France, 1992, pp. 454-460.
||
9,502,230 | Political Issue Extraction Model: A Novel Hierarchical Topic Model That Uses Tweets By Political And Non-Political Authors | People often use social media to discuss opinions, including political ones. We refer to relevant topics in these discussions as political issues, and the alternate stands towards these topics as political positions. We present a Political Issue Extraction (PIE) model that is capable of discovering political issues and positions from an unlabeled dataset of tweets. A strength of this model is that it uses twitter timelines of political and non-political authors, and affiliation information of only political authors. The model estimates word-specific distributions (that denote political issues and positions) and hierarchical author/group-specific distributions (that show how these issues divide people). Our experiments using a dataset of 2.4 million tweets from the US show that this model effectively captures the desired properties (with respect to words and groups) of political discussions. We also evaluate the two components of the model by experimenting with: (a) Use of alternate strategies to classify words, and (b) Value addition due to incorporation of group membership information. Estimated distributions are then used to predict political affiliation with 68% accuracy. | [
17544883,
5235435
] | Political Issue Extraction Model: A Novel Hierarchical Topic Model That Uses Tweets By Political And Non-Political Authors
Association for Computational Linguistics. Copyright Association for Computational Linguistics, June 12-17, 2016.
Aditya Joshi
IIT Bombay
India
Monash University
Australia
IITB-Monash Research Academy
India
Pushpak Bhattacharyya
IIT Bombay
India
Mark Carman mark.carman@monash.edu
Monash University
Australia
Political Issue Extraction Model: A Novel Hierarchical Topic Model That Uses Tweets By Political And Non-Political Authors
Proceedings of NAACL-HLT 2016, San Diego, California, June 12-17, 2016. Association for Computational Linguistics.
People often use social media to discuss opinions, including political ones. We refer to relevant topics in these discussions as political issues, and the alternate stands towards these topics as political positions. We present a Political Issue Extraction (PIE) model that is capable of discovering political issues and positions from an unlabeled dataset of tweets. A strength of this model is that it uses twitter timelines of political and non-political authors, and affiliation information of only political authors. The model estimates word-specific distributions (that denote political issues and positions) and hierarchical author/group-specific distributions (that show how these issues divide people). Our experiments using a dataset of 2.4 million tweets from the US show that this model effectively captures the desired properties (with respect to words and groups) of political discussions. We also evaluate the two components of the model by experimenting with: (a) Use of alternate strategies to classify words, and (b) Value addition due to incorporation of group membership information. Estimated distributions are then used to predict political affiliation with 68% accuracy.
Introduction
Political discussions in social media contain contentious topics (called 'political issues'), and alternate stands with respect to these issues (called 'positions'). We present a topic model that discovers political issues and positions in tweets: the Political Issue Extraction (PIE) model. The input is the twitter timelines of authors (i.e., the users who created the tweets), and political affiliation information for a subset of authors (the political authors; the remaining authors are non-political). Political and non-political authors contribute to the formation of topics, whereas only political authors contribute to the position that a group is likely to take. Since our dataset consists of tweets from the US, political affiliation can be one of three groups: 'Democrats', 'Republicans' or 'Unknown'.
For every tweet, we estimate two latent variables: issue and position. To discover topics related to issues and positions, we classify the words in a tweet into three categories: issue words, position words and emoticons. Instead of document-specific distributions as in LDA, we include a hierarchy of author-specific and group-specific position distributions in our model. This hierarchy estimates three distributions for each topic: the global position, the position of a given political group, and the position of a specific author. We evaluate our model by (a) validating our topics against standard topic lists, (b) considering different strategies for splitting words into the three categories, and (c) validating how the model benefits from the group information. Finally, we use our model to predict the political affiliation of authors. Models based on LDA by Jo and Oh (2011), Mei et al. (2007) and Lin and He (2009) extract sentiment-coherent topics. Past work related to political opinion has been reported by Gayo-Avello, Metaxas, and Mustafaraj (2011); Conover et al. (2011); O'Connor et al. (2010); Wong et al. (2013); Yano, Cohen, and Smith (2009); Lin, Xing, and Hauptmann (2008); Wang, Mohanty, and McCallum (2005), and more recently by Benton et al. (2016). Two works close to ours are by Grimmer (2010) and Fang et al. (2012). Our model improves upon them in three ways:
1. In PIE model, position words depend on both issue and position latent variables (as opposed to only the latter in prior work),
2. In the PIE model, a novel hierarchical author/group-wise distribution is considered instead of a document-wise distribution.
3. To the best of our knowledge, PIE model is the first that operates at the author level by using complete author timelines of both political and non-political authors, and affiliation information of a subset of authors (only political authors).
The rest of the paper is organized as follows. We discuss related work in Section 2. We present the structure and estimation procedure of the PIE model in Section 3 and discuss our experiment setup in Section 4. The evaluation is in Section 5. Finally, we conclude and point to future work in Section 6.
Related Work
Our model is based on Latent Dirichlet Allocation (LDA), given by Blei, Ng, and Jordan (2003). Models based on LDA by Jo and Oh (2011), Mei et al. (2007), Lin and He (2009), and Zhao et al. (2010) present approaches to extract sentiment-coherent topics in datasets. Past work in analytics related to political opinion can be broadly classified into three categories. The first category predicts the outcome of an election. Gayo-Avello, Metaxas, and Mustafaraj (2011) predict the election outcome for a pair of Democrat and Republican candidates, while Metaxas, Mustafaraj, and Gayo-Avello (2011) aggregate sentiment in tweets in order to map it to votes. Conover et al. (2011) use a network graph of authors and their known political orientations in order to predict political orientation. The second category of work in the political domain deals with the correlation of sentiment with real-world events. O'Connor et al. (2010) correlate sentiment expressed in text with time series of real-world events. Wong et al. (2013) derive the consistency between tweeting and retweeting behaviour and real-world sentiment about an event. Gerrish and Blei (2011) present the ideal point topic model, which correlates votes by legislators to bills being passed, along with sentiment in the texts of these bills.
Our PIE model falls in the third category: extraction of political topics from a dataset. In this respect, the work closest to ours is by Grimmer (2010) and Fang et al. (2012). Grimmer (2010) presents a hierarchical topic model to understand how senators explain their work in their press releases. Assuming a single topic per press release, the topic of a document is derived from an author-specific distribution over topics. Fang et al. (2012) divide the words in a political statement into topic words and opinion words, based on POS tags, and assume different distributions for the two. Like these two works, we assume a single topic per tweet, and divide words into categories based on POS tags. Our model improves upon these two works in the following key ways:
• A richer latent variable structure. In our model, the opinion words depend on BOTH topic and opinion latent variables (as opposed to only the latter). This structure allows our model to generate topics corresponding to political positions, which is not achieved in the past work.
• The author-topic distribution and hierarchy of author-sentiment distributions (as noted in Section 1). In our model, authors are arranged into groups, and the group-wise distribution is tightly linked to the structure of the model.
PIE Model
In this section, we introduce the PIE model. We first discuss the rationale behind the design. We then describe the structure of the model. Following that, we present details of the input/output and the method for estimation of distributions.
Design Rationale
The primary goal of the model is to discover topics related to political issues and two positions per issue. To be able to discover issues and corresponding positions, PIE model considers two latent variables p and i. Topic i represents the identifier for a political issue while the pair p-i represents the identifier for a position. The first component of the model is derived from the nature of data. A tweet contains two kinds of content words: "objective" words that describe an entity and "subjective" words that express opinion towards an entity. Emoticons can be thought of as a third category of sentiment words. We represent these three kinds of words as three observed variables: topic words t, opinion words o and emoticons e. A topic word t is derived from the topic i, an opinion word is derived from the topic i and sentiment p pair while an emoticon e is derived from sentiment p alone.
The second component of the model is derived from the nature of the problem. In a political setting, people are organized in political groups, while the opinion of a group towards a topic is a result of the individual users in the group. To model this, we use a hierarchical structure that relates global sentiment, group-wise sentiment and author-specific sentiment. Our model contains a hierarchy of distributions ψ_i, ψ_ig and ψ_iu indicating global, group-wise and author-specific sentiment, respectively, towards the political issue i for a user u who belongs to group g.
Structure
The model is shown in Figure 1. The input is a dataset of T tweets, each associated with one of U authors (users of twitter). Also, each author has exactly one out of G political affiliations. We represent a tweet as a collection of three types of words: (a) Issue words that describe a political issue, (b) Position words that express a position towards the issue, and (c) Emoticons. We do not consider hashtags as a special case, following Cohen and Ruths (2013), who show that hashtags are not strong indicators of political orientation. Two latent variables are defined for each tweet: issue i and position p. The topics corresponding to issue i represent political issues, while topics corresponding to pairs of issue i and position p represent political positions.
The assumption that a tweet has exactly one issue and position is reasonable due to the limited length of tweets. The arrows from i and p that lead to t, o and e realize the role of the three categories of content terms as follows:
1. The topic words t describe a political issue and hence, are based only on the topic i.
2. The opinion words o express position towards a political issue and hence, are based on both the issue i and position p.
3. The sentiment of an emoticon e does not depend on the issue being talked about and hence, is based only on position p.
The model estimates two sets of distributions: one for words and another for the user groups. The three categories of words lead to three distributions that are estimated by the model: (a) the issue-word distribution per issue, η_i, (b) the position-word distribution per issue-position pair, φ_ip, and (c) the emoticon distribution per position, χ_p. Since an emoticon may not completely belong to either of the political positions, we do not rely on a hard mapping to a position but consider a distribution of emoticons over positions. This incorporates more intricate forms of opinion expression like sarcasm. In addition to these term-specific distributions, the PIE model estimates author/group-specific distributions: (a) Author-issue distributions: θ_g is the probability of an issue with respect to a group g, and θ_u is the probability of an issue with respect to an author u; (b) Author-position distributions: ψ_i is the probability of a position with respect to an issue i, ψ_ig is the probability of a position with respect to an issue i and group g, while ψ_iu is the probability of a position with respect to issue i and author u. Variables and distributions in the model are listed in Table 1.
The generative process of the corpus can be described as follows:
1. For each issue i, select η_i ∼ Dir(γ) and ψ_i ∼ Dir(β_1).
2. For each position p, select φ_p ∼ Dir(δ_1), φ_ip ∼ Dir(δ_2 φ_p), and χ_p ∼ Dir(ε).
3. For each group g, select θ_g ∼ Dir(α_1) and ψ_ig ∼ Dir(β_2 ψ_i).
4. For each author u, select θ_u ∼ Dir(α_2 θ_g) and ψ_iu ∼ Dir(β_3 ψ_ig).
5. For each tweet k:
   (a) select topic i_k ∼ θ_{u_k} and position p_k ∼ ψ_{i_k, u_k};
   (b) select all topic words, t_kj ∼ η_{i_k};
   (c) select all opinion words, o_kj ∼ φ_{i_k, p_k};
   (d) select all emoticons, e_kj ∼ χ_{p_k}.
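As an illustration (not part of the original paper), the generative story can be exercised by ancestral sampling; in the sketch below all distribution tables are assumed to be already drawn and row-normalized, with shapes noted in the comments:

import numpy as np

rng = np.random.default_rng(0)

def sample_tweet(theta_u, psi_u, eta, phi, chi, n_t=3, n_o=2, n_e=1):
    """theta_u: (I,), psi_u: (I, P), eta: (I, Vt), phi: (I, P, Vo),
    chi: (P, Ve); returns one sampled tweet's latent pair and word ids."""
    i = rng.choice(len(theta_u), p=theta_u)              # issue i_k
    p = rng.choice(psi_u.shape[1], p=psi_u[i])           # position p_k
    t = rng.choice(eta.shape[1], size=n_t, p=eta[i])     # issue words
    o = rng.choice(phi.shape[2], size=n_o, p=phi[i, p])  # position words
    e = rng.choice(chi.shape[1], size=n_e, p=chi[p])     # emoticons
    return i, p, t, o, e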
Estimation
In the PIE model, we need to estimate the word-specific distributions, namely η_i, χ_p and φ_ip, and the author-specific distributions θ_u and ψ_iu. The estimation of the joint probability distribution is computationally intractable. Hence, we use Gibbs sampling (Casella and George, 1992) to estimate the underlying distributions. For computational efficiency, we use moment matching for estimation. The sampling algorithm runs for a pre-determined number of iterations. We implement a block sampler based on Heinrich (2005) that samples p_k and i_k of the k-th tweet together, which results in faster convergence. The joint probability of p_k and i_k is given by:
$$P(p_k, i_k \mid u_k, \mathbf{p}_{-k}, \mathbf{i}_{-k}) \propto \theta_{i_k|u_k}\,\psi_{p_k|i_k,u_k}\Big(\prod_j \eta_{t_{kj}|i_k}\Big)\Big(\prod_j \phi_{o_{kj}|p_k,i_k}\Big)\Big(\prod_j \chi_{e_{kj}|p_k}\Big)$$
where all parameters θ, ψ, η, φ and χ are estimated withholding information regarding the previous assignment to the k-th tweet. The word-specific distributions are estimated as:
$$\hat{\eta}_{t|i} = \frac{N^{(t)}_{t,i} + \gamma\,\frac{1}{V^{(t)}}}{N^{(t)}_{i} + \gamma} \qquad \hat{\chi}_{e|p} = \frac{N^{(e)}_{e,p} + \epsilon\,\frac{1}{V^{(e)}}}{N^{(e)}_{p} + \epsilon}$$

$$\hat{\phi}_{o|p,i} = \frac{N^{(o)}_{o,p,i} + \delta_2\,\hat{\phi}_{o|p}}{N^{(o)}_{p,i} + \delta_2} \qquad \hat{\phi}_{o|p} = \frac{N^{(o)}_{o,p} + \delta_1\,\frac{1}{V^{(o)}}}{N^{(o)}_{p} + \delta_1}$$

where the count notation can be read as follows: N^{(e)}_{e,p} denotes the number of times emoticon e occurs within tweets assigned to position p across the corpus, and V^{(e)} is the size of the emoticon vocabulary. The equations show that, for speed and ease of implementation, we use a simple approximation to the group-wide Dirichlet mean parameter φ_{o|p} rather than estimating expected table counts within a Chinese Restaurant Process as given by Griffiths and Tenenbaum (2004). (We leave an investigation of more precise parameter estimation to future work.)
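Putting the sampling equation and these estimators together, the scoring step of the block sampler can be sketched as follows (illustrative Python; the parameter tables are assumed to be dense arrays recomputed from counts that exclude tweet k, as the text requires):

import numpy as np

def joint_over_pi(tweet, theta_u, psi_u, eta, phi, chi):
    """Normalized joint over (issue, position) for one tweet.
    tweet holds integer index arrays for its three word types."""
    I, P = psi_u.shape
    score = np.zeros((I, P))
    for i in range(I):
        for p in range(P):
            s = theta_u[i] * psi_u[i, p]
            s *= np.prod(eta[i, tweet["topic_words"]])
            s *= np.prod(phi[i, p, tweet["opinion_words"]])
            s *= np.prod(chi[p, tweet["emoticons"]])
            score[i, p] = s
    return score / score.sum()

# to draw the pair: k = np.random.choice(I * P, p=joint.ravel());
# i_k, p_k = divmod(k, P)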
The author/group-specific distributions are estimated in a hierarchical manner as follows:
$$\hat{\theta}_{i|u} = \frac{N^{(i)}_{i,u} + \alpha_2\,\hat{\theta}_{i|g(u)}}{N^{(i)}_{u} + \alpha_2} \qquad \hat{\theta}_{i|g} = \frac{N^{(i)}_{i,g} + \alpha_1\,\frac{1}{V^{(i)}}}{N^{(i)}_{g} + \alpha_1}$$

$$\hat{\psi}_{p|i,u} = \frac{N^{(p)}_{p,i,u} + \beta_3\,\hat{\psi}_{p|i,g(u)}}{N^{(p)}_{i,u} + \beta_3} \qquad \hat{\psi}_{p|i,g} = \frac{N^{(p)}_{p,i,g} + \beta_2\,\hat{\psi}_{p|i}}{N^{(p)}_{i,g} + \beta_2} \qquad \hat{\psi}_{p|i} = \frac{N^{(p)}_{p,i} + \beta_1\,\frac{1}{V^{(p)}}}{N^{(p)}_{i} + \beta_1}$$
The notation here is the same as for the word-issue distributions, except that the counts are now at the "tweet level" rather than the "word level", i.e. N^{(i)}_{i,u} indicates the number of tweets by author u assigned the topic i. Note again the use of simple estimates for the group-wide parameters θ_g and ψ_{i,g}.
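The back-off structure of these estimates is easy to see in code; the sketch below (Python, with assumed dict-based count tables) computes the author-level ψ by chaining the three levels:

def psi_author(p, i, u, g, counts, beta1, beta2, beta3, P):
    """Hierarchically smoothed psi_{p|i,u}: the author-level estimate
    backs off to its group, which backs off to the global issue-level
    estimate (here 1/V^(p) = 1/P, since positions are the vocabulary)."""
    glob = (counts["N_pi"].get((p, i), 0) + beta1 / P) \
           / (counts["N_i"].get(i, 0) + beta1)
    group = (counts["N_pig"].get((p, i, g), 0) + beta2 * glob) \
            / (counts["N_ig"].get((i, g), 0) + beta2)
    return (counts["N_piu"].get((p, i, u), 0) + beta3 * group) \
           / (counts["N_iu"].get((i, u), 0) + beta3)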
Experiment Setup
We create a dataset of tweets using the Twitter API (https://dev.twitter.com/). The authors whose timelines were downloaded are obtained as follows. We first obtain a list of famous Democrats and Republicans using sources like about.com, The Guardian and Fanpagelist. This results in a list of 32 Republicans and 46 Democrats. We expand this list by adding randomly selected friends of these twitter handles. (The choice of "friends" as opposed to "followers" is intentional.) We then download the complete twitter timelines of all authors (Twitter sets the upper limit to 3200 tweets). The resultant dataset consists of 2441058 tweets. The Dirichlet hyperparameters and the values of I = 35 and P = 2 are experimentally determined. We set priors on position words using a word list of 6789 words given by McAuley and Leskovec (2013). Function words and the 25 most frequent words are removed.
Evaluation
To validate the efficacy of our model, our evaluation addresses the following questions:
• What impact do components of the model have on its ability to discover these issues and positions? (Section 5.1)
• What political issues and positions does the model discover? (Section 5.2)
• Once we have discovered political issues, positions and group-wise distributions, can the model be used to predict political affiliation? (Section 5.3)
Impact of Model Components on Performance
We evaluate two key components of PIE model, namely, segregation of words and hierarchy of author-group distributions.
Segregation of words:
A key component of the PIE model is the strategy used to decide whether a word is an issue word or a position word. We experiment with the following alternatives: (a) POS-based segregation as done in Fang et al. (2012), using the twitter POS tagger of Bontcheva et al. (2013), where we experimentally determine the optimal split as nouns as issue words, and adjectives, verbs and adverbs as position words; and (b) POS-based segregation with a PMI-based collocation handler to include n-grams, using Bird (2006).

Efficacy of author-group distributions: To evaluate the benefit of our hierarchy of distributions, we obtain the average cosine similarity between the author-position distributions of the two political groups. For every author with known political affiliation, we first obtain the cosine similarity between the ψ_iu of the author and the ψ_iu of the other authors belonging to his/her own political group. This is then averaged over all authors. This value indicates how similar the authors of an affiliation are. Table 3 shows these values for different combinations. The columns indicate two scenarios: when political affiliation information is used ('with') and when it is not used ('without') during estimation. The rows indicate the four possible scenarios. The average cosine similarities are not symmetric, by design. The sign of ∆ shows that incorporating political affiliation information makes authors of the same group more similar to each other, and authors of different groups less similar to each other, as desired.
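A minimal sketch of this similarity computation (Python; psi is an assumed mapping from author to an I × P array) is:

import numpy as np
from itertools import combinations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def avg_within_group_similarity(psi, members):
    """psi: dict author -> (I, P) array; members: authors of one group
    (at least two). Averages pairwise cosine over the flattened tables."""
    sims = [cosine(psi[u].ravel(), psi[v].ravel())
            for u, v in combinations(members, 2)]
    return sum(sims) / len(sims)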
Qualitative Evaluation
The issues extracted from our model are represented by the topics formed using topic words. We list some of the topics extracted using the PIE model in Table 4. Each cell contains the top 5 words of each topic with a manually assigned description in boldface. These topics are the political issues underlying our dataset. The issues discovered are "health insurance, abortion, security, employment, gun laws, immigration, economy, climate, marriage, election, disasters, crime and government". In addition to these, topics beyond political issues are also observed, as expected. These include sports (game, team, season, year, football), promotional online content ({blog, showcase, article, courtesy, support} or {photo, photos, video, entry, album}), etc. Manual identification of political issues from the set of retrieved topics is necessary because approaches like considering the top-k probable topics may not work. For example, social media concepts such as followers or promotional online content occur more frequently than immigration and abortion. We validate that our political issues appear in at least one out of three online lists of issues from Gallup.com, About.com and Ontheissues.com in Table 5.
The alternate positions that people take are shown in Table 6. Each box consists of a political issue in boldface and the top five words of the topics corresponding to alternate sentiment. These topics show what we mean by "alternate" positions, and that they are not merely positive or negative. In the case of the political issue "abortion", the contrasting positions correspond to the topics {join, religious, stand, support, conservative} and {prolife, killed, born, unborn, aborted}. The first position gives a religious view, whereas the second presents an emotional appeal with respect to abortion. Similarly, consider the box for "immigration". One position corresponds to "support" and "stand" by immigrants, while the opposing position corresponds to "check" or "stop" immigration. In the case of "insurance", authors are divided into ones who talk about "paying" and ones who see "hope" in revised insurance policies. Finally, look at the box corresponding to "gun laws". Both positions contain topics with negative words, but they differ in that one position talks about a "vote" while the opposite position mentions "give". Figure 2 shows the absolute difference between the P(p|i,g) values for each topic-party pair. The observations are intuitive: the issues with the least difference are employment and disasters, while the most contentious are abortion, election and immigration.
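The quantity plotted in Figure 2 follows directly from the estimated distributions; in sketch form, assuming a hypothetical nested mapping P[g][i][p] that holds the estimated P(p|i,g) (with two positions per issue, the absolute difference is the same for either position):

def contentiousness(P, issues):
    """Rank issues by |P(p|i,Dem) - P(p|i,Rep)| for the first position."""
    diffs = {i: abs(P["Democrat"][i][0] - P["Republican"][i][0]) for i in issues}
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)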
Application: Prediction of Political Affiliation
Obtaining a set of non-politician authors with reliable political affiliation labels is challenging. We first select authors who were labeled 'Unknown', meaning that the PIE model did not know their political affiliation during training. Among these authors, we select the ones who have mentioned their political affiliation in their profile description. This results in 25 test authors (of which 6 are Democrats). We consider two approaches to predict political affiliation:
1. Baseline: This baseline is similar to Gottipati et al. (2013), except that the vectors in our case are based on the PIE model. We calculate cosine similarities between the estimated author-position distribution ψ_{iu} of a test author and each of the group-position distributions ψ_{ig} of the two political groups. The predicted affiliation is the group with the greater similarity value (see the sketch after this list).
2. Log likelihood-based: We use the distributions over words and groups that were computed during training. For each test author, we run our estimation on their tweets twice, once for each group. The goal is to learn ψ_{iu} and θ_u, compute two log-likelihood values, one for each group, and predict the more likely affiliation. Table 7 compares our approaches with a past approach. The log likelihood-based approach yields the best accuracy, 68%.
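Both decision rules reduce to a two-way comparison; a hedged sketch, where psi_author, psi_dem and psi_rep are hypothetical flattened numpy arrays of position distributions from the trained model, and fit_with_group is a hypothetical wrapper around our estimation that returns the tweet log-likelihood under a fixed group:

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict_baseline(psi_author, psi_dem, psi_rep):
    # 1. Cosine-similarity baseline: pick the closer group distribution.
    sim_d, sim_r = cosine(psi_author, psi_dem), cosine(psi_author, psi_rep)
    return "Democrat" if sim_d >= sim_r else "Republican"

def predict_loglik(author_tweets, fit_with_group):
    # 2. Log-likelihood rule: re-estimate once per group, keep the likelier one.
    ll_d = fit_with_group(author_tweets, group="Democrat")
    ll_r = fit_with_group(author_tweets, group="Republican")
    return "Democrat" if ll_d >= ll_r else "Republican"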
Conclusion & Future Work
In this paper, we presented a Political Issue Extraction (PIE) model to discover political issues and two positions per issue, using a dataset of tweets by US politicians and civilians. Our PIE model represented a tweet as three sets of words: topic words, opinion words and emoticons. To model author-specific distributions, we considered a hierarchical set of distributions. To evaluate the PIE model, we compared multiple strategies for classifying words into these three categories and showed that POS-based classification gives the highest topic coherence. Our model was able to identify: (a) topics corresponding to political issues, (b) alternate positions that the two parties may take, and (c) the issues that are likely to be the most "contentious". We estimated twelve political issues (such as security, disasters and immigration) and positions within each. Using cosine similarity within groups, we showed that, when group information was provided, our model placed members of the same group closer to each other than to members of the other group. Our PIE model discovers that abortion, immigration and marriage are among the most contentious political issues. Finally, we also presented the findings of a pilot study predicting the political affiliation of authors using the PIE model, achieving an accuracy of 68%.
As future work, we wish to automatically identify which of the topics extracted by our model are political issues. Although the current model can, in principle, be used to extract more than two positions per issue, we would like to see whether additional challenges arise in that case. The model can also be adapted to the identification of controversial topics, brand loyalty, etc.
Figure 1: PIE Model: Plate Diagram

Table 1: Glossary of Variables/Distributions used
Random variables: u, g (author of a tweet and group of the author); i, p (issue and position of a tweet); t, o (issue word and position word in a tweet); e (emoticon in a tweet).
Distributions: θ_{u/g} (dist. over issues for author u / group g); ψ_{i,u/g} (dist. over positions for issue i and author/group); η_i (dist. over topic words for issue i); φ_{i,p} (dist. over opinion words for an issue-position pair); χ_p (dist. over emoticons for position p).
Hyper-parameters: α, β (concentration par. for issue/position dist.); γ (concentration par. for issue-word dist.); δ (concentration par. for position-word dist.); a further concentration par. for the emoticon dist.
Counts: the number of tweets on topic i by author u; V(t) (vocabulary size for topic words).

Figure 2: Difference between Political Positions

Table 2: Average topic coherence per topic for different strategies of word segregation

Table 3: Average cosine similarity between author-position distributions, with and without group membership information
Table 4: Top words in Political Issues discovered by PIE

Table 5: Comparison of our political issues with three online lists of political issues from About.com (A), Gallup (G) and OnTheIssues (O). Issues compared: Abortion, Government, Economy, Climate, Gun Laws, Election, Crime, Immigration, Employment, Disasters, Insurance, Security/War, Marriage.
Table 6: Top words in Political Positions discovered by PIE; ** is a popular twitter handle

Abortion:     {Join, Religious, Stand, Support, Conservative} vs. {Prolife, Killed, Born, Unborn, Aborted}
Security/War: {Killed, Syrian, Military, Fast, Furious} vs. {Military, Illegal, Russian, Targeting, Back}
Gun laws:     {Illegal, Free, Dont, Vote, Stop} vs. {Dont, Free, Stop, Illegal, Give}
Immigration:  {Join, Support, Back, Stand, Proud} vs. {Top, Enter, Check, Stop, Join}
Insurance:    {Pay, Federal, Signed, Paid, Uninsured} vs. {Check, Hear, Here, Call, Hope}
Marriage:     {Back, Don, Lost, Liberal, Great} vs. {Gay, Religious, Political, Free, **}
Table 7: Comparison of the PIE model with past approaches for prediction of Political Affiliation

Approach                             Accuracy (%)
Baseline: Gottipati et al. (2013)    60
Log likelihood-based                 68
'Dir' in the generative process denotes a Dirichlet prior.
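As a reading aid for Figure 1 and Table 1, the following sketch spells out the generative story for a single tweet; it is a simplification (the author-versus-group backoff and the Dirichlet draws of the distributions themselves are omitted), not the paper's implementation.

import numpy as np

def generate_tweet(theta_u, psi_u, eta, phi, chi, n_t, n_o, n_e, rng=None):
    """Draw one tweet: an issue, a position, then the three word sets."""
    rng = rng or np.random.default_rng()
    i = rng.choice(len(theta_u), p=theta_u)        # issue from the author's theta
    p = rng.choice(len(psi_u[i]), p=psi_u[i])      # position for that issue from psi
    topic_words = rng.choice(len(eta[i]), size=n_t, p=eta[i])
    opinion_words = rng.choice(len(phi[i][p]), size=n_o, p=phi[i][p])
    emoticons = rng.choice(len(chi[p]), size=n_e, p=chi[p])
    return i, p, topic_words, opinion_words, emoticons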
References

About.com. Political Issues. http://uspolitics.about.com/od/electionissues/.

Benton, A.; Paul, M. J.; Hancock, B.; and Dredze, M. 2016. Collective supervision of topic models for predicting surveys with social media. In Proceedings of the AAAI Conference.

Bird, S. 2006. NLTK: the natural language toolkit. In Proceedings of the COLING-ACL Interactive Presentation Sessions, 69-72. Association for Computational Linguistics.

Blei, D. M.; Ng, A. Y.; and Jordan, M. I. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3:993-1022.

Bontcheva, K.; Derczynski, L.; Funk, A.; Greenwood, M. A.; Maynard, D.; and Aswani, N. 2013. TwitIE: An open-source information extraction pipeline for microblog text. In Proceedings of the International Conference on Recent Advances in Natural Language Processing. Association for Computational Linguistics.

Casella, G., and George, E. I. 1992. Explaining the Gibbs sampler. The American Statistician 46(3):167-174.

Cohen, R., and Ruths, D. 2013. Classifying political orientation on Twitter: It's not easy! In ICWSM.

Conover, M. D.; Gonçalves, B.; Ratkiewicz, J.; Flammini, A.; and Menczer, F. 2011. Predicting the political alignment of Twitter users. In Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on Social Computing, 192-199. IEEE.

Fang, Y.; Si, L.; Somasundaram, N.; and Yu, Z. 2012. Mining contrastive opinions on political texts using cross-perspective topic model. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, 63-72. ACM.

Gallup.com. Jobs, Government, and Economy remain top US problems. http://www.gallup.com/poll/169289/jobs-government-economy-remain-top-problems.aspx.

Gayo-Avello, D.; Metaxas, P. T.; and Mustafaraj, E. 2011. Limits of electoral predictions using Twitter. In ICWSM.

Gerrish, S., and Blei, D. M. 2011. Predicting legislative roll calls from text. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), 489-496.

Gottipati, S.; Qiu, M.; Yang, L.; Zhu, F.; and Jiang, J. 2013. Predicting users' political party using ideological stances. In Social Informatics, 177-191. Springer.

Griffiths, D., and Tenenbaum, M. 2004. Hierarchical topic models and the nested Chinese restaurant process. In Advances in Neural Information Processing Systems 16.

Grimmer, J. 2010. A Bayesian hierarchical topic model for political texts: Measuring expressed agendas in Senate press releases. Political Analysis 18(1):1-35.

Heinrich, G. 2005. Parameter estimation for text analysis. Technical report.

Jo, Y., and Oh, A. H. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, 815-824. ACM.

Lin, C., and He, Y. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, 375-384. ACM.

Lin, W.-H.; Xing, E.; and Hauptmann, A. 2008. A joint topic and perspective model for ideological discourse. In Machine Learning and Knowledge Discovery in Databases, 17-32. Springer.

McAuley, J. J., and Leskovec, J. 2013. From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews. In Proceedings of the 22nd International Conference on World Wide Web, 897-908. International World Wide Web Conferences Steering Committee.

Mei, Q.; Ling, X.; Wondra, M.; Su, H.; and Zhai, C. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th International Conference on World Wide Web, 171-180. ACM.

Metaxas, P. T.; Mustafaraj, E.; and Gayo-Avello, D. 2011. How (not) to predict elections. In Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on Social Computing (SocialCom), 165-171. IEEE.

O'Connor, B.; Balasubramanyan, R.; Routledge, B. R.; and Smith, N. A. 2010. From tweets to polls: Linking text sentiment to public opinion time series. ICWSM 11:122-129.

OnTheIssues.com. Candidates on the Issues. http://www.ontheissues.org/default.htm.

Röder, M.; Both, A.; and Hinneburg, A. 2015. Exploring the space of topic coherence measures. In Proceedings of the Eighth International Conference on Web Search and Data Mining, Shanghai, February 2-6.

Wang, X.; Mohanty, N.; and McCallum, A. 2005. Group and topic discovery from relations and text. In Proceedings of the 3rd International Workshop on Link Discovery, 28-35. ACM.

Wong, F. M. F.; Tan, C. W.; Sen, S.; and Chiang, M. 2013. Quantifying political leaning from tweets and retweets. In ICWSM.

Yano, T.; Cohen, W. W.; and Smith, N. A. 2009. Predicting response to political blog posts with topic models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, 477-485. Association for Computational Linguistics.

Zhao, W. X.; Jiang, J.; Yan, H.; and Li, X. 2010. Jointly modeling aspects and opinions with a MaxEnt-LDA hybrid. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, 56-65. Association for Computational Linguistics. |
33,377,896 | [] | É Ê Ë Ì Î Í D E Ï Ð F 3 Ñ Ì p Ò Ó Ô E Õ Ö × Ø E Ù Ú × Û Ð Ü Ö Õ & Ý ¤ Ö Ø E Þ Ü g ß n à 1 Ü á ¤ Ù ã â Ø U × ä Ö Ü á Ö ä × i Þ Ç Õ Ù ã á ¶ å ae ç Ü Ø r Ø E Þ Ç Ù Ú 4 Ü Ö Ù ã × á r à è Ó Ô E Õ Ø E ä × i é E Þ ã Õ 4 â ê Ù à 3 Ö Ô r Ü Ö ä Õ q à Q Õ q Ü ä } Ú } Ô E Õ 4 ä } à 1 Ô r Ü g ë i Õ ì à Õ í à Q Õ ë Õ 4 ä } Ü Þ p Ö × i Ø E Ù ã Ú ¡ ä Õ Ø E ä Õ à Õ 4 á Ö Ü Ö Ù ã × á r à î U à × â Õ 4 Ö Ù ã â Õ q à Ù ã á Ü á ì á r à ß n à Ö Õ 4 â Ü Ö Ù ã Ú ¶ ï 8 Ü g ß i è ñ ð Õ b Ü ä ò ì Õ ¤ Ö Ô Y Ü Ö v Ö × Ø E Ù Ú ¶ ä Õ 4 Ø r ä Õ q à Q Õ á n ó Ö Ü Ö Ù ã × á r à à Q Ô r × ì Þ í ñ é U Õ " é r Ü i à Q Õ q í ñ × á ñ Ù ã á n Û © × ä â Ü Ö Ù Ç × i á ¼ Ö Ô r Ü Ö ô Ú 4 Ü á é U Õ Ü Ú õ ì Ù Ç ä Õ í Ü ì Ö × â Ü Ö Ù Ú 4 Ü Þ ã Þ ã ß Û © ä × â ö Ü Ö × i Ø E Ù ã Ú & ó t ä Õ 4 Þ ã Õ 4 ë Ü á i Ö Ð Ú 4 × ä ó Ø ì à è ö Ó p × i Ø E Ù ã Ú à Q Ù ã ò á Y Ü Ö ì ä Õ q à 4 î 1 Ü i à Ù ã á i Ö ä × n í ì Ú & Õ í ÷ Ù Ç á B ø ù ae ¢ Ù ã á Ü á r í ú 1 × ë ¤ ß i î I û ü ü i ü i ý 3 Ø E ä × ë ¤ Ù ã í E Õ p à ì Ú } Ô " Ü á " Õ & Ý E Ü â Ø E Þ ã Õ è þ á À Ö Ô r Ù ã à Ø r Ü ó Ø U Õ 4 ä ô ï £ Õ ª Ø E ä Õ q à Q Õ á Ö ¶ Ö ï £ × Ù Ç á r Ú 4 ä Õ â Õ 4 á Ö Ü Þ 5 Õ á E Ô r Ü á Y Ú & Õ 4 â Õ á i Ö } à % Ö × Ö × i Ø E Ù Ú ¥ à Q Ù ã ò á r Ü Ö ì ä Õ à ¤ Ü á r í C à Ô E × ï ÿ Ô E × ï ê Ö Ô E Õ 4 ß Î Ú 4 Ü á Î é U Õ ì à Q Õ q í ï ¦ Ù Ç Ö Ô ò × ¤ × n í 3 ä Õ q à ì Þ Ö } à ¢ Ù ã á þ á n Û © × ä â Ü Ö Ù ã × á ¡ Ð Ý Ö ä Ü i Ú Ö Ù Ç × i á ø ù þ ¢ £ ý f Ü á r í £ ì Þ Ç Ö Ù Ç ó ¥ ¤ × n Ú ì â Õ á Ö § ¦ ì â â Ü ä Ù © q Ü Ö Ù ã × á Î ø £ ¤ ¦ E ý & è ¦ ¤ Õ Ú & Ö Ù ã × á u û Ø E ä Õ à Õ 4 á Ö à Ö Ô E Õ ô á E × Ö Ù ã × á ÷ × Û Ö × Ø r Ù ã Ú ô à Q Ù ã ò á Y Ü Ö ì ä Õ b Ü á r í ¹ Ù Ç Ö à p Õ 4 á n ó Ô r Ü á r Ú & Õ â Õ á Ö £ Ö × 5 Ö × Ø E Ù Ú ó µ ä Õ 4 Þ ã Õ 4 ë Ü á Ö £ ä Õ 4 Þ Ü Ö Ù Ç × i á r à è ¦ n Õ Ú Ö Ù Ç × i á § í n Õ & ó Ö Ü Ù Ç Þ à 3 Ö Ô r Õ Ü ì Ö × i â Ü Ö Ù Ú Ø E ä × ¤ Ú 4 Õ í ì ä Õ × Û Ü Ú 4 õ ì Ù ã ä Ù Ç á E ò p Ö Ô r Õ á r Õ 4 ï Ö × i Ø E Ù Ú ¡ ä Õ 4 Ø E ä Õ à Õ 4 á Ö } Ü Ö Ù ã × á p è ¦ ¤ Õ q Ú Ö Ù Ç × i á Ø E ä Õ q à Q Õ á Ö à Ö Ô E Õ à Õ Ú & × i á r í Õ 4 á r Ô r Ü á r Ú 4 Õ 4 â Õ 4 á Ö q î Ð á r Ü â Õ Þ Ç ß ô Ö Ô r Õ v Ö × Ø E Ù Ú Ö Ô E Õ 4 â Õ q à 4 è ¦ ¤ Õ q Ú Ö Ù ã × á í n Ù à Ú ì à à Q Õ q à Ö Ô r Õ v Ü Ø E Ø r Þ Ç Ù Ú 4 Ü Ö Ù ã × á ¥ × Û Ö Ô E Õ à Õ Ö × Ø E Ù Ú ä Õ 4 Ø E ä Õ à Õ 4 á Ö Ü ó Ö Ù ã × á Y à Ö × þ ¢ ¬ Ü á r í ! £ ¤ ¦ X è " ¦ n Õ Ú Ö Ù Ç × i á ! 
# à ì â â Ü ä Ù © Õ à Ö Ô r Õ 1 Ú & × á E ó Ú & Þ ì à Ù Ç × i á r à è $ Ê Ï & % H Q x ¼ 6 H G £ 9 1 7 w E P 1 D Ì v £ % × à Ö ¦ í n × n Ú ì â Õ á i Ö } à Ú 4 Ü á p é Y Õ Ú } Ô r Ü ä Ü i Ú Ö Õ 4 ä Ù © Õ í Ü à Ü à Q Õ q õ ì Õ 4 á r Ú 4 Õ × Û 8 à ì é E ó x Ö × Ø E Ù Ú 4 Ü Þ í n Ù à Ú ì à à Q Ù ã × á Y à 1 Ö Ô r Ü Ö × n Ú 4 Ú ì ä Ù ã á ¶ Ö Ô E Õ Ú 4 × á Ö Õ 4 Ý ¤ Ö × Û × i á E Õ ¶ × ä p à Q Õ ë Õ 4 ä } Ü Þ ¦ â Ü Ù Ç á u Ö × Ø E Ù Ú ¶ í n Ù à Ú ì à à Q Ù ã × á r à ( ' q è Ó × À í n Õ & ó Ö Õ ä â Ù ã á E Õ 1 Ö Ô E Õ é U × ì á r í E Ü ä Ù ã Õ à Ð × Û ¢ Õ Ü i Ú } Ô à ì é E Ö × Ø r Ù ã Ú î i Ö Ô E Õ 0 ) ¡ 1 3 2 5 4 7 6 ) ¡ 8 @ 9 " 8 @ A C B ¥ Ü Ø E Ø E ä × i Ü Ú } Ô ¤ ä Õ 4 Ø U × ä Ö Õ q í p Ù ã á À ø ù ú Õ Ü ä à Q Ö î E D G F H F P I ý Ú Ü á % é U Õ ì à Õ í f î ä Õ 4 Þ ã ß Ù ã á E ò × i á Þ Ç Õ 4 Ý n Ù ã Ú Ü Þ Y Ú 4 × Ô E Õ q à Q Ù ã × á ä Õ 4 Þ Ü Ö Ù Ç × iV X d f p ( b " h Ü ¡ à ì é n Ö × Ø E Ù Ú Ü á r í Ü á ß × Û f Ö Ô E Õ 3 â Ü Ù Ç á Ö × Ø r Ù ã Ú à Ð Û © ä × â B Ö Ô r Õ í n × n Ú ó ì â Õ 4 á Ö q è Ð þ á % Ú & × á Ö ä Ü i à Ö q î n ï ¦ Ô E Õ 4 á % Ü à Õ & Ö ¦ × Û í n × n Ú ì â Õ á i Ö } à 1 Ü é U × ì Ö Ö Ô r Õ 5 à Ü â Õ Ö × i Ø E Ù ã Ú ¡ Ù ã à Ø E ä × ë Ù í n Õ q í f î Y Ü Ö × Ø E Ù Ú ¡ Ú } Ô r Ü ä Ü i Ú Ö Õ 4 ä Ù © q Ü Ö Ù ã × á Ú 4 Ü á é Y Õ Ü Ú õ ì Ù Ç ä Õ í Ü ì Ö × i â Ü Ö Ù Ú 4 Ü Þ Ç Þ ã ß è £ ø ù ae ¢ Ù ã á ¡ Ü á r í ú 1 × ë ¤ ß i î û ü ü i ü i ý Ø E ä × Ø U × i à Õ Ü â Õ & Ö Ô E × n í Û © × ä Ú } Ô Y Ü ä } Ü Ú Ö Õ 4 ä Ù © Ù Ç á r ò 5 Ö × Ø E Ù Ú 4 à 1 Ö Ô E ä × ì ò Ô Ü Þ Ç Õ 4 Ý n Ù ã Ú Ü Þ ã Þ Ç ß ÷ í n Õ 4 Ö Õ ä â Ù ã á E Õ í j i ¥ k l 7 m n p o ( m r q s u t x i a v x w y { z f | í E Õ } r á E Õ q í Ü à i ¥ k l u m n ( ø ' x ' ý h ø ¢ r ý ( î ¤ ï ¦ Ô E Õ 4 ä Õ Õ q Ü Ú } Ô v × Û ¢ Ö Ô r Õ Ö Õ ä â à ¢ Ù à b Ô r Ù Ç ò i Ô E Þ Ç ß ñ Ú 4 × ä ä Õ Þ ã Ü Ö Õ í Î ï ¦ Ù Ç Ö Ô ñ Ö Ô E Õ i k y l 7 m n ï ¦ Ù Ö Ô Ü á ¬ Ü à à × ¤ Ú 4 Ù ã Ü Ö Ù ã × á ¶ ï 8 Õ 4 Ù ã ò Ô Ö Q è ae ¢ Ù ã á ¬ Ü á r í ô ú 1 × ë ¤ ß ¶ í n Õ h } r á E Õ Ö Ô r Õ Ö Õ ä â à Ü à ¦ Õ Ù Ö Ô E Õ 4 ä 3 à Ö Õ 4 â â Õ í % Ú & × i á Ö Õ 4 á Ö ï 8 × ä } í E à î E é E Ù ã ò ä } Ü â à ¦ × i ä Ö ä Ù ã ò ä } Ü â à 4 è 5 Ó Ô E Õ à Q Õ Þ Ç Õ q Ú Ö Ù ã × á × Û Ð Ö Ô E Õ Ö Õ ä â à Ü à 3 ï 8 Õ 4 Þ ã Þ Ð Ü à Ö Ô r Õ Ü à à Ù Ç ò i á E â Õ 4 á Ö × Û Ö Ô E Õ Ü i à à × n Ú & Ù Ü Ö Ù Ç × i á ï 8 Õ 4 Ù ã ò Ô Ö Ù à í n Õ & Ö Õ 4 ä â Ù Ç á E Õ q í é ¤ ß Ö Ô r Õ ì à Q Õ × Û p Ö Ô E Õ Þ Ç Ù Õ Þ Ç Ù ã Ô E × ¤ × n í v ä } Ü Ö Ù Ç × Y è ì Ö Ö × Ø r Ù ã Ú à Ü ä Õ á r × Ö v Ú } Ô r Ü ä Ü i Ú Ö Õ ä Ù 4 Õ q í À × i á E Þ Ç ß " é ß À Ö Õ 4 ä â à 4 î Ö Ô r Õ 4 ä Õ Ü ä Õ ö Ü Þ ã à × ä Õ Þ ã Ü Ö Ù ã × á r à ¼ é U Õ & Ö ï 8 Õ 4 Õ á Ö × Ø r Ù ã Ú Ú 4 × á r Ú 4 Õ 4 Ø n Ö } à Ö Ô Y Ü Ö v á E Õ Õ í ª Ö × ¥ é U Õ ¶ Ù í n Õ 4 á Ö Ù r } r Õ q í f è 8 ß Ü à à ì â Ù ã á E ò ô Ö Ô r Ü Ö Ö Ô r Õ Þ Ü ä ò Õ à Q Ö ¶ â Ü x × i ä Ù Ç Ö ß × Û Ö Ô E Õ q à Q Õ À ä Õ 4 Þ Ü Ö Ù Ç × i á r à ¤ Ö } Ü Õ " Ø E Þ ã Ü i Ú & Õ À é U Õ & ó Ö ï 8 Õ 4 Õ 4 á u ë i Õ 4 ä é r à y á E × â Ù ã á r Ü Þ ã Ù Ü Ö Ù Ç × i á r à Ü á r í × Ö Ô E Õ 4 ä v á E × ì á Y à 4 î 8 Ö Ô r Õ Ö × i Ø E Ù Ú v ä Õ 4 Ø E ä Õ à Õ 4 á Ö } Ü Ö Ù ã × á Y à ¡ Ú 4 Ü á À é U Õ Ø r ä × n í ì Ú 4 Õ í ¥ Ù ã á À Ö ï 8 × ô Ù Ö Õ 4 ä ó Ü Ö Ù Ç × i á r à è % þ á ¥ Ö Ô E Õ } r ä à Q Ö 5 Ù Ç Ö Õ ä Ü Ö Ù ã × á ¥ × i á E Þ Ç ß ô á E × ì á r à ¡ Ü á r í ¬ ë Õ ä é Y à Ü ä Õ Ú & × i á r à Q Ù í n Õ ä Õ q í b Ü à 3 Ö Õ 4 ä â à Ù Ç á ô Ö Ô E Õ Ö × i Ø E Ù ã Ú à Ù Ç ò i á r Ü Ö ì ä Õ p Ö Ô E Õ % à Q Õ Õ í ª ä Õ Þ ã Ü Ö Ù ã × á ï £ Õ ¤ Ø U Õ 4 ä Û © × ä â Ü Û © × ì ä Q ó à Ö Õ 4 Ø Ø E ä × ¤ Ú 4 Õ í ì ä Õ × á p Ö Ô r Õ Ö × Ø r Ù ã Ú à Ù Ç ò i á r Ü Ö ì 
ä Õ z f | ' U i ¥ y ¥ l p ¡ H ¢ £ & m @ ¤ ¥ i ¥ y w k v x i i k y l 7 m n k x v x i a ¤ ¥ m y h w y o è ÷ Ó p Õ ä â à Û © ä × â ¦ z 0 | ' Ö Ô Y Ü Ö p Ô r Ü g ë i Õ Ü á Õ & Ý ¤ Ö ä Õ â Õ ë Ü ä Ù Ü Ö Ù Ç × i á Û © ä × â Ö Ô E Õ Ð â Õ q Ü á ï 8 Õ 4 Ù ã ò Ô Ö Ü ä Õ { } r Þ Ç Ö Õ 4 ä Õ í ª × ì Ö è ¹ þ á u × ì ä Õ 4 Ý n Ø Y Õ ä Ù ã â Õ 4 á Ö î 8 ï £ Õ ¤ Ô r Ü g ë i Õ % Ú & × á E ó à Ù ã í n Õ ä Õ q í á E Ù ã á E Õ C í E Ù r § X Õ 4 ä Õ 4 á Ö í E × ¤ Ú ì â Õ 4 á Ö ¹ Ú & × Þ ã Þ ã Õ Ú Ö Ù Ç × i á r à î p Õ q Ü Ú } Ô ä Õ 4 Ø E ä Õ à Õ 4 á Ö Ü Ö Ù ã ë Õ b Û © × ä ¶ Ü í n Ù r § X Õ 4 ä Õ 4 á Ö ¤ Ö × i Ø E Ù Ú è ð Õ ì à Õ í Ö Ô r Õ Ö ä } Ü Ù ã á E Ù ã á E ò ¶ í n × n Ú ì â Õ á Ö à Û © ä × â Ö Ô E Õ p ú ì é n ó I ë Õ 4 á Ö ó F F p Õ ë Ü Þ Ç ó ì Ü Ö Ù ã × á r à ¶ ø x ú 1 Ù ã ä à Ú } Ô E â Ü á Õ & Ö Ü Þ t è Ç î 0 D G F F H F i ý & è ÷ Ó Ô E Õ ¤ Ö × i Ø E Ù Ú 4 à Ü ä Õ C 8 ¥ y ¦ ¥ ( l u ¤ © k c o ( m k s ¤ A ( èsay(V) have(V) be(V) 1.544e+05 1.420e+05 1.336e+05 TOPIC=BOMBINGS outliers market(N) government(N) company(N) index(N) investor(N) be(V) say(V) rise(V) analyst(N) change(V) growth(N) expect(V) raise(V) bond(N) economy(N)Ë Ù Ç ò ì ä Õ D Ó × Ø r Ù ã Ú à Ù Ç ò i á r Ü Ö ì ä Õ q à ¦ Ü á r í p × ì Ö Þ ã Ù Ç Õ ä à è i ¥ y ¥ l E D ¢ y G F t x s W i X m n I H k x w P F t x ¤ ¥ m R Q U t x i a m k x s S & £ b Ü á ¤ ß ¡ × Û U Ö Ô E Õ ¦ Ö Õ 4 ä â à Û © ä × â § © ¬ Ü ä Õ 5 á Y Ü â Õ í Õ 4 á Ö Ù Ç Ö Ù ã Õ à 3 × i ä à Õ 4 â Ü á i Ö Ù ã Ú Ú 4 × á r Ú 4 Õ 4 Ø n Ö } à Ö Ô Y Ü Ö Ú 4 Ü á é U Õ á E × ä â Ü Þ ã Ù © Õ í à Q Õ â Ü á Ö Ù Ú 4 Ü Þ ã Þ ã ß è Ë E × ä 3 Ö Ô E Ù ã à Ø ì ä ó Ø U × i à Õ î 2 ï £ Õ v ä Õ 4 Ö ä Ù Ç Õ ë Õ Ü Þ ã Þ Ð Ö Ô E Õ p à Õ 4 á Ö Õ 4 á r Ú 4 Õ à Ù ã á ¥ ï ¦ Ô E Ù Ú } Ô " Ü á ¤ ß × Û Ö Ô r Õ 3 á r × ì á r à £ Û © ä × â 1 § © p × n Ú 4 Ú ì ä à 8 Ù ã á v Ö Ô E Õ 3 í E × ¤ Ú ì â Õ 4 á Ö 1 Ú & × i Þ Ç Þ ã Õ Ú & ó Ö Ù ã × á ¥ Ü á r í ¬ Ü Ø E Ø E Þ ã ß Ü % å Ü â Õ í I á i Ö Ù Ö ß ¡ T ¦ Õ Ú 4 × ò i á E Ù © Õ 4 ä ø x å $ T ý Ö Ô Y Ü Ö ï 8 Ü i à Ö ä Ü Ù Ç á r Õ í ¥ Ö × Ù í n Õ á i Ö Ù Û © ß " á r Ü â Õ à 5 Û © ä × â D c ¶ í n Ù © § U Õ ä Q ó Õ 4 á Ö Ú 4 Ü Ö Õ 4 ò i × ä Ù Ç Õ q à è À ð ¹ Ô E Õ á E Õ 4 ë i Õ 4 ä Ü b á r × ì á U § Û © ä × i â V § © Ù à v Ö } Ü ò i ò Õ í ȩ ¤ ß u Ö Ô E Õ ¥ å C T î ¦ Ù Ç Ö ¤ Ù à ä Õ Ø E Þ Ü Ú & Õ q í ÷ é ¤ ß u Ö Ô r Õ ô Ø Y Ü Ù ã ä W § X î © 4 § è £ Ó Ô E Õ á r Ü â Õ ¡ Ú 4 Þ ã Ü i à à Y § X ³ Õ á r Ü é r Þ Ç Õ q à 8 Ö Ô E Õ á r × ä ó â Ü Þ ã Ù © q Ü Ö Ù Ç × i á ¶ × Û £ Ü Þ Ç Þ á E × ì á r à × Û Ð Ö Ô E Õ à Ü â Õ 5 á r Ü â Õ 5 Ú 4 Þ ã Ü i à à a § c b î § c d 4 î ¢ è ã è Ç è Ù ã á Ö × Ü à Q Ù ã á E ò i Þ Ç Õ Ø r Ü Ù Ç ä W § X î © 4 § b î ¤ § c d 4 î ¢ è Ç è ã è © è ¦ ¤ Ù ã â 5 ó Ù ã Þ ã Ü ä Þ ã ß î ¤ ï 8 Õ Ø U Õ 4 ä Û © × ä â Ü ¡ Ú & × i á r Ú & Õ Ø n Ö ì Ü Þ ì á E Ù © } Y Ú 4 Ü Ö Ù ã × á p × Û Ü Þ Ç Þ U Ö Ô r Õ × Ö Ô E Õ 4 ä á E × ì á r à 4 è Ë E × i ä p Ö Ô E Ù ã à Ø ì ä Ø U × i à Õ î g ï £ Õ ì à Õ I Ü 1 Ô r Ü á r í ¤ ó Ú & ä } Ü ÛÖ Õ í × á Ö × Þ ã × ò ß " Ö Ô r Ü Ö p Õ á r Ú & × n í n Õ q à D U ü ¥ Ú & × i á r Ú & Õ Ø n Ö à Ö Ô r Ü Ö à ì é r à ì â Õ û û E îü i ü ü 8 ï 8 × ä } í E à 4 è Ó Ô ì à f Õ 4 ë i Õ 4 ä ß á E × ì á Û © ä × i â e § © Ö Ô r Ü Ö p Ù à ¢ á E × Ö Ü á r Ü â Õ é U Õ 4 Þ ã × á E ò à 8 Ö × à × â Õ Ú & × á Y Ú & Õ 4 Ø E Ö ì Ü Þ f Ø r Ü Ù ã ä ¡ 2 X ' X î r 3 § g f n î § g h ¤ î I è Ç è ã è r " î ¢ ï ¦ Ô E Õ ä Õ i X ' X Ù ã à 3 Ö Ô E Õ Ú & × i á r Ú & Õ Ø n Ö Ö Ô r Ü Ö à ì é r à ì â Õ à § f Ü á r í p § c h ¤ è 2 þ µ Û 2 à ì Ú } Ô ¤ à ì é Y à ì â Ø n Ö Ù Ç × i á r à ¦ Ü ä Õ 3 á E × Ö 1 Ù í n Õ 4 á Ö Ù r } r Õ q í Ù ã á ¶ Ö Ô r Õ ¡ × á Ö × Þ ã × ò ß i î Y Ü p í ì â â 
¡ ß ¶ Ú 4 × á r Ú 4 Õ 4 Ø n Ö ' X $ q Ù à Ú 4 × á r à Ù ã í E Õ 4 ä Õ í Ö × ¬ à ì é r à ì â Õ Ü Þ Ç Þ ì á n ó µ â Ü Ø E Ø U Õ í ª á E × ì á r à è À Ó Ô E Õ ¤ Ø E ä × n Ú 4 Õ à à Q Ù ã á E ò Û © ä × â à Q Ö Õ 4 Ø Y à 1 û ¡ Ü á r í § Ú & ä Õ Ü Ö Õ Ü á p Ù Ç á Ö Õ ä â Õ q í n Ù ã Ü ä ß ä Õ 4 Ø r ä Õ q à Q Õ á n ó Ö Ü Ö Ù ã × á × Û Ö Ô E Õ Ð Ö × Ø E Ù Ú Ð à Ù Ç ò i á r Ü Ö ì ä Õ z f | s r î í n Õ h } r á E Õ q í Ü à " i k y l 7 m n 2 ' © î § © C i è I Ü Ú } Ô v Õ 4 Þ ã Õ 4 â Õ 4 á Ö 8 Û © ä × i â t © v Ù à Ü i u " v z wº £ | q ¤ 4 Å } 4 | q i y x s Ø r Ü Ù ã ä q î n ï ¦ Ô E Õ 4 ä Õ Ü i à Õ Ü Ú } Ô Õ 4 Þ ã Õ 4 â Õ á i Ö Û © ä × â § © Ù à I Ü Ø r Ü Ù ã ä £ Ú 4 × á r à Ù ã à Q Ö Ù ã á E ò × Û p Ü à Õ 4 â Ü á Ö Ù Ú 1 Ú 4 Þ ã Ü i à à ø ù Ü á r Ü â Õ Ú & Þ Ü à à × ä Ü á × á Ö × i Þ Ç × i ò Ù Ú 4 Ü Þ Ú & × i á r Ú & Õ Ø n Ö } ý Ü á r í Ü v Þ ã Ù à Ö × Û I á r × ì á r à R C S U T 7 b x w y r T µ Y ¢ t ¥ w y a a T ¢ " V X S x w V 3 w y q X T 7 q X T Ỳ ¢ p ( ( b c d ¢ T ¢ w y q a T P B 4 ¦ H G 3 & d f e h g d & i B j k & l g d m i ¤ n 7 h g 3 i k o d f p i r q s g d g d i ¤ t ' G u 3 v x w w s i y i i m z m 2 { x s p | h i & i y i m G g ' { x s p | h i v d s 4 G g } w i n d f i h i p i ¤ t ' w d l d p i } z p d l 4 } p g i & 4 g s & l m l z m & 2 { x p i Û © ä × â ± z f | ' Ö Ô Y Ü Ö 1 Ü ä Õ Ö } Ü ò i ò Õ í 7 g â Ü Ø E Ø U Õ í v Ù Ç á p Ö Ô Y Ü Ö 1 à Q Õ â Ü á Ö Ù Ú Ú & Þ Ü à à è Ë Ù ã ò ì ä Õ ¶ û ô Ù ã Þ Ç Þ ì à Q Ö ä } Ü Ö Õ q à Ö Ô E Õ z 0 | r Û © × ä Ö Ô E Õ ¤ Ö × i Ø E Ù Ú ² · 3 o ¹ E » ² ¥ © 0 ¼ ³ è <explode,Ë Ù ã ò ì ä Õ û þ á Ö Õ 4 ä â Õ í n Ù Ü ä ß Ö × Ø E Ù Ú à Q Ù ã ò á r Ü Ö ì ä Õ è i ¥ y ¥ l W ¢ y ¤ y n h i a m k x s k § i k y l 7 m n § o h y y y @ c o h S Ë E × i ä ¡ Õ 4 ë i Õ 4 ä ß ë i Õ 4 ä é µ ï ¦ Ô E Ù Ú } Ô ä Õ 4 Ø E ä Õ à Õ 4 á Ö à f Ö Ô E Õ } r ä } à Ö p Õ Þ Ç Õ â Õ 4 á Ö × Û E Ü 1 Ø r Ü Ù Ç ä p Û © ä × i â © Ü á Y í ¡ Û © × i ä Ṏ 4 ë i Õ 4 ä ß ¡ à Q Õ â Ü á Ö Ù Ú Ú & Þ Ü à à " X ¦ ï ¦ Ô E Ù Ú } Ô ä Õ Ø E ä Õ q à Q Õ á Ö à Ö Ô r Õ } r ä } à Ö Ṏ Þ Ç Õ â Õ á Ö 2 × Û U Ü Ø r Ü Ù ã ä Û © ä × i â § © Ú & ä Õ Ü Ö Õ Ü Ø U × i à à Q Ù ã é E Þ ã Õ ä Õ 4 ó Þ Ü Ö Ù ã × á ! G 3 µ ù ó y X G î ì á E Þ ã Õ à à " X Ù à Ö Ô r Õ í ì â â ¡ ß ¡ Ú 4 Þ ã Ü i à à X C q i è þ á À Ö Ô E Õ Þ ã Ü Ö Q Ö Õ ä Ú Ü à Õ î 2 Ö Ô E Õ p ä Õ 4 Þ Ü Ö Ù Ç × i á r à 5 ï ¦ Ù Ç Þ ã Þ I Ö } Ü i Õ Ö Ô r Õ v Û © × i ä â G W ó y § c î E ï ¦ Ô E Õ 4 ä Õ ' § c Ù ã à ¦ Ü á ß v × Û p Ö Ô E Õ á E × ì á r à 8 Þ Ç Ù à Q Ö Õ í Ù ã á Ö Ô r Õ í ì â â ¡ ß ¡ Ú 4 Þ ã Ü i à à X qè Ë E × i ä Õ ë Õ ä ß ä Õ 4 Þ Ü Ö Ù ã × á Ú 4 × â Ø ì Ö Õ 0 î 2 ï ¦ Ô E Ù Ú } Ô À Ú & × ì á Ö à Ö Ô E Õ á ì â 5 é Y Õ ä ¡ × Û Ö Ù Ç â Õ à ¡ ë Õ ä é W 1 × i ä Ü á ¤ ß × Û ¢ Ù Ç Ö à 8 á E × i â Ù ã á r Ü Þ Ç Ù Ü Ö Ù ã × á r à £ Ú & × Þ ã Þ ã × ¤ Ú Ü Ö Õ à Ð ï ¦ Ù Ç Ö Ô p Ü á ¤ ß × Û f Ö Ô r Õ á E × ì á r à Ü à à × ¤ Ú 4 Ù ã Ü Ö Õ q í % ï ¦ Ù Ö Ô X Ù Ç á E § © ¢ è Ó Ô E Õ Ú & × i Þ Ç Þ ã × n Ú 4 Ü Ö Ù ã × á ï ¦ Ù ã á r í n × ï Ú & × i á r à Ù ã à Q Ö à × Û ¦ Ö Ô E Õ % à Q Õ á Ö Õ 4 á Y Ú & Õ p Ú 4 × á Ö Ü Ù Ç á r Ù Ç á E ò E Ü á r í Ù Ç Ö à Ð Ù ã â â Õ í n Ù Ü Ö Õ 4 Þ ã ß 5 Ø E ä Õ Ú 4 Õ í n Ù ã á E ò Ü á r í à ì Ú 4 Ú & Õ Õ í n Ù ã á E ò ¡ à Õ 4 á Ö Õ 4 á r Ú 4 Õ à è Ó Ô E Õ 8 ä Õ Þ ã Ü Ö Ù ã × á 4 b 4 d ï ¦ Ù Ç Ö Ô 5 Ö Ô E Õ Þ Ü ä ò Õ à Q Ö x b 4 d Ü Ú 4 ä × à à Ü Þ ã Þ ¤ ä Õ Þ ã Ü ó Ö Ù ã × á Y à é U Õ Ú 4 × â Õ à Ö Ô E Õ ¦ à Q Õ Õ í ä Õ 4 Þ Ü Ö Ù Ç × i á ¢ è p ð ¹ Ô r Õ 4 á ! 
4 b 4 d Ù à 2 × Û E Ö ß ¤ Ø U Õ ó y X " î U Ü á r í X " Ù ã à á E × Ö Ü á r Ü â Õ Ú & Þ Ü à à 4 î W b 4 d Ù à 1 ä Õ 4 Ø E Þ Ü Ú 4 Õ í é ¤ ß b 4 ó y § î ï ¦ Ô E Õ ä Õ £ á E × ì á g § Ô Y Ü à p Ö Ô E Õ 8 Ô E Ù ã ò Ô E Õ q à Ö 2 Ú & × á E ó Ö ä Ù ã é ì Ö Ù ã × á Ö × 5 Ö Ô E Õ â Ü ò i á E Ù Ö ì í n Õ × Û b 4 d ¦ Ü â × i á E ò Ü Þ Ç Þ f á r × ì á r à Û © ä × â X è Ë Ù ã ò ì ä Õ ô Þ Ç Ù à Ö } à Ö Ô E Õ % à Õ 4 Õ q í E à ï £ Õ ¤ × é E Ö Ü Ù ã á E Õ q í À Û © × i ä Õ Ü i Ú } Ô p × Û p Ö Ô E Õ á E Ù ã á E Õ Ö × i Ø E Ù ã Ú à Û © ä × â I ë Õ 4 á Ö ó F F E è SeedË Ù ã ò ì ä Õ ¡ ¦ n Õ 4 Õ í T ¦ Õ Þ ã Ü Ö Ù ã × á r à è Ê Ï & % H Q x ) Ì Ñ 7 w E H Ï I 9 v C s ¡ ¢ ¢ £ ¤ i ¥ § ¦ r s x © Ó ï 8 × ¹ Û © × ä â à ¶ × Û Ö × Ø E Ù Ú À ä Õ 4 Þ Ü Ö Ù Ç × i á r à ¶ Ü ä Õ " Ú 4 × á r à Ù ã í E Õ 4 ä Õ í ø ¢ D g ý à ß á Ö } Ü Ý ¤ ó t é Y Ü à Õ í ä Õ 4 Þ Ü Ö Ù Ç × i á r à 8 é Y Õ 4 Ö ï £ Õ Õ 4 á Ö Ô E Õ ' ª ç Ü á Y í v Ù Ç Ö à v r « G ¬ y y n i t î ® W «y y n h i x î 2 × ä ° s w y ¥ l 3 k x o y m @ i a m k x s u t x ¤ $ ± f i a i t n ¦ £ k F y h s µ i ³ ²Ã Ü á r í ¹ ø x û i ý¬ ¥ w y h ¤ © t x i a m k x s µ o ï ¦ Ô E Ù Ú } Ô 5 ä Õ 4 Ø r ä Õ q à Q Õ á i Ö ä Õ Þ ã Ü Ö Ù ã × á r à é U Õ & Ö ï 8 Õ 4 Õ á 5 Õ ë Õ 4 á Ö } à Ü á Y í À Õ 4 á Ö Ù Ö Ù Ç Õ q à ¡ Ö Ô r Ü Ö Ú 4 Ü á E á E × Ö é U Õ p Ù í n Õ á i Ö Ù r } Y Õ í ª é ß À à ß ¤ á Ö Ü Ú & Ö Ù Ú Ú & × i á r à Q Ö ä } Ü Ù ã á i Ö } à 4 è I ó µ ä Õ Þ ã Ü Ö Ù ã × á r à ¡ Ü ä Õ â × Ö Ù ã ë Ü Ö Õ í ¬ é ¤ ß ø x Ü i ý Û © ä Õ 4 ó õ ì Õ 4 á Ö v Ú 4 × Þ ã Þ Ç × n Ú 4 Ü Ö Ù ã × á r à × Û Ú 4 Õ 4 ä Ö Ü Ù ã á á E × ì á Y à ï ¦ Ù Ö Ô ª Ö Ô E Õ ¤ Ö × i Ø E Ù Ú ë Õ ä é Y à × ä Ö × i Ø E Ù Ú 5 á r × â Ù Ç á Y Ü Þ ã Ù © q Ü Ö Ù ã × á Y à 4 î p Ü á r í u ø © é Y ý Ü á ¬ Ü Ø E Ø E ä × g Ý ¤ ó Ù ã â Ü Ö Ù ã × á ¥ × Û 8 Ö Ô E Õ Ù ã á Ö ä } Ü ó à Q Õ á i Ö Õ 4 á Ö Ù Ü Þ £ Ú & Õ 4 á Ö Õ 4 ä Ù Ç á E ò ¥ î Ü i à Ù Ç á Ö ä × ó í ì Ú 4 Õ í Ù ã á ¬ ø ¶ µ Ü â Õ 4 ß Ü â Ü E î u D U F H F " I ý è Ë E × ä Õ Ü i Ú } Ô Ö × i Ø E Ù ã Ú Ð ä Õ Þ ã Ü Ö Ù ã × á r y W ó y § G ï £ Õ 8 Ü à à ì â Õ Ð Ö Ô r Ü Ö Õ 4 ë i Õ 4 á Ö à Ü ä Õ ä Õ Ú 4 × ò á r Ù © Õ í é ¤ ß Ü 3 à â Ü Þ ã Þ r à Õ & Ö × Û i X w ( m r q U q " y w W · & k w y @ c o î¸ ¦ ¹ { S c T ¢ b b U p ( r d f b x w y t f d ẁ V a d f p ( b c w y q a T g U a T ¢ d f b V a p ( v c d f Y q X T t ¥ w V X d f p ( b c » º d T c T ẁ y t 7 p ( b U t¼ ' º E dV X S v c q a T ¢ v p ( a dV X d f p ( b x w y t u w V V a w y Y S c r T ¢ b h V X s e T ¥ V ½ º d T ¢ T ¢ b ! V ½ º d p f b c p ( g c b v c S c q a w y a T ¢ h Ö ß ¤ Ø E Ù Ú 4 Ü Þ ã Þ ã ß ȩ Õ ä é Y à ¤ × ä ¶ á E × i â Ù Ç á r Ü Þ Ç Ù Ü Ö Ù ã × á r à ¤ Þ ã Õ & Ý n Ù Ú 4 Ü Þ Ç Ù 4 Ù ã á E ò u Ö Ô r Õ Õ 4 ë i Õ 4 á Ö à è 1 ð ô Õ 5 í n Ù à Ú & × ë Õ ä ¦ Ö ä Ù ã ò ò Õ ä 1 ï 8 × ä } í E à ¦ Û © × i ä a × i á E Þ Ç ß ¤ ï ¦ Ô E Õ á Ö Ô r Õ £ ä Õ 4 Þ Ü Ö Ù Ç × i á 2 r X Ô r Ü i à ¢ Ö Ô E Õ £ à ß ¤ á i Ö } Ü Ý Ö ß ¤ Ø Y Õ | r ø ¡ W y h w h « h ¬ v r «y n h iý × ä ¦ ø ¢ W y w h « h ¬ P ® W «y y n h i ý & è Ó Ô r Õ £ Ö ä Ù ã ò ò i Õ 4 ä 2 ï 8 × ä } í E à Ü ä Õ Ü Ú õ ì Ù Ç ä Õ í ¡ é ¤ ß D z z } p i z 3 | 4 z 1 i | q i ¤ £ ¦ ¥ v § z } z } 1 z }  z } Q ± ã | z { } | i ¦ | 4 3 ¢ Ð i { Q §© £ ¥ ¤ 4 i z µ ¦ Q & p x ¦ n z ¤ § ¤ i i z ¡ | 4 z g z i © ! 2 ã | 3 ¢ Ð i { " §© ! Ç # £ ¥ $ z 3 µ ¦ ĩ Q & v x ¦ n z % u ² 2 z z | 4 g z i ¦ © ! 
z } i z } z ¦ i z q z £ ¢ | Q Ð ã | I z 8 z Ç & | q ' & "explosion" "blast" at LOCATION on DATE "14 people" "John Doe" NOUN PHRASE "the rocket−propelled grenade" "the car bomb" NOUN PHRASE Object Subject NOUN PHRASE "kill" "murder" VERB PHRASE trigger word Preposition NOUN PHRASE Attachment Prepositional C−Relation Ë Ù Ç ò ì ä Õ 0 Ó p × i Ø E Ù Ú ä Õ 4 Þ Ü Ö Ù Ç × i á r à è Ë Ù ã ò ì ä Õ p Ù Ç Þ ã Þ ì à Ö ä Ü Ö Õ q à 1 é U × Ö Ô b Û © × ä â à 1 × Û Ö × Ø E Ù Ú ä Õ 4 Þ Ü Ö Ù Ç × i á r à è Ó Ô E Õ Ë Ù Ç ò ì ä Õ 8 à Q Ô r × ï 1 à Ü Þ ã à × Ù ã á r à Q Ö Ü á Y Ú & Õ à × Û r Ö ä Ù Ç ò i ò Õ 4 ä ï 8 × ä } í E à p Ü á r í × Û n ä Õ 4 Þ ã Õ 4 ë Ü á Ö ¢ Õ á i Ö Ù Ö Ù Ç Õ q à 4 è þ µ Ö p Ù à f Ö × é Y Õ Ð á r × Ö Õ q í 3 Ö Ô r Ü Ö à Õ & Ö } à f × Û ¤ Ö × i Ø E Ù Ú ä Õ 4 Þ Ü Ö Ù ã × á Y à Ö Ô r Ü Ö v à Q Ô r Ü ä Õ Ö Ô E Õ b à Ü â Õ ª ç Ú Ü á é Y Õ ¶ ò i ä × ì Ø U Õ í Ù ã á i Ö × y ¦ ¥ H i a w t n i a m k x s 0 l µ t x i a i y h w ( s 3 o è Ó Ô E Ù ã à p Ù ã à Õ õ ì Ù ã ë Ü Þ ã Õ 4 á Ö ¢ Ö × 1 Ö Ô r Õ I × é E ó à Õ 4 ä ë g Ü Ö Ù ã × á v Û © ä × i â ø ) ( I Ü á E ò i Ü ä é U Õ 4 ä Õ & Ö Ü Þ x è ã î r û ü i ü ü ý I à Q Ö Ü Ö Ù ã á E ò Ö Ô r Ü Ö Õ & Ý ¤ Ö ä Ü i Ú Ö Ù ã × á ¥ Ø Y Ü Ö Q Ö Õ 4 ä á r à Ú 4 Ü á " é Y Õ í n Õ Ú 4 × â Ø Y × à Q Õ q í ¥ Ù ã á " Ü b à Õ & Ö × Û é E Ù ã á r Ü ä ß ä Õ 4 Þ Ü Ö Ù Ç × i á r à 4 è 1 0 2 ¦ 4 3 4 ¡ 6 5 7 8 3 4 ¡ @ 9 x ¢ B A D C E 7 F G I H £ ð ô Õ â × Ö Ù ã ë Ü Ö Õ Ö Ô E Õ á E Õ Õ í p Ö × Ú 4 × á r à Ù ã í E Õ 4 ä 1 Ü í r í n Ù Ö Ù Ç × i á r Ü Þ f é r Ù Ç á r Ü ä ß Õ & Ý ¤ Ö ä Ü i Ú Ö Ù ã × á b ä Õ Þ ã Ü Ö Ù ã × á r à ï ¦ Ù Ö Ô b Ö Ô E Õ 5 Õ 4 Ý E Ü â Ø E Þ ã Õ ¡ Ù ã Þ Ç Þ ì à Q Ö ä } Ü Ö Õ q í ¶ Ù ã á Ë Ù Ç ò ì ä Õ n è ¦ Ó Ô E Õ Ö Õ & Ý ¤ Ö Û © ä × â Ë Ù ã ò ì ä Õ é Y Õ Þ Ç × i á E ò i à Ö × v Ü í n × n Ú ó ì â Õ 4 á Ö ä Õ Þ Ç Õ ë Ü á Ö 1 Û © × ä Ö × i Ø E Ù Ú ² · 3 o ¹ E » ² ¥ © 0 ¼ ³ è X Ó Ô r Õ ¡ × á r Þ Ç ß à ß á Ö } Ü Ý ¤ ó t é Y Ü à Õ í ä Õ 4 Þ Ü Ö Ù Ç × i á r à 8 Ö Ô Y Ü Ö Ú 4 Ü á ¤ é Y Õ Ù í n Õ 4 á Ö Ù r } r Õ q í ¤ Ù Ç á Ö Ô E Ù ã à Ö Õ 4 Ý ¤ Ö Ü ä Õ v Ö Ô E Õ Ö ï 8 × b ë Õ ä é E ó t × i é x Õ Ú & Ö ¡ ä Õ Þ ã Ü Ö Ù ã × á r à è ¥ Ó Ô E Õ 4 ß ¥ é U × Ö Ô ä Õ 4 Þ Ü Ö Õ 3 Ö ä Ù ã ò ò i Õ 4 ä 8 ë Õ 4 ä é r à ø Q P ¤ Ù Ç Þ ã Þ 1 R 5 Ü á r í S P ï £ × ì á r í R ý I Ö × Õ 4 á Ö Ù Ç Ö Ù ã Õ à Ö Ô Y Ü Ö ä Õ 4 Ø r ä Õ q à Q Õ á i Ö ë Ù Ú Ö Ù Ç â à è 3 ú × ï £ Õ ë Õ ä î r Ö Ô E Õ ¡ Ö Õ & Ý ¤ Ö Ü Þ à Q × p Ú & × á E ó Ö Ü Ù Ç á Y à Ù Ç á n Û © × i ä â Ü Ö Ù ã × á p Ü é Y × ì Ö ¦ Ö Ô r Õ Ö ß ¤ Ø U Õ × Û é Y × i â ¡ é P Ü 5 Ö ä ì Ú y Ú & ä } Ü â â Õ í À ï ¦ Ù Ö Ô À Õ & Ý n Ø E Þ ã × i à Ù Ç ë i Õ à T R % Ü á r í " Ü é U × ì Ö 5 Ö Ô E Õ Þ Ç × n Ú Ü Ö Ù ã × á P í E × ï ¦ á i Ö × ï ¦ á £ × Þ ã × â 5 é Y × U R E è 1 6 à Ù ã á r í n Ù Ú 4 Ü Ö Õ í Ù Ç á Ë Ù Ç ò ì ä Õ n î ä Õ 4 Þ Ü Ö Ù ã × á Y à 5 é U Õ & Ö ï 8 Õ 4 Õ á À Ö Ô E Õ ¤ Ö ä Ù Ç ò i ò Õ ä 5 ï 8 × ä } í E à Ü á r í " Ö Ô E Õ q à Q Õ Ö ï 8 × ä Õ 4 Þ ã Õ 4 ë Ü á Ö ¦ Õ 4 á Ö Ù Ç Ö Ù ã Õ à Ü ä Õ ' I ó µ ä Õ Þ ã Ü Ö Ù ã × á r à è
Up to 80 people were killed and 1,400 wounded the Central Bank in downtown Colombo.
Object Object
C−relation C−relation C−relation C−relation after a truck crammed with explosives plowed into Ë Ù ã ò ì ä Õ Ý E Ü â Ø E Þ ã Õ × Û Ö × i Ø E Ù ã Ú 3 ä Õ 4 Þ Ü Ö Ù Ç × i á r à 4 è % R x q a d f ( ( T ¢ q º d p ( q a c i © p ( q b c p ( r d f b x w y t f d ẁ V X d f p ( b W V D X p y i u g T ¢ q X e Ỳ w y q a T s p ( e U u V a w y d f b c T ¢ ! e G ¼ g c a d f b c f V a S c T s r p ( q a v c S c p ( t f p ( ( d f Y ẁ y t W c T ¢ q a dg ( w V X d f p ( b c p y i d V X S c T V a q X d f ( ( T ¢ q º d p ( q X c d i p ( q a Y h c b 7 £ d 2 ¦ x £ U H C ¦ f e ¤ g ¡ © i h m ¦ 5 £ ¤ 3 ¡ # 9 7 ¢ 9 » ¦ f G x ¡ # h p £ H @ 7 8 3 4 ¡ ¦ q 9 7 © r ì ä p â × n í n Õ Þ × Û í E Ù ã à Ú & × ë i Õ 4 ä Ù Ç á E ò ¬ Õ 4 Ý Ö ä Ü i Ú Ö Ù Ç × i á ä Õ 4 Þ Ü Ö Ù Ç × i á r à Õ â 5 ó Ø E Þ ã × ß n à % Ü ª ë i Õ 4 ä ß ¹ Þ ã Ü ä ò i Õ ô Ú 4 × ä Ø ì à × Û Ö Õ 4 Ý ¤ Ö à è ð ô Õ ì à Õ í C Ö Ô r Õ 6 ¦ s a ( 2 6 þ å Ó Ú 4 × ä Ø ì à v ø x ae C ¤ ' 1 8 Ü Ö } Ü Þ ã × ò d t 5 ae C ¤ ' û ü i ü i û Ó 3 D q ý & î ï ¦ Ô E Ù Ú } Ô ) Ú & × á Ö } Ü Ù ã á r à " I H â Ù ã Þ ã Þ Ç Ù ã × á ) ï 8 × ä } í E à ¹ Ú & × i ä ä Õ 4 Þ Ü Ö Ù Ç á r ò ³ Ö × Ü é U × ì Ö f v u × Û í E Ü Ö } Ü E è £ Ó Ô r Õ Ù ã í E Õ Ü ¢ w Ù ã à Ö Ô r Ü Ö 3 à Ö } Ü ä Ö Ù ã á E ò ï ¦ Ù Ö Ô Ü v à Õ 4 Õ q í b à Õ & Ö × Û Ð Õ 4 Ý Ö ä Ü i Ú Ö Ù Ç × i á ¶ ä Õ Þ ã Ü Ö Ù ã × á r à ï 8 Õ 5 Ú & × ì Þ í b à Õ 4 Ø r Ü ä Ü Ö Õ Ö Ô r Õ 1 Ú & × ä Ø ì à 2 Ù ã á Ö × ø ¢ D g ý 2 Ü à Q Õ 4 Ö Ð × Û X ä Õ Þ Ç Õ ë g Ü á Ö í E × ¤ Ú ì â Õ 4 á Ö à I Ú & × á E ó Ö Ü Ù Ç á r Ù Ç á E ò 5 Ö Ô r Õ à Õ 4 Õ q í ä Õ 4 Þ Ü Ö Ù Ç × i á r à 4 î n Ü á r í b ø t û ý £ Ü 5 Ú 4 × â Ø E Þ ã Õ 4 â Õ 4 á Ö Ü ä ß à Õ & Ö × Û á E × á n ó µ ä Õ 4 Þ ã Õ 4 ë Ü á Ö í n × n Ú ì â Õ 4 á Ö à è ¦ þ á r à Q Ö Õ Ü i í % × Û Ð Ú 4 × á r à Ù ã í E Õ 4 ä ó Ù ã á E ò Ö Ô E Õ ¡ Õ á i Ö Ù Ç ä Õ c 6 ¦ s a ( 6 1 þ å Ó ñ Ú & × ä Ø ì à ï £ Õ ì à Õ í % × i á E Þ Ç ß ¤ í n × n Ú ó ì â Õ 4 á Ö } à 5 ä Õ & Ö ä Ù ã Õ 4 ë i Õ í ¬ ï ¦ Ô E Õ á À Û © × i ä â ì Þ ã Ü Ö Ù ã á E ò b Ü õ ì Õ 4 ä ß y x % Ö Ô r Ü Ö Ú } Ô r Ü ä Ü i Ú Ö Õ ä Ù 4 Õ q à Ö Ô E Õ 3 à Ú & Õ á r Ü ä Ù ã × 5 í n × i â Ü Ù Ç á p è 2 Ó Ô r Õ þ á n Û © × i ä â Ü Ö Ù ã × á T ¦ Õ 4 Ö ä Ù Ç Õ ë g Ü Þ Ð ø © þ T ý 1 à Q ß n à Q Ö Õ 4 â ï £ Õ Õ â Ø r Þ Ç × ß Ù à º ¦ x £ 6 T 2 Ó B ø X ì Ú y ó Þ ã Õ 4 ß Õ & Ö Ü Þ t è Ç î x D U F H F & % i ý è E Ë E × ä Õ & Ý E Ü â Ø E Þ ã Õ î g Û © × i ä p Ö Ô E Õ P Q é U × â ¡ é r Ù Ç á E ò U R 1 í n × ó â Ü Ù ã á ¢ î X ï £ Õ ì à Õ í ¶ Ü p à Q Ù ã á E ò Þ ã Õ & ó Õ ß ¤ ï £ × i ä í õ ì Õ 4 ä ß ~ » 8 y ¦ ¥ l 7 ¤ © k 4 @ H y B A y Ü á Y í v ä Õ & Ö ä Ù ã Õ 4 ë i Õ í ¤ û ü ü i ü 5 í E × ¤ Ú ì â Õ 4 á Ö à è Ó Ô E Õ Ö ä Ù ã ò ò Õ ä ï £ × i ä í r à Û © × ä ¡ Ö Ô E Õ á r Õ 4 ï B Ö × i Ø E Ù ã Ú ä Õ Þ ã Ü Ö Ù ã × á ¥ Ü ä Õ Ü Þ à × p í n Õ ä Ù ã ë Õ q í f è 0 Ë r × ä 3 Õ & Ý E Ü â Ø E Þ ã Õ î X Ù ã á ¶ Ö Ô E Õ Ú Ü à Õ ¡ × Û Ö Ô E Õ Ó p × i Ø E Ù Ú ² · Ö Ô E Õ í n Õ 4 ä Ù Ç ë i Õ í Ö ä Ù ã ò ò Õ ä ï 8 × ä } í E à Û © × ä ¡ Ö Ô E Õ p à Q Õ Õ í ¬ ä Õ 4 Þ Ü Ö Ù ã × á Y à Ü ä Õ 8 y ¦ ¥ l 7 ¤ © k 4 @ H y B A Ü á r í 5 8 y @ H y h i k s 7 t x i ¥ y B A è Î Ó Ô E Õ ¶ Ú & × i ä ä Õ à Ø Y × i á r í n Ù ã á E ò ä Õ 4 Þ ã Õ 4 ë Ü á Ö Õ 4 á Ö Ù Ö Ù Ç Õ q à Û © × ä Ö Ô E Õ d a ¦ k w y @ % Ú 4 × á r à Ù ã à Q Ö à × Û ¡ ï 8 × ä } í E à P Q é U × â 5 é 4 R E î W P Q ò i ä Õ á r Ü í E Õ R Ü á r í P Q â Ù Ç á r Õ R E è ñ Ó Ô E Õ ¶ Ö ä Ù Ç ò ó ò Õ ä ï 8 × ä } í E à î U Ö Ô E Õ Ú & × i ä Ø ì à W ¡ Ü á r í b Ö Ô r Õ à Õ 4 Õ q í b ä Õ 4 Þ Ü Ö Ù Ç × i á b Ü ä Õ Ö Ô r Õ ¦ × á E Þ ã ß ¡ Ù ã á E Ø ì Ö } à Ö × 3 × ì ä Ð Ø E ä × n Ú & Õ q í ì ä Õ £ Ö Ô r Ü Ö I í E Ù ã à Ú & × ë i Õ 4 ä } à é 
U × Ö Ô à ß á Ö } Ü Ý ¤ ó t é Y Ü à Õ í ¥ Ü á Y í U I ó µ ä Õ Þ ã Ü Ö Ù ã × á r à Û © ä × i â ( Ö Ô E Õ ¤ í n × n Ú ì â Õ á i Ö } à 4 è Ó Ô E Õ í n Ù à Ú & × ë Õ ä ß Ø E ä × n Ú & Õ q í ì ä Õ 3 Ô Y Ü à 8 Ö Ô r Õ Û © × Þ ã Þ Ç × ï ¦ Ù ã á E ò à Ö Õ 4 Ø r à 3 £ U G H ± d ± W H D e D f g e 2 ± d h ¦ i ± d q f s ® " H è j ¦ ¤ ß ¤ á Ö Ü Ý ó µ é r Ü i à Q Õ q í ä Õ 4 Þ Ü Ö Ù ã × á Y à Ü E è Ë r ä × i â Õ Ü Ú } Ô í n × n Ú ì â Õ á i Ö Û © ä × â k ï £ Õ 3 Õ & Ý ¤ Ö ä } Ü Ú & Ö Ü Þ Ç Þ ª Ṏ 4 ä é n ó ¢ ¦ ì é x Õ q Ú Ö q î " ª Ṏ 4 ä é n ó r é x Õ Ú & Ö î " ª Ṏ ä é n ó ç Ð ä Õ 4 Ø U × i à Ù Ç Ö Ù ã × á r Ü Þ C 6 8 Ö Q ó Ö Ü i Ú } Ô E â Õ 4 á Ö p ä Õ 4 Þ Ü Ö Ù Ç × i á r à 4 è 5 Ë r × ä p Ö Ô E Ù à ¢ Ø ì ä Ø U × i à Õ ï £ Õ ì à Q Õ q í Ü ¦ í n × n Ú ì ó â Õ 4 á Ö 2 Ø r Ü ä } à Q Õ ä ¢ Ö Ô r Ü Ö 2 Ù ã à é r Ü i à Q Õ q í â Ü Ù ã á E Þ ã ß 3 × i á } r á r Ù Ö Õ 8 à Q Ö Ü Ö Õ Ð Ö Õ Ú } Ô n ó á E × i Þ Ç × i ò ß è ¤ × n Ú ì â Õ 4 á Ö Ø E ä × n Ú & Õ q à à Ù Ç á r ò à Q Ö Ü ä Q Ö } à ï ¦ Ù Ö Ô Ö Ô E Õ Ù í n Õ 4 á n ó Ö Ù © } Y Ú Ü Ö Ù ã × á × Û á r Ü â Õ í ª Õ 4 á Ö Ù Ö Ù Ç Õ q à 4 è ç Ü ä Ö Q ó µ × Ûó à Q Ø U Õ 4 Õ q Ú } Ô ÷ ø ù ç r ¦ r ý Ö Ü ò i à ñ Ü á r í á E × á n ó µ ä Õ Ú ì ä à Ù Ç ë i Õ î À × i ä ñ é r Ü à Ù Ú î á E × ì á ( Ø E Ô r ä Ü i à Q Õ q à ø ù å ç & ¦ ý f Ü ä Õ 2 Ù í n Õ 4 á Ö Ù © } r Õ q í ì à Q Ù ã á E ò ¦ Ö Ô E Õ Ð Ó ae â Õ & Ö Ô E × n í ä Õ Ø Y × i ä Q Ö Õ í Ù ã á ¼ ø x å 1 ò Ü Ù 1 Ü á Y í Ë Þ ã × ä Ù ã Ü á ¢ î û ü ü µ D q ý è ¦ ¤ Ù ã â Ø E Þ Ç Õ % ë i Õ 4 ä é " Ø E Ô r ä Ü i à Q Õ q à ø ª 3 ç £ ý Ü á r í ô Ø E ä Õ Ø Y × à Q Ù Ç Ö Ù ã × á Y Ü Þ Ø E Ô E ä } Ü à Õ à ø x ç I ç £ ý Ü ä Õ Ù í n Õ 4 á Ö Ù r } r Õ q í ï ¦ Ù Ç Ö Ô } Y á E Ù Ö Õ & ó à Ö } Ü Ö Õ Ü ì Ö × i â Ü Ö Ü ª ø Ë ¦ k 6 ý 5 ò i ä Ü â â Ü ä à è ¦ ¤ ß ¤ á n ó Ö Ü i Ú Ö Ù ã Ú ä Õ 4 Þ Ü Ö Ù Ç × i á r à à ì Ú } Ô ¬ Ü à ª Ṏ ä é E ó ¥ ¦ ì é x Õ Ú & Ö î ª Ṏ ä é E ó r é x Õ q Ú Ö q î Ü á Y í ª Ṏ 4 ä é n ó ç Ð ä Õ 4 Ø U × i à Ù Ö Ù Ç × i á r Ü Þ » 6 8 Ö Q Ö } Ü Ú } Ô E â Õ 4 á Ö Ü ä Õ p ä Õ Ú 4 × ò i á E Ù © Õ í é ¤ ß v Ü á r × Ö Ô r Õ 4 ä º Ë ¦ k 6 è é ¢ è § I Ü Ú } Ô ¬ à ß á Ö } Ü Ý ¤ ó t é Y Ü à Õ í ¶ ä Õ 4 Þ Ü Ö Ù Ç × i á Ù à Õ 4 Ý ¤ Ø Y Ü á r í n Õ q í b é ¤ ß Ú & × i á r à Ù ã í n Õ ä Ù ã á E ò 5 Ö Ô E ä Õ 4 Õ Ø U × i à à Ù Ç é E Ù ã Þ ã Ù Ö Ù Ç Õ q à ø © Ùý 2 T 1 Õ 4 Ø E Þ Ü Ú 4 Õ 5 Õ q Ü Ú } Ô ¶ ï 8 × ä } í % ï ¦ Ù Ç Ö Ô b Ù Ç Ö à 3 ä × ¤ × Ö Û © × i ä â ¤ î X Õ èò r è Ö Ô r Õ ë i Õ 4 ä é l P Q ï 8 × ì á r í n Õ í 4 R ¥ ï ¦ Ù Ö Ô m P ï £ × ì á r í R ª Ü á r í ÷ Ö Ô E Õ ô á E × ì á P Ö ä ì Ú y ¤ à T R 5 ï ¦ Ù Ö Ô n P Q Ö ä ì Ú y 8 R E è ø © Ù ã Ù ý » T ¦ Õ Ø E Þ Ü Ú & Õ Ö Ô E Õ 3 ï 8 × ä } í ï ¦ Ù Ö Ô p Ü á ß × Û ¢ Ö Ô E Õ Ú 4 × á r Ú 4 Õ 4 Ø n Ö } à Ö Ô Y Ü Ö ¦ à ì é Y à ì â Õ 3 Ù Ö Ù ã á p Ü Ô Y Ü á r í ¤ ó Ú & ä } Ü ÛÖ Õ í f î ¤ ò i Õ 4 á E Õ ä Ü Þ r × á Ö × Þ ã × ò ß i î Õ èò r è o P Ö ä ì Ú y 8 R ¶ â Ü g ß ¬ é U Õ ä Õ 4 Ø r Þ ã Ü i Ú & Õ í " é ¤ ß " p ! 
1 q C 8 6 r W 9 " 1 X î t s u " 4 d 8 6 v $ w r W 4 î E × ä x y q z ( 1 4 r W 4 è ø © Ù ã Ù Ç Ùý $ T 1 Õ 4 Ø E Þ Ü Ú 4 Õ Õ Ü i Ú } Ô á r Ü â Õ ï ¦ Ù Ö Ô v Ù Ç Ö à 8 Ú & × i ä ä Õ à Ø Y × i á r í n Ù ã á E ò á r Ü â Õ q í ÷ Õ á i Ö Ù Ö ß ÷ Ú 4 Þ ã Ü i à à î 1 Õ i èò Y è { P ae ¢ × à p 6 á E ò Õ Þ Ç Õ q à g R " ï ¦ Ù Ö Ô X AE £ | R C S U d f C d f U T w $ º 5 w y } c q a X V 5 q X T ¢ v H p ( q V a T ¢ d f b u w y b c w y q X e T q E T ¥ V w y t h f i h y R C S U T E e h g c T ¢ q ¶ ¼ d f 7 ( T ¢ b c T ¢ q w V X T ¢ º e h ¼ s Y p ( b U a d f c T ¢ q a d f b U V X S c T 5 g T ¢ q a e p ( q 7 b U p ( r u d f b x w y t f d ẁ V X d f p ( b f i © q a p ( r V a S c T r p ( V X u q a T ¢ Y ¢ T b V E w y c c T ¢ 0 V X p ( v c d f Y q a T ¢ t ¥ w V X d f p ( b " h ¢ ¡ ² 2 ¾ AE 8 º Ü á r í P Ü á 3 × Û $ 6 â Õ ä Ù Ú 4 Ü v R ï ¦ Ù Ç Ö Ô AE ¤ £ ¦ ¥ ¡ º 8 ¾ § ¡ ² 2 ¾ AE 8 º è c Ë Ù Ç ò ì ä Õ # 1 Ù ã Þ ã Þ ì à Q Ö ä } Ü Ö Õ à p Ö Ô E Õ Õ 4 Ý ¤ Ø Y Ü á r à Ù Ç × i á × Û r ä Õ 4 Þ Ü Ö Ù Ç × i á r à èË Ù Ç ò ì ä Õ # Ý n Ø r Ü á Y à Q Ù ã × á r à 8 × Û Ö ï 8 × 5 ä Õ 4 Þ Ü Ö Ù Ç × i á r à è j ¦ n Ü Þ Ç Ù ã Õ 4 á r Ú 4 Õ & ó µ é r Ü à Õ í I ó t ä Õ 4 Þ Ü Ö Ù Ç × i á r à Ü E è 6 í E í n Ù Ç Ö Ù ã × á Y Ü Þ Y Ö × Ø r Ù ã Ú ¦ ä Õ 4 Þ Ü Ö Ù Ç × i á r à â Ü g ß é Y Õ 3 í n Ù ã à Ú & × ë i Õ 4 ä Õ í ï ¦ Ù Ç Ö Ô E Ù ã á Ü à Ü Þ Ç Ù ã Õ 4 á r Ú 4 Õ ï ¦ Ù Ç á r í E × ï À Û © × ä Ṏ q Ü Ú } Ô ë Õ ä é ¢ è 2 Ó Ô E Õ ¦ ï ¦ Ù ã á r í n × ï Ù à I Ú & ä Õ Ü Ö Õ q í é ß Ú & × i á r à Ù ã í n Õ ä Ù ã á E ò © ê à Õ 4 á Ö Õ á r Ú & Õ q à Ð Ø E ä Õ Ú 4 Õ í n Ù ã á E ò Ü á r í à ì Ú 4 Ú 4 Õ 4 Õ í E Ù Ç á E ò Ö Ô E Õ ¡ à Õ 4 á Ö Õ á r Ú & Õ Ú & × i á Ö Ü Ù ã á E Ù ã á E ò Ö Ô E Õ ¡ ë Õ 4 ä é ¢ è þ á ¤ × ì ä Õ & Ý n Ø U Õ 4 ä Ù Ç â Õ á i Ö } à ï £ Õ à Õ & Ö © û n è é ¢ è C Ó Ô E Õ ¶ å ç £ à × Û Õ Ü Ú } Ô u à Ü Þ ã Ù Ç Õ á r Ú & Õ ¤ ï ¦ Ù ã á r í n × ï Ü ä Õ Õ 4 Ý ¤ ó Ö ä } Ü Ú & Ö Õ q í Ü á r í × i ä í n Õ ä Õ q í f è 2 Ó Ô E Õ 1 é r Ü i à Q Ù Ú ì á r í E Õ 4 ä Þ Ç ß ¤ Ù ã á E ò Ô ¤ ß Ø U × Ö Ô E Õ & ó à Ù ã à ¦ Ù à ¦ Ö Ô r Ü Ö ' I ó t ä Õ 4 Þ Ü Ö Ù Ç × i á r à é Y Õ 4 Ö ï £ Õ Õ 4 á % Ü ë i Õ 4 ä é Ü á r í % Ü á ¤ Õ 4 á Ö Ù Ö ß Û © ä × â Ù Ç Ö à í n × â Ü Ù Ç á " Ü ä Õ à Q Ù ã â Ù Ç Þ Ü ä Ö × b Ö Ô E Õ Ü á r Ü Ø r Ô E × ä Ù ã Ú ä Õ Þ ã Ü ó Ö Ù ã × á Y à é Y Õ 4 Ö ï £ Õ Õ 4 á ¬ Õ 4 á Ö Ù Ö Ù Ç Õ q à Ù Ç á ô Ö Õ & Ý ¤ Ö à è p Ó Ô E Õ ä Õ 4 Û © × ä Õ î p Ü i à Ù Ç Þ ã Þ ì à ó Ö ä } Ü Ö Õ í " Ù Ç á Ë Ù ã ò ì ä Õ I ¤ î I à Ù Ç Ý ¬ Ø U × i à à Ù Ç é E Þ ã Õ p Ö Õ & Ý ¤ Ö à Ø r Ü á r à î Ð Ó º ¦ u D 4 ó # r î Ú 4 Ü á é U Õ í n Õ h } r á E Õ í ¢ è v Ó Ô E Õ Ø E ä × â Ù Ç á r Õ 4 á r Ú 4 Õ × Û 8 Õ á Ö Ù Ç Ö Ù ã Õ à ä Õ 4 Þ Ü Ö Õ í Ö × Ö Ô E Õ Ü á r Ú } Ô E × ä Ú Ü á é Y Õ Ü Ø E Ø E ä × g Ý n Ù Ç â Ü Ö Õ í ¡ é ¤ ß 5 Ü Þ ã Õ & ÛÖ Q ó t Ö × ó t ä Ù ã ò Ô Ö × ä } í n Õ ä Ù ã á E ò r è Ð á Ö Ù Ö Ù Ç Õ q à Ü ä Õ } r ä } à Ö ä Õ 4 Ö ä Ù Ç Õ ë Õ í p Û © ä × â Ö Õ 4 Ý Ö 3 à Q Ø r Ü á z d 8 é Y Õ 4 Û © × ä Õ 5 é U Õ 4 Ù ã á E ò ¤ ä Õ 4 Ö ä Ù Ç Õ ë Õ q í % Û © ä × â z d ' è Ó Ô E Õ à Ü â Õ Ü Ø E ó Ø E ä × g Ý n Ù Ç â Ü Ö Ù Ç × i á ï Ü à Ù Ç á Ö ä × n í ì Ú 4 Õ í ô é ¤ ß ª ø µ Ü â Õ 4 ß i Ü â Ü E î & D U F H F P I ý Û © × ä ¦ ä Õ à × Þ ã ë ¤ Ù Ç á r ò Ú 4 × ä Õ & Û © Õ 4 ä Õ 4 á Y Ú & Õ 3 ä Õ Þ ã Ü Ö Ù ã × á r à è TS5 Sentence (i−2) Sentence (i) Trigger−Word Sentence (i−1) TS3 TS1 TS2 Sentence (i+1) TS4 Sentence (i+2) TS6 Ë Ù ã ò ì ä Õ I r ä } í n Õ 4 ä Ù ã á E ò à Ü Þ ã Ù ã Õ 4 á Ö ¦ Õ 4 á Ö Ù Ç Ö Ù ã Õ à è Ú è 8 Ü á Y í n Ù ã í r Ü Ö Õ ¡ Õ & Ý ¤ Ö ä } Ü Ú & Ö Ù ã × á % ä Õ 4 Þ Ü Ö Ù ã × á Y à 1 Ü ä Õ ò Õ á E Õ 4 ä } Ü Ö Õ í Ù ã á ñ Õ Ü Ú } Ô ñ à Ü Þ ã Ù Ç 
Õ á r Ú & Õ ¥ ï ¦ Ù ã á r í n × ï è Ë Ù Ç ä } à Q Ö î x w ( m r q U q " y w P ¬ W y h w h « § ä Õ Þ ã Ü Ö Ù ã × á r à v Ü ä Õ b Ú & ä Õ Ü Ö Õ í u Ü á r í u Õ & Ý n Ø r Ü á r í n Õ í Û © × i ä v Õ q Ü Ú } Ô Ú 4 Ü á r í n Ù í E Ü Ö Õ Õ á Ö Ù Ç Ö ß è Ð Ó Ô E Õ Õ & Ý n Ø r Ü á r à Ù Ç × i á r à Ü ä Õ í n × i á E Õ à Q Ù ã â Ù Ç Þ Ü ä Þ Ç ß Ü à Û © × ä 5 à ß ¤ á Ö Ü Ý ¤ ó µ é r Ü à Õ í ä Õ 4 Þ Ü Ö Ù ã × á Y à 4 è ¤ ú 1 × ï 8 Õ 4 ë i Õ 4 ä q î p ï ¦ Ô E Õ á À Ú & × á E ó à Ù ã í n Õ ä Ù ã á E ò 1 × i á E Õ I Õ 4 Ý n Ø r Ü á r à Ù ã × á Û © × i ä x w ( m r q U q " y w P ¬ W y h w h « § G Ö Ô r Õ Õ & Ý n Ø r Ü á r à Ù Ç × i á À Ù ã à Ü Þ ã Þ ã × ï £ Õ q í ¬ × á E Þ ã ß ¥ Ù Û Ù Ö ï 8 Ü i à á E × Ö Ü Þ Ç ä Õ Ü i í n ß ô Ù Ç á n ó Ö ä × n í ì Ú 4 Õ í é ¤ ß Ü á ß Õ 4 Ý n Ø r Ü á r à Ù ã × á Û © × i ä Ü á ¤ ß 3 w ( m r q G q " y h w P ¬ W y w h « § î ï ¦ Ù Ö Ô " ! § $ # Y è Ë E × ä à ß ¤ á i Ö } Ü Ý ¤ ó µ é r Ü à Õ í 3 ä Õ 4 Þ Ü Ö Ù ã × á Y à 4 î g ä Õ 4 Ø U Õ & Ö Q ó Ù Ç Ö Ù ã ë Õ Õ & Ý n Ø r Ü á r à Q Ù ã × á Y à 3 í E × á r × Ö Õ & Ý n Ù à Ö q è Ó Ô r Ù ã à Ù à 3 Ö Ô E Õ ä } Ü Ö Ù Ç × i á r Ü Þ ã Õ Û © × ä 1 í E Ù ã à Ü é E Þ ã Ù ã á E ò ä Õ Ø Y Õ 4 Ö Ù Ç Ö Ù ã × á r à ¦ × Û I ó µ ä Õ Þ ã Ü Ö Ù ã × á p Õ & Ý n Ø r Ü á Y à Q Ù ã × á r à è 3 £ U G 0 ' ± W H & % ± 2 H ¦ e f e 2 ± d h ¦ i ± d q f s ® " H è Ë E × i Þ ó Þ ã × ï ¦ Ù Ç á E ò ¡ Ö Ô E Õ â Õ & Ö Ô E × n í v Ù ã á Ö ä × n í ì Ú & Õ q í Ù ã á ô ø ¶ T ¦ Ù ã Þ Ç × H § î d D U F H F H # ý Ṏ q Ü Ú } Ô ä Õ 4 Þ Ü Ö Ù ã × á ¬ Ù à ä } Ü á 3 i Õ í é r Ü à Õ í ô × i á ¥ Ù Ö } à y h ¤ © y ( ' c t s 7 n y y h ¬ t x i ¥ y Ü á r í Ù Ç Ö à £ d w y 0 ) h v µ y s u n 2 1 è Ó Ô E Õ £ 5 w y 0 ) v µ y h s 7 n ( 1 % × Û Ü á ÷ Õ & Ý ¤ Ö ä } Ü Ú & Ö Õ q í u ä Õ 4 ó Þ Ü Ö Ù ã × á Î Ú & × ì á Ö à Ö Ô r Õ ô á ì â 5 é Y Õ ä ¤ × Û ¡ Ö Ù Ç â Õ q à Ö Ô E Õ ¬ ä Õ 4 Þ Ü Ö Ù ã × á C Ù ã à Ù í n Õ 4 á Ö Ù © } r Õ q í C Ù ã á Î Ö Ô E Õ ¥ ä Õ 4 Þ ã Õ 4 ë Ü á i Ö ¶ í n × n Ú ì â Õ 4 á Ö à è ) þ á Î Ü ¹ à Ù ã á E ò Þ ã Õ í n × n Ú ì â Õ á Ö î f × á r Õ ¡ Õ & Ý ¤ Ö ä Ü i Ú Ö Õ q í ¤ ä Õ Þ ã Ü Ö Ù ã × á ¶ â Ü g ß ¤ é U Õ 5 Ù í n Õ 4 á Ö Ù r } r Õ q í â ì Þ Ö Ù Ç Ø E Þ ã Õ Ö Ù ã â Õ à è ¡ Ó Ô E Õ y h ¤ © y ( ' c t s 7 n y y G ¬ t x i ¥ y £ d w y 0 ) h v µ y s u n 2 1 ḱ x v x s W i x î f ï ¦ Ô E Õ ä Ṍ k x v x s µ i 2 â Õ Ü i à ì ä Õ q à 1 Ö Ô E Õ 5 á ì â 5 é Y Õ ä × Û Ð Ö Ù Ç â Õ à Ü á p Õ & Ý ¤ Ö ä } Ü Ú & Ö Õ q í ä Õ Þ ã Ü Ö Ù ã × á Ù à 8 ä Õ q Ú & × i ò á E Ù 4 Õ q í Ù Ç á ¤ Ü á ¤ ß í n × n Ú ì â Õ á Ö Ú & × i á r à Ù ã í n Õ ä Õ q í f è T ¦ Õ 4 Þ Ü Ö Ù Ç × i á r à ï ¦ Ù Ç Ö Ô y h ¤ © y ( ' c t s 7 n y y G ¬ t x i ¥ y 4 3 u Ü ä Õ í n Ù à Ú Ü ä } í n Õ í ª Ü à á E × i á n ó µ ä Õ Þ Ç Õ ë g Ü á Ö è 6 í E í n Ù Ç Ö Ù ã × á Y Ü Þ ã Þ Ç ß i î £ ï 8 Õ p â Ü Ù Ç á n ó Ö Ü Ù Ç á ô × á r Þ Ç ß ¶ ä Õ 4 Þ Ü Ö Ù Ç × i á r à 3 ï ¦ Ù Ö Ô 5 9 u 7 6 ¤ 8 ( 9 A @ C B E D 0 F H G P I Q 6 ¤ 8 R 9 S @ Q B 2 u U T X î ï ¦ Ô E Õ ä Õ V F W G P I Q 6 ¤ 8 ( 9 A @ C B Ù ã á r í n Ù Ú 4 Ü Ö Õ q à Ö Ô r Õ Ö × Ö Ü Þ £ á ì â ¡ é U Õ 4 ä × Û ¦ Ù Ç á n ó à Q Ö Ü á Y Ú & Õ à Û © × i ä Ö Ô E Õ â × i à Q Ö Ú 4 × â â × á 5 ä Õ Þ ã Ü Ö Ù ã × á ¢ î g Ö × 3 Ü g ë i × Ù í á E × i Ù ã à Õ × ä ì á E Ù Ç á E Û © × ä â Ü Ö Ù ã ë Õ ä Õ 4 Þ Ü Ö Ù ã × á Y à 4 èX 3 £ U G i a 9 ± H $ ® $ f ¦ i ± d q f s ® " H è Ó Ô E Õ % ä } Ü á 3 ¤ Ù ã á E ò Û © ä × i â ¦ Ö Õ 4 Ø ÷ û ô í n Õ & Ö Õ 4 ä â Ù ã á E Õ q à Ü á u × ä } í n Õ ä é U Õ & ó Ö ï 8 Õ 4 Õ 4 á ª Ü Þ ã Þ 8 Ú Ü á r í n Ù í E Ü Ö Õ v Õ 4 Ý ¤ Ö ä } Ü Ú Ö Ù Ç × i á ¥ ä Õ Þ ã Ü Ö Ù ã × á r à è r á r Þ Ç ß ô Ö Ô r Õ } r ä } à Ö 1 ä Õ 4 Þ Ü Ö Ù Ç × i á v Ù à 1 à Õ 4 Þ ã Õ Ú Ö Õ í ¤ Ü á Y 
í p Ü i í E í n Õ í p Ö × Ö Ô E Õ à Õ & Ö 1 × Û í n Ù à ó Ú & × ë i Õ 4 ä Õ í b ä Õ Þ ã Ü Ö Ù ã × á r à è Ó Ô E Õ ä Õ Þ ã Ü Ö Ù ã × á r à Ü á r í b Ù Ç Ö à ä Ü á 3 n à Ú & × á E ó à Q Ö Ù Ç Ö ì Ö Õ Ö Ô E Õ á E Õ ï Ö × Ø E Ù Ú à Q Ù ã ò á Y Ü Ö ì ä Õ ! z f | è þ á E Ù Ç Ö Ù Ü Þ ã Þ ã ß î C z 0 | Ö Ô r Õ à Q Õ Õ í p ä Õ Þ ã Ü Ö Ù ã × á ¢ è 3 £ U G Ỳ ¦ ¤ ± b a e f x ® c " c è b Ó Ô E Õ v á r Õ 4 ï à Õ & Ö × Û í n Ù à Ú 4 × ë Õ 4 ä Õ í b ä Õ Þ ã Ü Ö Ù ã × á r à 3 Ù à ì à Õ í b Ö × % ä Õ & ó Ú & Þ Ü à à Q Ù Ç Û © ß ¤ Ö Ô r Õ í n × n Ú ì â Õ á Ö à Û © ä × i â Ù Ç á Ö × p ä Õ Þ Ç Õ ë Ü á Ö Ü á Y í % á E × á E ó t ä Õ 4 Þ ã Õ 4 ë Ü á Ö q è 6 À á r Õ 4 ï ¬ Ù Ö Õ 4 ä } Ü Ö Ù Ç × i á ä Õ q à ì â Õ q à f é ¤ ß & ì â Ø E Ù ã á E ò Ö × f ¦ Ö Õ 4 Ø 5 û E î g ï ¦ Ô E Õ 4 ä Õ Ö Ô r Õ ä Õ 4 Þ Ü Ö Ù Ç × i á r à v Ü ä Õ ä Ü á 3 Õ q í ¹ Ü ò i Ü Ù Ç á p î 1 é r Ü à Õ í ÷ × á ¹ Ö Ô E Õ á r Õ 4 ï à Õ & Ö × Û £ ä Õ 4 Þ ã Õ 4 ë Ü á i Ö í n × n Ú ì â Õ á i Ö } à í n Õ & Ö Õ 4 ä â Ù Ç á E Õ q í ¶ é ¤ ß ¤ Ö Ô r Õ õ ì Õ ä ß í n Õ ä Ù ã ë Õ q í Û © ä × i â Ö Ô E Õ ¶ ë Õ ä é u g á E × i â Ù ã á r Ü Þ Ç Ù Ü Ö Ù ã × á × Û Ö Ô r Õ ¶ â × i à Q Ö ä Õ Ú & Õ á Ö Þ ã ß u Ü i í E í n Õ q í ¹ ä Õ Þ ã Ü Ö Ù ã × á ¢ è ö Ó Ô E Õ ¬ í n Ù à Ú 4 × ë Õ 4 ä ß " Ø r ä × n Ú & Õ q í ì ä Õ à Q Ö × Ø Y à 3 Ü ÛÖ Õ ä å D q ü ü Ù Ç Ö Õ ä Ü Ö Ù ã × á r à î X × ä 3 ï ¦ Ô E Õ 4 á ô á E × á E Õ 4 ï ³ ä Õ 4 ó Þ Ü Ö Ù ã × á Y à ¦ Ü ä Õ í E Ù ã à Ú & × ë i Õ 4 ä Õ í f è 1. VP(EXPLODE−word)−Subject−> NP(ARTIFACT | OBJECT)(b) Ë Ù Ç ò ì ä Õ % Ó p × i Ø E Ù ã Ú ä Õ 4 Þ Ü Ö Ù Ç × i á r à £ Û © × ä 1 Ó ø x Ü i ý £ Ö × Ø D ü à Q ß ¤ á Ö Ü Ý ¤ ó é r Ü i à Q Õ q í v ä Õ 4 Þ Ü Ö Ù Ç × i á r à à ø ù é Y ý £ Ö × i Ø D q ü i I ó t ä Õ 4 Þ Ü Ö Ù Ç × i á r à 4 è Y A D C E 7 F G I H £ B © ¦ f e ¦ g p ¡ ¶ © h & ¦ 5 £ ¤ s £ ¤ p £ H @ 7 8 3 4 ¡ ¦ q 9 7 © Ë Ù Ç ò ì ä Õ p % r ø x Ü i ý ¡ Þ Ç Ù à Q Ö à 5 Ö Ô E Õ p Ö × i Ø D q ü b à ß á Ö } Ü Ý ¤ ó t é Y Ü à Õ í ¬ ä Õ 4 Þ Ü Ö Ù ã × á Y à í n Ù à Ú 4 × ë Õ ä Õ q í ï ¦ Ô r Õ 4 ä Õ Ü à Ë Ù Ç ò ì ä Õ p % r ø © é U ý Þ ã Ù ã à Q Ö à ¡ Ö Ô E Õ v Ö × i Ø D ü I ó ä Õ 4 Þ Ü Ö Ù ã × á Y à ¦ í n Ù ã à Ú & × ë i Õ 4 ä Õ í f è s Ë E ä × â Ë Ù ã ò ì ä Õ % Y ø ù Ü i ý 8 ï 8 Õ ¡ } r á Y í Ö Ô r Ü Ö Ö Ô r Õ ¡ à Q ß n à Q Ö Õ â í n Ù à Ú 4 × ë Õ 4 ä Õ í v Ö Ô Y Ü Ö D s u 4 d 8 v Q w r W 4 e d × i ä x y f z ( 1 f r µ 4 e d í n Õ 4 Ö Õ 4 ä â Ù Ç á r Õ ¡ Õ & Ý n Ø E Þ ã × i à Ù ã × á r à 5 ø © ä Õ 4 Þ Ü Ö Ù ã × á Y à D 5 Ü á r í i ý Ã ï ¦ Ô E Ù Ú } Ô b Ü ä Õ Ü Þ à × ô í n Õ 4 Ö Õ ä â Ù ã á E Õ í é ¤ ß D b d ¥ ø © ä Õ 4 Þ Ü Ö Ù ã × á u û ý & è ¤ Ù © § U Õ ä Õ á Ö f y ¹ T g U a T ¢ g i h q p G q r h q p U ( w y b U & s & h q pt c h á ì â ¡ é U Õ 4 ä } à p × Û Ø U Õ 4 × Ø r Þ Ç Õ × ä Þ Ç Ù ã ë ¤ Ù Ç á E ò ª é Y Õ Ù Ç á r ò i à Ü ä Õ ¶ Ù Ç á ì ä Õ q í ÷ × i ä ¤ Ù Ç Þ ã Þ ã Õ í ¹ ø © ä Õ 4 Þ Ü Ö Ù Ç × i á r à r î % E î 2 Ü á Y í D q ü i ý & è ¦ ¤ × i â Õ 4 × á r Õ Ú & Þ Ü Ù ã â à ä Õ 4 ó à Ø Y × i á r à Q Ù ã é E Ù ã Þ Ç Ù Ç Ö ß b ø ù ä Õ Þ ã Ü Ö Ù ã × á r à I 5 Ü á Y í { F ý è Ë E ä × â Ë Ù ã ò ì ä Õ 2 % Y ø © é Y ý Ð ï £ Õ º } r á r í Ö Ô r Ü Ö I I ó µ ä Õ 4 Þ Ü Ö Ù ã × á Y à Ð Ú } Ô r Ü ä } Ü Ú & ó Ö Õ ä Ù 4 Õ Ü í r í n Ù Ö Ù Ç × i á r Ü Þ ä Õ 4 Þ ã Õ 4 ë Ü á i Ö Ù ã á n Û © × ä â Ü Ö Ù ã × á p î X Û © × ä Õ & Ý E Ü â Ø E Þ ã Õ Ù ã á x ì ä Ù Ç Õ q à Ü á Y í ¤ Ù Ç Þ ã Þ Ç Ù ã á E ò à Ü ä Õ ä Õ 4 Þ Ü Ö Õ q í Ö × p é Y × i â ¡ é r à 5 ø © ä Õ 4 Þ Ü Ö Ù ã × á Y à D ¡ Ð Ü á r í u û ¢  ý î 8 ï ¦ Ô E Ù ã Ú } Ô ¹ Ü i í E í E à ì Ø Ö × ô Ö Ô E Õ ¤ Û ù Ü Ú & Ö Ö Ô r Ü Ö Ø U Õ 4 × i Ø E Þ ã Õ Ü ä Õ 1 Ö Ô E Õ 3 × á E Õ q à & ¤ Ù ã Þ Ç Þ ã Õ í ø © ä Õ 4 Þ Ü Ö 
Ù ã × á Y à & £ Ç î k % ¤ ã î n Ü á r í D ü ¥ ý & è Ð Ý ¤ Ø r Þ Ç × ó à Ù Ç × i á r à × ¤ Ú Ú ì ä 2 Ü Ö X AE ¢ ¡ ² 2 ¾ AE 8 º Ä Ü á r í ¡ × á à ¡ ² ¿ Ä ø © ä Õ 4 Þ Ü Ö Ù ã × á Y à # ¤ £ Ü á r í % ¤  ý è ¼ Ó Ô E Õ % Ø U Õ 4 ä Ø Y Õ 4 Ö ä } Ü Ö × ä } à Ü ä Õ § ¦ © E w A d î à × â Õ 4 ó Ö Ù ã â Õ à w 9 1 d Ü á Y í Ö Ô E Õ ß â Ù ã ò Ô Ö é Y Õ Þ Ç × i á E ò Ö × 5 × i ä Ð Ö Ü ä ò i Õ & Ö £ Ü á AE ¤ £ ¦ ¥ ¡ º £ ¾ § ¡ ² 2 ¾ µ AE 8 º ø © ä Õ 4 Þ Ü Ö Ù Ç × i á § I  ý è Ð Ó Ô E ä × ì ò i Ô E × ì Ö Ë Ù ã ò ì ä Õ ' % Ö Ô r Õ á E × Ö } Ü Ö Ù Ç × i á p 1 u W 5 A U ó · & k x w y @ ä Õ & Û © Õ ä à ¦ Ö × Ö Ô E Õ Ö ä Ù ã ò ò Õ ä ï 8 × ä } í E à Ö × À Ö Ô r Õ ä Õ 4 Þ ã Õ 4 ë Ü á Ö Õ 4 á Ö Ù Ç Ö Ù ã Õ à ¤ Ü à à × ¤ Ú 4 Ù ã Ü Ö Õ q í u ï ¦ Ù Ö Ô ÷ Ö Ô r Õ p 1 u W 5 A è Ê Ï & % H Q x Ê Ë Ì p Ò B Ì v Ó Ô E Õ ä Õ v Ü ä Õ p å ae p ç Ü Ø E Ø r Þ Ç Ù Ú 4 Ü Ö Ù ã × á r à Û © × ä ï ¦ Ô E Ù ã Ú } Ô ¬ Ö × Ø E Ù Ú ä Õ 4 Ø r ä Õ 4 ó à Õ 4 á Ö Ü Ö Ù ã × á b Ü i à 3 Ü p Ú & × i Þ Ç Þ ã Õ Ú & Ö Ù ã × á b × Û I ä Õ 4 Þ ã Õ 4 ë Ü á Ö ä Õ 4 Þ Ü Ö Ù Ç × i á r à Ù ã à 3 á E × Ö à ì ¢ Ú & Ù ã Õ 4 á Ö è ³ Ó Ô r Ù ã à p Ù ã à v é Y Õ q Ú 4 Ü ì à Q Õ b Ö × i Ø E Ù ã Ú ¶ ä Õ Þ ã Ü Ö Ù ã × á r à p Ú 4 Ü Ø E Ö ì ä Õ × á r Þ Ç ß Ö Ô E Õ â × i à Q Ö ¦ Ú } Ô Y Ü ä } Ü Ú Ö Õ 4 ä Ù ã à Q Ö Ù Ú 1 Ü á r í v ä Õ 4 Ø U Õ & Ö Ù Ö Ù Ç ë i Õ Ù Ç á n Û © × i ä â Ü ó Ö Ù ã × á C Ü é U × ì Ö p Ü ¥ Ö × Ø r Ù ã Ú è ð Õ ô Ü ä ò ì Õ ¤ Ö Ô r Ü Ö ¤ Ü í E í E Ù Ö Ù Ç × i á r Ü Þ Ù Ç á n ó Û © × ä â Ü Ö Ù Ç × i á ¶ Ù ã à × Û Ð Ù ã á Ö Õ ä Õ q à Ö q î U Õ q à Q Ø U Õ Ú 4 Ù ã Ü Þ Ç Þ ã ß Û © × ä à ì Ú } Ô b Ü Ø E Ø r Þ Ç Ù Ú 4 Ü ó Ö Ù ã × á Y à 1 Ü à â ì Þ Ö Ù ó í n × n Ú ì â Õ á Ö à ì â â Ü ä Ù Ü Ö Ù ã × á ¢ è Ë r × ä 1 Õ 4 Ý n Ü â ó Ø E Þ ã Õ î Ü ¶ í n × n Ú ì â Õ 4 á Ö ¡ Û © × n Ú ì à Q Ù ã á E ò ¶ × i á ¬ Ö × Ø r Ù ã Ú ² · 3 o º ¹ E » ! ² ¥ © ¼ ³ í n Ù à Ú ì à à Q Õ q à ¡ Ü Þ à × ¶ × Ö Ô E Õ 4 ä 5 Ö Ô E Õ â Õ à î Ṏ èò r è Ü ä ä Õ à Q Ö à î 2 à ì à Ø U Õ Ú Ö } à 4 î à Õ Ú ì ä Ù Ç Ö ß ¼ â Õ q Ü à ì ä Õ à î ¡ é U × â ¡ é í n Õ 4 Þ ã Ù Ç ë i Õ 4 ä ß ñ Ü á r í í n Õ & Ö × á r Ü Ö Ù ã × á â Õ & Ö Ô E × n í E à 4 è & ¦ n × â Õ × Û ¢ Ö Ô E Õ q à Q Õ Ö Ô r Õ 4 â Õ à â Ü g ß é Y Õ Ü à 8 ï 8 Õ 4 Þ ã Þ U Ú 4 Õ 4 á n ó Ö ä } Ü Þ Y Ö × 5 × Ö Ô E Õ 4 ä I Ö × Ø r Ù ã Ú à 4 î ¤ Õ èò r è Ü ä ä Õ à Q Ö à Ð Ü á r í Ù ã á r í n Ù Ú Ö â Õ á Ö à Ü ä Õ Ú & Õ á Ö ä } Ü Þ ¤ Ö × Ö × Ø E Ù Ú § © ² À q ± ¿ ¹ ¬ x ® « ¿ 7 ª s ³ ³ è i ú × ï £ Õ ë Õ 4 ä q î à × â Õ Ö Ô r Õ 4 â Õ à ¦ â Ü g ß á r × Ö 1 é U Õ ¡ Ú & × ë i Õ 4 ä Õ í é ¤ ß Ü á ¤ ß × Û í n × n Ú ì â Õ á Ö à ¦ Ù ã á Ö Ô r Õ v Ú & × i Þ Ç Þ ã Õ Ú & Ö Ù ã × á ¢ è Ë r × ä ¡ à × â Õ å ae p ç ñ Ü Ø r Ø E Þ Ç Ù Ú 4 Ü Ö Ù ã × á r à ¡ à ì Ú } Ô ¥ Ü i à â ì Þ Ö Ù ó í n × n Ú ì â Õ á i Ö à ì â â Ü ä Ù Ü Ö Ù ã × á ¢ î Ù ã á n Û © × i ä â Ü Ö Ù Ç × i á " Ü é U × ì Ö ï ¦ Ô E Ù Ú } Ô ¶ Ö Ô E Õ 4 â Õ à Ü ä Õ 5 â × i ä Õ Ú } Ô r Ü ä } Ü Ú & Ö Õ ä Ù à Ö Ù ã Ú 3 Û © × i ä Ü Ö × i Ø E Ù Ú ¡ Ù ã á Ü Ú & × i Þ Ç Þ ã Õ Ú & Ö Ù ã × á ¤ Ù à ¦ Ù Ç â Ø U × ä Ö Ü á i Ö q è Ë r × ä ¦ Ö Ô E Ù ã à Ø ì ä Ø Y × à Q Õ i î E ï 8 Õ Ú & × á E ó à Ù ã í n Õ ä Õ q í Ü " Ö Ô E Ù ã ä } í ÷ ä Õ 4 Ø E ä Õ à Õ 4 á Ö Ü Ö Ù ã × á ÷ × Û Ö × i Ø E Ù Ú à Ù Ç ò i á r Ü Ö ì ä Õ à è z f | ² Ù ã à í n Õ h } r á E Õ q í Ü i à " i k y l 7 m n s ø z ! ' c ' ý G © ø z ! ý ( î ï ¦ Ô E Õ ä Õ z ! 
[Figure: segmentation of an example document into topic themes for topic Bombing(Texas), with the topic relations recognized for each theme (Th1: Explosion; Th2: Damage, Arrest).
Syntax-based Topic Relations: 2. VP(EXPLODE-word)-Object-> NP(BOMB-word); 3. VP(DAMAGE-word)-Object-> NP(ARTIFACT | VEHICLE | BOMBING); 4. VP(INJURE-word)-Object-> NP(NUMBER); 5. VP(EXPLODE-word)-Prep_Attach{in | outside}-> NP(ARTIFACT); 6. VP(EXPLODE-word)-Object-> NP(DATE | TIME); 7. VP(CLAIM-word)-Object-> NP("responsibility"); 8. VP(INJURE-word)-Object-> NP(HUMAN | LIVING | PERSON-Name); 9. VP(CLAIM-word)-Subject-> NP(HUMAN | LIVING | PERSON-Name); 10. VP(KILL-word)-Object-> NP(NUMBER).
C-Relations: 1'. VP(INJURE-word)--> NP(BOMB-word); 2'. VP(KILL-word)--> NP(BOMB-word); 3'. VP(EVENT)-Object-> NP(BOMB-word); 4'. VP(EXPLODE-word)--> NP(ARTIFACT | OBJECT); 5'. VP(EXPLODE-word)--> NP(MALE | HUMAN | LIVING | PERSON-Name); 6'. VP(EXPLODE-word)--> NP(CITY-Name | COUNTRY-Name); 7'. VP(EXPLODE-word)--> NP(ORGANIZATION-Name); 8'. VP(EXPLODE-word)--> NP(DATE | TIME); 9'. VP(INJURE-word)--> NP(HUMAN | LIVING); 10'. VP(KILL-word)--> NP(NUMBER).
Example text: a news report about a package bomb that exploded in Alvord, Texas, badly burning a woman; a warrant was issued for the arrest of a 50-year-old suspect, according to Wise County Sheriff Phil Ryan.] |
||
14,755,119 | Morphology vs. Syntax in Adjective Class Acquisition | This paper discusses the role of morphological and syntactic information in the automatic acquisition of semantic classes for Catalan adjectives, using decision trees as a tool for exploratory data analysis. We show that a simple mapping from the derivational type to the semantic class achieves 70.1% accuracy; syntactic function reaches a slightly higher accuracy of 73.5%. Although the accuracy scores are quite similar with the two resulting classifications, the kinds of mistakes are qualitatively very different. Morphology can be used as a baseline classification, and syntax can be used as a clue when there are mismatches between morphology and semantics. | [
1937760,
5805000,
11250697,
9680240,
6232405
] | Morphology vs. Syntax in Adjective Class Acquisition
June 2005
Gemma Boleda gemma.boleda@upf.edu
Toni Badia toni.badia@upf.edu
Sabine Schulte im Walde
GLiCom, Pompeu Fabra University, Barcelona
GLiCom, Pompeu Fabra University, Barcelona
Computational Linguistics, Saarland University, Saarbrücken schulte@CoLi.Uni-SB.DE
Morphology vs. Syntax in Adjective Class Acquisition
Proceedings of the ACL-SIGLEX Workshop on Deep Lexical Acquisition, Ann Arbor, June 2005
This paper discusses the role of morphological and syntactic information in the automatic acquisition of semantic classes for Catalan adjectives, using decision trees as a tool for exploratory data analysis. We show that a simple mapping from the derivational type to the semantic class achieves 70.1% accuracy; syntactic function reaches a slightly higher accuracy of 73.5%. Although the accuracy scores are quite similar with the two resulting classifications, the kinds of mistakes are qualitatively very different. Morphology can be used as a baseline classification, and syntax can be used as a clue when there are mismatches between morphology and semantics.
Introduction
This paper fits into a broader effort addressing the automatic acquisition of semantic classes for Catalan adjectives. So far, no established standard of such semantic classes is available in theoretical or empirical linguistic research. Our aim is to reach a classification that is empirically adequate and theoretically sound, and we use computational techniques as a means to explore large amounts of data which would be impossible to explore by hand to help us define and characterise the classification.
In previous research (Boleda et al., 2004), we developed a three-way classification according to generally accepted adjective properties (see Section 2), and applied a cluster analysis to further examine the classes. While the cluster analysis confirmed our classification to a large extent, it was clear that one of the classes needed further exploration. Also, we used only syntactic features modelled as pairs of POS-bigrams; we explored neither other syntactic features nor the role of morphological evidence for the classification.
In this paper we apply a supervised classification technique, decision trees, for exploratory data analysis. Our aim is to explore the linguistic features and description levels that are relevant for the semantic classification, focusing on morphology and syntax. We check how far we get with morphological information, and whether syntax is helpful to overcome the ceiling reached with morphology.
Decision trees are appropriate for our task, to test and compare sets of features, based on our gold standard. They are also known for their easy interpretation, by reading feature combinations off the tree paths. This property will help us get insight into relevant characteristics of our adjective classes, and in the error analysis.
The paper is structured as follows: Section 2 presents the adjective classification and the gold standard used for the experiments. Sections 3 and 4 explore the morphology-semantics interface and the syntax-semantics interface with respect to the classification proposed, and Section 5 focuses on the differences in the kind of information each level provides for the classification. Sections 6 and 7 are devoted to discussion of related work and conclusions.
Classification and gold standard
Classification proposal
To date, no semantic classification of adjectives is generally accepted in theoretical linguistics. Much research in formal semantics has focused on entailment properties, while other kinds of lexical semantic properties are left uncovered. Standard descriptive grammars propose broader classifications (see Picallo (2002) for Catalan), but these usually do not follow a single classification parameter: they mix morphological, syntactic and semantic criteria and end up with classifications that are not consistent.
We are interested in properties of the lexical semantics of adjectives that have a bearing on their syntactic behaviour. This property makes the semantic distinctions traceable at another linguistic level, which is desirable to ensure falsifiability of the classification criteria. In more practical terms, it also allows the exploitation of the syntax-semantics interface as is usual in Lexical Acquisition, to automate the acquisition of the relevant classes.
Our proposal is largely inspired by the Ontological Semantics framework (Raskin and Nirenburg, 1995). The assumption of an ontology as a model of the world allows us to distinguish linguistic aspects (e.g. syntactic properties) from the actual content of the lexical entries, formalised as a link to an element of the ontology. We assume an ontology of basic denotations composed of properties (or attributes), objects (or entities), and events. Adjectives participate in each of these possible denotations, and can be basic, object-related or event-related, depending on their lexical meaning. 1 We next characterise each class.
Basic adjectives are the prototypical adjectives, which denote attributes or properties which cannot be decomposed (bonic 'beautiful', sòlid 'solid'). Event adjectives have an event component in their meaning. For instance, if something is tangible ('tangible'), then it can be touched: tangible necessarily evokes a potential event of touching which is embedded in the meaning of the adjective. Other examples are alterat ('altered') and ofensiu ('offensive'). Similarly, object adjectives have an embedded object component in their meaning: deformació nasal ('nasal deformity') can be paraphrased as deformity that affects the nose, so that nasal evokes the object nose. Other examples are peninsular ('peninsular') and sociolingüístic ('sociolinguistic').
This proposal shares many aspects with discussions in descriptive grammar (Picallo, 2002) and proposals in other lexical resources, such as WordNet (Miller, 1998). In particular, the distinction between prototypical, attribute-denoting adjectives and object-related adjectives is found both in descriptive grammar and in WordNet. As for event-related adjectives, they are not usually found as a class in Romance descriptive grammar, and in WordNet they are distinguished but only if they are participial; other kinds of deverbal adjectives are considered basic (in our terminology). More on the morphology-semantics relationship in Section 3.
Our classification focuses on the semantic content of adjectives, rather than on formal properties such as entailment patterns (contrary to the tradition in formal semantics). The semantic distinctions proposed have an effect on the syntactic distribution of adjectives, as will be shown throughout the paper, and can be exploited in low-level NLP tasks (POS-tagging), and also in more demanding tasks, such as paraphrase detection and generation (e.g. exploiting the relationship tangible → can be touched, or deformació nasal → deformity affecting the nose).
Gold standard
To perform the experiments, we built a set of annotated data based on this classification (gold standard from now on). We extracted the lemmata and data for the gold standard from a 16.5 million word Catalan corpus (Rafel, 1994), lemmatised, POS-tagged and shallow parsed with the CatCG tool (Alsina et al., 2002). The shallow parser gives information on the syntactic function of each word (subject, object, etc.), not on phrase structure. 186 lemmata were randomly chosen among all 2564 adjectives occurring more than 25 times in the corpus. 86 of the 186 lemmata were classified by 3 human judges into each of the classes (basic, object, event). 2 In case of polysemy affecting the class assignment, the judges were instructed to return the class for the most frequent sense as the primary class, and a secondary class for the other sense.
Polysemy typically arises in cases where an adjective has developed a noncompositional sense. One of these cases would be the adjective puntual, a denominal adjective (derived from punt, 'point'). Its most frequent sense is 'punctual' as in 'I expect Mary to be punctual for this meeting'. This is a basic meaning, noncompositional in the sense that it cannot be predicted from the meaning of the originating noun in combination with the suffix.
The adjective has a compositional sense, namely, 'related to a point' (usually, a point in time), as in això va ser un esdeveniment puntual, 'this was a once-occurring event'. This is the meaning we would expect from the derivation punt ('point') + al, and is an object meaning. In this case, the judge should assign the adjective to two classes, primary basic, secondary object. Compositional meanings are thus those corresponding to active morphological processes, and can be predicted from the meaning of the noun and the derivation with the suffix (be it denominal, deverbal or participial).
The judges had an acceptable 0.74 mean κ agreement (Carletta, 1996) for the assignment of the primary class, but a meaningless 0.21 for the secondary class (they did not even agree on which lemmata were polysemous). As a reaction to the low agreement about polysemy, we incorporated polysemy information from a Catalan dictionary (DLC, 1993). This information was incorporated only in addition to the gathered gold standard: In many cases the dictionary only lists the compositional sense. We added it as a second reading if our judges considered the noncompositional one as most frequent.
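For readers who want to reproduce agreement figures of this kind, the following sketch computes a mean pairwise Cohen's kappa over the three judges, which is one common reading of "mean κ agreement"; the paper does not publish its annotation matrices, so the judge data below are invented placeholders.

from itertools import combinations
from collections import Counter

def cohen_kappa(a, b):
    # Cohen's kappa for two equal-length label sequences.
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_exp = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (p_obs - p_exp) / (1 - p_exp)

def mean_pairwise_kappa(annotations):
    # annotations: one label sequence per judge.
    pairs = list(combinations(annotations, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

judge1 = ["basic", "event", "object", "basic", "event"]
judge2 = ["basic", "event", "object", "basic", "basic"]
judge3 = ["basic", "basic", "object", "basic", "event"]
print(mean_pairwise_kappa([judge1, judge2, judge3]))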
One of the authors of the paper classified the remaining 100 lemmata according to the same criteria. For our experiment, we use the complete gold standard containing 186 lemmata (87 basic, 46 event, and 53 object adjectives).
Morphological evidence
There is an obvious relationship between the derivational type of an adjective (whether it is denominal, deverbal, or not derived) and the semantic classification we have put forth: Usually, a denominal adjective has an object embedded in its meaning (corresponding to the object denoted by the noun from which it is derived). Similarly, a deverbal or participial adjective tends to denote a relationship with an event (the event denoted by the originating verb), and a nonderived adjective tends to have a basic meaning. Therefore, the simplest classification strategy is to associate each derivational type with a semantic class: nonderived → basic, participial → event, deverbal → event, and denominal → object. Table 1 reflects the accuracy results of this theoretically defined mapping between morphology and semantics, compared to our gold standard (cases corresponding to the predicted mapping in boldface). 3 For instance, the first line of this table shows that 39 of the 42 nonderived adjectives, predicted to be basic by the morphology-semantics mapping, are actually deemed basic by the human judges, while the remaining 3 are classified as object adjectives. Note that the table correctly reflects the general tendencies just outlined: This simple classification achieves 0.76 f-score. However, there are obvious mismatches. Most of these mismatches are concentrated in the first column, namely many of the deverbal, participial and denominal adjectives (predicted to denote event or object meanings) actually have a basic meaning as their most frequent sense. This fact is reflected in the low recall score for basic adjectives (0.45), and in precision being much lower than recall for the other two classes (0.64 vs. 1 for event, 0.67 vs. 0.91 for object adjectives).
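The mapping itself is trivial to operationalise. A minimal sketch (not the authors' code; the three-lemma gold list is an invented placeholder for the 186-item gold standard):

MORPH_TO_CLASS = {
    "nonderived": "basic",
    "participial": "event",
    "deverbal": "event",
    "denominal": "object",
}

# (lemma, derivational type, gold class); puntual illustrates a
# noncompositional sense that makes the mapping fail.
gold = [
    ("bonic", "nonderived", "basic"),
    ("tangible", "deverbal", "event"),
    ("puntual", "denominal", "basic"),
]
correct = sum(MORPH_TO_CLASS[d] == c for _, d, c in gold)
print(f"accuracy: {correct / len(gold):.3f}")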
The mismatches usually correspond to polysemy due to noncompositional senses of the adjectives, such as the denominal adjective puntual discussed above. Another case is the participial abatut, which compositionally means 'shot-down', but is most frequently used as a synonym to 'depressed, downcast', and therefore is classified as basic. Similarly, a deverbal adjective such as radiant most frequently means 'happy', but also has a compositional sense, 'irradiating'.
Sometimes the compositional meaning is completely lost, as with most deverbal adjectives classified as basic. In some cases the underlying verb no longer exists in Catalan (horrible-*horrir, compatible-*compatir), and they are not perceived as derived. 4 In other cases, although the verb exists, it is a stative predicate (e.g. inestable, 'unstable', from estar 'stand/be'; pudent 'stinking', from pudir, 'stink'), and thus are much more similar to basic adjectives than deverbal adjectives deriving from dynamic predicates, such as ofensiu ('offensive'). Aspectuality of the deriving verb is a factor that has to be examined more carefully in the future.
To summarise, the results for the morphologysemantics mapping indicate that there is a clear relationship between these two levels: Morphology does most of the job right, because each morphological rule has an associated semantic operation. However, this level of information has a clear performance ceiling. In case of noncompositional meanings the morphological class will systematically be misleading, which cannot be overcome unless other kinds of information are let into play.
Syntactic evidence
If we adhere to the hypothesis that semantics has a reflection in syntactic distribution (the basis for most work in Lexical Acquisition), we can expect that syntax gives us a better clue to semantics than morphology, particularly in cases of noncompositional meanings. We expect that adjectives with a noncompositional meaning behave in the syntax as basic adjectives, not as event or object adjectives.
Before getting into the experiments using syntactic information, we briefly present the syntax of adjectives in Catalan and the predictions with respect to the syntactic behaviour of each class.
Adjective syntax in Catalan
The default function of the adjective in Catalan is that of modifying a noun; the default position is the postnominal one (about 66% of adjective tokens in the corpus modify nouns postnominally). Examples are taula gran ('big table'), arquitecte tècnic ('technical architect'), and element constitutiu ('constitutive element').
However, some adjectives can appear prenominally, mainly when used non-restrictively (so-called "epithets"; 26% of the tokens occur in prenominal position). In English, this epithetic use is not typically distinguished by position, but some adjectives can epithetically modify proper nouns ('big John' vs. '*technical John'). 'Big' in 'big John' does not restrict the reference of 'John', but highlights a property. In Catalan and other Romance languages, prenominal position is systematically associated to this use, with proper or common nouns.
The other main function of the adjective is that of predicate in a copular sentence (6% of the tokens), such as aquesta taula és gran ('this table is big'). Other predicative contexts, such as adjunct predicates (as in la vaig veure borratxa, 'I saw her drunk'), are much less frequent: approx. 1% of the adjectives in the corpus.
From empirical exploration and literature review, we gathered the following tentative predictions as to the syntactic behaviour of each class in Catalan:
Basic adjectives occur in predicative environments, have scope over other adjectives modifying the same head (most notably, object adjectives), and can have epithetic uses and therefore occur prenominally.
Event adjectives occur in predicative environments and after object adjectives.
Object adjectives occur in a rigid position, directly after their head noun; they do not allow predicative constructions nor epithetic uses (and therefore no prenominal position).
Setup
We modelled the syntactic behaviour of adjectives using three different representation strategies. The values in the three cases were frequency counts, that is, the percentage of occurrence of each adjective in that syntactic environment. The frequency of the adjectives from the gold standard in the corpus ranges from 27 to 7154 (median: 129.5). All in all, 56,692 out of the approx. 600,000 sentences in the corpus were used as data for this experiment. We have not analysed the influence of frequency on the results, but each adjective is represented by a reasonable amount of data, so that the representation of the syntactic evidence in terms of frequency is adequate. The simplest modelling strategy is unigram representation, taking the POS of the word to the left of the adjective and the POS of the word to the right as separate features. Adjectives have a limited syntactic distribution (much more restricted than e.g. verbs), so that even this simple representation should provide relevant evidence. The second one is bigram representation, with features consisting of the POS of the word to the left of the adjective and the POS of the word to the right as a single feature. This representation results in a much larger number of features (see Table 2), thus potentially leading to data sparseness, but it should be more informative, because left and right context are taken into account at the same time.
The third one is the syntactic function, as given by CatCG. For adjectives, these functions are noun modifier (distinguishing between prenominal and postnominal position), predicate in a copular sentence, and predicative adjunct (more information in Section 4.4). CatCG does not yield completely disambiguated output, and the ambiguous functions were also taken into account, so as not to miss any potentially relevant source of evidence.
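The three representations can be read off the same per-occurrence records. A sketch under the simplifying assumption that each corpus occurrence of an adjective is available as a (left POS, right POS, syntactic function) record:

from collections import Counter

def featurize(occurrences, mode):
    # occurrences: dicts with "left_pos", "right_pos", "function" keys.
    counts = Counter()
    for occ in occurrences:
        if mode == "unigram":                 # left/right POS as separate features
            counts[("L", occ["left_pos"])] += 1
            counts[("R", occ["right_pos"])] += 1
        elif mode == "bigram":                # left-right POS pair as one feature
            counts[(occ["left_pos"], occ["right_pos"])] += 1
        elif mode == "function":              # syntactic function from CatCG
            counts[occ["function"]] += 1
    n = len(occurrences)                      # relative frequency per occurrence
    return {feat: c / n for feat, c in counts.items()}

occs = [{"left_pos": "NOUN", "right_pos": "PUNCT", "function": "postnominal_mod"},
        {"left_pos": "COP",  "right_pos": "PREP",  "function": "copular_pred"}]
print(featurize(occs, "function"))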
To perform the experiment, we used C5.0, a commercial decision tree and rule induction engine developed by Ross Quinlan (Quinlan, 1993). We tried several options, including the default, winnowing, and adaptive boosting. Although the results varied a bit within each representation strategy (boosting tended to perform better, winnowing did not have a homogeneous behaviour), the general picture remained the same as to the relative performance of each level of representation. Therefore, and for clarity of exposure and exploration reasons, we will only present and discuss results using the default options.
For comparison, we ran the tool on the 3 syntactic representation levels and on morphological information, using derivational type, a finer-grained derivational type, and the suffix. 5
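The evaluation protocol (ten 10-fold cross-validation runs) can be sketched as follows; since C5.0 is commercial, scikit-learn's CART implementation is used here as a stand-in, and the data matrix is a random placeholder rather than the real feature table:

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((186, 14))        # e.g. the 14 syntactic-function features
y = rng.choice(["basic", "event", "object"], size=186,
               p=[87 / 186, 46 / 186, 53 / 186])   # gold class proportions

scores = []
for run in range(10):            # ten independent 10-fold CV runs
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=run)
    tree = DecisionTreeClassifier(random_state=run)
    scores.extend(cross_val_score(tree, X, y, cv=cv))
se = np.std(scores) / np.sqrt(len(scores))
print(f"mean accuracy: {np.mean(scores):.3f} (SE {se:.3f})")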
Results
The results of the experiment, obtained averaging ten 10-fold cross-validation runs, are depicted in Table 2. In this table, #f is the number of features for each representation strategy, size the size of the trees (number of leaves), accuracy the accuracy rate of the classifiers (in percentage), and SE the standard error of each parameter. We currently assume a majority baseline, that of assigning all adjectives to the most numerous class (basic). Given that there are 87 basic adjectives and 186 items in the gold standard (see Table 1), this baseline results in 46.8% accuracy. Note that all four classifiers are well above the majority baseline (46.8%). The best results are obtained with the lowest numbers of features (3 for morphology, 14 for syntactic function, vs. 24 and 135 for unigram and bigram), and correspondingly, with the smallest trees (average 4.3 and 3.5 leaves for morphology and function, 19.1 and 18.8 for n-grams). We interpret this result as indicating that the levels of description of morphology and syntactic function are more adequate than the n-gram representation, although this is only a tentative conclusion, because the differences in accuracy are not large. Function abstracts away from particular POS environments, and summarises the most relevant information without the data sparseness problems inherent in n-gram representation. Also noteworthy is that the accuracy rates for syntax are lower than we would have expected, according to the hypothesis that it better reflects synchronic meaning. For the first two syntactic representations, unigrams and bigrams, results are worse than using the simple morphological mapping explained above (respectively 68.8% and 67.4% accuracy, compared to 70.1% accuracy achieved with morphology). 6 Only syntactic function improves upon the morphological results, and only slightly (73.8% average accuracy). However, as will be explored in the rest of the Section, the mistakes of the morphological classifier are qualitatively different from those of the syntactic classifiers, which can be used to gain insight into the nature of the problem handled, and to build better classifiers.
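For concreteness, the baseline figure is just the majority-class proportion in the gold standard:

\[ \text{baseline accuracy} = \frac{\#\text{basic}}{\#\text{items}} = \frac{87}{186} \approx 0.468 = 46.8\% \]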
Error analysis
For the analysis of the results, we will focus on the syntactic function features, because it is the best system and allows clearer exploration of the hypotheses stated so far than the n-gram representation. Table 3 contains the data for the 4 main syntactic functions for adjectives. For each class (all adjectives classified as basic, event or object in the gold standard), it contains the average percentage of occurrence with each syntactic function, along with the standard deviation. A set of 10 remaining syntactic features represented cases not disambiguated by CatCG, which had really low mean values and were rarely used in the DTs.
The values of the 4 syntactic functions confirm to a large extent the predictions made with respect to the syntactic behaviour of each adjective class, but also evidence an additional fact: basic and event adjectives, in the current definition of the classes, have only slight differences in their syntax.
Basic and event adjectives have similar mean values for the default adjective position in Catalan (postnominal modifier; 0.69 and 0.68 mean values), and also for the predicative function in a copular sentence (0.10 and 0.084 mean values). The two-sample t-test confirms that the differences in mean are not significant (p=0.73 and p=0.88 at the 95% confidence interval). 7 Basic adjectives occur more frequently as prenominal modifiers (0.07 compared to 0.02), but note the large standard deviation (0.09 and 0.04), which means that there is a large within-class variability. In addition, event adjectives have a larger mean value for the predicative adjunct function (0.19 vs. 0.09), but again, the standard deviation of both classes is very large (0.16 and 0.08). Nevertheless, a t-test returns significant p values (< 0.001, 95% conf. int.) for the differences in mean of these two features, so that they can be used as a clue to the characterisation of the event class. 8 The bias of event adjectives towards predicative uses can be attributed to participials, the most frequent kind of adjectives in the event class (35 vs. 11).
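The significance tests above are standard two-sample t-tests. A sketch with SciPy; the per-adjective frequency vectors are simulated here, and the Welch variant (unequal variances) is an assumption, since the paper does not say which variant was used:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
basic_prenom = rng.normal(0.07, 0.09, 87).clip(0, 1)   # simulated feature values
event_prenom = rng.normal(0.02, 0.04, 46).clip(0, 1)

# Two-sided test: means equal vs. not equal
t2, p2 = stats.ttest_ind(basic_prenom, event_prenom, equal_var=False)
# One-sided test: basic greater than event (prenominal modification)
t1, p1 = stats.ttest_ind(basic_prenom, event_prenom, equal_var=False,
                         alternative="greater")
print(f"two-sided p={p2:.4f}, one-sided p={p1:.4f}")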
Object adjectives do present a distinct syntactic behaviour: They act (as expected) as rigid postnominal modifiers (mean value 0.94), and cannot be used as prenominal modifiers (mean value 0.01) or as predicates (mean values 0.018 and 0.008 for predicative functions). Also note that the standard deviation for each feature is lower in the case of object adjectives than in the case of basic and event adjectives, which indicates a higher homogeneity of the object class. T-tests for the difference in means with respect to the basic and event class return significant p values (< 0.001) except for the difference in prenominal modification values between event and object adjectives (p=0.26). 9 Decision trees built with this feature set use the information consistent with the observations just outlined. In general, they characterise object adjectives as postnominal modifiers (usual threshold: 0.9), basic adjectives as prenominal modifiers (usual threshold: 0.01), and event adjectives as not being prenominal modifiers. In some trees, information about predicativity is also included (event adjectives act as predicative adjuncts; usual threshold: 0.04).
From the discussion of the feature values, it is to be expected that most of the mistakes when using the syntactic function feature set are due to basic-event confusion, and this is indeed the case. For the error analysis, we divided the gold standard into three equal sets, and successively trained on two sets and classified the third. The classification of the gold standard that resulted is reflected in Table 4 (correctly classified items in boldface). Table 4 shows that the object class is best characterised (0.78 f-score), followed by the basic (0.73) and event (0.69) classes. Particularly low are precision for event (0.61) and recall for basic (0.64) adjectives. This distribution indicates that many adjectives are classified as event while belonging to other classes (18 to basic, 4 to object), and many basic adjectives are classified into other classes (18 as event, 13 as object).
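The per-class figures follow directly from the confusion matrix in Table 4; the snippet below recomputes them (the f-scores come out about 0.01 lower than the published ones, presumably due to rounding conventions in the source table):

confusion = {                    # predicted class -> counts per true class
    "basic":  {"basic": 56, "event": 7,  "object": 5},
    "event":  {"basic": 18, "event": 35, "object": 4},
    "object": {"basic": 13, "event": 4,  "object": 44},
}
for cls in confusion:
    predicted = sum(confusion[cls].values())                  # row total
    true_total = sum(row[cls] for row in confusion.values())  # column total
    tp = confusion[cls][cls]
    p, r = tp / predicted, tp / true_total
    print(f"{cls}: precision={p:.2f} recall={r:.2f} f-score={2*p*r/(p+r):.2f}")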
The basic-event confusion mainly takes place with basic adjectives not used as epithets (in prenominal position; curull 'full', dispers 'scattered') and event adjectives used as epithets (interminable 'endless', ofensiu 'offensive'). Although more analysis is needed, in many of these cases (such as interminable) the underlying verb is stative, which makes the adjectives very similar to basic adjectives, as mentioned in Section 3. The judges reported difficulties particularly in distinguishing event from basic adjectives, which matches the results of the experiments. The classification is fuzzy in this point, and we intend to develop clearer criteria to distinguish adjectives with an "active" event in their lexical meaning from basic adjectives.
As for the basic-object confusion, it is due to two factors. The first one is basic being the default class: In the gold standard, if an adjective does not fit into the other 2 classes, it is considered basic, even if it does not denote a prototypical kind of attribute or property. Examples are radioactiu ('radioactive') and recíproc 'reciprocal'. These tend to be used less in predicative and epithetic functions.
The second one is polysemy. 4 adjectives classified in the gold standard as polysemous between a basic (primary) and an object (secondary) reading are classified by C5.0 as object because they almost only (> 90% of the time) occur postnominally: artesanal, mecànic, moral, ornamental ('artesanal, mechanical, moral, ornamental'). All of these cases have a compositional meaning paraphrasable by 'related-to X', where X is the derived noun, and a noncompositional meaning such as 'automatic' for mecànic. The syntactic behaviour of the adjective is mixed according to the two classes, so that the values for environments typical of basic adjectives are too low to meet the thresholds. 10 To sum up, event adjectives do not seem to have consistent syntactic characteristics that tell them apart from basic adjectives, while object adjectives have a consistent behaviour distinct from the other two classes. This result backs up previous experimentation with clustering (Boleda et al., 2004), where half of the event adjectives were systematically clustered together with basic adjectives. 11 Polysemy plays a tricky role, because depending on the uses of the adjective it leads to a continuum in the feature values which sometimes does not allow a clear identification of the most frequent sense.
Differences between morphology and syntax
A crucial point to understand the roles of morphology and syntax for our semantic classification is the differences in the kinds of mistakes that each information level carries with it. From the discussion up to this point, we would expect that the default morphological classification causes fewer mistakes with event vs. basic, because the deverbal morphological rules carry the associated "related-to-event" meaning. On the contrary, syntax should handle better the cases where the relationship between morphology and semantics is lost, what we have termed noncompositional meanings. If we compare the mistakes made by each mapping, both morphology and syntax assign the expected class to 103 lemmata (55.4% of the gold standard), and both coincide in assigning a wrong class for 21 (11.3%). The cases where one mapping achieves the right classification and the other one makes a mistake are reflected in Tables 5 and 6. Cases where morphology achieves the right class and syntax does not (Table 5) do not present a very clear pattern, although the basic-event confusion in syntax is indeed reflected as the most numerous in Table 5 (6+7 cases). In absence of a syntactic characterisation of the class, applying the default mapping will yield better results.
As for the cases where syntax classifies correctly and morphology does not (Table 6), they do present a clear pattern: They correspond, as expected, to deverbal (8), participial (2) and denominal (17) adjectives with a meaning that does not correspond to the morphological rule. Among denominals, examples are elemental and horrorós ('elementary' and 'horrifying'); among deverbals, raonable and present ('reasonable' and 'present'); among participials, innat and inesperat ('innate' and 'unexpected').
Note that syntax is most helpful in the identification of basic denominal adjectives (17 cases), providing support for the hypothesis that adjectives with a noncompositional meaning behave in the syntax as basic adjectives, which can be exploited in a lexical acquisition setting. In contrast, since the event and basic classes do not have a clearly distinct syntactic distribution, the syntactic features do not help in telling these two classes apart. This problem accounts for the small overall accuracy improvement from morphology (70.1%) to syntax (73.8%): It improves the object vs. basic distinction, but it does not consistently improve the event vs. basic distinction.
Combining morphological and syntactic features
The next logical step in building a better classifier for adjectives is to use both morphological and syntactic function information. When doing that, a slightly better result is obtained, although no dramatic jump in improvement: 74.7% mean accuracy averaged across ten 10-fold cross-validation runs, with trees of average 8 leaves (mean accuracy being 70.1% with morphology and 73.8% with syntactic function; see Table 2). In most of the partitions of the data when using this feature set, the first node uses syntactic evidence (high values for postnominal position for object adjectives vs. the rest), and the second level nodes use the derivational type. The remaining morphological features (suffix, fine-grained derivational type; see footnote 5) are seldom used.
In all the decision trees, nonderived adjectives are directly assigned to the basic class, and in 80% of them participial adjectives are classified as event. The last rule causes a large number of errors, because 12 out of 47 participles were classified as basic in the gold standard. For the other two derivational types, syntactic evidence is used again in almost all decision trees (99% for deverbal, 80% for denominal adjectives). Deverbal or denominal adjectives that occur prenominally are deemed basic, according to expectation. Contrary to expectation, however, deverbal adjectives that occur predicatively are classified as basic. This result confirms the suspicion that frequent predicative use is associated with participial, but not with other kinds of deverbal adjectives, as stated in Section 4.4.
Related work
In recent years much research (Merlo and Stevenson, 2001; Schulte im Walde and Brew, 2002; Korhonen et al., 2003) has aimed at exploiting the syntax-semantics interface for classification tasks, mostly based on verbs. In particular, Merlo and Stevenson (2001) present a classification experiment which bears similarities to ours. They use decision trees to classify intransitive English verbs into three semantic classes: unergatives, unaccusatives, and object-drop. As in our experiments, they define three classes, and use only 60 verbs for the experiments. Merlo and Stevenson identify linguistic features referring to verb argument structure (crucially involving thematic relations), and classify the verbs into the three classes with an accuracy of 69.8%. They compare their results with a random baseline of 33%.
There has been much less research in Lexical Acquisition for adjectives. Early efforts include Hatzivassiloglou and McKeown (1993), a cluster analysis directed to the automatic identification of adjectives belonging to the same scale (such as cold-tempered-hot). More recently, Bohnet et al. (2002) used bootstrapping to assign German adjectives to "functional" classes (of a more traditional sort, based on a German descriptive grammar). They relied on ordering restrictions and coordination data which can be adapted to Catalan.
As for Romance languages, the only related work we are aware of is Carvalho and Ranchhod (2003), who developed a finite-state approach to disambiguating homograph adjectives and nouns in Portuguese. They manually classified the adjectival uses of the homographs into six syntactic classes with characteristics used in our classification (predicative uses, position with respect to the head noun, etc.). They used that information to build finite state transducers aimed at determining the POS of the homographs in each context, with a high accuracy (99.3%) and coverage (94%). The research undergone in this paper leads to the automatic acquisition of the classes, defined however at a semantic rather than syntactic level.
Conclusion and future work
In this paper, we have presented and discussed the role of two sources of evidence for the automatic classification of adjectives into ontological semantic classes: morphology and syntax. Both levels provide relevant information, as indicated by their respective accuracy results (70.1% for morphology, 73.8% for syntax), both well above a majority baseline (46.8%). Morphology fails in cases of noncompositional meaning, when the relationship to the deriving word has been lost, cases that syntax tends to correctly classify. In contrast, syntax systematically confuses event and basic adjectives due to the lack of a sufficiently distinct syntactic profile of the event class. Therefore, the default morphology-semantics mapping handles these cases better.
Not surprisingly, the best classifier is obtained combining both kinds of information (74.7%), although it is not even 1% better than the syntactic classifier. More research is needed to achieve better ways of combining both levels of description.
We can summarise our results as indicating that morphology can give a reliable initial hypothesis with respect to the semantic class of an adjective, which syntax can refine in cases of noncompositional meaning, particularly for object adjectives. Therefore, morphology can be used as a baseline in future classification experiments.
The experiments presented in this paper also shed light on the characteristics of each class. In particular, we have shown that event adjectives do not have a homogeneous and distinct syntactic profile. One factor to take into account is that the morphological variability within the class (suffixes -ble, -iu, -nt, participles) is associated with a high semantic variability. This semantic variability is not found in the object class, where the several suffixes (-al, -ic, -à, etc.) all have a similar semantic effect. Another factor which seems to play a role, and which has been identified in the error analysis, is the aspectuality of the deriving verb, particularly whether it is stative or dynamic. In the near future, we intend to use the best classifier to automatically classify more adjectives of our database, so as to allow further exploration of the data and a clearer definition of the class.
A major issue we leave for future research is polysemy detection. Up to now, we have only aimed at single-class classification, and not attempted to capture multiple uses of an adjective. E.g. the approach in Bohnet et al. (2002) could be adapted to Catalan: We can use data on coordination and ordering for polysemy detection, once the class of the most frequent sense is established with the methodology explained in this paper.
Finally, the results presented in this paper seem to point in a fruitful direction for the study of adjective semantics: Adjectives that are flexibly used, those that fully exploit the syntactic possibilities of the language (in Catalan, being used predicatively and as epithets), tend to correspond to adjectives with a basic meaning, that is, tend to be viewed as a compact attribute, as a prototypical adjective. In contrast, derived adjectives which retain much of the semantic link to the noun or verb from which they derive do not behave like prototypical adjectives, are tied to certain positions, and do not exhibit the full range of syntactic possibilities of adjectives as a class. We intend to explore the consequences of this hypothesis in more detail in the future.
              #f    size (mean / SE)    accuracy (mean / SE)
baseline      -     -                   46.8 / -
morphology    3     4.3 / 0.1           70.1 / 0.3
unigram       24    19.1 / 0.2          68.8 / 0.6
bigram        135   18.8 / 0.4          67.4 / 0.8
synt. funct.  14    3.5 / 0.1           73.8 / 0.3

Table 2: Decision Tree experiment
Table 3: Average values for the syntactic functions in each adjective class.
true class →  basic   event   object   Total
basic         56      7       5        68
event         18      35      4        57
object        13      4       44       61
Total         87      46      53       186
precision     .82     .61     .72      .72
recall        .64     .76     .83      .69
f-score       .73     .69     .78      .73

Table 4: Syntax-semantics mapping: results (correctly classified items in boldface)
true class →  basic   event   object   Total
basic                 7       5        12
event         6               4        10
object        4       4                8
Total         10      11      9

Table 5: Morphology right, syntax wrong
true class →  basic   event   object   Total
basic                 2       3        3
event         10                       12
object        17                       17
Total         27      2       3

Table 6: Syntax right, morphology wrong
Raskin and Nirenburg (1995) account separately for other kinds of adjectives, such as membership adjectives ('fake'). We will abstract away from these less numerous classes.
The 3 human judges were PhD students with training in linguistics, one of which had done research on adjectives. As it was defined, the level of training in linguistics needed for the task was quite high.
The morphological information was obtained from a manually constructed electronic database of adjectives, kindly provided by Roser Sanromà (2003).
The question may arise of whether these adjectives are really deverbal. In the current version of the adjective database, all adjectives bearing a suffix that is active in the Catalan derivational system are classified as derived. The problem is that Catalan shares suffixes with Latin, so that fixed forms from Latin that have been incorporated into Catalan cannot be superficially distinguished from active derived forms.
The finer-grained derivational type states whether the adjective is derived from a noun or verb that still exists in Catalan or not.
When using morphological features, DTs used almost only the main derivational type, according to the hypothesis stated in Section 3.
Alternatives "not equal" and "basic smaller than event" respectively. 8 Alternatives: "basic greater than event" for prenominal modification, "event greater than basic" for predicative adjunct.
Alternatives: all means of basic and event greater than those of object, except for postnominal modification, testing against a greater mean for object.
Note, however, that in 6 other cases with the same polysemy, syntax does tell them apart from typical object adjectives, and they are classified as basic (such as the puntual case discussed above; see discussion in next Section). 11 The ones that were distinguished from basic adjectives were so due to their bearing complements, a parameter orthogonal to the targeted classification.
AcknowledgementsMany thanks to the people who have manually annotated the data: Àngel Gil, Martí Quixal, Roser Sanromà. Also thanks to Louise McNally, Maite Melero, Martí Quixal, and three anonymous reviewers for revision and criticism of previous versions of the paper. We thank Eric Joanis, Alexander Koller, and Oana Postolache for suggestions that lead to this paper. Special thanks are due to Roser Sanromà for kindly providing us with her manual morphological classification(Sanromà, 2003), and to the Institut d'Estudis Catalans for lending us the research corpus. This work is supported by the Fundación Caja Madrid.
À. Alsina, T. Badia, G. Boleda, S. Bott, À. Gil, M. Quixal, and O. Valentín. 2002. CATCG: a general purpose parsing tool applied. In Proceedings of the 3rd LREC, pages 1130-1135.
B. Bohnet, S. Klatt, and L. Wanner. 2002. An approach to automatic annotation of functional information to adjectives with an application to German. In Proceedings of the Workshop on Linguistic Knowledge Acquisition and Representation at the 3rd LREC Conference.
G. Boleda, T. Badia, and E. Batlle. 2004. Acquisition of semantic classes for adjectives from distributional evidence. In Proceedings of the 20th COLING, pages 1119-1125.
J. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249-254.
P. Carvalho and E. Ranchhod. 2003. Analysis and disambiguation of nouns and adjectives in Portuguese by FST. In Proceedings of the Workshop on Finite-State Methods for Natural Language Processing at EACL2003, pages 105-112.
DLC. 1993. Diccionari de la Llengua Catalana. Enciclopèdia Catalana, Barcelona, third edition.
V. Hatzivassiloglou and K. R. McKeown. 1993. Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st ACL, pages 172-182.
A. Korhonen, Y. Krymolowski, and Z. Marx. 2003. Clustering polysemic subcategorization frame distributions semantically. In Proceedings of the 41st ACL, pages 64-71.
P. Merlo and S. Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27(3):373-408.
K. J. Miller. 1998. Modifiers in WordNet. In Christiane Fellbaum, editor, WordNet: an Electronic Lexical Database, pages 47-67. MIT, London.
C. Picallo. 2002. L'adjectiu i el sintagma adjectival. In Joan Solà, editor, Gramàtica del català contemporani, pages 1643-1688. Empúries, Barcelona.
R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco.
J. Rafel. 1994. Un corpus general de referència de la llengua catalana. Caplletra, 17:219-250.
V. Raskin and S. Nirenburg. 1998. An applied ontological semantic microtheory of adjective meaning for natural language processing. Machine Translation, 13:135-227.
S. Schulte im Walde and C. Brew. 2002. Inducing German semantic verb classes from purely syntactic subcategorisation information. In Proceedings of the 40th ACL, pages 223-230.
Roser Sanromà. 2003. Aspectes morfològics i sintàctics dels adjectius en català. Master's thesis, Universitat Pompeu Fabra. |
250,390,866 | Cleansing and expanding the HURTLEX(EL) with a multidimensional categorization of offensive words | We present a cleansed version of the Modern Greek branch of the multilingual lexicon HURTLEX. 1 The new version contains 737 offensive words. We worked bottom-up in two annotation rounds and developed detailed diagnostics of "offensiveness" by cross-classifying words on three dimensions: context, reference, and thematic domain. Our work reveals a wider spectrum of thematic domains concerning the study of offensive language than those identified in the Greek lexicographic literature as well as social and cultural aspects that are not included in the original HURTLEX categories. | [
244097184,
12477446,
9626793,
44132652,
227231283,
184483129,
8821211,
12245213
] | Cleansing and expanding the HURTLEX(EL) with a multidimensional categorization of offensive words
July 14, 2022
Vivian Stamou vivianstamou@gmail.com
Iakovi Alexiou iakovi.alexiou@gmail.com
Antigone Klimi antyklimi@gmail.com
Eleftheria Molou moloueleftheria@gmail.com
Alexandra Saivanidou
Stella Markantonatou stiliani.markantonatou@gmail.com
Institute for Language and Speech Processing, Athena R.C.
Faculty of Philology, University of Athens
Cleansing and expanding the HURTLEX(EL) with a multidimensional categorization of offensive words
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), July 14, 2022. DOI: 10.17605/OSF.IO/T5JEY
We present a cleansed version of the Modern Greek branch of the multilingual lexicon HURTLEX. 1 The new version contains 737 offensive words. We worked bottom-up in two annotation rounds and developed detailed diagnostics of "offensiveness" by cross-classifying words on three dimensions: context, reference, and thematic domain. Our work reveals a wider spectrum of thematic domains concerning the study of offensive language than those identified in the Greek lexicographic literature as well as social and cultural aspects that are not included in the original HURTLEX categories.
Introduction
The term offensive language (OL) is used to describe "hurtful, derogatory or obscene comments made by one person to another person" and the term hate speech (HS) to describe speech that is possibly harmful to disadvantaged social groups. 2 Although both legal and ethical aspects have been considered in an effort to differentiate between HS and OL, the line between the two terms is difficult to draw (Waseem et al. 2017) and they are often used interchangeably (Jacobs and Potter, 1998). In this work, terms in the domains of OL and HS are considered together.
Many of the studies referring to OL detection use vocabularies (Chen et al. 2012; Colla et al. 2020; Njagi et al. 2015; Pedersen 2019; Razavi et al. 2010) or patterns as a starting point and depend heavily on the selection of "seed words". Keyword-based approaches might be more effective in the case of explicit abuse according to the typology provided in Waseem et al. (2017). Also, there are strong indications that keyword- and lexicon-based approaches score better when there is a shortage of annotated corpora (Sazzed, 2021); Modern Greek (MG) is an underresourced language in terms of corpora annotated for OL. Resource development for OL detection is an issue in itself. Firstly, "offense" is a subjective notion and as a result, the social (in general) and personal characteristics of the annotators as well as the annotation method may put bias on the resources for OL detection (lists of offensive words, corpora). The so-called "descriptive" approaches to resource development try to represent various stances in the same resource while the so-called "prescriptive" approaches try to represent few or even only one stance. High interannotator scores seem to correlate with the prescriptive approach (Röttger et al., 2022). Furthermore, Schmidt and Wiegand (2017) point out that little is known about the creation process and the theoretical concepts underlying collections of offensive words. The context in which words occur also affects their offensive nature; for instance, Pelosi et al. (2017) observe that words collected in vulgar lexicons sometimes may be considered neutral or even positive. Our group represents female native speakers of MG with middle to high education aged 20-60; none belongs to marginal social groups. Our work is of the prescriptive persuasion. We did not make use of a pre-existing list of guidelines for recognising offensive words; instead we developed our own list of diagnostics with an iterative bottom-up procedure. We offer a cleansed version of the HURTLEX-(EL) lexicon containing 737 words after removing the wrong words and the words that were not considered offensive by all the annotators. Explanations whether the OL value of the words is context-dependent or not are offered, as well as descriptions of certain contexts that trigger the offensive meanings.
1 The lexicon is available here: https://osf.io/t5jey/?view_only=e910e28ea21e4895905aff2d0c0ac162 (archived under: DOI 10.17605/OSF.IO/T5JEY).
2 https://thelawdictionary.org/offensive-language/.
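A minimal sketch of the lexicon-based detection idea discussed above, using the three-way entry labels introduced later in this paper; the lexicon fragment and the "context" example are illustrative, not the released resource:

LEXICON = {
    "μαλάκας": "offensive",   # context-independent entry
    "πουτάνα": "offensive",
    "γουρούνι": "context",    # 'pig': assumed offensive only in some contexts
}

def flag(text):
    tokens = text.lower().split()
    hits = {t: LEXICON[t] for t in tokens if t in LEXICON}
    is_offensive = any(label == "offensive" for label in hits.values())
    needs_context = [t for t, label in hits.items() if label == "context"]
    return is_offensive, needs_context

print(flag("είσαι μαλάκας"))   # (True, [])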
OL identification studies and resources for Modern Greek
Pitenis et al. (2020) presented the first annotated MG dataset, the Offensive Greek Tweet Dataset (OGTD), which was extracted with a yet unpublished list of profane or obscene keywords (e.g., μαλάκας 'asshole', πουτάνα 'whore'). Tweets were marked as "offensive", "not offensive" or "spam". Tweets were labelled as "offensive" when they contained profane or obscene language or when they could be considered offensive on the basis of the context (Pitenis 2019:32-33). These general annotation guidelines were meant for texts. Lekea and Karampelas (2018) investigated HS in the context of terrorist argument drawing on an also unpublished list of 1265 words. Perifanos and Goutsos (2021) have combined visual and textual cues in a multimodal approach for HS detection on Twitter. 4004 tweets with the hashtag #απέλαση 'deportation' and the term λάθρο 'illegal' were annotated manually as hateful, xenophobic and racist by 3 annotators with the majority vote. Overall, the literature on Modern Greek OL detection does not provide annotated corpora representing a wide range of registers, sizeable OL lexica or annotation guidelines. In this context, and given that lexical resources are crucial for OL identification when few or no labelled corpora exist (Sazzed 2021), the Greek (EL) branch of HURTLEX (Bassignana et al., 2018) seemed a promising starting point.
HURTLEX is a domain-independent lexicon of offensive, aggressive and hateful words in 53 languages. Its kernel consists of ∼1000 manually selected words corresponding to 17 fine-grained thematic categories, which were enriched in a semi-automatic manner by drawing on the MultiWordNet synsets and BabelNet. [3,4] In HURTLEX, each lemma-sense pair is classified as "non-offensive", "neutral" or "offensive". The neutral cases were further divided into "not literally pejorative" and "negative connotation" (not a directly derogatory use). An agreement of 61% between two annotators was reported. The senses judged as non-offensive were removed, and two versions of the lexicon were obtained: one containing the translations of offensive senses, and one with the additional distinction concerning the neutral cases.
Notably, HURTLEX aims to support the development of resources for underrepresented languages (Bassignana et al. 2018:5).
OL has been discussed in the context of MG lexicography. Efthymiou et al. (2014) show that the classifications of negative terms as derogatory, offensive, slang and taboo words in two celebrated dictionaries of MG, the LNEG2 (Babiniotis, 2002) and the LKN (Triantafyllidis, 2007), do not converge. In Table 1, a tick in the sixth column denotes an overlap between the categories of OL words identified by Efthymiou et al. (2014) and our classification. Christopoulou (2012) and Xydopoulos (2012) discuss extensively experiments on measuring word offensiveness, but do not expand on how native speakers provide the relevant evaluations.
Working with HURTLEX-(EL)
Although filtering has been applied to prevent noise propagation in the semi-automatically enriched HURTLEX, its EL branch still includes synsets with no offensive meaning as well as incorrect terms. First, we manually removed clearly incorrect terms. Two linguists agreed that these included: (i) foreign words (384 words, either in English or French); (ii) combinations of Greek and foreign words (33 words), e.g., ευρασίας griffon, Lit. eurasia's griffon; (iii) about 194 meaningless phrases, e.g., πουτίγκα κεφάλι, Lit. pudding head; (iv) terms with morphological errors (23 words), e.g., φυσιογνωμονική instead of φυσιογνωμική 'physiognomic'; (v) agreement errors (46 words), e.g., σεξουαλικά επίθεση instead of σεξουαλική επίθεση 'sexual assault'; (vi) different inflectional forms of the same lemma (298 words); MG makes heavy use of inflectional morphology and HURTLEX seemed unable to filter out types belonging to the same inflectional paradigm; and (vii) archaic words (37 words), e.g., αιχμαλωτίζων 'capturer', an active present participle of a verb still used in MG, although these particular participles belong to older forms of the language. At this stage, annotators also removed words that they all considered "unoffensive" in MG, e.g., μοτσαρέλα 'mozzarella'. 2143 words (about 69% of the original HURTLEX-(EL) contents) were retained out of the 3114 original entries of HURTLEX-(EL).
Given the growing body of literature (Chakrabarty et al. 2019; Naseem et al. 2019; Ashraf et al. 2021) emphasizing the role of context in characterising a word as offensive, we adopted an annotation schema with three categories, following the distinction introduced in Vargas et al. (2021): offensive (context-independent), offensive (context-dependent), and non-offensive entries. Representative examples were provided for terms assigned the label "context-dependent".
Next, four independent annotators, all undergraduate linguists who offered volunteer work, assigned one of the three labels: context-independent, context-dependent, or non-offensive. General diagnostics of offensiveness, mainly concerning profane and obscene language, were offered as suggestions at this stage. The interannotator agreement score in this first step was 0.77 (Fleiss' kappa), which already indicates substantial agreement.
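For concreteness, Fleiss' kappa for a setup like this can be computed as follows; the small count matrix (items × categories, with four ratings per item) is fabricated for illustration and is not the authors' data.

import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j]: number of annotators assigning item i to category j."""
    n = counts.sum(axis=1)[0]                 # ratings per item (here: 4 annotators)
    p_j = counts.sum(axis=0) / counts.sum()   # overall category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Columns: context-independent, context-dependent, non-offensive.
ratings = np.array([[4, 0, 0],
                    [3, 1, 0],
                    [0, 4, 0],
                    [0, 1, 3],
                    [4, 0, 0]])
print(round(fleiss_kappa(ratings), 3))  # ~0.658 on this toy matrix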
In the final step, a somewhat different annotation procedure was adopted (see Poletto et al. 2017 for a similar approach). The four annotators were provided with a set of more detailed diagnostics of offensiveness, e.g.: "Names of animals that are stereotypically related with negative properties in the Greek culture, such as ugliness, e.g., φώκια 'seal', or dirt, e.g., γουρούνι 'pig', are offensively used when they target individuals." These diagnostics were not developed on the basis of the classification of offensive words in the original HURTLEX or in the MG lexica (Section 2); instead, we preferred to work bottom-up and develop our own diagnostics. The motivation for this decision was that the rich material in HURTLEX-(EL) would present more classification challenges than the material in Greek printed lexica, and that a Greek group's idea of offensiveness might not be identical to that of HURTLEX, a possibility that is recognised by the HURTLEX developers (Bassignana et al. 2018:5). The annotators were asked to consult these diagnostics when classifying the terms as un/offensive, but (i) they could propose changes such as deletions, additions and redefinitions of categories, and (ii) a term could fit more than one category. The annotators would meet with the group leaders to discuss the diagnostics. There were three rounds of this procedure, and eventually the system of thematic categories was developed into a set of diagnostics for recognising offensive words in Modern Greek; this system is presented in Section 4.
Lastly, the labels context-independent, context-dependent and non-offensive were reassigned independently by the annotators, and an interannotator agreement Fleiss' kappa score of 0.96 was obtained. We did not resort to majority vote, so only the 737 terms on which all four annotators agreed were included in the final lexicon; of these, 448 words were marked as "context-independent" and 289 as "context-dependent".
Annotation Diagnostics
The prose in this section should be read with constant reference to Table 1. The final annotation diagnostics scheme comprises:

1. 17 thematic categories of offensive words.
2. A tripartite distinction between offensive context-dependent, offensive context-independent and non-offensive words (Section 3). The role of the context is illustrated with the following examples: (i) the word φυτό 'plant' acquires derogatory meaning when it is attributed to a person ('nerd'); (ii) the word μαλάκας 'asshole' loses its offensive connotation when it is used to address someone in a friendly social context (Christopoulou, 2012; Xydopoulos, 2012).
3. A subtler specification of context, where words are classified by the entities that are the targets of the offensive meaning: individuals (indv.), groups, non-humans and events/properties/states (ESP). This is helpful, for instance, when individuals are assigned stereotypically negative characteristics of animals.
Below we give indicative terms and clarifications regarding the 17 identified thematic categories listed in Table 1:

1. Social class and hierarchy: Words implying stereotypical negative characteristics of the members of the respective social communities, e.g., χωριάτης 'peasant', νεόπλουτος 'nouveau riche', φτωχός 'poor', βαρώνος 'baron'.
2. Historical and social context: Historical events, movements or acts are assigned a negative characterization that is absent in their historical context but may have arisen because of their contemporary obsolete nature (Hamilton et al., 2016), e.g., σχολαστικισμός 'scholasticism', ηθικολόγος 'moralist', ακαδημαϊσμός 'academicism', μεσαιωνικός 'medieval'.
3. Crime and immoral behavior & the respective agents, e.g., δολοφονία 'murder' and δολοφόνος 'murderer', τρομοκρατία 'terrorism' and τρομοκράτης 'terrorist', ληστεία 'robbery', συκοφαντία 'slander' and σούφρωμα 'puckering'.
4. Religion: Religion is viewed as a behavior not congruent with the beliefs of the Greek population and its duly constituted religion (Moon, 2018), e.g., ειδωλολατρία 'idolatry', μασόνος 'mason'.
5. Nationality/ethnicity: Negative stereotypical ethnic characteristics are assigned to individuals of other nationalities and minorities, e.g., Εβραίος 'Jew', γύφτος 'gypsy' (Razavi et al. 2010; Warner and Hirschberg 2012). These words might be acceptable in a casual conversation if the speaker and the recipient belong to the same cultural group (Warner and Hirschberg, 2012).
6. Politics: In the context of democratic and liberal societies especially (Razavi et al., 2010), extreme political regimes or acts receive negative political evaluation, e.g., φασισμός 'fascism', χούντα 'junta', αποστάτης 'renegade'.
7. Professions of low prestige and sexual occupations, e.g., σκαφτιάς 'digger', παπαράτσι 'paparazzi', ιερόδουλη 'prostitute', ζιγκολό 'gigolo'.
8. Animals: Transfer of animal characteristics to humans, e.g., γουρούνι 'pig', γάιδαρος 'donkey', πρόβατα 'cattle', φίδι 'snake', τσιμπούρι 'tick' (Efthymiou et al., 2014).
9. Plants: Stereotypical negative attributes are assigned to humans regarding their cognitive skills and physical appearance, e.g., αγγούρι 'cucumber', πατάτες 'potatoes', φάβα 'fava bean', φυτό 'nerd'.
10. Characteristics of inanimates are transferred to humans, e.g., σκουπίδι 'trash', βαρίδι 'sinker'.
11. Sentiments/psychological states: e.g., τρελός 'crazy', δυστυχισμένος 'miserable', θυμωμένος 'mad', μανιασμένος 'raging'.
12. Behavior: People tend to criticize other people's manner based on social norms and their own way of perceiving reality, e.g., κακότροπος 'snappy', λεχρίτης 'asswipe', εξυπνάκιας 'smartass', κλόουν 'clown'.
13. Physical and cognitive disabilities / appearance: Assignment of specific physical or cognitive disabilities to humans, e.g., καμπούρης 'hunchback', τυφλός 'blind', χωλός 'lame', βλάκας 'idiot', κουτορνίθι 'dumb'.
14. Sexuality / gender identity: Some are official terms, e.g., ομοφυλόφιλος 'homosexual', λεσβία 'lesbian', τραβεστί 'tranny' (Narváez et al., 2009).
15. Taboo body parts are offensive independently of context, e.g., αρχίδια 'balls', κώλος 'ass', παπάρι 'whatchamacallit', ψωλή 'dick'. Scientific terms, e.g., χολή 'spleen', οπίσθια 'buttock', may be used offensively or as formal/scientific terminology (Crespo-Fernández, 2018).
16. Scientific or medical terms, e.g., ναρκισσισμός 'narcissism', μικρόβιο 'germ'.
17. Places related to offensive occupations, e.g., μπουρδέλο 'brothel'.

Figure 1 presents the distribution of words per diagnostic. Behavior is the most populated diagnostic, followed by Crime & immoral behavior and Animals.
Comparison to HURTLEX-(EL)
HURTLEX relies on a classification of OL words into 17 categories (Bassignana et al., 2018). We have defined our own diagnostics in a bottom-up iterative fashion (Section 3). The comparison of these diagnostics against the OL categories in the MG literature (sixth column of Table 1) justifies our expectation that HURTLEX would provide access to more thematic categories of offensive/derogatory words (note that all the OL categories defined in the MG literature feature among our diagnostics). Our 17 diagnostics are equal in number to the original HURTLEX categories, but they present, probably as expected, both similarities and differences.
Similarities were expected because we worked on the expansion of the original 17 HURTLEX categories. However, this similarity of our independently derived diagnostics, also with the lexicographic OL categories of Greek, indicates a certain stability of OL diagnostics across different social settings, namely those of HURTLEX, of Greek lexicography (which refers to the Greek society of at least 20 years ago), and the contemporary Greek social settings that our group represents. The deviation was also expected, because OL phenomena are influenced by regional and cultural patterns (Bassignana et al. 2018). In fact, it is mainly the historically and culturally marked diagnostics that deviate from the HURTLEX categories. The differences between HURTLEX's categories and our diagnostics are: (i) HURTLEX's category SVP ("words related to the seven deadly sins of the Christian tradition"): our diagnostic 4 reflects tendencies of Greek society and contains words referring to different religions or religious states; (ii) HURTLEX's "IS-social class/hierarchy": our diagnostic 1 also comprises terms denoting social and economic (dis)advantages, e.g., νεόπλουτος 'nouveau riche' and βαρώνος 'baron'; (iii) we included the new diagnostic 2 "Historical / social context", which contains terms particular to Greek history, e.g., κλέφτες 'armatole / militiamen' (Greek armed groups of the Ottoman occupation era); HURTLEX distributes these words over the categories "Potential negative connotations (QAS)", "Derogatory words (CDS)" and "Felonies and words related to crime and immoral behavior (RE)"; (iv) we added the new diagnostic 5, containing terms about nationalities/minorities within the Greek ethnicity and words reflecting social and cultural differentiation, e.g., 'Jew', 'gypsy'; (v) we included the words related to sexual orientation (HURTLEX's OM) in the single diagnostic 14 "Sexuality / gender identity".
Conclusions and future work
We have discussed our experience regarding the development of an openly available, cleansed version of the Greek branch of HURTLEX; in doing so, we have defined diagnostics of offensiveness that will be useful in future offensive word and text categorisation tasks.
This was the first step in a longer-term effort that aims to offer reasonable MG lexica and corpora for the task of OL detection. On the lexicon development front, we plan to study the effect of evaluative morphology on OL (Christopoulou, 2012; Stavrianaki, 2009), enlarge the lexicon semi-automatically by drawing on corpora (Wiegand et al., 2018), and test its coverage and contribution to OL identification tasks using texts from a variety of registers. On the corpora development front, we intend to use the lexicon in order to leverage corpora for OL detection across a variety of registers.
Figure 1: Word distribution per diagnostic.
Table 1: Presentation of the OL diagnostics & comparison to the study by Efthymiou et al. (2014).

| #  | Classes                                       | OL Target                     | Cont. Ind. | Cont. Dep. | Efthymiou (2014) |
|----|-----------------------------------------------|-------------------------------|------------|------------|------------------|
| 1  | Social class / hierarchy                      | indv., groups                 |            | +          |                  |
| 2  | Historical / social context                   | indv., groups, ESP            |            | +          |                  |
| 3  | Crime / immoral behavior                      | indv., groups, ESP            | +          | +          |                  |
| 4  | Religion                                      | indv., groups, ESP            |            | +          | ✓                |
| 5  | Nationality / ethnicity                       | indv., groups                 | +          | +          | ✓                |
| 6  | Politics                                      | indv., groups, ESP            | +          | +          | ✓                |
| 7  | Professions of low prestige / sexual occup.   | indv., groups, ESP            | +          | +          |                  |
| 8  | Animals                                       | indv., groups, non-human      |            | +          |                  |
| 9  | Plants                                        | indv., groups, non-human      |            | +          |                  |
| 10 | Characteristics of inanimates                 | indv., groups, non-human      |            | +          |                  |
| 11 | Sentiments, psychological states              | indv., ESP                    | +          | +          |                  |
| 12 | Behavior                                      | indv., groups, ESP            | +          | +          | ✓                |
| 13 | Physical / cognitive disabilities, appearance | indv., groups, non-humans     | +          | +          | ✓                |
| 14 | Sexuality / gender identity                   | indv., groups, ESP            | +          | +          | ✓                |
| 15 | Body parts                                    | indv., groups, ESP, non-human | +          | +          | ✓                |
| 16 | Scientific terms                              | indv., groups, ESP, non-human |            | +          |                  |
| 17 | Places / locations                            | indv., groups, ESP, non-human |            |            |                  |
3 https://multiwordnet.fbk.eu/english/home.php.
4 https://babelnet.org/.
References

Noman Ashraf, Arkaitz Zubiaga, and Alexander Gelbukh. 2021. Abusive language detection in YouTube comments leveraging replies as conversational context. PeerJ Computer Science, 7:e742.
George Babiniotis. 2002. Dictionary of Modern Greek Language. Center Lexicology.
Elisa Bassignana, Valerio Basile, and Viviana Patti. 2018. HurtLex: A multilingual lexicon of words to hurt. In Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-it).
Tuhin Chakrabarty, Kilol Gupta, and Smaranda Muresan. 2019. Pay "attention" to your context when classifying abusive language. In Proceedings of the Third Workshop on Abusive Language Online, pages 70-79, Florence, Italy. Association for Computational Linguistics.
Ying Chen, Yilu Zhou, Sencun Zhu, and Heng Xu. 2012. Detecting offensive language in social media to protect adolescent online safety. In 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing, pages 71-80.
Aikaterini Christopoulou. 2012. A lexicological analysis of slang vocabulary of Modern Greek. PhD dissertation, University of Patras.
Davide Colla, Tommaso Caselli, Valerio Basile, Jelena Mitrović, and Michael Granitzer. 2020. GruPaTo at SemEval-2020 task 12: Retraining mBERT on social media and fine-tuned offensive language models. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1546-1554, Barcelona (online). International Committee for Computational Linguistics.
Eliecer Crespo-Fernández. 2018. Taboos in speaking of sex and sexuality. In The Oxford Handbook of Taboo Words and Language, pages 41-60.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17, pages 512-515.
Angeliki Efthymiou, Zoe Gavriilidou, and Eleni Papadopoulou. 2014. Labeling of Derogatory Words in Modern Greek Dictionaries, pages 27-40. De Gruyter Open Poland.
William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Austin, Texas. Association for Computational Linguistics.
James B. Jacobs and Kimberly Potter. 1998. Hate Crimes: Criminal Law and Identity Politics. Oxford University Press USA.
Ioanna K. Lekea and Panagiotis Karampelas. 2018. Detecting hate speech within the terrorist argument: A Greek case. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM '18, pages 1084-1091. IEEE Press.
Richard Moon. 2018. Putting Faith in Hate: When Religion Is the Source or Target of Hate Speech. Cambridge University Press.
Rafael F. Narváez, Ilan H. Meyer, Robert M. Kertzner, Suzanne C. Ouellette, and Allegra R. Gordon. 2009. A qualitative approach to the intersection of sexual, ethnic, and gender identities. Identity, 9(1):63-86. PMID: 27683200.
Usman Naseem, Imran Razzak, and Ibrahim A. Hameed. 2019. Deep context-aware embedding for abusive and hate speech detection on Twitter. Aust. J. Intell. Inf. Process. Syst., 15:69-76.
Dennis Njagi, Z. Zuping, Damien Hanyurwimfura, and Jun Long. 2015. A lexicon-based approach for hate speech detection. International Journal of Multimedia and Ubiquitous Engineering, 10:215-230.
Ted Pedersen. 2019. Duluth at SemEval-2019 task 6: Lexical approaches to identify and categorize offensive tweets. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 593-599, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Serena Pelosi, Alessandro Maisto, Pierluigi Vitale, and Simonetta Vietri. 2017. Mining offensive language on social media. In CLiC-it.
Konstantinos Perifanos and Dionysis Goutsos. 2021. Multimodal hate speech detection in Greek social media. Multimodal Technologies and Interaction, 5(7).
Zeses Pitenis. 2019. Detecting Offensive Posts in Greek Social Media. Master thesis, University of Wolverhampton, School of Humanities.
Zesis Pitenis, Marcos Zampieri, and Tharindu Ranasinghe. 2020. Offensive language identification in Greek. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 5113-5119, Marseille, France. European Language Resources Association.
Fabio Poletto, Marco Antonio Stranisci, Manuela Sanguinetti, Viviana Patti, and Cristina Bosco. 2017. Hate speech annotation: Analysis of an Italian Twitter corpus. In CLiC-it.
Amir H. Razavi, Diana Inkpen, Sasha Uritsky, and Stan Matwin. 2010. Offensive language detection using multi-level classification. In Advances in Artificial Intelligence, pages 16-27, Berlin, Heidelberg. Springer Berlin Heidelberg.
Paul Röttger, Bertie Vidgen, Dirk Hovy, and Janet B. Pierrehumbert. 2022. Two contrasting data annotation paradigms for subjective NLP tasks. ArXiv 2112.07475.
Salim Sazzed. 2021. A lexicon for profane and obscene text identification in Bengali. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1289-1296, Held Online. INCOMA Ltd.
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10, Valencia, Spain. Association for Computational Linguistics.
Aikaterini Stavrianaki. 2009. −άκι and −ette: diminutive suffixes of Modern Greek and French. A comparative analysis and extensions regarding language learning and teaching. In Proceedings of International Conference 2008, European year of intercultural dialogue: Talking with languages-cultures, pages 759-769. Thessaloniki: Department of French Language, AUTH.
Manolis Triantafyllidis. 2007. Dictionary of Standard Modern Greek. Institute of Modern Greek Studies [Manolis Triandaphyllidis Foundation].
Francielle Alves Vargas, Isabelle Carvalho, and Fabiana Rodrigues de Goes. 2021. Identifying offensive expressions of opinion in context. ArXiv, abs/2104.12227.
William Warner and Julia Hirschberg. 2012. Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Media, pages 19-26, Montréal, Canada. Association for Computational Linguistics.
Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics.
Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words - a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056, New Orleans, Louisiana. Association for Computational Linguistics.
George Xydopoulos. 2012. Lexicology: An Introduction to the Analysis of Words and Dictionaries. Patakis Publishers, Athens. |
227,231,162 | Relation Specific Transformations for Open World Knowledge Graph Completion | We propose an open-world knowledge graph completion model that can be combined with common closed-world approaches (such as ComplEx) and enhance them to exploit text-based representations for entities unseen in training. Our model learns relation-specific transformation functions from text-based embedding space to graph-based embedding space, where the closed-world link prediction model can be applied. We demonstrate state-of-the-art results on common open-world benchmarks and show that our approach benefits from relation-specific transformation functions (RST), giving substantial improvements over a relation-agnostic approach. | [
5267356,
5378837,
2768038,
53223679
] | Relation Specific Transformations for Open World Knowledge Graph Completion
Haseeb Shah (Department of Computer Science, University of Alberta) hshah1@ualberta.ca
Johannes Villmow (DCSM Department, RheinMain University of Applied Sciences) johannes.villmow@hs-rm.de
Adrian Ulges (DCSM Department, RheinMain University of Applied Sciences) adrian.ulges@hs-rm.de

Proceedings of the Graph-based Methods for Natural Language Processing (TextGraphs), Barcelona, Spain, December 13, 2020.
We propose an open-world knowledge graph completion model that can be combined with common closed-world approaches (such as ComplEx) and enhance them to exploit text-based representations for entities unseen in training. Our model learns relation-specific transformation functions from text-based embedding space to graph-based embedding space, where the closed-world link prediction model can be applied. We demonstrate state-of-the-art results on common open-world benchmarks and show that our approach benefits from relation-specific transformation functions (RST), giving substantial improvements over a relation-agnostic approach.
Introduction
Knowledge graphs are an interesting source of information that can be exploited by retrieval (Dong et al., 2014) and question answering systems (Ferrucci et al., 2010). They are, however, known to be inherently sparse (Paulheim, 2017). To overcome this problem, knowledge graph completion (KGC) enriches graphs with new triples. While most existing approaches require all entities to be part of the training graph, for many applications it is of interest to infer knowledge about entities not present in the graph, i.e., open-world entities. Here, approaches usually assume that some text describing the target entity is given, from which an entity representation can be inferred, for example via text embeddings (Mikolov et al., 2013; Devlin et al., 2018). To the best of our knowledge, only a few such open-world KGC approaches have been proposed so far (Xie et al., 2016; Shi and Weninger, 2017a; Shah et al., 2019).
We suggest a simple yet effective approach towards open-world KGC:1 similar to Shah et al. (2019)'s OWE model, our approach enables existing KGC models to perform open-world prediction: given an open-world entity, its name and description are aggregated into a text-based entity representation, and a transformation from text-based embedding space to graph-based embedding space is learned, where the closed-world KGC model can be applied. However, while OWE's transformation only takes the open-world entity into account, our approach also utilizes the target triple's relation, such that specific mappings are learned for different relations such as birthplace, spouse, or located in (see Figure 1). We demonstrate that this extension comes with strong improvements, yielding state-of-the-art results on common open-world datasets.
Related Work
Interest in KGC has increased recently, with most of the work focusing on embedding-based approaches. Earlier approaches (Nickel et al., 2016) have recently been complemented by other models such as DistMult (Yang et al., 2014), TransR (Lin et al., 2015), ComplEx (Trouillon et al., 2016), ProjE (Shi and Weninger, 2017b) and RotatE (Sun et al., 2019). The above models estimate the probability of triples (head, rel, tail) using a scoring function φ(u_head, u_rel, u_tail), where u_x denotes the embedding of entity/relation x and is a real-valued or complex-valued vector. φ depends on the model and varies from simple translation (Bordes et al., 2013) over bilinear forms (Yang et al., 2014) to complex-valued forms (Trouillon et al., 2016). Training happens by learning to discriminate real triples from perturbed ones, typically by negative sampling (Nickel et al., 2016).

Figure 1: Our approach first trains a KGC model on the graph without using textual information (bottom left). For every annotated entity, we extract a text-based embedding v by aggregating the word embeddings for tokens in the entity's name and description (top left). A transformation Ψ is trained to map v to the space of graph-based embeddings (center). The learned mapping can then be applied to unknown entities, thus allowing the trained KGC model to be applied (right).
While the knowledge graph completion models described above leverage only the structure of the graph, some approaches combine text with graph information, typically using embeddings that represent terms, sentences or documents (Goldberg, 2016). Embeddings are usually derived from language models, either in static form (Mikolov et al., 2013) or by contextualized models (Devlin et al., 2018). KGC models can employ such textual information for entities scarcely linked in the graph (Gesese et al., 2019). Most approaches combine a textual embedding with structural KGC approaches, either by initializing structural embeddings from text (Socher et al., 2013; Wang and Li, 2016) or by interpolating between textual and structural embeddings (Xu et al., 2017), sometimes with a joint loss (Toutanova and Chen, 2015) and gating mechanisms (Kristiadi et al., 2019). Others perform fine-tuning for KGC based on textual labels of the entities and relations (Yao et al., 2019).
Only a few other works have addressed open-world KGC so far. Xie et al. (2016) proposed DKRL, with joint training of graph-based embeddings (TransE) and text-based embeddings, while regularizing both types of embeddings to be aligned using an additional loss. ConMask (Shi and Weninger, 2017a) is a text-centric approach where text-based embeddings for entities and relations are derived by an attention model over names and descriptions. Closest to our work is OWE (Shah et al., 2019), which trains graph and text embeddings independently and then learns a mapping between the two embedding spaces (more details are provided in Section 3). While OWE's mapping only takes entities into account, our extended model's mapping is learned given both the entity and relation when predicting a triple. Orthogonal to our work, WOWE (Zhou et al., 2020) extends the OWE approach by replacing the averaging aggregator with a weighted attention mechanism, and can be combined with our approach.
Approach
Given a knowledge graph G ⊂ E × R × E containing triples (h, r, t), where E and R denote finite sets of entities and relations, KGC models can perform tail prediction as follows: given a pair of head and relation (h, r), the tail is estimated as

t^* = \arg\max_{t \in E} \varphi(u_h, u_r, u_t) \quad (1)

where u_h, u_r, u_t are entity/relation embeddings and φ is a model-specific scoring function. Note that this approach - and our extension - can be applied for head prediction accordingly.
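A minimal sketch of this ranking step (Equation 1) with a ComplEx-style scoring function; the embedding matrices here are randomly initialized placeholders, not trained values.

import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 100, 10, 16

# ComplEx represents entities/relations as complex vectors (real + imaginary parts).
E_re, E_im = rng.normal(size=(num_entities, dim)), rng.normal(size=(num_entities, dim))
R_re, R_im = rng.normal(size=(num_relations, dim)), rng.normal(size=(num_relations, dim))

def complex_score(h_re, h_im, r_re, r_im, t_re, t_im):
    """ComplEx scoring function phi(u_h, u_r, u_t) = Re(<u_h, u_r, conj(u_t)>)."""
    return ((h_re * r_re * t_re).sum(-1)
            + (h_im * r_re * t_im).sum(-1)
            + (h_re * r_im * t_im).sum(-1)
            - (h_im * r_im * t_re).sum(-1))

def predict_tail(h, r):
    """Equation 1: rank all entities as tail candidates for (h, r, ?)."""
    scores = complex_score(E_re[h], E_im[h], R_re[r], R_im[r], E_re, E_im)
    return int(np.argmax(scores))

print(predict_tail(h=3, r=2))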
We address an open-world setting, where the triple's head is not part of the knowledge graph, i.e., h ∉ E. However, h is assumed to come with a textual description. Our approach (Figure 1) transforms this text into a token sequence W_h = (w_1, w_2, ..., w_n), from which a sequence of embeddings (v_{w_1}, v_{w_2}, ..., v_{w_n}) is derived using a textual embedding model pre-trained on a large text corpus. We experimented with BERT (Devlin et al., 2018) but did not achieve major improvements over static embeddings, likely because descriptions in KGC datasets tend to be short. Instead, we use Wikipedia2Vec (Yamada et al., 2016), which contains phrase embeddings for entity names like "Sheila Walsh". If no phrase embedding is available, we use token-wise embeddings instead. If no embedding is available for a token, we use a vector of zeros as an "unknown" token. The resulting sequence of embeddings is aggregated by average pooling to obtain a single text-based embedding vector of the head entity, v_h ∈ R^d.
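The aggregation step might look as follows; this sketch treats the embedding model as a plain name-to-vector dictionary rather than the actual Wikipedia2Vec API, and the tiny vocabulary is invented for illustration.

import numpy as np

DIM = 4
# Stand-in for a pretrained embedding table (phrase and word entries).
embeddings = {
    "Sheila Walsh": np.ones(DIM),          # phrase embedding for the entity name
    "scottish": np.full(DIM, 0.5),
    "vocalist": np.full(DIM, -0.5),
}

def text_embedding(name: str, description: str) -> np.ndarray:
    """Average-pool embeddings of the entity name and description tokens (v_h)."""
    vectors = []
    # Prefer a phrase embedding for the full entity name, fall back to its tokens.
    if name in embeddings:
        vectors.append(embeddings[name])
    else:
        vectors.extend(embeddings.get(tok, np.zeros(DIM)) for tok in name.lower().split())
    # Unknown tokens contribute a zero vector.
    vectors.extend(embeddings.get(tok, np.zeros(DIM)) for tok in description.lower().split())
    return np.mean(vectors, axis=0)

v_h = text_embedding("Sheila Walsh", "Scottish contemporary vocalist and songwriter")
print(v_h)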
The key step of our approach is to learn transformation functions Ψ_r that align the text-based and graph-based embedding spaces such that Ψ_r(v_h) ≈ u_h. When applying this mapping, the open-world entity's text-based embedding v_h is transformed into a graph-based proxy embedding Ψ_r(v_h). Triples with h are scored by applying the KGC model from Equation 1 to Ψ_r(v_h):

t^* = \arg\max_{t \in E} \varphi(\Psi_r(v_h), u_r, u_t) \quad (2)
Like OWE (Shah et al., 2019), we use an affine transformation Ψ_r(v) = A_r · v + b_r. The focus of this paper is to deal with relation specificity, for which we propose the following two strategies:
Relation Specific Transformation (RST). While the OWE model consists of a global transformation function Ψ, our proposed RST approach trains a separate transformation function Ψ_r per relation: our hypothesis is that when predicting a tail (h, r, ?), including information on the relation r may be beneficial. Consider Table 1, where the transformation Ψ_birthPlace maps v_{Sheila Walsh} to a completely different region of the graph embedding space than Ψ_recordLabel. Therefore, we learn a separate transformation Ψ_r for each relation r ∈ R, containing a separate learnable matrix A_r and vector b_r. For a fair comparison with OWE, we use the ComplEx KGC model (Trouillon et al., 2016) in our experiments and use separate parameters for the real and imaginary parts.
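One possible way to parameterize the per-relation affine maps, sketched in PyTorch; the module and variable names are ours, not from the authors' released code.

import torch
import torch.nn as nn

class RST(nn.Module):
    """One affine transformation Psi_r(v) = A_r @ v + b_r per relation."""

    def __init__(self, num_relations: int, text_dim: int, graph_dim: int):
        super().__init__()
        # A stack of relation-specific matrices A_r and bias vectors b_r.
        self.A = nn.Parameter(torch.randn(num_relations, graph_dim, text_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(num_relations, graph_dim))

    def forward(self, v: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # v: (batch, text_dim), r: (batch,) relation ids -> (batch, graph_dim)
        return torch.einsum("bij,bj->bi", self.A[r], v) + self.b[r]

# For ComplEx, two such modules can be used: one mapping into the real part
# of the graph embedding and one into the imaginary part.
psi_re = RST(num_relations=235, text_dim=300, graph_dim=300)
proxy = psi_re(torch.randn(8, 300), torch.randint(0, 235, (8,)))
print(proxy.shape)  # torch.Size([8, 300])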
Relation Clustering Transformation (RCT). Our second approach - Relation Clustering Transformation (RCT) - aggregates relations into clusters and then learns a separate transformation Ψ_C for each cluster C. We use an agglomerative clustering approach (pseudo-code in Figure 2), which first initializes each cluster with a single relation r ∈ R and conducts η fusion steps, each joining "similar" clusters: let t(C) denote the set of tails attached to any relation r in C. Clusters C, C' are fused if the number of shared tails, divided by the size of the smaller cluster, exceeds a threshold S:

\frac{|t(C) \cap t(C')|}{\min(|t(C)|, |t(C')|)} > S \quad (3)
Training
Our text embeddings v and graph embeddings u are based on pretrained models, such that the only parameters Θ to be learned are the relation matrices A_r and vectors b_r. First, a KGC model is trained on the full graph G, obtaining graph-based entity embeddings u_1, ..., u_n. We then choose the subset E_t of all entities in the graph with textual descriptions, and define G_t := G ∩ (E_t × R × E) as all triples containing heads with text. We then minimize the following loss:

L(\Theta) = \sum_{(h,r,t) \in G_t} \mathrm{dist}(\Psi_r(v_h), u_h) \quad (4)
with dist(·) referring either to the Euclidean or the cosine distance between the graph- and text-based head embeddings. As we use ComplEx, where embeddings u_h contain real and imaginary parts, the above loss is summed over both parts. Since the number of entities in the datasets used is limited and overfitting is expected to be an issue, we fine-tune neither the graph embeddings nor the text embeddings.
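A compact sketch of this objective (Equation 4) in PyTorch, assuming per-relation affine maps as above; the dataset shapes and the cosine-distance choice are illustrative, and only one (real) part is shown for brevity.

import torch
import torch.nn.functional as F

num_rel, text_dim, graph_dim = 235, 300, 300
# Relation-specific parameters Theta = {A_r, b_r}.
A = torch.nn.Parameter(torch.randn(num_rel, graph_dim, text_dim) * 0.01)
b = torch.nn.Parameter(torch.zeros(num_rel, graph_dim))

def loss(v_h, r, u_h, use_cosine=True):
    """L(Theta) = sum over (h, r, t) in G_t of dist(Psi_r(v_h), u_h)."""
    proxy = torch.einsum("bij,bj->bi", A[r], v_h) + b[r]   # Psi_r(v_h)
    if use_cosine:
        return (1 - F.cosine_similarity(proxy, u_h, dim=-1)).sum()
    return (proxy - u_h).norm(dim=-1).sum()                # Euclidean distance

opt = torch.optim.Adam([A, b], lr=1e-3)
v_h = torch.randn(32, text_dim)                # pretrained text embeddings (frozen)
r = torch.randint(0, num_rel, (32,))           # relation ids of the triples
u_h = torch.randn(32, graph_dim)               # pretrained graph embeddings (frozen)
opt.zero_grad(); loss(v_h, r, u_h).backward(); opt.step()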
Evaluation
We evaluate our model on FB20k (Xie et al., 2016), DBPedia50k (Shi and Weninger, 2017a) and FB15k-237-OWE (Shah et al., 2019), and compare our model with the state of the art on the task of open-world tail prediction. Results are shown in Table 2. Due to the lack of an open-world validation set for FB20k, we remove a random 10% of the test triples and use them as a validation set. Hyperparameters were optimized using a grid search (details in the appendix). We use the same evaluation criteria as Shah et al. (2019), and evaluate our results only with ComplEx to provide a fair comparison with the OWE models. For training the closed-world KGC models, we utilize OpenKE (Han et al., 2018). Additionally, we apply the target filtering approach (Shi and Weninger, 2017a) to any results reported. The Target Filtering Baseline is evaluated by assigning random scores to all targets that pass the target filtering criterion.
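For reference, a sketch of how rank-based metrics such as MRR and Hits@k are typically computed from the model's scores over an (already target-filtered) candidate set; the candidate scores and gold indices here are placeholders.

import numpy as np

def rank_metrics(score_rows, gold_tails, k=10):
    """Compute MRR and Hits@k from per-query candidate scores.

    score_rows[i] holds scores for the candidate set of query i;
    gold_tails[i] is the index of the true tail within that set.
    """
    ranks = []
    for scores, gold in zip(score_rows, gold_tails):
        order = np.argsort(-np.asarray(scores))          # best score first
        ranks.append(int(np.where(order == gold)[0][0]) + 1)
    ranks = np.asarray(ranks)
    return {"MRR": float((1.0 / ranks).mean()),
            "Hits@%d" % k: float((ranks <= k).mean())}

print(rank_metrics([[0.9, 0.1, 0.3], [0.2, 0.8]], gold_tails=[0, 0], k=1))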
Results
We observe that ComplEx-RST outperforms all other approaches - including OWE - by a margin on all metrics except Hits@10 on DBPedia50k. ComplEx-RCT (relation clusters) performs competitively with ComplEx-RST (one mapping per relation), while the number of transformation functions is reduced from 351 to 279 in the case of DBPedia50k, from 235 to 114 in the case of FB15k-237-OWE, and from 1341 to 522 in the case of FB20k. The values of η and S were optimized to achieve the greatest reduction in the number of clusters at a negligible cost in accuracy. We also note that the improvement achieved by utilizing the relation information is higher on DBPedia50k and FB20k, both of which use very long descriptions compared to FB15k-237-OWE. We believe that this is because the longer descriptions often contain more pieces of information relevant to the relation, which the relation-specific transformations are able to extract and utilize. Finally, in Figure 2 (right) we investigate which relations benefit most from relation-specific mappings: each point represents a relation in FB15k-237-OWE. We observe that points to the left (rare relations) tend to benefit more strongly from learning a transformation of their own (RST). Those scarce relations seem to be underrepresented in the training data and - accordingly - in the global mapping.
Input:  Training set G, similarity factor S ∈ R, iterations η ∈ Z
Output: Clusters C = {c_1, c_2, ..., c_m}

for i ← 1 ... |R| do
    c_i ← {r_i}

Procedure make_clusters(C, η):
    if η = 0 then return C
    for i ← 1 ... |C| do
        t(c_i) ← get_tails(c_i, G)
        for j ← i+1 ... |C| do
            t(c_j) ← get_tails(c_j, G)
            if |t(c_i) ∩ t(c_j)| / min(|t(c_i)|, |t(c_j)|) > S then
                c_i ← c_i ∪ c_j
                delete c_j from C
                break
    make_clusters(C, η − 1)
end

make_clusters({c_1 ... c_|R|}, η)
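A runnable Python rendering of this pseudocode; the function names and the toy triple set are ours.

def get_tails(cluster, triples):
    """t(C): all tails attached to any relation in the cluster."""
    return {t for (h, r, t) in triples if r in cluster}

def make_clusters(triples, relations, S=0.5, eta=3):
    """Agglomerative relation clustering from Figure 2 (RCT)."""
    clusters = [{r} for r in relations]
    for _ in range(eta):                       # eta fusion passes
        i = 0
        while i < len(clusters):
            ti = get_tails(clusters[i], triples)
            for j in range(i + 1, len(clusters)):
                tj = get_tails(clusters[j], triples)
                if not ti or not tj:           # guard against empty tail sets
                    continue
                shared = len(ti & tj) / min(len(ti), len(tj))
                if shared > S:                 # Equation 3: fuse similar clusters
                    clusters[i] |= clusters[j]
                    del clusters[j]
                    break
            i += 1
    return clusters

triples = [("a", "r1", "x"), ("b", "r2", "x"), ("c", "r3", "y")]
print(make_clusters(triples, ["r1", "r2", "r3"], S=0.5, eta=2))
# -> [{'r1', 'r2'}, {'r3'}]: r1 and r2 share tail x, so they are fused.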
Conclusion
We have proposed a simple approach to incorporate relation-specific information into open-world knowledge graph completion. Our approach achieves state-of-the-art results on common open-world benchmarks and offers strong improvements over relation-agnostic state-of-the-art methods. An interesting direction for future work will be to adapt our model to longer textual inputs, e.g., by using attention to enable the model to select relevant passages (similar to ConMask (Shi and Weninger, 2017a)).
Figure 2: The algorithm (left) outlines our implementation of the RCT approach in pseudo-code. The plot (right) visualizes the improvement in MRR of ComplEx-RST compared to ComplEx-OWE on the y-axis. Each point is a relation in FB15k-237-OWE. The size of the point represents the number of test triples containing the relation. The x-axis shows the number of training triples containing the relation. We see that this improvement tends to be higher for scarce relations (left on the x-axis).
Table 1: Relation-specific transformations map entities to corresponding regions in embedding space: when mapping Sheila Walsh with the birthplace relation, the resulting embedding lies in a cluster of British people. When mapping with the recordLabel relation, the resulting embedding lies close to pop albums (which also have a recordLabel relation).
Table 2: Comparison with other open-world KGC models on tail prediction. The Relation Specific Transformation (ComplEx-RST) performs best, particularly on the DBPedia50k and FB20k datasets with long textual descriptions († results reported by Shah et al. (2019)).
1 We make our code available under https://github.com/haseebs/RST-OWE

This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
Acknowledgements

This work was funded by the German Federal Ministry of Education and Research (Program FHprofUnt, Project DeepCA (13FH011PX6)).
References

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787-2795.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pages 601-610, New York, NY, USA. Association for Computing Machinery.
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59-79.
Genet Asefa Gesese, Russa Biswas, Mehwish Alam, and Harald Sack. 2019. A survey on knowledge graph embeddings with literals: Which model links better literal-ly?
Yoav Goldberg. 2016. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research, 57(1):345-420.
Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. OpenKE: An open toolkit for knowledge embedding. In Proceedings of EMNLP.
Agustinus Kristiadi, Mohammad Asif Khan, Denis Lukovnikov, Jens Lehmann, and Asja Fischer. 2019. Incorporating literals into knowledge graph embeddings. In The Semantic Web - ISWC 2019 - 18th International Semantic Web Conference, Auckland, New Zealand, October 26-30, 2019, Proceedings, Part I, pages 347-363.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546.
Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33.
Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic Web, 8(3):489-508.
Haseeb Shah, Johannes Villmow, Adrian Ulges, Ulrich Schwanecke, and Faisal Shafait. 2019. An open-world extension to knowledge graph completion models. In The Thirty-Third AAAI Conference on Artificial Intelligence, pages 3044-3051.
Baoxu Shi and Tim Weninger. 2017a. Open-world knowledge graph completion. CoRR, abs/1711.03438.
Baoxu Shi and Tim Weninger. 2017b. ProjE: Embedding projection for knowledge graph completion. In Proc. AAAI, pages 1236-1242.
Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1, NIPS'13, pages 926-934, USA. Curran Associates Inc.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197.
Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In 3rd Workshop on Continuous Vector Space Models and Their Compositionality.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071-2080.
Zhigang Wang and Juanzi Li. 2016. Text-enhanced representation learning for knowledge graph. In Proc. International Joint Conference on Artificial Intelligence, pages 1293-1299.
Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proc. AAAI, pages 2659-2665.
Jiacheng Xu, Xipeng Qiu, Kan Chen, and Xuanjing Huang. 2017. Knowledge graph representation with jointly structural and textual encoding. In Proc. International Joint Conference on Artificial Intelligence, pages 1318-1324.
Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proc. SIGNLL Conference on Computational Natural Language Learning, pages 250-259, Berlin, Germany.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575.
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion.
Yueyang Zhou, Shumin Shi, and Heyan Huang. 2020. Weighted aggregator for the open-world knowledge graph completion. In Data Science, pages 283-291, Singapore. Springer Singapore. |
16,671,536 | Why discourse affects speakers' choice of referring expressions | We propose a language production model that uses dynamic discourse information to account for speakers' choices of referring expressions. Our model extends previous rational speech act models (Frank and Goodman, 2012) to more naturally distributed linguistic data, instead of assuming a controlled experimental setting. Simulations show a close match between speakers' utterances and model predictions, indicating that speakers' behavior can be modeled in a principled way by considering the probabilities of referents in the discourse and the information conveyed by each word. | [
11825762,
3933777,
6399480,
2065400,
7983519
] | Why discourse affects speakers' choice of referring expressions
Naho Orita, Eliana Vornov (evornov@umd.edu), Naomi H Feldman, and Hal Daumé III

Graduate School of Information Sciences, Tohoku University; Computer Science and Linguistics, Computer Science, and UMIACS, University of Maryland

Why discourse affects speakers' choice of referring expressions

Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China, July 26-31, 2015. Copyright Association for Computational Linguistics, 2015.
We propose a language production model that uses dynamic discourse information to account for speakers' choices of referring expressions. Our model extends previous rational speech act models (Frank and Goodman, 2012) to more naturally distributed linguistic data, instead of assuming a controlled experimental setting. Simulations show a close match between speakers' utterances and model predictions, indicating that speakers' behavior can be modeled in a principled way by considering the probabilities of referents in the discourse and the information conveyed by each word.
Introduction
Discourse information plays an important role in various aspects of linguistic processing, such as predictions about upcoming words (Nieuwland and Van Berkum, 2006) and scalar implicature processing (Breheny et al., 2006). The relationship between discourse information and speakers' choices of referring expression is one of the most studied problems. Speakers' choices of referring expressions have long been thought to depend on the salience of entities in the discourse (Givón, 1983). For example, speakers normally do not choose a pronoun to refer to a new entity in the discourse, but are more likely to use pronouns for referents that have been referred to earlier in the discourse. A number of grammatical, semantic, and distributional factors related to salience have been found to influence choices of referring expressions (Arnold, 2008). While the relationship between discourse salience and speakers' choices of referring expressions is well known, there is not yet a formal account of why this relationship exists.
In recent years, a number of formal models have been proposed to capture inferences between speakers and listeners in the context of Gricean pragmatics (Grice, 1975;Frank and Goodman, 2012). These models take a game theoretic approach in which speakers optimize productions to convey information for listeners, and listeners infer meaning based on speakers' likely productions. These models have been argued to account for human communication (Jager, 2007;Frank and Goodman, 2012;Bergen et al., 2012a;Smith et al., 2013), and studies report that they robustly predict various linguistic phenomena in experimental settings (Goodman and Stuhlmüller, 2013;Degen et al., 2013;Kao et al., 2014;Nordmeyer and Frank, 2014). However, these models have not yet been applied to language produced outside of the laboratory, nor have they incorporated measures of discourse salience that can be computed over corpora.
In this paper, we propose a probabilistic model to explain speakers' choices of referring expressions based on discourse salience. Our model extends the rational speech act model from Frank and Goodman (2012) to incorporate updates to listeners' beliefs as discourse proceeds. The model predicts that a speaker's choice of referring expressions should depend directly on the amount of information that each word carries in the discourse. Simulations probe the contribution of each model component and show that the model can predict speakers' pronominalization in a corpus. These results suggest that this model formalizes underlying principles that account for speakers' choices of referring expressions.
The paper is organized as follows. Section 2 reviews relevant studies on choices of referring expressions. Section 3 describes the details of our model. Section 4 describes the data, preprocessing and annotation procedure. Section 5 presents simulation results. Section 6 summarizes this study and discusses implications and future directions.
2 Relevant Work

2.1 Discourse salience

Speakers' choices of referring expressions have long been an object of study. Pronominalization has been examined particularly often in both theoretical and experimental studies. Discourse theories predict that speakers use pronouns when they think that a referent is salient in the discourse (Givón, 1983; Ariel, 1990; Gundel et al., 1993; Grosz et al., 1995), where salience of the referent is influenced by various factors such as grammatical position (Brennan, 1995), recency (Chafe, 1994), topicality (Arnold, 1998), competitors (Fukumura et al., 2011), visual salience (Vogels et al., 2013b), and so on.
Discourse theories have characterized the link between referring expressions and discourse salience by stipulating constructs such as a scale of topicality (Givón, 1983), accessibility hierarchy (Ariel, 1990), or implicational hierarchy (Gundel et al., 1993). All of these assume fixed form-salience correspondences in that a certain referring expression encodes a certain degree of salience. However, it is not clear how this form-salience mapping holds nor why it should be.
There is also a rich body of research that points to the importance of production cost (Rohde et al., 2012;Bergen et al., 2012b;Degen et al., 2013) and listener models (Bard et al., 2004;Van der Wege, 2009;Galati and Brennan, 2010;Fukumura and van Gompel, 2012) in language production. These studies suggest that only considering discourse salience of the referent may not precisely capture speakers' choices of referring expressions, and it is necessary to examine discourse salience in relation to these other factors.
Formal models
Computational models relevant to speakers' choices of referring expressions have been proposed, but there is a gap between questions that previous models have addressed and the questions that we have raised above. Grüning and Kibrik (2005) and Khudyakova et al. (2011) examine the significance of various factors that might influence choices of referring expressions by using machine learning models such as neural networks, logistic regression and decision trees. Although these models qualitatively show some significant factors, they are data-driven rather than being explanatory, and have not focused on why and how these factors result in the observed referring choices.
Formal models that go beyond identifying superficial factors focus on only pronouns rather than accounting for speakers' word choices per se. For example, Kehler et al. (2008) formalize a relationship between pronoun comprehension and production using Bayes' rule to account for comprehender's semantic bias in experimental data. Rij et al. (2013) use ACT-R (Anderson, 2007) to examine the effects of working memory load in pronoun interpretation. These models show how certain factors influence pronoun production/interpretation, but it is not clear how these models would predict speakers' choices of referring expressions.
Relevant formal models in computational linguistics include Centering theory (Grosz et al., 1995;Poesio et al., 2004) and Referring Expression Generation (Krahmer and Van Deemter, 2012). These models propose deterministic constraints governing when pronouns are preferred in local discourse, but it is not clear how these would account for speakers' choices of referring expressions, nor it is clear why there should be such deterministic constraints.
Uniform Information Density
One potential formal explanation for the relation between discourse salience and speakers' choices of referring expressions is the Uniform Information Density hypothesis (UID) (Levy and Jaeger, 2007;Tily and Piantadosi, 2009;Jaeger, 2010). UID states that speakers prefer to smooth the information density distribution of their utterances over time to achieve optimal communication. This theory predicts that speakers should use pronouns instead of longer forms (e.g., the president) when a referent is predictable in the context, whereas they should use longer forms for unpredictable referents that carry more information (Jaeger, 2010). Tily and Piantadosi (2009) empirically examined the relationship between predictability of a referent and choice of referring expressions. They found that predictability is a significant predictor of writers' choices of referring expressions, in that pronouns are used when a referent is predictable.
While these results appear to support UID, there are several inconsistencies with previous UID accounts. Information content of words has been estimated using an n-gram language model (Levy and Jaeger, 2007), a verb's subcategorization frequency (Jaeger, 2010), and so on, whereas here the information content is that of referents with respect to discourse salience. In addition, selecting between a pronoun and a more specified referring expression involves deciding how much information to convey, whereas previous applications of UID (Levy and Jaeger, 2007) have been concerned with deciding between different ways of expressing the same information content. We show in the next section that we can derive predictions about referring expressions directly from a model of language production.
Summary
Previous linguistic studies have focused on identifying factors that might influence choices of referring expressions. However, it is not clear from this previous work how and why these factors result in the observed patterns of referring expressions. Where formal models relevant to this topic do exist, they have not been built to explain why there is a relation between discourse salience and speakers' choices of referring expressions. Even UID, which relates predictability to word length, is not set up to account for the choice between words that vary in their information content.
In the next section, we propose a speaker model that formalizes the relation between discourse salience and speakers' choices of referring expressions, considering production cost and speakers' inference about listeners in a principled and explanatory way.
3 Speaker model

3.1 Rational speaker-listener model
We adopt the rational speaker-listener model from Frank and Goodman (2012) and extend this model to predict speakers' choices of referring expressions using discourse information.
The main idea of Frank and Goodman's model is that a rational pragmatic listener uses Bayesian inference to infer the speaker's intended referent $r_s$ given the word $w$, their vocabulary (e.g., 'blue', 'circle'), and shared context that consists of a set of objects $O$ (e.g., visual access to object referents) as in (1), assuming that a speaker has chosen the word informatively.
$$P(r_s \mid w, O) = \frac{P_S(w \mid r_s, O)\, P(r_s)}{\sum_{r' \in O} P_S(w \mid r', O)\, P(r')} \quad (1)$$
While our work does not make use of this pragmatic listener, it does build on the speaker model assumed by the pragmatic listener. This speaker model (the likelihood term in the listener model) is defined using an exponentiated utility function as in (2).
$$P_S(w \mid r_s, O) \propto e^{\alpha U(w;\, r_s, O)} \quad (2)$$

The utility $U(w; r_s, O)$ is defined as $I(w; r_s, O) - D(w)$, where $I(w; r_s, O)$ represents the informativeness of word $w$ (quantified as surprisal) and $D(w)$ represents its speech cost. If a listener interprets word $w$ literally and the cost $D(w)$ is constant, the exponentiated utility function can be reduced to (3), where $|w|$ denotes the number of referents that the word $w$ can be used to refer to.
$$P_S(w \mid r_s, O) \propto \frac{1}{|w|} \quad (3)$$
Thus, the speaker model chooses a word based on its specificity. We show in the next section that this corresponds to a speaker who is optimizing informativeness for a listener with uniform beliefs about what will be referred to in the discourse. The assumption of uniform discourse salience works well in a simple language game where there are a limited number of referents that have roughly equal salience, but we show that a model that lacks a sophisticated notion of discourse falls short in more realistic settings.
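To make this concrete, here is a minimal Python sketch of the literal speaker in a small reference game; the context, lexicon, and the uniform-salience assumption are illustrative inventions, not data from the paper.

```python
# Minimal sketch of the Frank & Goodman (2012) literal speaker:
# P_S(w | r, O) is proportional to 1/|w|, where |w| is the number of
# objects in the context O that word w can truthfully describe.
# The context and lexicon below are hypothetical examples.

CONTEXT = ["blue_circle", "blue_square", "green_square"]

# Word extensions: which objects each word can refer to.
LEXICON = {
    "blue":   {"blue_circle", "blue_square"},
    "green":  {"green_square"},
    "circle": {"blue_circle"},
    "square": {"blue_square", "green_square"},
}

def literal_speaker(referent):
    """Return P_S(w | referent) under uniform salience and constant cost."""
    # Words that truthfully apply to the intended referent.
    true_words = [w for w, ext in LEXICON.items() if referent in ext]
    # Each candidate word is scored by its specificity 1/|w|.
    scores = {w: 1.0 / len(LEXICON[w]) for w in true_words}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

print(literal_speaker("blue_circle"))
# 'circle' (|w| = 1) gets probability 2/3, 'blue' (|w| = 2) gets 1/3.
```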
Incorporating discourse salience
To extend Frank and Goodman's model to a natural linguistic situation, we assume that the speaker estimates the listener's interpretation of a word (or referring expression) $w$ based on discourse information. We extend the speaker model from (3) by assuming that a speaker $S$ chooses $w$ to optimize a listener's belief in the speaker's intended referent $r$ relative to the speaker's own speech cost $C_w$. This cost is another factor in the speaker model, roughly corresponding to utterance complexity such as word length. 1
$$P_S(w \mid r) \propto P_L(r \mid w) \cdot \frac{1}{C_w} \quad (4)$$
The term $P_L(r \mid w)$ in (4) represents the informativeness of word $w$: the speaker chooses the $w$ that most helps a listener $L$ to infer referent $r$. The term $C_w$ in (4) is a cost function: the speaker chooses the $w$ that is least costly to speak. The speaker's listener model, $P_L(r \mid w)$, infers the referent $r$ that is referred to by word $w$ according to Bayes' rule as in (5).
$$P_L(r \mid w) = \frac{P(w \mid r)\, P(r)}{\sum_{r'} P(w \mid r')\, P(r')} \quad (5)$$
The first term in the numerator, $P(w \mid r)$, is a word probability: the listener in the speaker's mind guesses how likely the speaker would be to use $w$ to refer to $r$. The second term in the numerator, $P(r)$, is the discourse salience (or predictability) of referent $r$. The denominator $\sum_{r'} P(w \mid r')\, P(r')$ is a sum over potential referents $r'$ that could be referred to by word $w$. The terms in this sum are non-zero only for referents that are compatible with the meaning of the word. If there are many potential referents that could be referred to by word $w$, that word would be more ambiguous and thus less informative. The whole of the right side in Equation (5) represents the speaker's assumption about the listener: given word $w$, the listener would infer the referent $r$ that is salient in the discourse and less ambiguously referred to by word $w$.
If $P(r)$ is uniform over referents and $P(w \mid r)$ is constant across words and referents, this listener model reduces to $\frac{1}{|w|}$. Thus, Frank and Goodman (2012)'s speaker model in (3) is a special case of our speaker model in (4) that assumes uniform discourse salience and constant cost.
Our model predicts that the speaker's probability of choosing a word for a given referent should depend on its cost relative to its information content. To see this, we combine (4) and (5), yielding

$$P_S(w \mid r) \propto \frac{P(w \mid r)\, P(r)}{\sum_{r'} P(w \mid r')\, P(r')} \cdot \frac{1}{C_w} \quad (6)$$
Because the speaker is deciding what word to use for an intended referent, and the term $P(r)$ denotes the probability of this referent, $P(r)$ is constant in the speaker model and does not affect the relative probability of a speaker producing different words. We further assume for simplicity that $P(w \mid r)$ is constant across words and referents. This means that all referents have about the same number of words that can be used to refer to them, and that all words for a given referent are equally probable for a naive listener. In this scenario, the speaker's probability of choosing a word is
$$P_S(w \mid r) \propto \frac{1}{\sum_{r'} P(r')} \cdot \frac{1}{C_w} \quad (7)$$
where the sum denotes the total discourse probability of the referents referred to by that word. The information content of an event is defined as the negative log probability of that event. In this scenario, the information conveyed by a word is the logarithm of the first term in (7), $-\log \sum_{r'} P(r')$. This means that in deciding which word to use, the highest cost a speaker should be willing to pay for a word should depend directly on that word's information content.
This relationship between cost and information content allows us to derive the prediction tested by Tily and Piantadosi (2009) that the use of referring expressions should depend on the predictability of a referent. For referents that are highly predictable from the discourse, different referring expressions (e.g., pronouns and proper names) will have roughly equal information content, and speakers should choose the referring expression that has the lowest cost. In contrast, for less predictable referents, proper names will carry substantially more information than pronouns, leading speakers to pay a higher cost for the proper names. These are the same predictions that have been discussed in the context of UID, but here the predictions are derived from a principled model of speakers who are trying to provide information to listeners. The extent to which our model can also capture other cases that have been put forward as evidence for the UID hypothesis remains a question for future research.
Predicting behavior from corpora
The model described in Section 3.2 is fully general, applying to arbitrary word choices, discourse probabilities, and cost functions. As an initial step, our simulations focus on the choice between pronouns and proper names. Our work tests the speaker model from (4) directly, asking whether it can predict the referring expressions from corpora of written and spoken language. Implementing the model requires computing word probabilities $P(w \mid r)$, discourse salience $P(r)$, and word costs $C_w$.
We simplify the word probability $P(w \mid r)$ in the speaker's listener model as in (8):

$$P(w \mid r) = \frac{1}{V} \quad (8)$$
where the count $V$ is the number of words that can refer to referent $r$. We assume that $V$ is constant across all referents. Our reasoning is as follows. There could be many ways to refer to a single entity. For example, to refer to the entity Barack Obama, we could say 'he', 'The U.S. president', 'Barack', and so on. We assume that there are the same number of referring expressions for each entity and that each referring expression is equally probable under the listener's likelihood model. In our simulations, we assume that a speaker is choosing between a proper name and a pronoun. For example, we assume that an entity Barack Obama has one and only one proper name 'Barack Obama', and this entity is unambiguously associated with male and singular. Although we use an example with two possible referring expressions, as long as $P(w \mid r)$ is constant across all referents and words, it does not make a difference to the computation in (5) how many competing words we assume for each referent.
To estimate the salience of a referent, $P(r)$, our framework employs factors such as referent frequency or recency. Although there are other important factors such as topicality of the referent (Orita et al., 2014) that are not incorporated in our simulations, this model sets up a framework to test the role and interaction of various potential factors suggested in the discourse literature.
Salience of the referent is computed differently depending on its information status: old or new. The following illustrates the speaker's assumptions about the listener's discourse model:
For each referent $r \in [1, R_d]$:

1. If $r$ = old, choose $r$ in proportion to $N_r$ (the number of times referent $r$ has been referred to in the preceding discourse).
2. Otherwise, $r$ = new with probability proportional to $\alpha$ (a hyperparameter that controls how likely the speaker is to refer to a new referent).
3. If $r$ = new, sample that new referent $r$ from the base distribution over entities with probability $\frac{1}{U_\cdot}$ (the count $U_\cdot$ denotes the total number of unseen entities, estimated from a named entity list (Bergsma and Lin, 2006)).

The above discourse model is frequency-based. We can replace the term $N_r$ for the old referent with $f(d_{i,j}) = e^{-d_{i,j}/a}$, which captures recency, where the recency function $f(d_{i,j})$ decays exponentially with the distance between the current referent $r_i$ and the same referent $r_j$ that has previously been referred to. This framework for frequency and recency of new and old referents exactly corresponds to priors in the Chinese Restaurant Process (Teh et al., 2006) and the distance-dependent Chinese Restaurant Process (Blei and Frazier, 2011).
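As an illustration, the sketch below computes unnormalized discourse salience from a list of prior mentions under either the frequency count $N_r$ or the recency weight $f(d) = e^{-d/a}$; the mention list is hypothetical, and the parameter values are the ones the paper reports fixing ($\alpha$ = 0.1, $a$ = 3.0).

```python
import math

ALPHA = 0.1    # new-referent hyperparameter (value used in the paper's runs)
DECAY_A = 3.0  # recency decay parameter a in f(d) = exp(-d / a)

def salience(mentions, position, mode="recency"):
    """Normalized discourse salience of each old referent, plus 'NEW' mass.

    mentions: list of (position, referent_id) pairs for the preceding discourse.
    position: index of the current mention.
    """
    weights = {}
    for pos, ref in mentions:
        if mode == "frequency":
            w = 1.0  # each prior mention increments N_r by 1
        else:
            w = math.exp(-(position - pos) / DECAY_A)  # recency decay
        weights[ref] = weights.get(ref, 0.0) + w
    weights["NEW"] = ALPHA  # mass reserved for referring to a new referent
    z = sum(weights.values())
    return {r: w / z for r, w in weights.items()}

# Hypothetical discourse: Obama mentioned twice, Biden once.
prior = [(0, "Obama"), (1, "Biden"), (2, "Obama")]
print(salience(prior, position=3, mode="recency"))
```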
The denominator in (5) represents the sum of potential referents that could be referred to by word w. We assume that a pronoun can refer to a large number of unseen referents if gender and number match, but a proper name cannot. For example, 'he' could refer to all singular and male referents, but 'Barack Obama' can only refer to Barack Obama. This assumption is reflected as a probability of unseen referents for the pronoun as illustrated in (10) below.
In our simulations, the speaker's cost function C w is estimated based on word length as in (9). We assume that longer words are costly to produce.
$$C_w = \mathrm{length}(w) \quad (9)$$
Suppose that the speaker is considering using 'he' to refer to Barack Obama, which has been referred to $N_O$ times in the preceding discourse, and there is another singular and male entity, Joe Biden, in the preceding discourse that has been referred to $N_B$ times. In this situation, the model computes the probability that the speaker uses 'he' to refer to Barack Obama as follows:
$$P_S(\text{'he'} \mid \text{Obama}) \propto P_L(\text{Obama} \mid \text{'he'}) \cdot \frac{1}{C_{\text{'he'}}} = \frac{P(\text{'he'} \mid \text{Obama})\, P(\text{Obama})}{\sum_{r'} P(\text{'he'} \mid r')\, P(r')} \cdot \frac{1}{C_{\text{'he'}}} = \frac{\frac{1}{V} \cdot N_O}{\left(\frac{1}{V} \cdot N_O\right) + \left(\frac{1}{V} \cdot N_B\right) + \left(\frac{1}{V} \cdot \alpha \cdot \frac{U_{\text{sing\&masc}}}{U_\cdot}\right)} \cdot \frac{1}{C_{\text{'he'}}} \quad (10)$$

where the count $U_{\text{sing\&masc}}$ in the denominator of the last line denotes the number of unseen singular and male entities that could be referred to by 'he'. We estimate this number for each type of pronoun we evaluate (singular-female, singular-male, singular-neuter, and plural) based on the named entity list in Bergsma and Lin (2006). The term $\frac{1}{V} \cdot \alpha \cdot \frac{U_{\text{sing\&masc}}}{U_\cdot}$ is the sum of probabilities of unseen referents that could be referred to by the pronoun 'he'. The unseen referents can be interpreted as a penalty for the inexplicitness of pronouns. In the case of proper names, the denominator is always the same as the numerator, under the assumption that each entity has one unique proper name.
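As a sanity check on (10), the sketch below plugs hypothetical counts into the model and compares 'he' against 'Barack Obama'; $N_O$, $N_B$, and the unseen-entity ratio are made-up numbers, and the $\frac{1}{V}$ factors cancel exactly as in the equation.

```python
ALPHA = 0.1          # new-referent hyperparameter
N_O, N_B = 5, 2      # hypothetical mention counts for Obama and Biden
UNSEEN_RATIO = 0.3   # hypothetical U_sing&masc / U_total from an entity list

def cost(word):
    return len(word)  # C_w = length(w), equation (9)

# Listener term P_L(Obama | 'he'); the 1/V factors cancel, equation (10).
p_listener_he = N_O / (N_O + N_B + ALPHA * UNSEEN_RATIO)
# A proper name picks out its referent unambiguously:
# the denominator equals the numerator.
p_listener_name = 1.0

score_he = p_listener_he * (1.0 / cost("he"))
score_name = p_listener_name * (1.0 / cost("Barack Obama"))

z = score_he + score_name
print("P_S('he' | Obama)           =", score_he / z)
print("P_S('Barack Obama' | Obama) =", score_name / z)
# With a salient referent, the cheap pronoun wins despite being less informative.
```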
Data
Corpora
Our model was run on both adult-directed speech and child-directed speech. We chose to use the SemEval-2010 Task 1 subset of OntoNotes (Recasens et al., 2011), a corpus of news text, as our corpus of adult-directed speech. The Gleason et al. (1984) subset of CHILDES (MacWhinney, 2000) was chosen as our corpus of child-directed speech.
The model requires coreference chains, agreement information, grammatical position, and part of speech. These were extracted from each corpus, either manually or automatically. The coreference chains let us easily count how many times/how recently each referent is mentioned in the discourse, which is necessary for computing discourse salience. The agreement information (gender and number of each referent) is required so that the model can identify all possible competing referents for pronouns. For instance, Barack Obama will be ruled out as a possible competitor for the pronoun she. The grammatical position that each proper name occupies 2 determines the form of the alternative pronoun that could be used there. For example, the difference between he and him is the grammatical position that each can appear in. The part of speech is used to identify the form of the referring expression (pronouns and proper names), which is what our model aims to predict. 3

OntoNotes includes information about coreference chains, part of speech, and grammatical dependencies. Gleason CHILDES has parsed part of speech and grammatical dependencies (Sagae et al., 2010), but it does not have coreference chains.
Neither corpus has agreement information. The following section describes manual annotations that we have done for this study. Due to time constraints, we annotated only a part of the CHILDES Gleason corpus, 9 out of 70 scripts.
Annotation
Mention annotation
We considered only maximally spanning noun phrases as mentions, ignoring nested NPs and nested coreference chains. For the sentence "Both Al Gore and George W. Bush have different ideas on how to spend that extra money" from OntoNotes, the extracted NPs are Both Al Gore and George W. Bush and different ideas about how to spend that extra money.
These maximally spanning NPs were automatically extracted from the OntoNotes data, but were manually annotated for the CHILDES data using brat (Stenetorp et al., 2012) by two annotators. 4
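For illustration, a small sketch of extracting maximally spanning NPs from a constituency parse with NLTK; the toy parse and the assumption that mentions carry the label 'NP' are ours, not the paper's actual extraction pipeline.

```python
import nltk

def maximal_nps(tree):
    """Return maximally spanning NP subtrees (NPs not dominated by another NP)."""
    nps = []
    def walk(t, inside_np=False):
        if isinstance(t, nltk.Tree):
            is_np = t.label() == "NP"
            if is_np and not inside_np:
                nps.append(t)  # keep only the outermost NP on each path
            for child in t:
                walk(child, inside_np or is_np)
    walk(tree)
    return nps

# Toy parse loosely modeled on the Gore/Bush example above.
sent = nltk.Tree.fromstring(
    "(S (NP (NP (NNP Al) (NNP Gore)) (CC and) (NP (NNP George) (NNP Bush)))"
    " (VP (VBP have) (NP (JJ different) (NNS ideas))))")
for np in maximal_nps(sent):
    print(" ".join(np.leaves()))
# Prints the two maximal NPs; the nested NPs inside the coordination are ignored.
```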
Agreement annotation
Many mentions (46,246 out of 56,575 mentions in OntoNotes and 10,141 out of 10,530 mentions in CHILDES Gleason) were automatically annotated using agreement information from the named entity list in Bergsma and Lin (2006), leaving 10,329 to be manually annotated from OntoNotes (about 18%) and 389 from CHILDES (about 4%). 5 The guidelines we followed for this manual agreement annotation were largely based on pronoun replacement tests. NPs that referred to a single man and could be replaced with he or him were labeled 'male singular', NPs that could be replaced by it, such as the comment, were labeled 'neuter singular', and so on. NPs that could not be replaced with a pronoun, such as about 30 years earnings for the average peasant, who makes $145 a year, were excluded from the analysis.
Coreference annotation
We used the provided coreference chains for the OntoNotes data, but for the CHILDES data, it was necessary to do this manually using brat. The guidelines we followed for determining whether mentions coreferred came from the OntoNotes coreference guidelines (BBN Technologies, 2007). 6
Experiments
Our experiments are designed to quantify the contributions of the various components of the complete model described in Section 3.2 that incorporates discourse salience, cost, and unseen referents. We contrast the complete model with three impoverished models that lack precisely one of these components. The comparison model without discourse uses a uniform discourse salience distribution. The model without cost uses constant speech cost. The model without good estimates of unseen referents always assigns probability $\frac{1}{V} \cdot \alpha \cdot \frac{1}{U_\cdot}$ to unseen referents in the denominator of (5), regardless of whether the word is a proper name or pronoun. In other words, this model does not have good estimates of unseen referents like the complete model does.
We use the adult-and child-directed corpora to examine to what extent each model captures speakers' referring expressions. We selected pronouns and proper names in each corpus according to several criteria. First, the referring expression had to be in a coreference chain that had at least one proper name, in order to facilitate computing the cost of the proper name alternative. Second, pronouns were only included if they were third person pronouns in subject or object position, and indexicals and reflexives were excluded. Finally, for the CHILDES corpus, children's utterances were excluded.
After filtering pronouns and proper names according to these criteria, 553 pronouns and 1,332 proper names (total 1,885 items) in the OntoNotes corpus, and 165 pronouns and 149 proper names (total 314 items) in the CHILDES Gleason corpus remained for use in the analysis.
Each model chooses referring expressions given information extracted from each corpus as described in Section 4.1. For evaluation, we computed accuracies (total, pronoun, and proper name) and model log likelihood (summing $\log P_S(w \mid r)$ for the words in the corpus) for each model. Table 1 summarizes the results of each model with the OntoNotes and CHILDES datasets. The new referent hyperparameter $\alpha$ and the decay parameter for discourse recency salience were fixed at 0.1 and 3.0, respectively. 7
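A minimal sketch of this evaluation, assuming each test item pairs the gold form with the model's distribution $P_S$ over the two forms; the items shown are hypothetical.

```python
import math

def evaluate(items):
    """items: list of (gold_form, p_s) where gold_form is 'pronoun' or
    'proper_name' and p_s maps each form to its model probability."""
    correct = {"pronoun": 0, "proper_name": 0}
    totals = {"pronoun": 0, "proper_name": 0}
    log_lhood = 0.0
    for gold, p_s in items:
        totals[gold] += 1
        predicted = max(p_s, key=p_s.get)  # model's preferred form
        if predicted == gold:
            correct[gold] += 1
        log_lhood += math.log(p_s[gold])   # accumulate log P_S(w | r)
    return {
        "total_acc": sum(correct.values()) / sum(totals.values()),
        "pronoun_acc": correct["pronoun"] / max(totals["pronoun"], 1),
        "proper_name_acc": correct["proper_name"] / max(totals["proper_name"], 1),
        "log_lhood": log_lhood,
    }

# Two hypothetical test items.
items = [
    ("pronoun", {"pronoun": 0.8, "proper_name": 0.2}),
    ("proper_name", {"pronoun": 0.3, "proper_name": 0.7}),
]
print(evaluate(items))
```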
Results
News
Overall, the recency salience measure provides a better fit than the frequency salience measure with respect to accuracies, suggesting that recency better captures speakers' representations of discourse salience that influence choices of referring expressions. On the other hand, the models with frequency discourse salience have higher model log likelihood than the models with recency do. This is because of the peakiness of the recency models. Model log likelihoods computed over pronouns and proper names (complete model) were -1022.33 and -222.76, respectively, with recency, and -491.81 and -467.06 with frequency. The recency model tends to return a higher probability for a proper name than the frequency model does. Some pronouns receive a very low probability for this reason, and this lowers the model log likelihood.
The model without discourse and the model without cost consistently failed to predict pronouns (these models predicted all proper names). This happens because in the model without discourse, the information content of pronouns is extremely low due to the large number of consistent unseen referents. In the model without cost, pronouns are disfavored because they always convey less information than proper names. The log likelihoods of these models were also below that of the complete model. These results show that pronominalization depends on subtle interaction between discourse salience and speech cost. Neither of them is sufficient to explain the distribution of pronouns and nouns on its own.
The total accuracy of the model without good estimates of unseen referents was the worst among the four models, but this model did predict pronouns to some extent. Because the number of proper names is larger than the number of pronouns in this dataset, the difference in total accuracies between the model without good estimates of unseen referents and the models without discourse or cost reflects this asymmetry. Comparison between the complete model and the model without good estimates of unseen referents also suggests that having knowledge of unseen referents helps correctly predict the use of proper names in the first mention of a referent.
Child-directed speech
Unlike the adult-directed news text, neither recency nor frequency discourse salience provides a good fit to the data. The low accuracies of pronouns and the high accuracies of proper names in all models indicate that the models are more likely to predict proper names than pronouns. There are several possible reasons for this. First, the CHILDES transcripts involve long conversations in natural settings. Compared to the news, interlocutors are not focusing on a specific topic, but rather they often switch topics (e.g., a child interrupts her parents' conversation about her father's coworker to talk about her eggs). This topic switching makes it difficult for the model to estimate discourse salience using simple frequency or recency measures. Second, the interlocutors are a family and they share a good deal of common knowledge/background (e.g., a mother said she as the first mention of her child's friend's mother). The current model is not able to incorporate this kind of background knowledge. Third, many referents are visually available. The current model is not able to use visual salience. In general, these problems arise due to our impoverished estimates of salience, and we would expect a more sophisticated discourse model that accurately measured salience to show better performance.
Summary
Experiments with the adult-directed news corpus show a close match between speakers' utterances and model predictions. On the other hand, experiments with child-directed speech show that the models were more likely to predict proper names where pronouns were used, suggesting that the estimates of discourse salience using simple measures were not sufficient to capture a conversation.
Discussion
This paper proposes a language production model that extends the rational speech act model from Frank and Goodman (2012) to incorporate updates to listeners' beliefs as discourse proceeds. We show that the predictions suggested from UID in this domain can be derived from our speaker model, providing an explanation from first principles for the relation between discourse salience and speakers' choices of referring expressions. Experiments with an adult-directed news corpus show a close match between speakers' utterances and model predictions, and experiments with child-directed speech show a qualitatively similar pattern. This suggests that speakers' behavior can be modeled in a principled way by considering the probabilities of referents in the discourse and the information conveyed by each word. A controversial issue in language production is to what extent speakers consider a listener's discourse model (Fukumura and van Gompel, 2012). By incorporating an explicit model of listeners, our model provides a way to explore this question. For example, the speaker's listener model $P_L(r \mid w)$ in (4) might differ between contexts and could also be extended to sum over possible listener identity $q$ in mixed contexts, as in (11).

$$P_L(r \mid w) = \sum_{q} P(r \mid w, q)\, P(q) \quad (11)$$
This provides a way to probe speakers' sensitivity to differences in listener characteristics across situations.
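A small sketch of the mixture in (11), with a hypothetical audience of one knowledgeable and one naive listener.

```python
def mixed_listener(word, referents, listener_models, p_identity):
    """P_L(r | w) = sum_q P(r | w, q) P(q), equation (11).

    listener_models: maps listener identity q to a function
    (word, referents) -> {referent: P(r | w, q)}.
    p_identity: maps q to its prior probability P(q)."""
    mix = {r: 0.0 for r in referents}
    for q, p_q in p_identity.items():
        posterior = listener_models[q](word, referents)
        for r in referents:
            mix[r] += posterior.get(r, 0.0) * p_q
    return mix

# Hypothetical listeners: one who tracks the discourse, one with uniform beliefs.
refs = ["Obama", "Biden"]
models = {
    "knowledgeable": lambda w, rs: {"Obama": 0.9, "Biden": 0.1},
    "naive":         lambda w, rs: {r: 1.0 / len(rs) for r in rs},
}
print(mixed_listener("he", refs, models, {"knowledgeable": 0.7, "naive": 0.3}))
# -> Obama: 0.78, Biden: 0.22
```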
Although the simulations in this paper employed simple measures for discourse salience (referent frequency and recency), the discourse models used by speakers are likely to be more complex. Studies show that semantic information that cannot be captured with these simple measures, such as topicality (Orita et al., 2014) and animacy (Vogels et al., 2013a), affects speakers' choices of referring expressions. Future work will test to what extent this latent discourse information could affect the model predictions.
Our model predicts a tight coupling between the probability of a referent being mentioned, $P(r)$, and the choice of referring expression. However, these two quantities appear to be dissociated in some cases. For example, Fukumura and Van Gompel (2010) show that semantic bias (as a measure of predictability) affects what to refer to (i.e., the referent), but not how to refer (i.e., the referring expression), while grammatical position does affect how you refer. One way of dissociating the probability of mention from the choice of referring expression in our model would be through the likelihood term, the word probability $P(w \mid r)$. While we have assumed this word probability to be constant across words and referents, Kehler et al. (2008) use grammatical position to define this probability and show that their model captures experimental data. Syntactic constraints (such as Binding principles) also influence form choices, and this kind of knowledge may also be reflected in the word probability. Examining the role of the word probability $P(w \mid r)$ more closely would allow us to further explore these issues.
Despite these limitations, our model provides a principled explanation for speakers' choices of referring expressions. In future work we hope to look at a broader range of referring expressions, such as null pronouns and definite descriptions, and to explore the extent to which our model can be applied to other linguistic phenomena that rely on discourse information.
Table 1: Accuracies and model log-likelihood

| Corpus | Model | Discourse | Total accuracy | Pronoun accuracy | Proper name accuracy | Log-lhood |
|---|---|---|---|---|---|---|
| OntoNotes | complete | recency | 80.27% | 59.49% | 88.89% | -1245.09 |
| | | frequency | 73.10% | 62.74% | 77.40% | -958.87 |
| | -discourse | NA | 70.66% | 0.00% | 100.00% | -6904.77 |
| | -cost | recency | 70.66% | 0.00% | 100.00% | -1537.71 |
| | | frequency | 70.66% | 0.00% | 100.00% | -1017.38 |
| | -unseen | recency | 64.14% | 68.17% | 62.46% | -1567.51 |
| | | frequency | 56.98% | 76.67% | 48.80% | -1351.58 |
| CHILDES | complete | recency | 49.68% | 11.52% | 91.95% | -968.64 |
| | | frequency | 46.18% | 10.30% | 85.91% | -360.28 |
| | -discourse | NA | 47.45% | 0.00% | 100.00% | -2159.22 |
| | -cost | recency | 47.45% | 0.00% | 100.00% | -1055.54 |
| | | frequency | 47.45% | 0.00% | 100.00% | -392.72 |
| | -unseen | recency | 50.31% | 13.94% | 90.60% | -961.54 |
| | | frequency | 48.41% | 21.21% | 78.52% | -332.73 |
1 Our speaker model corresponds to Frank and Goodman's exponentiated utility function (2), with $\alpha$ equal to one and with their cost $D(w)$ being the log of our cost $C_w$.
2 Dependency tags used were 'SUBJ', 'OBJ', and 'PMOD' in OntoNotes and 'SBJ' and 'OBJ' in CHILDES.
3 The parts of speech used to extract the target NPs were 'PRP' (pronoun), 'NNP' (proper name), and 'NNPS' (plural proper name) from OntoNotes and 'pro' (pronoun) and 'n:prop' (proper name) from CHILDES.
4 Interannotator agreement for the CHILDES mention annotation was: precision 0.97, recall 0.98, F-score 0.97 (for two scripts).
5 Interannotator agreement for the manual annotation of agreement information was 97% (for 500 mentions).
6 Interannotator agreement for CHILDES coreference annotation was computed using B³ (Bagga and Baldwin, 1998): precision: 0.99, recall: 1.00 (for one script).
7 We chose the best parameter values based on multiple runs, but results were qualitatively consistent across a range of parameter values.
Acknowledgments

We thank the UMD probabilistic modeling reading group for helpful comments and discussion.
John R Anderson. 2007. How can the human mind occur in the physical universe? Oxford University Press.
Mira Ariel. 1990. Accessing noun-phrase antecedents. Routledge.
Jennifer Arnold. 1998. Reference form and discourse patterns. Ph.D. thesis, Stanford University, Stanford, CA.
Jennifer Arnold. 2008. Reference production: Production-internal and addressee-oriented processes. Language and Cognitive Processes, 23(4):495-527.
Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, volume 1, pages 563-566.
Ellen Gurman Bard, Matthew P Aylett, J Trueswell, and M Tanenhaus. 2004. Referential form, word duration, and modeling the listener in spoken dialogue. Approaches to Studying World-Situated Language Use: Bridging the Language-as-Product and Language-as-Action Traditions, pages 173-191.
BBN Technologies. 2007. OntoNotes English coreference guidelines version 7.0.
Leon Bergen, Noah D Goodman, and Roger Levy. 2012a. That's what she (could have) said: How alternative utterances affect language use. In Proceedings of the 34th Annual Conference of the Cognitive Science Society.
Leon Bergen, Noah D Goodman, and Roger Levy. 2012b. That's what she (could have) said: How alternative utterances affect language use. In Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society.
Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 33-40, Sydney, Australia, July. Association for Computational Linguistics.
David M Blei and Peter I Frazier. 2011. Distance dependent Chinese restaurant processes. The Journal of Machine Learning Research, 12:2461-2488.
Richard Breheny, Napoleon Katsos, and John Williams. 2006. Are generalised scalar implicatures generated by default? An on-line investigation into the role of context in generating pragmatic inferences. Cognition, 100(3):434-463.
Susan E Brennan. 1995. Centering attention in discourse. Language and Cognitive Processes, 10(2):137-167.
Wallace Chafe. 1994. Discourse, consciousness, and time. Discourse, 2(1).
Judith Degen, Michael Franke, and Gerhard Jäger. 2013. Cost-based pragmatic inference about referential expressions. In Proceedings of the 35th Annual Conference of the Cognitive Science Society, pages 376-381.
Michael Frank and Noah Goodman. 2012. Predicting pragmatic reasoning in language games. Science, 336(6084):998.
Kumiko Fukumura and Roger PG Van Gompel. 2010. Choosing anaphoric expressions: Do people take into account likelihood of reference? Journal of Memory and Language, 62(1):52-66.
Kumiko Fukumura and Roger PG van Gompel. 2012. Producing pronouns and definite noun phrases: Do speakers use the addressee's discourse model? Cognitive Science, 36(7):1289-1311.
Kumiko Fukumura, Roger PG Van Gompel, Trevor Harley, and Martin J Pickering. 2011. How does similarity-based interference affect the choice of referring expression? Journal of Memory and Language, 65(3):331-344.
Alexia Galati and Susan E Brennan. 2010. Attenuating information in spoken communication: For the speaker, or for the addressee? Journal of Memory and Language, 62(1):35-51.
Talmy Givón. 1983. Topic continuity in discourse: A quantitative cross-language study, volume 3. John Benjamins Publishing.
Jean Berko Gleason, Rivka Y Perlmann, and Esther Blank Greif. 1984. What's the magic word: Learning language through politeness routines. Discourse Processes, 7(4):493-502.
Noah D Goodman and Andreas Stuhlmüller. 2013. Knowledge and implicature: Modeling language understanding as social cognition. Topics in Cognitive Science.
H Paul Grice. 1975. Logic and conversation. Syntax and Semantics, 3:41-58.
Barbara J Grosz, Scott Weinstein, and Aravind K Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.
André Grüning and Andrej A Kibrik. 2005. Modeling referential choice in discourse: A cognitive calculative approach and a neural network approach. In Ruslan Mitkov, editor, Anaphora Processing: Linguistic, Cognitive and Computational Modelling, pages 163-198. John Benjamins.
Jeanette K Gundel, Nancy Hedberg, and Ron Zacharski. 1993. Cognitive status and the form of referring expressions in discourse. Language, pages 274-307.
T Florian Jaeger. 2010. Redundancy and reduction: Speakers manage syntactic information density. Cognitive Psychology, 61(1):23-62.
Gerhard Jager. 2007. Game dynamics connects semantics and pragmatics. In Ahti-Veikko Pietarinen, editor, Game Theory and Linguistic Meaning, pages 89-102. Elsevier.
Justine T Kao, Jean Wu, Leon Bergen, and Noah D Goodman. 2014. Nonliteral understanding of number words. Proceedings of the National Academy of Sciences, 111(33):12002-12007.
Andrew Kehler, Laura Kertz, Hannah Rohde, and Jeffrey L Elman. 2008. Coherence and coreference revisited. Journal of Semantics, 25(1):1-44.
Mariya V Khudyakova, Grigory B Dobrov, Andrej A Kibrik, and Natalia V Loukachevitch. 2011. Computational modeling of referential choice: Major and minor referential options. In Proceedings of the CogSci 2011 Workshop on the Production of Referring Expressions, Boston, July.
Emiel Krahmer and Kees Van Deemter. 2012. Computational generation of referring expressions: A survey. Computational Linguistics, 38(1):173-218.
Roger Levy and T. Florian Jaeger. 2007. Speakers optimize information density through syntactic reduction. In Proceedings of the 20th Conference on Neural Information Processing Systems (NIPS).
Brian MacWhinney. 2000. The CHILDES project: Tools for analyzing talk.
Mante S Nieuwland and Jos JA Van Berkum. 2006. When peanuts fall in love: N400 evidence for the power of discourse. Journal of Cognitive Neuroscience, 18(7):1098-1111.
Ann E Nordmeyer and Michael Frank. 2014. A pragmatic account of the processing of negative sentences. In Proceedings of the 36th Annual Conference of the Cognitive Science Society.
Naho Orita, Eliana Vornov, Naomi H Feldman, and Jordan Boyd-Graber. 2014. Quantifying the role of discourse topicality in speakers' choices of referring expressions. In Association for Computational Linguistics, Workshop on Cognitive Modeling and Computational Linguistics.
Massimo Poesio, Rosemary Stevenson, Barbara Di Eugenio, and Janet Hitzeman. 2004. Centering: A parametric theory and its instantiations. Computational Linguistics, 30(3):309-363.
Marta Recasens, Lluis Marquez, Emili Sapena, M. Antònia Martí, and Mariona Taulé. 2011. SemEval-2010 task 1 OntoNotes English: Coreference resolution in multiple languages.
Jacolien Rij, Hedderik Rijn, and Petra Hendriks. 2013. How WM load influences linguistic processing in adults: A computational model of pronoun interpretation in discourse. Topics in Cognitive Science, 5(3):564-580.
Hannah Rohde, Scott Seyfarth, Brady Clark, Gerhard Jäger, and Stefan Kaufmann. 2012. Communicating with cost-based implicature: A game-theoretic approach to ambiguity. In The 16th Workshop on the Semantics and Pragmatics of Dialogue, Paris, September.
Kenji Sagae, Eric Davis, Alon Lavie, Brian MacWhinney, and Shuly Wintner. 2010. Morphosyntactic annotation of CHILDES transcripts. Journal of Child Language, 37(03):705-729.
Nathaniel J Smith, Noah Goodman, and Michael Frank. 2013. Learning and using language via recursive pragmatic reasoning about other agents. In Advances in Neural Information Processing Systems, pages 3039-3047.
Pontus Stenetorp, Sampo Pyysalo, Goran Topic, Tomoko Ohta, Sophia Ananiadou, and Junichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations Session at EACL 2012.
Yee Whye Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2006. Hierarchical Dirichlet Processes. Journal of the American Statistical Association, 101.
Harry Tily and Steven Piantadosi. 2009. Refer efficiently: Use less informative expressions for more predictable meanings. In Proceedings of the Workshop on the Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference.
Mija Van der Wege. 2009. Lexical entrainment and lexical differentiation in reference phrase choice. Journal of Memory and Language, 60(4):448-463.
Jorrig Vogels, Emiel Krahmer, and Alfons Maes. 2013a. When a stone tries to climb up a slope: the interplay between lexical and perceptual animacy in referential choices. Frontiers in Psychology, 4.
Jorrig Vogels, Emiel Krahmer, and Alfons Maes. 2013b. Who is where referred to how, and why? The influence of visual saliency on referent accessibility in spoken language production. Language and Cognitive Processes, 28(9):1323-1349.
5,246,477 | Uniform and Effective Tagging of a Heterogeneous Giga-word Corpus | Tagging as the most crucial annotation of language resources can still be challenging when the corpus size is big and when the corpus data is not homogeneous. The Chinese Gigaword Corpus is confounded by both challenges. The corpus contains roughly 1.12 billion Chinese characters from two heterogeneous sources: respective news in Taiwan and in Mainland China. In other words, in addition to its size, the data also contains two variants of Chinese that are known to exhibit substantial linguistic differences. We utilize Chinese Sketch Engine as the corpus query tool, by which grammar behaviours of the two heterogeneous resources could be captured and displayed in a unified web interface. In this paper, we report our answer to the two challenges to effectively tag this large-scale corpus. The evaluation result shows our mechanism of tagging maintains high annotation quality. | [
7680828,
5371286,
1324511
] | Uniform and Effective Tagging of a Heterogeneous Giga-word Corpus
Wei-Yun Ma (ma@iis.sinica.edu.tw), Chu-Ren Huang
Academia Sinica, 128 Sec. 2, Academia Rd., Nankang, Taipei 115, Taiwan, R.O.C.
Uniform and Effective Tagging of a Heterogeneous Giga-word Corpus
Tagging as the most crucial annotation of language resources can still be challenging when the corpus size is big and when the corpus data is not homogeneous. The Chinese Gigaword Corpus is confounded by both challenges. The corpus contains roughly 1.12 billion Chinese characters from two heterogeneous sources: respective news in Taiwan and in Mainland China. In other words, in addition to its size, the data also contains two variants of Chinese that are known to exhibit substantial linguistic differences. We utilize Chinese Sketch Engine as the corpus query tool, by which grammar behaviours of the two heterogeneous resources could be captured and displayed in a unified web interface. In this paper, we report our answer to the two challenges to effectively tag this large-scale corpus. The evaluation result shows our mechanism of tagging maintains high annotation quality.
Background
With growing interest in Chinese language processing, a few gargantuan corpora of modern Chinese have been assembled and released with query tools in recent years. For example, the Sinica Corpus (CKIP, 1995/1998) developed by Academia Sinica in Taiwan contains 5.2 million words with part-of-speech (POS) tags, while the Chinese corpus developed by the Center for Chinese Linguistics (CCL corpus) at Peking University contains 85 million Chinese characters. Both corpora offer the keyword-in-context (KWIC) function for inspecting the context of a given keyword through their web interfaces. However, there are two major restrictions on using these two popular online corpora to obtain deeper and comparable Chinese grammatical information. One restriction is that although the Sinica Corpus is segmented and POS-tagged, CCL is neither segmented nor tagged. It is therefore impossible to perform deeper syntactic analysis via CCL, and it is also difficult to compare the syntactic behaviour of a given word between Taiwan and Mainland China. The other difficulty is that the KWIC concordance alone is not sufficient to capture and display complete and organized grammatical information for a given keyword.
Several other existing linguistically annotated corpora of Chinese, e.g. the Penn Chinese Treebank (Xia et al. 2000, Xue et al. 2002) and the Sinica Treebank (Huang et al. 2000), provide more elaborate annotations. But they suffer from the same problem: they are all extremely labor-intensive to build, typically have narrow coverage, and are therefore insufficient to reflect the real usage of a given keyword.
In this paper, in order to resolve the difficulties above, we segment and POS-tag the Chinese Gigaword Corpus (CGW) released in 2003 by the Linguistic Data Consortium (LDC). CGW contains about 1.12 billion Chinese characters, including 735 million characters from Taiwan's Central News Agency (CNA) from 1991 to 2002, and 380 million characters from Mainland China's Xinhua News Agency (XIN) from 1990 to 2002. CNA uses the complex character form and XIN uses the simplified character form. CGW has three major advantages for corpus-based Chinese linguistic research: (1) It is large enough to reflect real written language usage in either Taiwan or Mainland China. (2) All text data are presented in SGML form, using a markup structure that provides each document with rich metadata for further inspection. (3) CGW is appropriate for comparing Chinese usage between Taiwan and Mainland China, because it provides the same newswire text type, and the news texts were published during largely overlapping time periods. We utilize Chinese Sketch Engine (Kilgarriff et al. 2004, Kilgarriff et al. 2005) as the corpus query tool, through which the grammatical behaviour of the two heterogeneous resources can be captured and displayed in a unified web interface. Therefore, how to annotate the two heterogeneous corpora so that the syntactic behaviour of their words can be consistently compared through Chinese Sketch Engine is an important concern of this paper.
A challenging task is to segment and POS-tag such a huge corpus efficiently. Given the corpus size, it is clearly not possible to adopt a semi-automatic, human-aided machine tagging approach to complete the task within the limited time. Therefore, maintaining high annotation quality while adopting a fully automatic tagging strategy is the other major concern of this paper.
Introduction to CGW
We begin with an introduction to the details of CGW because of its importance to our processing strategies.
Size of CGW
SGML Form
All text data are presented in SGML form, using a very simple, minimal markup structure. The markup structure, common to all data files, can be illustrated by the following example:

<DOC id="CNA19910101.0003" type="story">
<HEADLINE>
捷運局對工程噪音採多項防治措施
</HEADLINE>
<DATELINE>
(中央社台北一日電)
</DATELINE>
<TEXT>
<P>
台北都會區捷運工程正處於積極趕工階段,…
</P>
<P>
淡水線工程進度百分之三十六點一九,落後百分之二點六七,…
</P>
</TEXT>
</DOC>

Figure 1. Example of a news document in CGW
For every "opening" tag (DOC, HEADLINE, DATELINE, TEXT, P), there is a corresponding "closing" tag. The "id=" attribute of DOC consists of the 3-letter source abbreviation (in CAPS), an 8-digit date string representing the date of the story (YYYYMMDD), a period, and a 4-digit sequence number starting at "0001" for each date (e.g. " CNA19910101.0003"); in this way, every DOC in the corpus is uniquely identifiable by the id string.
Design of Automatic Annotator
There are two major missions for our automatic annotator: word segmentation and POS tagging. In order to speed up the process and maintain high quality at the same time, our automatic annotator has the following characteristics:
(1) The annotator takes advantage of the characteristics of CGW to reach high annotation quality.
(2) The annotator is capable of processing a large corpus efficiently, which means the program is robust and the hardware resources used by the program are carefully managed.
(3) The annotation format exploits the strengths of the corpus query tool used (i.e. Chinese Sketch Engine).
(4) The annotator generates records of the annotation process to speed up human examination, should human examination be carried out in the future. For instance, certain word types are more difficult to identify correctly; the annotator records a list of these unreliable words. If human examination is undertaken in the future, human annotators will only need to examine these records, achieving much better overall quality in limited time.
We enhanced the Sinica Word Segmenter (Ma and Chen 2005) to possess the above characteristics, and we utilized an HMM method for POS tagging and a morpheme-analysis-based method (Tseng and Chen 2002) to predict POSs for new words.
Document-based vs. Corpus-based Statistical Information
Occurrences of new words that are not covered in the lexicon significantly degrade the performance of most word segmentation methods. The number is especially high in news reports: on average, 3% to 5% of the words within a news document are new. Therefore unknown word identification plays a key role in segmenting CGW.
Most popular segmentation technologies (Chiang 1992, Tseng 2005) use corpus-based statistical methods for identifying new words with high frequency and use morphological rules for those with low frequency. However, these corpus-based statistical methods usually suffer from the problem that phrases or partial phrases are easily misidentified as words because of their statistical significance in a corpus. Even highly frequent superfluous character strings with strong statistical associations are easily misidentified as words. On the other side, new words that are frequent within a single document may be hard to identify because of their low frequency in the whole corpus. This situation is more serious when processing newswire text data. For newswire text data like CGW, a document usually focuses tightly on one event or subject, and the keywords of a text are often new words that recur frequently within that news document, but not necessarily in the same proportion in the whole corpus.
Therefore, for the statistical methods in our word segmentation, we rely mainly on document-based statistical information instead of corpus-based statistical information, so that the locality of the keywords in a newswire document is fully exploited. Because all text data of CGW are presented in SGML form, it is convenient to split CGW into individual documents using a simple SGML parser. We propose two word segmentation strategies, given as pseudocode in Figure 2 and Figure 3.
In Strategy A, when segmenting a given document, only the basic lexicon and the new words extracted from that document are consulted. In Strategy B, when segmenting a given document, we also consult a NewWordLexicon collected from other documents. Two things are worth noting here: first, NewWordLexicon only covers new words with high accumulated frequency, which means these words are highly reliable as real words; second, when these statistics are consulted, the statistics of the given document should still play a more important role than NewWordLexicon in resolving segmentation ambiguity.
In addition to fully exploiting the locality of newswire text, Strategy A and B have another advantage: memory consumption is always bounded by the size of a single document, which also means the total processing time is much shorter than for corpus-based statistical methods, because the search space of document-based statistical information is much smaller than that of corpus-based statistical information.
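Since the pseudocode of Figures 2 and 3 is not reproduced here, the following Python sketch illustrates the document-based flow of Strategy A and the NewWordLexicon extension of Strategy B. The function names, the segment and extract_new_words callables, and the min_count threshold are illustrative placeholders, not the authors' actual implementation:

def strategy_a(documents, base_lexicon, segment, extract_new_words):
    """Strategy A: each document is segmented using only the base
    lexicon plus the new words extracted from that same document."""
    for doc in documents:
        doc_new_words = extract_new_words(doc, base_lexicon)  # a set
        yield segment(doc, base_lexicon | doc_new_words)

def strategy_b(documents, base_lexicon, segment, extract_new_words, min_count=5):
    """Strategy B: like A, but new words that recur often across the
    documents seen so far (the NewWordLexicon) are also consulted;
    document-local statistics should still dominate disambiguation."""
    new_word_counts = {}
    for doc in documents:
        doc_new_words = extract_new_words(doc, base_lexicon)
        for word in doc_new_words:
            new_word_counts[word] = new_word_counts.get(word, 0) + 1
        reliable = {w for w, c in new_word_counts.items() if c >= min_count}
        yield segment(doc, base_lexicon | doc_new_words | reliable)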
Annotation Format
We utilize Chinese Sketch Engine as the corpus query tool. Besides the traditional KWIC function, the engine automatically generates a one-page, corpus-derived summary of a given word's grammatical and collocation behaviour, such as the distributions of its subjects, objects, preposition objects, and modifiers, by consulting grammatical relations for Chinese. The grammatical relations are defined using regular expressions over POS tags. The more elaborate the grammatical relations are, the more precise the query results will be. Therefore, in order to facilitate the design of flexible and elaborate grammatical relations in Chinese Sketch Engine, we adopted a mixed POS tagging strategy: after segmentation and HMM-based tagging, each word is annotated with a basic POS, such as "陳(Nb)"¹. For most words, the basic POS can be further converted into an elaborate POS, such as "陳(Nbc)"², by consulting the basic lexicon. The remaining words, such as new words, keep their basic POSs, which are obtained by the prediction of the morpheme-analysis-based tagger. The final annotation results are illustrated by the example in Figure 4.
Implementation
In order to exhibit substantial linguistic differences under a consistent querying environment for CNA and XIN, it is necessary to use a unified basic lexicon and POS tagset for annotation. The basic lexicon we used consists of three sources: (1) the Sinica lexicon, with 80,000 word entries; (2) a 50,000-word set collected from Sinica Corpus 3.0, a balanced corpus of modern Chinese containing segmented words and their POSs checked by humans; and (3) the Xinhua new-words lexicon, which collects 5,000 new words frequently used in Mainland China. We adopt the Sinica Tagset as the uniform POS tagset for CNA and XIN.
So far we have finished implementing Strategy A, discussed in section 3.1. An array of machines was used to process CGW, which took over 3 days. After completing the whole annotation of CGW, a total of 462 million words for CNA and 252 million words for XIN were identified.
Evaluation
We randomly picked one document from CNA per season and one document from XIN per year, yielding a total of 48 documents from CNA and 12 documents from XIN. These are used as the test set for evaluation. All 60 documents were carefully checked by a linguist. The annotation performance is provided in Table 2.

Table 2. Evaluation result
The evaluation result shows that our automatic annotator performs very well on both CNA and XIN. The segmentation performance on XIN is a bit lower than on CNA, probably because most of the words in our basic lexicon are collected from Taiwanese sources. In other words, the proportion of new words in XIN is higher than in CNA, and these new words caused somewhat more segmentation mistakes.
Character Form Conversion
To observe the query results of a given word in CNA and XIN clearly and conveniently, the distinct character forms need to be unified to match the form of the query word. We therefore generated two additional data sets in advance: CNA in the simplified character form, obtained by converting its original complex character form, and XIN in the complex character form, obtained by converting its original simplified character form. Four data sets were thus obtained. We further generated another two data sets by combining the existing four: one combining CNA and XIN in the complex character form, the other combining CNA and XIN in the simplified character form. Word Sketch Engine can then directly display the query results of CNA and XIN in the same character form at the same time. Examples are shown in Figure 5 and Figure 6.
Figure 5. KWIC concordance result with the complex character form when querying the word "喜歡" in the complex character form.
Figure 6. KWIC concordance result with the simplified character form when querying the word "喜欢" in the simplified character form.
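A minimal sketch of the character form conversion behind these views, assuming a purely character-level mapping table; the two-entry table below is only illustrative, and a real system would need a full table (or a dedicated conversion library) plus handling for characters whose mapping is not one-to-one:

# Complex-to-simplified conversion via a character mapping table.
COMPLEX_TO_SIMPLIFIED = {
    "歡": "欢",
    "喜": "喜",  # many characters are identical in both forms
}

def to_simplified(text):
    return "".join(COMPLEX_TO_SIMPLIFIED.get(ch, ch) for ch in text)

print(to_simplified("喜歡"))  # -> 喜欢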
Conclusion and Future Work
Based on careful analyses of CGW's characteristics, we have presented our concerns and strategies for tagging CGW. The short processing time and high annotation quality demonstrate that our automatic annotator performs very well. We were also concerned with the relation between the corpus query tool and the annotated corpus: how to fully exert the advantages of the corpus query tool is an important consideration in the design of the annotation strategy and the annotation format. In our work, we utilized the same lexicon and tagset to segment CNA and XIN, so that Word Sketch Engine can exhibit substantial linguistic differences under a consistent querying environment for the heterogeneous sources: news from Taiwan and from Mainland China, respectively.
We are now collecting more lexicon resources from Mainland China in order to further improve the segmentation performance on XIN. We are also working on another related project: automatically marking nominalization features on those verbs in CGW that have noun usages in specific contexts.
We expect our experience in tagging CGW to be a worthwhile reference for the development of any gargantuan and heterogeneous corpus.
Figure 4. Annotation example (bold characters represent new words and their predicted basic POSs; the others represent words with their elaborate POSs covered in the basic lexicon, or quantifier words, reduplicated words, etc.):

<DOC id="CNA19910101.0003" type="story">
<HEADLINE>
捷運局(Nc) 對(P31) 工程(Nac) 噪音(Nad) 採(VC2) 多(Neqa) 項(Nfa) 防治(VC2) 措施(Nac)
</HEADLINE>
<DATELINE>
((PARENTHESISCATEGORY) 中央社(Nca) 台北(Nca) 一日(Nd) 電(VC2) )(PARENTHESISCATEGORY)
</DATELINE>
<TEXT>
<P>
台北(Nca) 都會區(Ncb) 捷運(Nad) 工程(Nac) 正(Dd) 處(VJ3)
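The mixed tagging strategy illustrated above amounts to a simple lexicon lookup over the HMM output; the following sketch uses hypothetical names and is not the authors' code:

def mix_pos_tags(tagged_words, elaborate_lexicon):
    """Refine basic POS tags to elaborate ones where the basic lexicon
    allows it; new words keep their predicted basic POS.

    tagged_words: list of (word, basic_pos) pairs from the HMM tagger.
    elaborate_lexicon: maps (word, basic_pos) to an elaborate POS.
    """
    output = []
    for word, basic_pos in tagged_words:
        elaborate = elaborate_lexicon.get((word, basic_pos), basic_pos)
        output.append((word, elaborate))
    return output

# e.g. mix_pos_tags([("陳", "Nb")], {("陳", "Nb"): "Nbc"}) -> [("陳", "Nbc")]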
Table 1 presents the following categories of information: source of the data, number of files per source, total file sizes in MB (nearly 4 gigabytes in total), number of characters (in thousands), and number of documents.

Source | #Files | Totl-MB | K-#Chars | #DOCs
CNA    |    144 |    2606 |   735499 | 1649492
XIN    |    142 |    1331 |   382881 |  817348
TOTAL  |    286 |    3937 |  1118380 | 2466840

Table 1. Size of CGW. Taiwan's CNA is from 1991 to 2002, and Mainland China's XIN is from 1990 to 2002. Each file contains all documents for the given month from the given news source.
     | RefWord# | TestWord# | MatchWord# | Recall | Precision
CNA  |    12500 |     12416 |      12186 |   0.97 |      0.98
XIN  |     4002 |      3945 |       3790 |   0.95 |      0.96
Note: Recall = MatchWord# / RefWord#; Precision = MatchWord# / TestWord#

     | MatchWord# | MatchPOS# | POS Precision
CNA  |      12186 |     12033 |          0.99
XIN  |       3790 |      3725 |          0.98
Note: POS Precision = MatchPOS# / MatchWord#
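The figures in Table 2 follow directly from the definitions in the notes; for example, for CNA:

# Reproducing the CNA row of Table 2 from the stated definitions.
ref_words, test_words, match_words, match_pos = 12500, 12416, 12186, 12033

recall = match_words / ref_words          # 12186 / 12500 = 0.97 (rounded)
precision = match_words / test_words      # 12186 / 12416 = 0.98 (rounded)
pos_precision = match_pos / match_words   # 12033 / 12186 = 0.99 (rounded)
print(round(recall, 2), round(precision, 2), round(pos_precision, 2))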
"Nb" represents "proper noun" according to Sinica Tagset. 2 "Nbc" represents "Chinese surname", one kind of proper noun, according to Sinica Tagset.
CKIP (Chinese Knowledge Information Processing Group). (1995/1998). The Content and Illustration of Academia Sinica Corpus. Technical Report no. 95-02/98-04. Taipei: Academia Sinica.

Xia, Fei, Martha Palmer, Nianwen Xue, Mary Ellen Okurowski, John Kovarik, Fu-Dong Chiou, Shizhe Huang, Tony Kroch, and Mitch Marcus. (2000). Developing Guidelines and Ensuring Consistency for Chinese Text Annotation. Proceedings of LREC.

Xue, Nianwen, Fu-Dong Chiou, and Martha Palmer. (2002). Building a Large-Scale Annotated Chinese Corpus. Proceedings of COLING.

Huang, Chu-Ren, Keh-Jiann Chen, Feng-Yi Chen, Zhao-Ming Gao, and Kuang-Yu Chen. (2000). Sinica Treebank: Design Criteria, Annotation Guidelines, and On-line Interface. Proceedings of the 2nd Chinese Language Processing Workshop, pp. 29-37.

Kilgarriff, Adam, Pavel Rychlý, Pavel Smrz, and David Tugwell. (2004). The Sketch Engine. Proceedings of EURALEX, Lorient, France.

Kilgarriff, Adam, Chu-Ren Huang, Pavel Rychlý, Simon Smith, and David Tugwell. (2005). Chinese Word Sketches. ASIALEX 2005: Words in Asian Cultural Context.

Ma, Wei-Yun and Keh-Jiann Chen. (2005). Design of CKIP Chinese Word Segmentation System. Chinese and Oriental Languages Information Processing Society, Vol. 14, No. 3, pp. 235-249.

Tseng, H.H. and K.J. Chen. (2002). Design of Chinese Morphological Analyzer. Proceedings of SIGHAN Workshop on Chinese Language Processing, pp. 49-55.

Chiang, T.H., M.Y. Lin, and K.Y. Su. (1992). Statistical Models for Word Segmentation and Unknown Word Resolution. Proceedings of ROCLING V, pp. 121-146.

Tseng, H.H., Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. (2005). A Conditional Random Field Word Segmenter. Proceedings of SIGHAN Workshop on Chinese Language Processing. |
712,309 | Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network | In this study, we applied a deep LSTM structure to classify dialogue acts (DAs) in open-domain conversations. We found that the word embedding parameters, dropout regularization, decay rate, and number of layers have the largest effect on final system accuracy. Using the findings of these experiments, we trained a deep LSTM network that outperforms the state of the art on the Switchboard corpus by 3.11% and on MRDA by 2.2%. | [
629094,
1957433,
14161105,
215825908,
990233
] | Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network
December 11-17 2016
Hamed Khanpour hamedkhanpour@my.unt.edu
University of North Texas HiLT Lab
Nishitha Guntakandla nishithaguntakandla@my.unt.edu
University of North Texas HiLT Lab
Rodney Nielsen rodney.nielsen@unt.edu
University of North Texas HiLT Lab
Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, December 11-17, 2016
In this study, we applied a deep LSTM structure to classify dialogue acts (DAs) in open-domain conversations. We found that the word embedding parameters, dropout regularization, decay rate, and number of layers have the largest effect on final system accuracy. Using the findings of these experiments, we trained a deep LSTM network that outperforms the state of the art on the Switchboard corpus by 3.11% and on MRDA by 2.2%.
Introduction
Dialogue Act (DA) classification plays a key role in dialogue interpretation, especially in spontaneous conversation analysis. Dialogue acts are defined as the meaning of each utterance at the illocutionary force level (Austin, 1975). Many applications benefit from automatic dialogue act classification, such as dialogue systems, machine translation, Automatic Speech Recognition (ASR), topic identification, and talking avatars (Král and Cerisara, 2012). Due to the complexity of DA classification, most researchers prefer to focus on task-oriented systems such as restaurant, hotel, or flight reservation systems.
Almost all standard approaches to classification have been applied to DA classification, from Bayesian Networks (BN) and Hidden Markov Models (HMM) to feed-forward Neural Networks, Decision Trees (DT), Support Vector Machines (SVM) and rule-based approaches.
Recently, the advancement of research in deep learning has led to performance upheavals in many Natural Language Processing (NLP) tasks, even leading Manning (2016) to refer to the phenomenon as a neural network "tsunami". One of the main benefits of deep learning approaches is that they are not as reliant on handcrafted features; instead, they construct features automatically from each word (Turian et al., 2010), sentence (Lee and Dernoncourt, 2016; Kim, 2014), or even long text (Collobert et al., 2011; Pennington et al., 2014). Inspired by the performance of recent studies utilizing deep learning to improve DA classification in domain-independent conversations (Lee and Dernoncourt, 2016; Kalchbrenner and Blunsom, 2013), we propose a model based on a recurrent neural network, the LSTM, that benefits from deep layers of networks and pre-trained word embeddings derived from Wikipedia articles.
Related Work
Prior work has defined general sets of DAs for domain-independent dialogues that are commonly used in almost all research on DA classification (Jurafsky et al., 1997; Dhillon et al., 2004). The task of DA classification (sometimes called DA identification) is to assign one member of a predefined DA set to each given utterance. Therefore, DA classification is sometimes treated as short-text classification. As in many other traditional text classification methods, five sources of information have been used for DA classification tasks: lexical information, syntax, semantics, prosody, and dialogue history. Among all proposed methods, those which used more sophisticated techniques for extracting lexical information achieved the best results before deep learning was applied to the problem.
DA classification research started with handcrafted lexical features, which yielded high-quality results with an accuracy of 75.22% on the 18 DAs of the VERBMOBIL dataset (Jekat et al., 1995). In general, Bayesian techniques were the most common approaches for DA classification, using a mixture of n-gram models together with dialogue history for predicting DAs (Grau et al., 2004; Ivanovic, 2005). In some studies, prosodic information was integrated with surface-level lexical information to improve accuracy (Stolcke et al., 2000). Stolcke et al. (2000) reported the best accuracy on the core 42 DAs of the Switchboard corpus as 71%. This result was achieved by applying contextual information with an HMM for recognizing temporal patterns in lexical information. Novielli and Strapparava (2013) investigated the sentiment load of each DA. They compared the accuracies of classification before and after analyzing utterances in the Switchboard corpus using Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2007) and postulated that affective analysis improved the accuracy.
Recently, approaches based on deep learning methods have improved many state-of-the-art results in NLP, including DA classification accuracy on open-domain conversations (Kalchbrenner and Blunsom, 2013; Ravuri and Stolcke, 2015; Ji et al., 2016; Lee and Dernoncourt, 2016). Kalchbrenner and Blunsom (2013) used a mixture of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN): CNNs were used to extract local features from each utterance, and RNNs were used to create a general view of the whole dialogue. This work improved the state-of-the-art 42-tag DA classification on Switchboard (Stolcke et al., 2000) by 2.9%, reaching 73.9% accuracy. Ji et al. (2016) presented a hybrid architecture that merges an RNN language model with a discourse structure that treats relations between two contiguous utterances as a latent variable. This approach improved the result of the state-of-the-art method by about 3% (from 73.9 to 77.0) when applied to the Switchboard corpus; the best result was achieved when the algorithm was trained to maximize the conditional likelihood. Ji et al. (2016) also investigated the performance of standard RNNs and CNNs on DA classification and obtained the state-of-the-art result on the MRDA corpus (Ang et al., 2005) using a CNN.
Our Model
Most deep learning variations were designed and studied in the late 1990s, but their true performance was not revealed until high-speed computers became commercially available and researchers gained access to significant amounts of data. Collobert et al. (2011) used a large amount of unlabeled data to map words to high-dimensional vectors and a neural network architecture to generate an internal representation. By adding a CNN architecture, Collobert et al. (2011) built the SENNA application, which uses this representation in language processing tasks. Their approach outperforms almost all sophisticated traditional NLP applications, like part-of-speech tagging, chunking, named entity recognition, and semantic role labeling, without resorting to handcrafted features or prior knowledge, which are usually optimized for each task. In this study, we designed a deep neural network model that benefits from pre-trained word embeddings combined with a variation of the RNN structure for the DA classification task.
For each utterance containing l words, our model converts it into l sequential word vectors. Word vectors can be generated randomly with arbitrary dimensions or initialized with pre-trained word vectors using a variety of word-to-vector techniques (Mikolov et al., 2013; Pennington et al., 2014). Figure 1 illustrates a typical structure of an RNN. As can be seen, information from previous steps, h_{t-1}, contributes to the computation that generates the succeeding h_t. Since almost all tokens X_i in a conversation are related to their previous tokens or words, we choose to use an RNN structure.
RNN-based Utterance Representation
Given a list of d-dimensional word vectors X_1, X_2, \ldots, X_{t-1}, X_t, \ldots, X_{t+n}, at a given time step t we have:

h_t = \sigma(W^{hh} h_{t-1} + W^{hx} X_t)    (1)

y_t = \mathrm{softmax}(W^{(S)} h_t)    (2)

where W^{hh} \in \mathbb{R}^{h \times h} and W^{hx} \in \mathbb{R}^{h \times d} are weight matrices, \sigma denotes the logistic sigmoid function, and y_t \in \mathbb{R}^k is the class representation of each utterance, with k the number of classes in the classification task.
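A minimal NumPy rendering of Eqs. (1) and (2), using the dimensions given in the text; this is an illustration, not the authors' code:

import numpy as np

def rnn_forward(X, W_hh, W_hx, W_s):
    """Run the RNN over word vectors X (t x d): W_hh is h x h,
    W_hx is h x d, W_s is k x h; returns hidden states and the
    per-step class distributions."""
    h = np.zeros(W_hh.shape[0])
    hs, ys = [], []
    for x_t in X:
        h = 1.0 / (1.0 + np.exp(-(W_hh @ h + W_hx @ x_t)))  # Eq. (1)
        z = W_s @ h
        y = np.exp(z - z.max())
        ys.append(y / y.sum())                              # Eq. (2)
        hs.append(h)
    return np.array(hs), np.array(ys)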
In the pooling layer (Figure 1), our model takes all hidden vectors h_{1:t} and generates a single vector. We can choose among three mechanisms: mean-, max- or last-pooling. Mean-pooling computes the average of all h vectors, max-pooling takes the element-wise maximum over the h vectors, and last-pooling takes the last vector (i.e., h_t).
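The three pooling mechanisms can be sketched as follows (illustrative NumPy code, assuming hs is the t x h matrix of hidden states):

import numpy as np

def pool(hs, mode="last"):
    """Collapse the t x h matrix of hidden states into one vector."""
    if mode == "mean":
        return hs.mean(axis=0)   # average over time steps
    if mode == "max":
        return hs.max(axis=0)    # element-wise maximum over time steps
    return hs[-1]                # last-pooling: the final hidden state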
Theoretically, RNNs should preserve the memory of previous events, but in practice, when the gap between relevant pieces of information grows, RNNs fail to retain the relevant information. Hochreiter (1991) and Bengio et al. (1994) investigated the main reasons for RNNs' failures in detail. The other problem with RNNs is the vanishing and exploding gradient, which causes the learning process to terminate prematurely (Mikolov et al., 2010; Pascanu et al., 2013).
Given the aforementioned problems with RNNs, we use the Long Short-Term Memory (LSTM), a variation of the RNN designed to preserve long-distance dependencies. In DA classification, the ability to connect related pieces of information that are distant from each other is important, particularly when classifying utterances as either subjective or objective, which is considered one of the main sources of error in DA classification (Novielli and Strapparava, 2013). Classifying subjective versus objective text is one of the major tasks in sentiment analysis, in which LSTM-based approaches have been shown to achieve high-quality results (Socher et al., 2013). Another reason for using LSTM is its forget gate layer, which filters out the trivial weights of unimportant words from the cell state (see Eq. 4). Figure 2 illustrates the standard structure of an LSTM cell.
As can be seen in Figure 2, we can define the LSTM cell at each time step t as a set of vectors in \mathbb{R}^d:

i_t = \sigma(W^{(i)} X_t + U^{(i)} h_{t-1} + b^{(i)})    (3)
f_t = \sigma(W^{(f)} X_t + U^{(f)} h_{t-1} + b^{(f)})    (4)
o_t = \sigma(W^{(o)} X_t + U^{(o)} h_{t-1} + b^{(o)})    (5)
u_t = \tanh(W^{(u)} X_t + U^{(u)} h_{t-1} + b^{(u)})    (6)
c_t = i_t \odot u_t + f_t \odot c_{t-1}    (7)
h_t = o_t \odot \tanh(c_t)    (8)

where the inputs are d-dimensional vectors, i_t is the input gate, f_t is the forget gate, o_t is the output gate, c_t is the memory cell, h_t is the hidden state, and \odot denotes element-wise multiplication. c_t (Eq. 7) is the key part of LSTMs: it connects the chain of cells together through linear interactions. In LSTMs, gates in each cell decide dynamically which signals are allowed to pass through the whole chain. For example, the forget gate f_t (Eq. 4) decides to what extent the previous memory cell should be forgotten, the input gate (Eq. 3) controls the extent to which each cell is updated, and the output gate controls the exposure of the internal memory state. The hidden layer h_t represents a gated, partial view of its cell state. LSTMs are able to view information over multiple time scales because the gating variables take different values for each vector element (Tai et al., 2015).
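For concreteness, a single LSTM step following Eqs. (3)-(8) can be written as below; bundling the weights and biases in the dictionary P is merely an illustrative convention:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, P):
    """One LSTM time step; P maps names such as "W_i" or "b_f" to the
    corresponding weight matrices and bias vectors."""
    i = sigmoid(P["W_i"] @ x_t + P["U_i"] @ h_prev + P["b_i"])  # Eq. (3)
    f = sigmoid(P["W_f"] @ x_t + P["U_f"] @ h_prev + P["b_f"])  # Eq. (4)
    o = sigmoid(P["W_o"] @ x_t + P["U_o"] @ h_prev + P["b_o"])  # Eq. (5)
    u = np.tanh(P["W_u"] @ x_t + P["U_u"] @ h_prev + P["b_u"])  # Eq. (6)
    c = i * u + f * c_prev                                      # Eq. (7)
    h = o * np.tanh(c)                                          # Eq. (8)
    return h, c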
Stacked LSTM
By arranging several LSTM cells back to back (Figure 2), the hidden layer h_t of each cell is used as input to the subsequent layer at the same time step (Graves et al., 2013; Sutskever et al., 2014). The main reason for stacking LSTM cells is to capture longer dependencies between terms in the input chain of words.
In this study, we used stacked LSTMs with pre-trained word embeddings. Word embeddings are distributional representations of words that are used to mitigate the data sparsity problem (Bengio et al., 2003). We trained 300-dimensional word embeddings, setting both the window and the min-count to 5.
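A PyTorch sketch of the overall architecture is given below. This is not the authors' implementation (they built on the packages of Lei et al. (2015) and Barzilay et al. (2016)); the default values merely mirror the tuned settings reported in Section 5 (150-dimensional embeddings, 10 layers, last-pooling, no dropout), and the logits are meant to be trained with a cross-entropy loss, which applies the softmax of Eq. (2):

import torch.nn as nn

class DeepLSTMClassifier(nn.Module):
    def __init__(self, embeddings, hidden_size=150, num_layers=10, num_classes=42):
        super().__init__()
        # embeddings: a FloatTensor of pre-trained word vectors.
        self.embed = nn.Embedding.from_pretrained(embeddings)
        self.lstm = nn.LSTM(embeddings.size(1), hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, num_classes)

    def forward(self, token_ids):        # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h[:, -1, :])     # last-pooling, then linear layer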
Datasets
Since our study focuses on classifying DAs in open-domain conversations, we chose to evaluate our model on Switchboard (SwDA) (Jurafsky et al., 1997) and the five-class version of MRDA (Ang et al., 2005). We used the list of files provided by Lee and Dernoncourt (2016) for creating the training, test, and development sets from the MRDA datasets.
Experimental Settings
We used the SwDA dataset to tune all hyperparameters, including dropout, decay rate, word embeddings, and the number of LSTM layers. All conversations in the training set were preprocessed, and a randomized selection of one-third of them was used as a development set to allow the LSTM parameters to be trained over a reasonable number of epochs. We tuned one parameter value at a time and measured the accuracy on the development set, stopping when the accuracy on the development set did not change for 20 epochs. We used the NN packages provided by Lei et al. (2015) and Barzilay et al. (2016).
Word Embeddings
We tuned the word embedding parameters (method, corpus, and dimensionality) while holding the other parameters constant (dropout = 0.5, decay rate = 0.5 and layer size = 2). Specifically, we tested Word2vec, using the Gensim Word2vec package (Řehůřek and Sojka, 2010), and pre-trained Glove word embeddings (Pennington et al., 2014). Word2vec embeddings were learned from Google News and, separately, from Wikipedia¹. The Glove embeddings were pre-trained on the 840 billion token Common Crawl corpus.

Table 1: Accuracy using different word embedding techniques, corpora and vector dimensions.

Table 1 illustrates that the best results were consistently achieved by embeddings with 150 dimensions, and of those, Word2vec trained on Wikipedia had the best accuracy. Hence, these settings were used throughout the remainder of the experiments.
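Training such embeddings with Gensim might look as follows; sentences is a placeholder for the tokenized Wikipedia corpus, and note that Gensim 3.x calls the dimensionality argument size while Gensim 4.x renamed it to vector_size:

from gensim.models import Word2Vec

def train_embeddings(sentences, dim=150):
    """Train Word2vec with the settings from the text: window and
    min-count both equal to 5. sentences must be an iterable of
    token lists, e.g. a tokenized Wikipedia dump."""
    return Word2Vec(sentences, size=dim, window=5, min_count=5)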
Decay Rate
LSTM uses standard backpropagation to adjust the network connection weights (see Eq. 9), where E is the error and w_{ij} is the weight between two nodes i and j.

w_{ij} \leftarrow w_{ij} - \eta \frac{\partial E}{\partial w_{ij}}    (9)

where \eta is the learning rate. To avoid overfitting, a regularization factor is added to Eq. 9 to penalize large changes in w_{ij}:

w_{ij} \leftarrow w_{ij} - \eta \frac{\partial E}{\partial w_{ij}} - \eta \lambda w_{ij}    (10)

The term -\eta \lambda w_{ij} is the regularization factor, and \lambda is the decay factor that causes w_{ij} to decay in proportion to its previous value. We found that changing \eta does not impact the accuracy, so we set \eta = 1e-3 and varied \lambda to find the best fit for the data (Table 2). As can be seen from Table 2, the positive trend of increasing accuracy breaks down after \lambda = 0.8. Therefore, we set \lambda = 0.7 in our experiments.

Table 2: The impact of changing λ on accuracy.
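Eq. (10) corresponds to the following update rule, shown with the selected values as defaults:

def sgd_weight_decay_step(w, grad, eta=1e-3, lam=0.7):
    """Eq. (10): gradient step plus weight decay; eta and lam default
    to the values selected above."""
    return w - eta * grad - eta * lam * w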
Dropout
Most recent studies that exploit deep learning approaches use the dropout technique (Hinton et al., 2012). Dropout is a regularization technique that prevents the network from overfitting by discarding some weights: in each training cycle, co-adapted neurons are broken up by randomly assigning zero to their weights. Dropout methods were originally introduced for feed-forward and convolutional neural networks but have recently been applied pervasively to the input embedding layer of recurrent networks, including LSTMs (Zaremba et al., 2014; Pachitariu and Sahani, 2013; Bayer et al., 2013). Bayer et al. (2013) report that standard dropout does not work effectively with RNNs due to noise magnification in the recurrent process, which results in diminished learning. Since standard dropout is proven not to work effectively for RNNs, we apply the dropout technique proposed by Zaremba et al. (2014) for regularizing RNNs, which is used by most studies in the literature employing LSTM models (Lei et al., 2015; Barzilay et al., 2016; Jaech et al., 2016; Swayamdipta et al., 2016; Lu et al., 2016). Zaremba et al. (2014) report that their approach reduces overfitting on a variety of tasks, including language modeling, speech recognition, image caption generation, and machine translation. We experimented with dropout probability settings in the range between 0.0 and 0.5.

Table 3: Impact of changing dropout on accuracy.

As can be seen in Table 3, any dropout at all hurt the accuracy. Hence, the value was set to 0.0; dropout was not used in later tuning or in the final model.
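For reference, Zaremba-style dropout acts only on the non-recurrent connections between stacked layers, which is what the dropout argument of PyTorch's nn.LSTM implements; in the final configuration above it would simply be 0.0 (illustrative snippet, not the authors' code):

import torch.nn as nn

# Dropout is applied to the outputs of every LSTM layer except the
# last, i.e. between layers, never on the recurrent connections.
lstm = nn.LSTM(input_size=150, hidden_size=150,
               num_layers=10, dropout=0.0, batch_first=True)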
Number of LSTM Layers
Finally, we tuned the number of layers. With only two layers, the model cannot connect relevant tokens that are distant from each other. Conversely, with too many LSTM layers, the model is prone to overfitting. We tested values in the range of 2 to 15. Table 4 illustrates the performance of these settings on the development set: the accuracy increases up to 10 LSTM layers before dropping significantly at 15.
Other Parameters
In addition to the aforementioned parameters, we investigated the impact of changing the L2 regularization, pooling, and activation function, and finally set them to 1e-5, last-pooling, and tanh, respectively. These settings were consistent with previous findings in the literature, and we did not observe significant improvements by changing these values.
Results and Discussion
In the previous sections, we found the best settings for our model, with which we obtained the best accuracy on the SwDA development set. In this section, we report our results on the SwDA and MRDA test sets.
Model | Accuracy (%)
Our RNN Model | 80.1
HMM (Stolcke et al., 2000) | 71.0
CNN (Lee and Dernoncourt, 2016) | 73.1
RCNN (Kalchbrenner and Blunsom, 2013) | 73.9
DRLM-joint training | 74.0
DRLM-conditional training | 77.0
Tf-idf (baseline) | 47.3
Inter-annotator agreement | 84.0

Table 5: SwDA dialogue act tagging accuracies.

Table 5 shows the results achieved by our model in comparison with previous work. As a baseline, we consider the accuracy obtained from a Naive Bayes classifier using tf-idf bigrams as features (Naive Bayes outperformed other classifiers, including SVM and Random Forest). Our model improved over the state-of-the-art methods and the baseline by 3.11% and 32.85%, respectively.
We also applied our model to classify dialogue acts in the MRDA with 5 dialogue acts. To do so, we used the same settings as described above for classifying dialogue acts in SwDA (Table 5). Table 6 shows our results on the MRDA corpus.
Model | Accuracy (%)
Our RNN Model | 86.8
CNN (Lee and Dernoncourt, 2016) | 84.6
Graphical Model (Ji and Bilmes, 2006) | 81.3
Naive Bayes (Lendvai and Geertzen, 2007) | 82.0
Tf-idf (baseline) | 74.6

Table 6: MRDA dialogue act tagging accuracies.
We calculate the baseline as before, using tf-idf bigram features. Here the Random Forest classifier achieved the best result in comparison to other classifiers such as Naive Bayes and SVM. Our results in Table 6 show that our model outperformed the state-of-the-art method by 2.2%. It should be emphasized that our model achieved this result without being tuned on an MRDA development set.
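The tf-idf bigram baseline can be reproduced along the following lines with scikit-learn; the variable names are placeholders, and the classifier slot would hold MultinomialNB for SwDA or RandomForestClassifier for MRDA:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# tf-idf over word bigrams, followed by a simple classifier.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(2, 2)), MultinomialNB())
# baseline.fit(train_utterances, train_labels)
# print(baseline.score(test_utterances, test_labels))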
Conclusion
In this study, we used a deep recurrent neural network for classifying dialogue acts. We showed that our model improved over the state of the art in classifying dialogue acts in open-domain conversational text.
We ran several experiments to determine the effect of each hyperparameter setting on the final results. We found that dropout regularization should be applied to LSTM-based structures cautiously, even with LSTM-adapted dropout methods that have been shown to help on some datasets, to ensure that it does not hurt the accuracy of the system.
Figure 1: RNN structure for creating a vector-based representation of an utterance from its words.

Figure 2: LSTM cell structure and its respective parameters (http://colah.github.io).
• SwDA: The Switchboard corpus (Godfrey et al., 1992) contains 1,155 five-minute, spontaneous, open-domain dialogues. Jurafsky et al. (1997) revised and collapsed the original DA tags into 42 DAs, which we use to evaluate our model. SwDA has 19 conversations in its test set.
• MRDA: The ICSI Meeting Recorder Dialogue Act corpus was annotated with the DAMSL tagset. This corpus is comprised of recorded multi-party meeting conversations. The MRDA contains 75 one-hour dialogues. There are several variations of the MRDA corpus, but MRDA with 5 tags is commonly used in the literature.
Table 4: Impact of LSTM layers on accuracy.

No. of layers | Accuracy (%)
2  | 73.29
5  | 73.61
10 | 73.92
15 | 72.90
¹ https://dumps.wikimedia.org/enwiki/20160421
Acknowledgements

This research is partially supported by grant IIS-1262860 to UNT from the National Science Foundation.
Jeremy Ang, Yang Liu, and Elizabeth Shriberg. 2005. Automatic dialog act segmentation and classification in multiparty meetings. In ICASSP (1), pages 1061-1064.

John Langshaw Austin. 1975. How to Do Things with Words. Oxford University Press.

Tao Lei, Hrishikesh Joshi, Regina Barzilay, Tommi Jaakkola, Katerina Tymoshenko, Alessandro Moschitti, and Lluís Màrquez. 2016. Semi-supervised question retrieval with gated convolutions. In NAACL.

Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen, Sebastian Urban, and Patrick van der Smagt. 2013. On fast dropout and its applicability to recurrent networks. arXiv preprint arXiv:1311.0701.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137-1155.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537.

Rajdip Dhillon, Sonali Bhagat, Hannah Carvey, and Elizabeth Shriberg. 2004. Meeting recorder project: Dialog act labeling guide. Technical report, DTIC Document.

John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, 1992. ICASSP-92., 1992 IEEE International Conference on, volume 1, pages 517-520. IEEE.

Sergio Grau, Emilio Sanchis, Maria Jose Castro, and David Vilar. 2004. Dialogue act classification using a bayesian approach. In 9th Conference Speech and Computer.

Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional lstm. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 273-278. IEEE.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.

Sepp Hochreiter. 1991. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, page 91.

Edward Ivanovic. 2005. Dialogue act tagging for instant messaging chat sessions. In Proceedings of the ACL Student Research Workshop, pages 79-84. Association for Computational Linguistics.

Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. arXiv preprint arXiv:1604.00117.

Susanne Jekat, Alexandra Klein, Elisabeth Maier, Ilona Maleck, Marion Mast, and J. Joachim Quantz. 1995. Dialogue acts in VERBMOBIL. Citeseer.

Gang Ji and Jeff Bilmes. 2006. Backoff model training using partially observed data: Application to dialog act tagging. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 280-287. Association for Computational Linguistics.

Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse relation language models. arXiv preprint arXiv:1603.01913.

Daniel Jurafsky, Rebecca Bates, Noah Coccaro, Rachel Martin, Marie Meteer, Klaus Ries, Elizabeth Shriberg, Andreas Stolcke, Paul Taylor, Carol Van Ess-Dykema, et al. 1997. Automatic detection of discourse structure for speech recognition and understanding. In Automatic Speech Recognition and Understanding, 1997. Proceedings., 1997 IEEE Workshop on, pages 88-95. IEEE.

Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. arXiv preprint arXiv:1306.3584.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.

Pavel Král and Christophe Cerisara. 2012. Dialogue act recognition approaches. Computing and Informatics, 29(2):227-250.

Ji Young Lee and Franck Dernoncourt. 2016. Sequential short-text classification with recurrent and convolutional neural networks. arXiv preprint arXiv:1603.03827.

Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding cnns for text: non-linear, non-consecutive convolutions. arXiv preprint arXiv:1508.04112.

Piroska Lendvai and Jeroen Geertzen. 2007. Token-based chunking of turn-internal dialogue act sequences. In Proceedings of the 8th SIGDIAL Workshop on Discourse and Dialogue, pages 174-181.

Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, and Steve Renals. 2016. Segmental recurrent neural networks for end-to-end speech recognition. arXiv preprint arXiv:1603.00223.

Christopher D. Manning. 2016. Computational linguistics and deep learning. Computational Linguistics.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech, volume 2, page 3.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Nicole Novielli and Carlo Strapparava. 2013. The role of affect analysis in dialogue act identification. Affective Computing, IEEE Transactions on, 4(4):439-451.

Marius Pachitariu and Maneesh Sahani. 2013. Regularization and nonlinearities for neural language models: when are they needed? arXiv preprint arXiv:1301.5650.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318.

James W. Pennebaker, Roger J. Booth, and Martha E. Francis. 2007. Linguistic inquiry and word count: LIWC [computer software]. Austin, TX: liwc.net.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.

Suman Ravuri and Andreas Stolcke. 2015. Recurrent neural network and lstm models for lexical utterance classification. Proc. Interspeech, Dresden.

Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA. http://is.muni.cz/publication/884893/en.

Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1631-1642. Citeseer.

Andreas Stolcke, Noah Coccaro, Rebecca Bates, Paul Taylor, Carol Van Ess-Dykema, Klaus Ries, Elizabeth Shriberg, Daniel Jurafsky, Rachel Martin, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Greedy, joint syntactic-semantic parsing with stack lstms. arXiv preprint arXiv:1606.08954.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.

Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394. Association for Computational Linguistics.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329. |
218,973,700 | [] | Digital Language Infrastructures -Documenting Language Actors
May 2020
Verena Lyding verena.lyding@eurac.edu
Institute for Applied Linguistics
Eurac Research
BolzanoItaly
Alexander König
Institute for Applied Linguistics
Eurac Research
BolzanoItaly
CLARIN ERIC
the Netherlands
Monica Pretti pretti.monica@gmail.com
Institute for Applied Linguistics
Eurac Research
BolzanoItaly
Digital Language Infrastructures -Documenting Language Actors
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
Marseille, May 2020, page 3457. Keywords: local infrastructure, language actors, institution catalog
The major European language infrastructure initiatives like CLARIN (Hinrichs and Krauwer, 2014), DARIAH (Edmond et al., 2017) or Europeana (Europeana Foundation, 2015) have been built by focusing in the first place on institutions of larger scale, like specialized research departments and larger official units such as national libraries. However, besides these principal players, a large number of smaller language actors could also contribute to and benefit from language infrastructures. Especially since these smaller institutions, like local libraries, archives and publishers, often collect, manage and host language resources of particular value for their geographical and cultural region, it seems highly relevant to find ways of engaging and connecting them to existing European infrastructure initiatives. In this article, we first highlight the need for reaching out to smaller local language actors and discuss challenges related to this ambition. Then we present the first step in how this objective was approached within a local language infrastructure project, namely by means of a structured documentation of the local language actor landscape in South Tyrol. We describe how the documentation efforts were structured and organized, and what tool we have set up to distribute the collected data online, by adapting existing CLARIN solutions.
Introduction
Existing digital language infrastructures like CLARIN (Hinrichs and Krauwer, 2014), DARIAH (Edmond et al., 2017) or Europeana (Europeana Foundation, 2015) have been built by focusing in the first place on well-established and specialized research institutions and larger official units like national libraries, major European museums etc. This is sensible, as it allowed these established centers of expertise to gather and develop best practices and joint solutions to "create and maintain an infrastructure to support the sharing, use and sustainability of language data and tools" 1 . However, besides these principal players and some smaller actors, an even larger number of local language institutions could contribute to and benefit from language infrastructures. This is especially true at this point in time, when the foundations for creating language infrastructures are well under way. As these smaller institutions, like local libraries, archives and publishers, often collect, manage and host language resources of particular value for their geographical and cultural region, it seems particularly relevant to find ways of engaging and connecting them to existing European infrastructure initiatives. In this paper, we highlight the need to reach out to smaller language actors on a local scope and report on a concrete effort we have made to tackle this need within the context of the DI-ÖSS project 2 . The project aims at taking the first steps towards creating a local language infrastructure for South Tyrol, by bringing together relevant actors of various language institutions and organizations on the local level and by transferring and applying best practices and standards of European initiatives to the local context (see Section 2). 1 https://www.clarin.eu/content/clarin-in-a-nutshell 2 Digital Infrastructure for the Ecosystem of South Tyrolean Language Data and Services: http://www.eurac.edu/en/research/projects/Pages/projectdetail4262.aspx
In this paper, we focus on one specialized subtask of DI-ÖSS that is concerned with the systematic screening and documentation of the ecosystem of language actors in South Tyrol. This task strives to involve local target groups, that are oftentimes not linked to bigger infrastructures yet, by actively approaching them and building a concise documentation of their data, services and needs-in terms of workflows and target groups. More specifically, we describe the process of creating a concise documentation of the local ecosystem of language institutions while exploring ways of formalizing the collected information and making it available for consultation online. By doing so, we aim at facilitating information gain about local language actors: Who is around in the South Tyrolean language context? What are these actors doing with language data and which services are they offering? And, where can these actors be found and contacted? By providing free access to this data in a structured way (see Section 4.3), we aim at fostering collaboration opportunities among institutions.
The DI-ÖSS Project
The DI-ÖSS project, running from 2017 to 2020, approaches the overall challenge of taking the first steps toward growing a digital language infrastructure between various local language institutions by implementing prototypical use cases to exploit synergies between the institutions (Lyding et al., 2018). In fact, the project was initiated with the assumption that a digital language infrastructure could benefit any organization dealing with language data, paired with the observation that smaller local institutions are, however, still less involved. Given that a lot of knowledge and data about local language and cultural heritage is situated in smaller institutions on the local level, and given that these institutions are often less connected to bigger research initiatives, it seems relevant to find ways to actively approach and involve these smaller players.
In pursuing the project objectives we have encountered four main challenges, which are related to different characteristics particular to smaller local language actors:
1. Local actors are oftentimes not aware of bigger infrastructure initiatives.
2. Smaller actors often lack the required methodological knowledge, technical skills or human resources for addressing meta-tasks that go beyond the daily duties of their businesses.
3. Smaller local actors, which usually have little or no experience in infrastructure initiatives often encounter difficulties to anticipate the added value of their involvement.
4. The local language ecosystem and the characteristics of individual language institutions and organizations are not transparent and information about them is not openly accessible in a centralized place.
In the following section we will present an approach for addressing the fourth challenge, the need for the systematic documentation of language institutions.
Objectives and Overall Approach
As discussed in the previous section, gaining a comprehensive overview of the existing language actors and their role in the local context, i.e. a current-state depiction of each actor's data and data management practices within the local language ecosystem, is of fundamental importance to:
1. Understand the overall local language landscape, and 2. understand the current situation of the individual institutions and their related demands.
The second point is needed in order to be able to extrapolate from the selected use cases and to identify follow-up opportunities for a wide-reaching infrastructure on the local scope, while the first point makes it possible to gain an overview of the local situation and to identify individual actors. We therefore claim that digital language infrastructure initiatives should not only be concerned with documenting and linking language data and tools, but also with defining systematic procedures for the documentation of language actors, their organizational structure, functions, resources and needs. We also claim that a common and publicly accessible repository for the documentation of language actors should be set up, and we present our approach for tackling this task. Our approach, within the DI-ÖSS project, aims at mapping out relations and possible interactions between the chosen set of language institutions in order to realize a coherent infrastructural framework of multiple connections between them. It follows a bottom-up (vs. top-down) logic which, in a first step, turned to the language actors, learned about their situation and needs, and documented this situation, with the aim of actively involving them and responding to their needs in a second step. In the remainder of this paper, we will focus on the first step, the bottom-up informed documentation of language actors in South Tyrol.
Documenting Language Actors
The documentation process of the local language actors is approached in three steps:
1. Identifying the actors and establishing documentation categories, 2. collecting and organizing the data, and 3. formalizing and distributing them.
The following sections discuss the three working phases in further detail.
Identifying Actors and Documentation Categories
This primary stage of the documentation process is threefold: firstly, it consists of identifying relevant establishments in the local context, compiling a comprehensive overview of such actors, and categorizing them into self-contained yet interrelated clusters. Secondly, it designates the shortlist of documentation criteria for portraying the chosen institutions in adequate detail and, thirdly, it entails the actual selection of project-apt candidates.
In the first place, an enumeration comprising a total of around 200 establishments was drawn up. In line with the project objectives, the scope for selecting institutions was confined in two ways:
1. The geographical area -the Autonomous Province of Bolzano/Bozen in northern Italy, and 2. the type of institutions -organizations primarily dealing with language data and services.
The list included profile and contact information about each identified establishment as well as a seven-category clustering based on institution types (see Figure 1 for a percentage representation of the preliminary institution classification). The seven institution types are: archives, libraries, online media, catalogs, cultural institutes, publishing houses and journals. These classes were determined in a bottom-up manner, i.e. by observing and abstracting which organizations showed similarities to or shared common ground with one another given their predominant area(s) of interest and competence. While representing a valuable project output in itself 3 , such categorization made it possible to create a preliminary structure for the pool of collected institutions, thus generating a more systematic description and understanding of the inspected ecosystem of language actors. In the second place, when deciding which documentation criteria were required to ensure a project-relevant depiction of the language actors, two factors were combined:
1. The intent of integrating the data into an online repository for user-friendly consultation, and 2. its potential for extension to also accommodate more and different institutions in the future.
To this end, a paper questionnaire scrutinizing five main types of information about each institution was designed 4 , among them the institutions' target groups.
The reason why these key areas were considered as must-have descriptors is that they are the conceptual foundation and the structural pillars of the infrastructure: by analyzing and implementing them, recurrent patterns of interaction and interplay should emerge, be recognized and be exploited to both establish and achieve synergies amongst the chosen organizations (Lyding et al., 2018). This goal also guided the selection process for involving a small number of institutions and organizations in the project's information collection phase. While the setup of the documentation effort is designed to host an exhaustive description of South Tyrolean language actors in the mid to long term (see Section 6), the initial documentation effort described here aimed at describing a small sample of representative institutions to gain a first picture. Targeting an initial set of about ten institutions and organizations, the following criteria for selecting them were elaborated:
1. Quantitative and qualitative relevance in the territory,
2. category coverage - that is, at least one institution per type was selected with the aim of rendering an authentic, prototype-compatible cross-section of the chosen language ecosystem,
3. category descriptive completeness - for instance, for the type library, three different organizations with non-overlapping media/data domains were selected 5 in order to present diverse facets of the category library, and
4. multi-category ascription - that is, institutions were selected which pertained to more than one category at once 6 .

4 See Appendix A for the listing of all aspects addressed.
5 In this respect, a general, a technical and a specialized library were interviewed.
6 A rather frequent combination involved institutions belonging to the categories "library" and "catalog" at the same time.
This allowed identifying cross-category features, and thus a better understanding of the feasibility of describing complexity via a detailed documentation. Out of the around 200 institutions recorded in the first identification phase (see above), eleven institutions were selected and contacted one by one (see Section 4.2). All eleven institutions were interviewed based on the structured questionnaire (see Appendix A): they served as a prototypical reference scenario for fine-tuning the preparation stage and, as a result, for building a concrete database covering a varied set of language institutions.
Establishing Contacts and Collecting Data
The data collection phase comprised three sub-stages: first, establishing contact with the institutions; second, explaining the project's objectives if the considered institution proved interested; and finally, carrying out the interviews. First, the selected organizations were approached to see whether they were willing and/or able to participate in the project and, if so, we involved them first-hand. Next, they were informed about the main objectives of DI-ÖSS and of the documentation process, as well as of the procedure for collecting data. Possible questions from the side of the language actors were also clarified in this step. From a communicative standpoint, we encountered a challenge in explaining the overall purpose of the project to organizations which have little direct experience with infrastructures. The explanations we provided moved from a goal-oriented overview of the system into its component parts, thus aiming at demonstrating the synergetic opportunities inherent in a language infrastructure project. From a practical viewpoint, describing how data needed to be acquired allowed the institution's contact person to make an informed decision as to whether or not to take part in the documentation initiative. It also helped the contact persons to find their way of portraying the organization in light of the DI-ÖSS framework.
The actual data collection process was implemented through arranged interviews: they were conducted using the aforementioned questionnaire (see Section 4.1) as a code of practice. This guaranteed both content completeness - at least on a procedural level, since some questionnaire areas had to be filled out flexibly according to each institution's specifics - and data collection standardization in view of creating facets (see Section 4.3). Furthermore, to maximize the potential of the meetings held with each selected institution, the following modus operandi was adopted: on the one side, a series of 'pre-investigations' were made into the organization by consulting its web pages; this permitted gaining initial contextualized impressions and, as a consequence, asking targeted questions during the interview. On the other, the encounter itself was audio recorded so as to collect data as accurately as possible. Interviews were then formalized at a later stage. They were transcribed, and during the transcription process new abstract attributes of description were identified (see Section 4.3). In particular, the transcription implied extensively completing the designed questionnaire in continuous prose and, where possible, adding links to existing institutional websites.
Presenting and Distributing Data
To present this inventory of language actors to potential users in the best possible way, the transcribed interviews were transformed into a more concise format, resulting in a clear classification of the institutions. In this way a user can both easily identify the institutions they are looking for and, at the same time, explore the inventory for similar or related entries. After careful consideration, it was decided to use a faceted interface, as this provides a good way of approaching the collected language actor data. Within CLARIN, the Virtual Language Observatory (VLO) 7 , developed and maintained by CLARIN ERIC, provides the technical solution to a similar problem. It has to be said that the information collected within the CLARIN VLO and DI-ÖSS differs slightly in content - the CLARIN VLO focuses on language resources, whereas DI-ÖSS looks at language institutions as a whole, including information on their institutional structure, resources (media collections) and services offered. However, the use case both projects are working on is still relatively similar, in that it consists of collections of data which need to be presented in a compact and user-friendly way to make them accessible via the Internet. Apart from the VLO being well-maintained software used in an important European infrastructure, this technical choice has the additional advantage of allowing for a future follow-up project in which the collection data - once separated from the general institution data - can be integrated into the CLARIN VLO. This will be much easier if the data in DI-ÖSS have already been collected in a CLARIN-compatible way.

To display the language actor documentation in the DI-ÖSS VLO, the transcribed interviews first have to be "translated" into a more structured set of metadata, and the most relevant metadata fields have to be identified and turned into facets that users can then use to filter the institutions. Because of the exploratory nature of the project and the type of information that has been collected, the language actor documentation is very detailed and often shaped by the organization and work environment of the specific institution. However, some good candidates for facets did emerge from the data when we analyzed it specifically with this aim in mind. Overall we identified 14 relevant metadata fields with reasonable abstractions of their values, which we encoded into facets for the search (see Table 1). Examples of these facets are the 'institution type' 8 , the sort of language data an institution mainly hosts (e.g. 'genre': fiction vs. non-fiction) or the time range of the items in a collection (i.e. 'publication period'). In addition, a facet for filtering institutions by the services they offer was introduced (i.e. 'service for target group', such as library catalog). Figure 2 gives the detailed view of the information recorded for the Landesbibliothek Dr. Friedrich Tessmann. Figure 3 shows the entrance page of the Language Actor Repository listing all institutions which are currently recorded. Because of the severe information reduction that was necessary to transform the collected information into a format compatible with the VLO, it was decided that the full interviews should remain available and accessible, so that users can always obtain more detailed information on an institution after they have narrowed down the selection through a faceted search.

7 https://www.clarin.eu/content/virtual-language-observatory-vlo
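As a concrete illustration of how such facets can drive filtering, the following minimal Python sketch applies facet selections to a small list of institution records. The facet names follow those discussed above ('institution type', 'service for target group'), but the records themselves are invented for illustration and are not actual repository data.

```python
# Minimal sketch of faceted filtering over institution records.
# The records and their values are invented examples, not repository data.
institutions = [
    {"name": "Landesbibliothek Dr. Friedrich Tessmann",
     "institution type": "library",
     "service for target group": "library catalog"},
    {"name": "Example Archive",  # hypothetical entry
     "institution type": "archive",
     "service for target group": "reading room"},
]

def facet_filter(records, selections):
    """Keep only records matching every selected facet value."""
    return [r for r in records
            if all(r.get(facet) == value for facet, value in selections.items())]

# Selecting the 'institution type' facet value 'library':
print(facet_filter(institutions, {"institution type": "library"}))
```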
On the more technical side, some preparatory work and editing of the VLO setup was required to make it work with our type of data. First, the metadata profile had to be formalized into a CMDI profile (Broeder et al., 2012) within the CLARIN Component Registry 9 , so the VLO could process the data. As we consider our efforts not necessarily a part of CLARIN, we have decided to keep the profile used in this project private for now. Then the VLO configuration had to be edited to support the facets selected for our project, and at the same time superfluous standard VLO facets were removed. The VLO software is provided in a Docker Compose setup 10 that could be installed without much additional work, though it still needed to be slightly adapted for use in this project. Apart from the facet configuration, the styling needed to be adjusted to reflect the project environment and make it more attractive for the envisioned target audience. Finally, some technical adaptations were needed to make the existing Docker Compose setup work on the Kubernetes cluster running at our institute. The DI-ÖSS Language Actor Repository is available online to the public at https://kommul.eurac.edu/sprachinstitutionen/.
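Since CMDI profiles are XML-based, the following Python sketch shows how a simplified, CMDI-like record for one institution could be serialized with the standard library. The element names are illustrative stand-ins only; the project's actual CMDI profile is, as noted above, kept private.

```python
import xml.etree.ElementTree as ET

# Build a simplified, CMDI-like metadata record for one institution.
# Element names are illustrative; they do not reproduce the private profile.
record = ET.Element("LanguageActor")
ET.SubElement(record, "Name").text = "Landesbibliothek Dr. Friedrich Tessmann"
ET.SubElement(record, "InstitutionType").text = "library"
ET.SubElement(record, "ServiceForTargetGroup").text = "library catalog"

print(ET.tostring(record, encoding="unicode"))
```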
Figure 2: Repository entry for library Tessmann
Implicated Infrastructure Needs
Considering the experiences in this project and the reasons why it was set up, there emerges a vision for a larger-scale version of this kind of language actor documentation. Ideally this approach would be copied and transferred into other communities, documenting the actors in these environments in a similar way. As there is only one small pilot study so far, we cannot be certain of possible unexpected obstacles, but if data collected elsewhere is comparable to that found in the DI-ÖSS project, it should be possible to aggregate this data on a higher level, and CLARIN could set up a Language Actor Catalog as a companion to the Virtual Language Observatory. As described above, there were certain difficulties with adapting the VLO software, and especially its facets, to the data collected about the language institutions. This suggests that the software might need some adaptations, or that a different but similar software should be used instead. It is expected that setting up such a CLARIN Language Actors Registry (CLAR) will help smaller local institutions to more easily interconnect with others that are facing the same problems and could learn from each other in solving them. It could also help in finding institutions that face complementary problems, that is, where one institution has the solution to the other's problems and vice versa. Finally, having this repository at the European level means that these interactions and synergies can happen not only on a local level, but also across different countries. The same solution that works for a small historical library in South Tyrol might also work for a similar library in Catalonia, for example. Additionally, by having this envisioned repository integrated within CLARIN, possibly using the CMDI standard for recording the data, it becomes much easier to take just the information about the actual language data out of the Language Actor profiles and integrate it into the CLARIN VLO. The VLO could then, for each collection, link back to the Language Actor Registry; in fact, this link could also be added for existing collections in the VLO (provided the institution has been added to the CLAR), where there is already a metadata field for this information called Organisation.
Summary and Future Work
In this paper, we have reported on a first attempt to create a comprehensive documentation of language actors in South Tyrol while raising awareness of the topic. In order to foster the growth and wide adoption of language research infrastructures, we claim that not only language resources and tools, but also actors in language-related domains need to be documented. Within the DI-ÖSS initiative for South Tyrol, eleven institutions have been contacted and interviewed in detail in this initial phase. The interviews were fully transcribed, and information related to the selected facets of key information (see Section 4.3) was extracted and imported into the VLO. In future work, we envision extending the online documentation by populating it with information on the entire list of recorded language actors in South Tyrol (see Section 4.1), by asking them to fill in short questionnaires covering only the key information encoded in the online documentation. Recording the details of language actors allows both understanding their aims and needs and concretely mapping out the language ecosystem on a general/global and local scope. To attain this goal, we suggest creating ways to grant access to data about language actors on a broader level and exploring implications for the technical pre-conditions as discussed above.
Bibliographical References
Broeder, D., Windhouwer, M., Van Uytvanck, D., Goosen, T., and Trippel, T. (2012). CMDI: A component metadata infrastructure. In Describing LRs with Metadata: Towards Flexibility and Interoperability in the Documentation of LR, Workshop Programme, volume 1.
Figure 1: Distribution of institution types in preliminary classification
Edmond, J., Fischer, F., Mertens, M., and Romary, L. (2017). The DARIAH ERIC: Redefining research infrastructure for the arts and humanities in the digital age. ERCIM News, (111).
Europeana Foundation. (2015). Transforming the world with culture: Next steps on increasing the use of digital cultural heritage in research, education, tourism and the creative industries. Technical report, Europeana Foundation, September.
Hinrichs, E. and Krauwer, S. (2014). The CLARIN research infrastructure: Resources and tools for eHumanities scholars. In Nicoletta Calzolari (Conference Chair), et al., editors, Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland, May. European Language Resources Association (ELRA).
Lyding, V., König, A., Gorgaini, E., Nicolas, L., and Pretti, M. (2018). DI-ÖSS - Building a digital infrastructure in South Tyrol. In Inguna Skadiņa, editor, Selected Papers from the CLARIN Annual Conference 2018, Pisa, 8-10 October 2018. Linköping University Electronic Press. In press.
3 This exhaustive but concise listing (in terms of information for each institution) of local institutions is kept as part of the project documentation alongside the more detailed reports on institutions.
8 See Section 4.1 and Appendix A, questions related to background information about the institution.
9 https://catalog.clarin.eu/ds/ComponentRegistry/
10 https://gitlab.com/CLARIN-ERIC/compose_vlo
A. Questionnaire
The questionnaire collected the following types of information about South Tyrolean language institutions:
||
17,827,598 | Challenges in Developing a Rule based Urdu Stemmer | Urdu language raises several challenges to Natural Language Processing (NLP) largely due to its rich morphology. In this language, morphological processing becomes particularly important for Information Retrieval (IR). The core tool of IR is a Stemmer which reduces a word to its stem form. Due to the diverse nature of Urdu, developing stemmer is a challenging task. In Urdu, there are large numbers of variant forms (derivational and inflectional forms) for a single word form. The aim of this paper is to present issues pertaining to the development of Urdu stemmer (rule based stemmer). | [
16238434
] | Challenges in Developing a Rule based Urdu Stemmer
2011. November 8, 2011
Sajjad Ahmad Khan sajjadkhan25@hotmail.com
Department of Computer Science
COMSATS Institute of Information Technology
AbbottabadPakistan
Waqas Anwar
Department of Computer Science
COMSATS Institute of Information Technology
AbbottabadPakistan
Usama Ijaz Bajwa
Department of Computer Science
COMSATS Institute of Information Technology
AbbottabadPakistan
Challenges in Developing a Rule based Urdu Stemmer
Proceedings of the 2nd Workshop on South and Southeast Asian Natural Language Processing
the 2nd Workshop on South and Southeast Asian Natural Language Processing, Chiang Mai, Thailand, November 8, 2011
Urdu language raises several challenges to Natural Language Processing (NLP) largely due to its rich morphology. In this language, morphological processing becomes particularly important for Information Retrieval (IR). The core tool of IR is a Stemmer which reduces a word to its stem form. Due to the diverse nature of Urdu, developing stemmer is a challenging task. In Urdu, there are large numbers of variant forms (derivational and inflectional forms) for a single word form. The aim of this paper is to present issues pertaining to the development of Urdu stemmer (rule based stemmer).
Introduction
Urdu is an Indo-Aryan language. It is the national language of Pakistan and is one of the twenty-three official languages of India. It is written in Perso-Arabic script. The Urdu vocabulary draws on several languages, including Arabic, English, Turkish, Sanskrit and Farsi (Persian). Urdu's script is right-to-left, and the form of a word's character is context sensitive, meaning that a character takes a different shape depending on its position in the word (beginning, middle or end) (Waqas et al., 2006). In Urdu, morphological processing becomes particularly important for Information Retrieval (IR). An information retrieval system is used to ensure easy access to stored information; it deals with the storage, representation and organization of information objects. The modules of an IR system consist of a group of information objects, a group of requests and a method to decide which information items most likely meet the requirements of the requests. Within IR, the stored data that receives search calls usually corresponds to lists of identifiers known as key terms, or keywords. One attempt to make search engines more efficient in information retrieval is the use of a stemmer. A stem is the base or root form of a word, and a stemmer is an algorithm that reduces words to their stem/root form, e.g. tested, testing, pretest and tester have the stem "test". Similarly, an Urdu stemmer should stem the words ﮐﻢ ﻋﻘﻞ (senseless), ﻋﻘﻞ ﻣﻨﺪ (sensible) and ﻣﻨﺪﯼ ﻋﻘﻞ (sagacity) to the Urdu stem word ﻋﻘﻞ (sense). Stemming is part of the complex process of extracting words from text and turning them into index terms in an IR system. Indexing is the process of selecting keywords for representing a document. The smallest units of a word which cannot be decomposed further into smaller meaningful units are called morphemes. 1 They are of two kinds: free morphemes and bound morphemes. Morphemes which can occur alone are called free morphemes, whereas bound morphemes occur only in combination with another morpheme. For instance, "flower" is a free morpheme, while "s" is an example of a bound morpheme.
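To make the role of stemming in indexing concrete, the following minimal Python sketch conflates word variants under a single stem before producing index terms. The stem table is a toy example, not a real stemmer.

```python
# Toy stem table conflating variants under one index term.
stems = {"tested": "test", "testing": "test", "pretest": "test", "tester": "test"}

def index_terms(tokens):
    """Map each token to its stem, falling back to the token itself."""
    return [stems.get(t, t) for t in tokens]

print(index_terms(["testing", "the", "tester"]))  # ['test', 'the', 'test']
```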
The study of the internal structure of words is called morphology. 2 Morphemes that derive new words from existing ones are called derivational morphemes, e.g. honour, honourable, honourably. Examples in Urdu: the words ﭼﺎﮨﺖ (love), ﭼﺎﮨﺘﺎ (to love) and ﭼﮩﻴﺘﺎ (lovely) are derivatives of the word ﭼﺎﮦ (love). Morphemes that produce the grammatical forms of a word are called inflectional morphemes, e.g. boys. Examples in Urdu: the words ﺗﺮ ﺳﺨﺖ (harder) and ﺗﺮﻳﻦ ﺳﺨﺖ (hardest) are inflected forms of the word ﺳﺨﺖ (hard).
The stemmer is also applicable to other natural language processing applications needing morphological analysis for example spell checkers, word frequency count studies, word parsing etc. The rest of the paper is organized as follows: In section 2, different rule based stemming algorithms are discussed. Section 3 gives an introduction regarding orthographic features. In section 4, several issues pertaining to Urdu stemmer are discussed in detail. Conclusion of the study and the future work is discussed in section 5.
Stemming Algorithms
There are four kinds of stemming approaches (Frakes and Baeza-Yates, 1992): table lookup, affix removal, successor variety and n-grams. The table lookup method is also known as the brute force method, where every word and its respective stem are stored in a table; the stemmer finds the stem of the input word in the respective stem table. This process is very fast, but it has severe disadvantages, namely the large memory space required for words and their stems and the difficulty of creating such tables, so this kind of stemming algorithm might not be practical. The affix removal stemmer eliminates affixes from words, leaving a stem. The successor variety stemmer is based on the determination of morpheme borders, i.e., it needs information from linguistics, and is more complex than the affix removal stemmer. The n-grams stemmer is based on the detection of bigrams and trigrams. Lovins (1968) published the first English stemmer, which used about 260 rules for stemming the English language. She suggested a stemmer consisting of two phases: the first stage removes the longest possible ending which matches one on a predefined suffix list, and spelling exceptions are covered in the second stage.
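Of these four approaches, the n-grams idea can be sketched compactly: it measures bigram overlap between word pairs (here via the Dice coefficient), so that high-overlap forms can be clustered under one stem. This is an illustrative sketch, not a reproduction of any published stemmer.

```python
def bigrams(word):
    """Set of character bigrams of a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(w1, w2):
    """Dice coefficient over character bigrams: 2|A∩B| / (|A| + |B|)."""
    a, b = bigrams(w1), bigrams(w2)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# High overlap suggests a shared stem; any cut-off would be heuristic.
print(dice("testing", "tested"))   # 0.6
print(dice("testing", "grammar"))  # 0.0
```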
Porter (1980) developed a stemmer based on the truncation of suffixes, using a list of suffixes together with restrictions/conditions for recognizing which suffix to detach, so that a valid stem is generated. The Porter stemmer performs the stemming process in five steps: inflectional suffixes are handled in the first step, derivational suffixes are handled through the next three steps, and the final step is the recoding step. Porter simplified Lovins's rules to about 60 rules.
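For reference, the Porter algorithm is available in off-the-shelf libraries; assuming the NLTK library is installed, the following lines show its behavior on a few inflected English forms.

```python
# Assumes NLTK is installed (pip install nltk).
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
for word in ["testing", "tested", "ponies", "relational"]:
    print(word, "->", stemmer.stem(word))
```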
Different stemmers have also been developed for the Arabic language. Khoja and Garside (1999) developed a superior root-based Arabic stemmer. This stemming algorithm truncates prefixes, suffixes and infixes and then uses pattern matching to extract the roots. The algorithm faces many problems, particularly with nouns. Thabet (2004) created a stemmer which operates on the classical Arabic of the Quran to produce stems. For each Surah, this stemmer generates a list of words; these words are checked against a stop word list, and if they do not appear in this list, the corresponding prefixes and suffixes are removed. Al-Shammari and Lin (2008) proposed the Educated Text Stemmer (ETS), a simple, dictionary-free and efficient stemmer that decreases stemming errors and requires less storage and time.
Bon was the first stemmer developed for the Persian language (Tashakori, Meybodi and Oroumchian, 2003). Bon is an iterative longest-matching stemmer: it truncates the longest possible morpheme from a word according to a set of rules, and this procedure is repeated until no more characters can be eliminated. Mokhtaripour and Jahanpour (2006) proposed a Farsi stemmer that works without a dictionary. This stemmer first removes the verb and noun suffixes from a word; after that it starts truncating prefixes from that word.
To date, only one stemmer, Assas-Band, has been developed for the Urdu language (Akram, Naseer and Hussain, 2009). This stemmer extracts the stem/root of Urdu words only, and not of borrowed words, i.e. words from Arabic, Persian and English. The algorithm removes the prefix and suffix from a word and returns the stem word. This stemmer does not handle words having infixes.
Orthographic Features of Urdu
According to Malik et al. (2008), the Urdu alphabet consists of 35 simple consonants, 15 aspirated consonants, 10 vowels, 15 diacritical marks, 10 digits and other symbols.
Consonants
Consonants are divided into two groups:
a. Aspirated Consonants
There are 15 aspirated consonants in the Urdu language. These consonants are represented by a combination of the simple consonant to be aspirated and a special letter called Heh Doachashmee (ھ), which marks the aspiration. The aspirated consonants are: بھ, پھ, تھ, ٹھ, جھ, چھ, دھ, ڈھ, کھ, گھ, رھ, ڑھ, مھ, نھ, لھ.
b. Non-Aspirated Consonants
The Urdu language consists of 35 non-aspirated consonant signs that represent 27 consonant sounds. Several letters are employed to represent the same sound in Urdu; for example, Sad (ص), Seen (س) and Seh (ث) all represent the sound [s].
Vowels
Urdu has ten vowels, seven of which have nasalized forms. Four long vowels are represented by Alef Madda (ﺁ), Alef (ا), Choti Yeh (ﯼ) and Vav (و), and three short vowels are represented by Arabic Kasra (Zer), Arabic Fatha (Zabar) and Arabic Damma (Pesh). In Urdu, vowel representation is context sensitive; for example, Choti Yeh (ﯼ) and Vav (و) can also be used as consonants (Malik et al., 2008).
Aerab Marks
The aerab marks are marks added to a letter to change the pronunciation of a word or to differentiate between similar words; they are also called diacritical marks or diacritics. 3 There are 15 such marks in Urdu (Malik et al., 2008). Aerab marks (Zabar, Zer, Pesh, Ulta Pesh, Do-Zabar, Do-Zer, Do-Pesh etc.) represent vowel sounds and are placed above or below an Urdu word. These marks are very rarely used by people when writing Urdu, yet changing the diacritic of a character in a word can entirely change its meaning. The marks therefore play a significant role in the correct pronunciation and in recognizing the meaning of a sentence, such as:
درخت پر انگور کی بیل ہے۔ (A vine is on the tree) and
بیل گھاس کھا رہا ہے۔ (The bull is eating grass)
In the first sentence, the word بیل means "a creeping plant" or a "vine", while in the second sentence it means a "bull". To remove the ambiguity between these two words, there should be a Zabar after Beh (ب) in the second sentence.
Special Characters
There are two special characters used in Urdu, which are discussed below: a.
Issues in developing an Urdu Stemmer
Morphologically rich language
Urdu is a morphologically rich language, producing a high number of derivational and inflectional forms for a single word form. There are 57 different forms that can be generated from a single Urdu word (Rizvi and Hussain, 2005). For example, some of the different forms of the Urdu word ﭘﮍه (read) are:
ﭘﮍهﻨﺎ،ﭘﮍهﺎ،ﭘﮍهﮯ،ﭘﮍهﻴﮟ،ﭘﮍهﯽ،ﭘﮍهﻨﯽ،ﭘﮍهﻮ،ﭘﮍهﻮں، ﭘﮍهﺎ،ﭘﮍهﺎﻧﺎ،ﭘﮍهﺎﺗﮯ،ﭘﮍهﺎﺗﺎ،ﭘﮍهﻮا،ﭘﮍهﻮاﺗﺎ،ﭘﮍهﻮں
Besides its own vocabulary, Urdu also contains a large number of Arabic, Persian, Hindi and English words. Urdu thus inherits characteristics of these languages too, and as a result stemming becomes a challenging task. We cannot achieve a good level of precision if the stemmer of a donor language is applied to Urdu words: an Arabic stemmer will only stem the Arabic words that are used in Urdu as borrowed words, a Persian stemmer will only stem borrowed Persian words, and so on. Using the traditional approach of modeling every form of a word as a unique word creates many problems for natural language processing applications, such as vocabulary growth, inflectional gaps, larger out-of-vocabulary rates and poor language model probability estimation. Relations among words in Urdu are expressed by inflecting nouns, postpositions and pronouns to mark case, number and gender; by inflecting verbs to reflect number, gender and person; and by inflecting adjectives to agree with the noun in number, gender and case. Thus, standard stemmers developed for English words are not practical for the Urdu language.
Engineering issues
Urdu is a bidirectional language and cannot be represented electronically in ASCII. Such languages are represented by a special character set called Unicode; the Arabic orthography Unicode standards are used to process Urdu. Unicode is not supported by all programming languages. Languages that support Unicode include C#, Python and Java. Some programming languages support Unicode, but their IDEs may not support it fully.
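As a small illustration of the Unicode handling involved, the following Python sketch inspects the code points of an Urdu word; Python processes strings as Unicode natively.

```python
import unicodedata

word = "عقل"  # Urdu word for "sense"
for ch in word:
    # Print each code point together with its official Unicode name.
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
```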
Diacritical Marks
Special attention should be given to the diacritical marks while developing an Urdu stemmer, because the stem of an Urdu word changes with the use of these marks. For example, ﻋﺎﻟﻢ is used in two senses: when Zabar is placed above the character ع and above ل, its meaning is people and its stem is ﻋﺎﻟﻢ (people); but when Zer is placed below ل, its meaning is scholar and its stem is ﻋﻠﻢ (knowledge).
Similarly, the word رﺳﻞ has two meanings: messengers, when Pesh is used on ر and س, with stem رﺳﻮل (messenger); and access, when Zabar is used on ر and س, with stem ارﺳﺎل (sending). Another example is the word ﺧﺎﺗﻢ, which has two meanings (the last / ring); the first has stem ﺧﺘﻢ (finish) and the second has stem ﺧﺎﺗﻢ (ring).
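One common preprocessing choice is to strip the aerab marks before lookup; the sketch below removes Arabic-script combining marks (Zabar, Zer, Pesh and related signs) using Python's Unicode database. As the examples above show, stripping diacritics can merge genuinely distinct words, so this is a design decision rather than an obvious win.

```python
import unicodedata

def strip_aerab(text):
    """Remove combining marks (Zabar, Zer, Pesh, etc.) from Arabic-script text."""
    return "".join(ch for ch in text if not unicodedata.combining(ch))

word = "عَالِم"  # with Zabar and Zer written explicitly
print(strip_aerab(word))  # -> عالم
```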
Compound Words
For word formation, compounding is one of the morphological procedures. The grouping of two already existing words is called a compound word (Payne, 2006); when two or more lexeme stems are merged together to produce another lexeme, it is called a compound word (Sproat, 1992). English examples are: firefighter, blackbird, water-hose, hardhat, rubber-hose and fire-hose. It is very difficult to classify compound words as single or multiple words. Durrani (2007) discussed three schemes of compound word formation in Urdu, i.e. AB, A-o-B and A-e-B.
a. AB formation
This scheme involves only the joining of two free morphemes, e.g. مرہم پٹی (bandaging), میاں بیوی (couple, literally husband wife), حال احوال (condition). The AB form of compounds is further classified into Dvanda, Tatpurusa, Karmadharaya and Divigu (Sabzwari, 2002).
b. A-o-B formation
This formation of Urdu compounds contains a linking morpheme "o", represented by the character "و", e.g. عجز و انکساری (soberness and humility), خط و کتابت (correspondence), امن و امان (law and order).
c. A-e-B formation
In this formation the constituent words are connected with the help of one of the enclitic short morphemes, zer-e-izafat or hamza-e-izafat. For example, صدر مملکت (president) is combined by the diacritical mark Zer below ر, called zer-e-izafat, while in جذبہء دل (heart's spirit) and خلفائے اسلام (Islamic caliphs) the diacritical mark Hamza (ء) is used as hamza-e-izafat. Sometimes reduplication also produces ambiguity as to whether a form should be treated as a single or a double word, e.g. ساتھ ساتھ، آہستہ آہستہ، جگہ جگہ (together, slowly, at every place). There should therefore be a rule for the identification of compound words, and these points should be considered while developing an Urdu stemmer.
Tokenization
Natural language processing applications require that input text be tokenized before further processing. English generally uses white space or punctuation marks to identify word boundaries. Urdu traditionally does not use the space character, but with the increasing use of computers it is now being used, both to obtain correct character shaping and to separate words. Example:
ﺻﺪرﻧﮯدورﺳﮯوزﯾﺮﮐﻮﺁوازدﯼ
(The President called away the Minister)
The above sentence contains eight words (tokens), but the computer will consider the whole sentence as a single word, because it generates tokens on the basis of space occurrence. Due to the non-joiner characters in the words (here ر، ے، و، ز), no space occurs between words, so the whole sentence is treated as a single word. During stemming, therefore, these non-joiner characters lead to wrongly generated tokens from the input text, and as a result the stemmer produces a wrong stem. The tokenization process should be error free, producing correct tokens before an Urdu stemmer is applied.
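A common workaround for missing spaces is dictionary-driven longest-match segmentation. The following greedy Python sketch, with a toy lexicon built from the example sentence above, illustrates the idea; a real segmenter would need a broad Urdu lexicon and back-off handling for out-of-vocabulary spans.

```python
# Toy lexicon covering only the example sentence; illustrative only.
lexicon = {"صدر", "نے", "دور", "سے", "وزیر", "کو", "آواز", "دی"}
MAX_LEN = max(len(w) for w in lexicon)

def segment(text):
    """Greedy longest-match segmentation over a space-less string."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + MAX_LEN), i, -1):
            if text[i:j] in lexicon:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

print(segment("صدرنےدورسےوزیرکوآوازدی"))  # eight tokens
```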
Affixes Removal
The word affix is used by linguists to express where exactly a bound morpheme is joined to a word; prefixes, suffixes and infixes are called affixes. Due to the use of affixes, a single word may have many variants, and removing these affixes (prefix and suffix) from a word yields a stem word, e.g. ﺑﺪﮔﻤﺎﻧﯽ (mispresumption): removing the Urdu prefix and suffix from this word produces the stem word ﮔﻤﺎن (presumption). Many stemmers (though none for Urdu) have been developed for stripping prefixes and suffixes from a word, but little work has been done on infix stripping. We cannot always obtain the stem of an Urdu word by only stripping prefixes and/or suffixes, e.g. اﻗﻮام (nations), ﻣﺴﺎﺟﺪ (mosques), ﻋﻠﻮم (knowledge). These words contain infixes, and a large number of such words are present in Urdu, so special attention should be given to Urdu words having infixes. After studying the morphology of Urdu words, we observed that if patterns are made for such words (having infixes), a correct stem can be obtained.
Exceptional Cases a. Exceptional words
The removal of affixes (prefixes and suffixes) from a word produces a stem word, but sometimes truncating these affixes leads to an erroneous stem, e.g. ﻧﺎدار. Here ﻧﺎ is a prefix; the stemmer eliminates it, producing دار, which is not a correct stem of the above word. This means that in some words the affix letters are part of the stem and should not be removed. Such words should be treated as exceptional cases. In Urdu there are many words that must be treated as exceptional cases; thus, for a stemmer, such word lists should be maintained in advance.
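The following minimal Python sketch combines prefix/suffix stripping with such an exception list: words on the list bypass stripping entirely. The affix inventory here is a small illustrative sample, not a complete Urdu affix list.

```python
# Toy affix inventory and exception list; illustrative only.
prefixes = ["بد", "نا", "کم"]
suffixes = ["مند", "ی"]
exceptions = {"نادار"}  # words whose 'affix' letters belong to the stem

def strip_affixes(word):
    """Strip at most one prefix and one suffix, unless the word is exceptional."""
    if word in exceptions:
        return word
    for p in prefixes:
        if word.startswith(p) and len(word) > len(p):
            word = word[len(p):]
            break
    for s in suffixes:
        if word.endswith(s) and len(word) > len(s):
            word = word[:-len(s)]
            break
    return word

print(strip_affixes("بدگمانی"))  # -> گمان
print(strip_affixes("نادار"))    # unchanged: exceptional case
```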
b. Urdu digits, Arithmetic Symbols and Punctuations
Urdu is read and written from right to left but when numbers are introduced, it is read and written from left to right.
حفصہ کی برتھ ڈے ٢ فروری ٢٠٠٩ ہے (Hafsa's birthday is 2nd February 2009)
The Urdu digits (٠-٩), arithmetic symbols (+, -, *, /) and punctuation marks (۔ ؟ ٫ ، ' " ؛ :) should be treated as exceptional cases while developing an Urdu stemmer.
Stem-word Dictionary
To check the accuracy of any stemmer, there should be a stem-word dictionary. After studying the relevant literature, we note that no stem dictionary is available for Urdu text. Therefore, the development of an Urdu stem dictionary is necessary for testing the accuracy of a stemmer on a large corpus.
Different Urdu words having same stem
In Urdu, there are many words that differ in meaning but share the same stem, e.g. ﺗﺎﺛﻴﺮ (characteristic) and ﺁﺛﺎر (signs): the meanings of these two words differ, but their stem is the same, i.e. اﺛﺮ. Similarly, the words ﻣﻠﻮﮎ (rulers) and ﻣﻼﻳﮏ (angels) are two different words whose stem has a single written form without diacritical marks, i.e. ﻣﻠﮏ; the word ﻣﻠﮏ can mean either ruler or angel. The words اﺻﻮل (principles) and اﺻﻠﻴﺖ (facts) also share the same stem, i.e. اﺻﻞ (principle/fact). Such words need attention while developing a stemmer for the Urdu language.
Code switching
Code switching, in linguistics, is the parallel use of more than one language during conversation. Code switching is common in Urdu, which accepts foreign words, especially from English, e.g. یہ کیمرہ borrowed ہے (This camera is borrowed). In this example the Urdu text runs from right to left, while the English word "borrowed" runs from left to right. The tokenization of the above sentence can be performed properly electronically, but an Urdu stemmer will not stem the foreign word "borrowed", which is an issue.
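Detecting such embedded foreign words is straightforward with a script test; the sketch below separates Latin-script tokens from Urdu tokens so that a stemmer can pass the foreign words through untouched.

```python
import re

LATIN = re.compile(r"[A-Za-z]")

def split_by_script(tokens):
    """Separate Latin-script (pass-through) tokens from Urdu tokens."""
    urdu, foreign = [], []
    for t in tokens:
        (foreign if LATIN.search(t) else urdu).append(t)
    return urdu, foreign

print(split_by_script(["یہ", "کیمرہ", "borrowed", "ہے"]))
```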
Conclusion and Future Work
A stemmer is the core tool of any IR system. In this paper we have discussed some rule-based English, Arabic, Persian and Urdu stemmers. Very little work has been done on Urdu stemming due to the language's complex and rich morphology. Besides its own vocabulary, Urdu is also influenced by the morphology of other languages such as Arabic, Persian, Hindi and English. We have pointed out several challenges pertaining to the development of an Urdu stemmer; these issues should be considered while developing a rule-based Urdu stemmer. After studying different stemmers developed for Arabic, Persian and Urdu, we intend to develop an efficient rule-based Urdu stemmer which will handle not only Urdu words with prefixes and suffixes but also those with infixes, using patterns for handling infixes. For preprocessing in the proposed Urdu stemmer, an Urdu stop word list will be maintained, and an Urdu stem-word dictionary will be prepared for evaluation purposes.
1 http://www.ielanguages.com/linguist.html
2 http://introling.ynada.com/session-6-types-of-morphemes
A. Mokhtaripour and S. Jahanpour. 2006. Introduction to a New Farsi Stemmer. CIKM'06, November 5-11, Arlington, Virginia, USA.
Durrani, N. 2007. Typology of Word and Automatic Word Segmentation in Urdu Text Corpus. National University of Computer and Emerging Sciences, Lahore, Pakistan.
Eiman Tamah Al-Shammari and Jessica Lin. October 30, 2008. Towards an Error-Free Arabic Stemming. iNEWS'08, Napa Valley, California, USA.
Frakes, W.B. and Baeza-Yates, R. 1992. Information Retrieval: Data Structures & Algorithms. Prentice Hall PTR, New Jersey.
Lovins, J.B. 1968. Development of a stemming algorithm. Mechanical Translation and Computational Linguistics, 11, pp. 22-31.
Javed, I. 1985. New Urdu Grammar. Advance Urdu Buru, New Delhi.
Malik, M. G. Abbas, Boitet, Christian, and Bhattcharyya, Pushpak. 2008. Hindi Urdu Machine Transliteration using Finite-state Transducers. Proceedings of COLING 2008, Manchester, UK.
Porter, M.F. 1980. An algorithm for suffix stripping. Program, 14(3), pp. 130-137.
Tashakori, M., Meybodi, M.R., and Oroumchian, F. 2003. Bon: The Persian stemmer. In Proc. 1st EurAsian Conf. on Information.
Payne, Thomas E. 2006. Exploring Language Structure: A Student's Guide. Cambridge University Press, Cambridge.
Akram, Q., Naseer, A., and Hussain, S. 6-7 August 2009. Assas-Band, an Affix-Exception-List Based Urdu Stemmer. Proceedings of the 7th Workshop on Asian Language Resources, pp. 40-47, Suntec, Singapore.
Rizvi, S. and Hussain, M. 2005. Analysis, Design and Implementation of Urdu Morphological Analyzer. Engineering Sciences and Technology, SCONEST 2005, Student Conference, pp. 1-7.
Sabzwari, S. 2002. Urdu Quwaid. Sang-e-Meel Publication, Lahore.
Khoja, S. and Garside, R. 1999. Stemming Arabic Text. Computing Department, Lancaster University, Lancaster, UK.
Sproat, R. 1992. Morphology and Computation. The MIT Press.
Thabet, N. 2004. Stemming the Qur'an. In Proceedings of the Workshop on Computational Approaches to Arabic Script-based Languages.
Waqas, A., Xuan, W., Lu, Li, and Xiao-Long, W. 2006. A Survey of Automatic Urdu Language Processing. International Conference on Machine Learning and Cybernetics, pp. 4489-4494.
3,002,688 | Usability Considerations for a Cellular-based Text Translator | This paper describes a cellular-telephone-based text-to-text translation system developed at Transclick, Inc. The application translates messages bidirectionally in English, French, German, Italian, Spanish and Portuguese. This paper describes design features uniquely suited to hand-held-device based translation systems. In particular, we discuss some of the usability conditions unique to this type of application and present strategies for overcoming usability obstacles encountered in the design phase of the product. | [] | Usability Considerations for a Cellular-based Text Translator
Leslie Barrett lbarrett29@hotmail.com
Transclick, Inc., New York, NY 10012
Robert Levin robert.levin@transclick.com
Transclick, Inc., New York, NY 10012
Usability Considerations for a Cellular-based Text Translator
This paper describes a cellular-telephone-based text-to-text translation system developed at Transclick, Inc. The application translates messages bidirectionally in English, French, German, Italian, Spanish and Portuguese. This paper describes design features uniquely suited to hand-held-device-based translation systems. In particular, we discuss some of the usability conditions unique to this type of application and present strategies for overcoming usability obstacles encountered in the design phase of the product.
Basic Application Functionalities
Transclick, Inc. has recently developed a text-based translator designed for implementation on hand-held devices. While such a system is put through many rounds of testing for translation quality, it must also be tested to assess its function as a portable communication device. The idea of evaluating a translation system's deployment environment is a topic that has received little attention, mostly because of its novelty. The focus of this paper will be the usability considerations involved in designing a translation device for use on portable hand-held systems.
In the initial design phase of the Transclick mobile-device-based text-translation system 1 , three main usability issues were of concern. These were manipulation of the keypad for text entry, screen scrolling for long messages, and lag time in translation. Because this application is used for translation, and users are expected to type extensively into the text window on the interface, it represented a unique usability challenge for mobile-device interface design. Not only was scrolling a concern, since users would want to check the entire input text before translating, but actual lag time was a concern as well, since translation was remote-server-based, not local.
The Transclick cellular application was created in BREW (Binary Runtime Environment for Windows) and loaded onto a Motorola Web-Enabled cellular phone with Verizon cellular service. The Transclick translation application appears as a screen icon, which, once activated, allows translation in three modalities: Basic Text Translation, E-mail Translation and SMS translation. We found, after usability testing, that the main usability problem was navigation among various sub-applications, but that issues involving scrolling and keypad manipulation were minimal.
Transclick used three usability testers 2 who were given the Transclick User Manual while using the device. Testers translated text messages, email, and SMS in the languages of their choice. The sub-application taxonomy for the Transclick mobile translator is shown in Figure 1. Due to screen size, the Main Menu is scrollable, as is any submenu (e.g. the Buddy List) containing 5 or more items. The BREW/Qualcomm developer's guide, based on a study by Norman (1991), suggests limiting levels to 2 or 3. This guideline was followed in general, with the exception of the Translation action, which takes 4 steps to complete (if the translation is being transmitted via email or SMS). Since other studies have also shown a user preference for pagination over scrolling (Tscheligi et al. 2002), we were careful to design a text window at least adequate for completing short paragraphs. 3 This, of course, compromises utilization of maximum font size, but testers did not record font size as a usability problem. The current font size for most items approximates 8-9 point font.
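To check such depth guidelines mechanically, the sub-application taxonomy can be modeled as a nested structure whose depth is computed recursively. The tree below is a hypothetical simplification of the Transclick menu, not its exact contents.

```python
# Hypothetical simplification of the sub-application taxonomy.
menu = {
    "Text Translation": {"Select Language Pair": {"Enter Text": {}}},
    "E-mail Translation": {"Buddy List": {}},
    "SMS Translation": {"Buddy List": {}},
}

def depth(node):
    """Count the nested menu levels contained in this (sub)menu."""
    if not node:
        return 0
    return 1 + max(depth(child) for child in node.values())

print(depth(menu))  # levels a user must traverse below the Main Menu
```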
An example of the text-translation interface, showing the text-input window is in Figure 2.
Figure 2. Text Translation Interface
Testers also reported little trouble with keypad manipulation when following the User Manual closely.
Hardware vs. Software Functionalities
The Motorola Web-enabled phone has four main functionalities programmed to a circular "mouse" located above the keypad. Figure 3 shows the phone with Mouse functions labeled. One of the most frequently used actions, scrolling, is found on the "mouse". Task selection, equally important, is found on the uppermost key, simply called the "select button". Pressing this key will select an activity highlighted by the scrolling action. Testers found the mouse easy to manipulate, and physically easier to manipulate than the keypad. Some testers expressed a preference for both scrolling and selecting on the mouse.
One issue related to scrolling that some testers found confusing was the scrolling directionality. Drop-down menus in the case of language-pair selection were scrolled by right-click, whereas other drop-down menus were scrolled by down-click. A unidirectional scrolling function is probably preferable, if possible, in this type of multitask hand-held application. Details of the scrolling functionalities are shown in Figure 4. There was some additional concern among all three testers that the green "start" button, a feature provided by most cellular phone hardware makers, was a more natural choice as the "select" button. Testers did not mention the same issue confusing the red "end" button with the "back" button on the top right, however.
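Such hardware-to-function mappings are naturally expressed as a dispatch table; the following sketch models the bindings described above, with hypothetical event and action names.

```python
# Hypothetical event names; actions stand in for real screen updates.
def scroll_down():
    print("scrolling down")

def select_item():
    print("selecting highlighted item")

def go_back():
    print("returning to previous screen")

KEY_BINDINGS = {
    "MOUSE_DOWN": scroll_down,     # scrolling on the circular 'mouse'
    "SELECT_BUTTON": select_item,  # the 'select button' above the keypad
    "BACK_BUTTON": go_back,        # 'back' key on the top right
}

def handle_key(event):
    KEY_BINDINGS.get(event, lambda: print("unbound key"))()

handle_key("MOUSE_DOWN")
```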
In sum, testers found the programming of functionalities to the various keyboard hardware elements relatively intuitive overall. We did not find any usability obstacles in this aspect of the application. In the next section, we will address navigation and discuss why this represented the greatest usability obstacle in the Transclick translation application.
Problems in Navigation
Bergman and Haitani (2000) contrasted usage patterns between mobile and static devices, concluding that interfaces on mobile devices needed to optimize navigation, reducing the number of steps required to access frequently used items.
As the previous section showed, we programmed frequently used functionalities, like "scroll", "back" and "select" into the "mouse" (or "joystick"), the uppermost left key and uppermost right key respectively. Back-navigation could also proceed via selection of the bottommost menu item on any action. Two things that our application lacked, however, which are present on most Web interfaces, were a site map and a "main menu" link.
The W3C recommends a site map and a navigation bar as essential elements of easily navigable websites. Because of the space considerations of a mobile device of this kind, however, we decided to omit these and simplify, to the greatest extent possible, the number of elements present on each screen.
Testers noted the greatest usability concerns with differentiating between navigating back one screen and navigating back to the main menu. In the case of Text Translation, when the user has received a translated text, the menu presents the item "done" on the bottom, which will return the user to the main menu. There is, however, no "back" in this case to return to the previous, untranslated, screen.
This particular issue is actually unique to the translation application. We chose not to allow the user to save untranslated text, due to space/storage considerations and server use. Users of typical web pages, however, are accustomed to being able to return to any previous page.
Another, related issue that testers noted was the inability to remain in translation mode. That is, they felt that once a particular translation was completed, they should be prompted for another, or, at least returned to a blank text window and not the main menu.
Thus page caching is a desirable web-user function that cannot be included here due to the unique storage-capacity obstacles presented by the phone, but future versions may allow a screen-back rather than full-back "back" function.
Lag Time and Issues in Quantitative Testing
The developers experienced lag times that, curiously, testers did not report as a usability concern. Reliable data was difficult to gather, however, because the lag times varied considerably and causes could range from cellular transmission in a local area to server problems.
Other more general quantitative usability tests, including those discussed on the U.S. Department of Health and Human Services usability testing site (see http://usability.gov/methods/type_of_test.html), were considered, such as the time to complete a task, the number of errors or problems in completing the task, and the number of requests for assistance. Many other usability considerations on this type of device are very similar to considerations of general usability in websites, including issues pertaining to navigation, scrolling and font size.
We found that task-completion time was so heavily affected by turnaround time that testers did not report any factors other than turnaround time as having an effect. No errors other than translation errors and "no service" transmission errors were reported. Finally, because our testers were equipped with a User's Guide, as real-world users would be, we counted a "help" request as a failure to complete a task after consulting the User's Guide, rather than the failure to complete a task with no written instructions. None of our three testers requested assistance following this procedure.
Conclusion
This small study was intended to qualitatively assess basic usability of a text-based translator deployed on a hand-held device. The results were intended to motivate pre-release design modifications. We expect further modifications going forward, following real-user feedback. As a result of our three testers' feedback, we implemented a redesign of the "back"-navigation component to perform three different functions. First, it will have a "back-one-screen" option for all actions; second, it will have a "back-to-text-window" option for the Text Translation Output window; and third, it will have a "return-to-main-menu" option for all actions. We look forward to further research by other service providers and to usability-based design standardizations focused on hand-held translation devices.
Finally, we were surprised that testers did not report a usability problem with translation times, despite the fact that they can be up to 1 minute for long texts. This may be due to the (comparatively) slower method of data entry on the phone pad, which could lower user expectations of a quick response. We have not exhaustively studied the word/lag-time ratio, but note that it does not increase in exact proportion. That is, a single-word input has a response time of about 3 seconds, but a 10-word input has a response time of much less than 30 seconds. We note that response times for longer inputs vary greatly, and mostly according to internal considerations of our server's translation code, not the wireless device or connection.
Figure 1. Transclick Sub-application Hierarchy
Figure 3. Motorola Web Phone
Figure 4. Scrolling and the "Mouse"
1 Patents pending on dictionary selection and other features of the translation algorithms.
2 In this study, the Transclick staff, rather than naïve testers, were the main source of usability value judgments, although the testers were not part of the development team.
3 The screen will hold about 100 characters before scrolling is necessary.
E. Bergman and R. Haitani. 2000. Designing the Pilot. In E. Bergman (ed.), Information Appliances. San Francisco: Morgan Kaufmann.
M. Tscheligi, V. Giller, R. Sefelin, C. Lamm, R. Melcher and J. Schrammel. 2002. Empirical Studies on Usability Modules and Elements: A Prerequisite of Usable Applications Specifically Tailored to Different Mobile Devices. In Proceedings of the 7th WWRF Meeting, 2002.
Kent L. Norman. 1991. Psychology of Menu Selection: Designing Cognitive Control at the Human/Computer Interface. Ablex Publishing Corporation.
BREW/Qualcomm User Interface Guidelines: http://www.qualcomm.com/brew/developer/developing/docs/80-D4231-1_A.pdf
U.S. Department of Health and Human Services Usability Guidelines for Website Design: http://usability.gov/methods/type_of_test.html
12,127,070 | Semantic Role Tagging for Chinese at the Lexical Level | This paper reports on a study of semantic role tagging in Chinese, in the absence of a parser. We investigated the effect of using only lexical information in statistical training; and proposed to identify the relevant headwords in a sentence as a first step to partially locate the corresponding constituents to be labelled. Experiments were done on a textbook corpus and a news corpus, representing simple data and complex data respectively. Results suggested that in Chinese, simple lexical features are useful enough when constituent boundaries are known, while parse information might be more important for complicated sentences than simple ones. Several ways to improve the headword identification results were suggested, and we also plan to explore some class-based techniques for the task, with reference to existing semantic lexicons. | [
7645153,
18823236,
10923333,
62182406,
2443336,
5450664,
9376308,
561429,
18312340,
11869911,
9450557,
14810207
] | Semantic Role Tagging for Chinese at the Lexical Level
2005
Oi Yee Kwong
Language Information Sciences Research Centre
City University of Hong Kong
Tat Chee Avenue, Kowloon, Hong Kong
Benjamin K Tsou
Language Information Sciences Research Centre
City University of Hong Kong
Tat Chee Avenue, Kowloon, Hong Kong
Semantic Role Tagging for Chinese at the Lexical Level
LNAI 3651, 2005
This paper reports on a study of semantic role tagging in Chinese, in the absence of a parser. We investigated the effect of using only lexical information in statistical training; and proposed to identify the relevant headwords in a sentence as a first step to partially locate the corresponding constituents to be labelled. Experiments were done on a textbook corpus and a news corpus, representing simple data and complex data respectively. Results suggested that in Chinese, simple lexical features are useful enough when constituent boundaries are known, while parse information might be more important for complicated sentences than simple ones. Several ways to improve the headword identification results were suggested, and we also plan to explore some class-based techniques for the task, with reference to existing semantic lexicons.
Introduction
As the development of language resources progresses from POS-tagged corpora to syntactically annotated treebanks, the inclusion of semantic information such as predicate-argument relations is becoming indispensable. The expansion of the Penn Treebank into a Proposition Bank [11] is a typical move in this direction. Lexical resources also need to be enhanced with semantic information (e.g. [5]). In fact the ability to identify semantic role relations correctly is essential to many applications such as information extraction and machine translation; and making available resources with this kind of information would in turn facilitate the development of such applications.
Large-scale production of annotated resources is often labour-intensive, and thus needs automatic labelling to streamline the work. The task can essentially be perceived as a two-phase process, namely to recognise the constituents bearing some semantic relationship to the target verb in a sentence, and then to label them with the corresponding semantic roles.
In their seminal proposal, Gildea and Jurafsky approached the task using various features such as headword, phrase type, and parse tree path [6]. Such features have remained the basic and essential features in subsequent research, irrespective of the variation in the actual learning components. In addition, parsed sentences are often required, for extracting the path features during training and providing the argument boundaries during testing. The parse information is deemed important for the performance of role labelling [7,8].
More precisely, in semantic role labelling, parse information is rather more critical for the identification of boundaries for candidate constituents than for the extraction of training data. Its limited function in training, for instance, is reflected in the low coverage reported (e.g. [21]). However, given the imperfection of existing automatic parsers, which are far from producing gold standard parses, many thus resort to shallow syntactic information from simple chunking, though results often turn out to be less satisfactory than with full parses.
This limitation is even more pertinent for the application of semantic role labelling to languages which do not have sophisticated parsing resources. In the case of Chinese, for example, there is considerable variability in its syntax-semantics interface; and when one has more nested and complex sentences such as those from news articles, it becomes more difficult to capture the sentence structures by typical examples.
It is therefore worthwhile to investigate alternatives to the role labelling task for Chinese under the parsing bottleneck, both in terms of the features used and the shortcut or compromise to at least partially pin down the relevant constituents. A series of related questions deserve consideration here:
1. how much could we achieve with only parse-independent features in the role labelling process; 2. with constituent boundaries unknown in the absence of parse information, could we at least identify the headwords in the relevant constituents to be tagged; and 3. whether the unknown boundary problem varies with the nature of the dataset, e.g., will the degradation in performance from known boundaries to unknown boundaries be more serious for complicated sentences than for simple sentences.
So in the current study we experiment on the use of parse-independent features for semantic role labelling in Chinese, for locating the headwords of the constituents corresponding to arguments to be labelled. We will also compare the results on two training and testing datasets.
In Section 2, related work will be reviewed. In Section 3, the data used in the current study will be introduced. Our proposed method will be explained in Section 4, and the experiment reported in Section 5. Results and future work will be discussed in Section 6, followed by conclusions in Section 7.
Related Work
The definition of semantic roles falls on a continuum from abstract ones to very specific ones. Gildea and Jurafsky [6], for instance, used a set of roles defined according to the FrameNet model [2], thus corresponding to the frame elements in individual frames under a particular domain to which a given verb belongs. Lexical entries (in fact not limited to verbs, in the case of FrameNet) falling under the same frame will share the same set of roles. Gildea and Palmer [7] defined roles with respect to individual predicates in the PropBank, without explicit naming. To date PropBank and FrameNet are the two main resources in English for training semantic role labelling systems.
The theoretical treatment of semantic roles is also varied in Chinese. In practice, for example, the semantic roles in the Sinica Treebank mark not only verbal arguments but also modifier-head relations within individual constituents, following a head-driven principle [4]. In our present study, we use a set of more abstract semantic roles, which are generalisable to most Chinese verbs and are not dependent on particular predicates. They will be further introduced in Section 3.
The major concerns in automatic semantic role labelling include the handling of alternations (as in "the window broke" and "John broke the window", where in both cases "the window" should be tagged as "patient" despite its appearance in different positions in the sentences), and generalisation to unseen constituents and predicates. For the latter, clustering and semantic lexicons or hierarchies have been used (e.g. [6]), or similar argument structures are assumed for near-synonyms and verbs under the same frame (e.g. [11]).
Approaches in automatic semantic role labelling are mostly statistical, typically making use of a number of features extracted from parsed training sentences. In Gildea and Jurafsky [6], the features studied include phrase type (pt), governing category (gov), parse tree path (path), position of constituent with respect to the target predicate (position), voice (voice), and headword (h). The labelling of a constituent then depends on its likelihood to fill each possible role r given the features and the target predicate t, as in the following, for example:
P(r \mid h, pt, gov, position, voice, t)
Subsequent studies exploited a variety of implementation of the learning component, including Maximum Entropy (e.g. [1,12]), Support Vector Machines (e.g. [9,16]), etc. Transformation-based approaches were also used (e.g. [10,19]). Swier and Stevenson [17] innovated with an unsupervised approach to the problem, using a bootstrapping algorithm, and achieved 87% accuracy.
While the estimation of the probabilities could be relatively straightforward, the key often lies in locating the candidate constituents to be labelled. A parser of some kind is needed. Gildea and Hockenmaier [8] compared the effects of Combinatory Categorial Grammar (CCG) derivations and traditional Treebank parsing, and found that the former performed better on core arguments, probably due to its ability to capture long range dependencies, but comparable for all arguments. Gildea and Palmer [7] compared the effects of full parsing and shallow chunking; and found that when constituent boundaries are known, both automatic parses and gold standard parses resulted in about 80% accuracy for subsequent automatic role tagging, but when boundaries are unknown, results with automatic parses dropped to 57% precision and 50% recall. With chunking only, performance further degraded to below 30%. Problems mostly arise from arguments which correspond to more than one chunk, and the misplacement of core arguments.
A couple of evaluation exercises for semantic role labelling were organized recently, such as the shared task in CoNLL-2004 using PropBank data [3], and the one in SENSEVAL-3 using the FrameNet dataset [15]. Most systems in SENSEVAL-3 used a parser to obtain full syntactic parses for the sentences, whereas systems participating in the CoNLL task were restricted to using only shallow syntactic information. Results reported in the former tend to be higher. Although the dataset may be a factor affecting the labelling performance, it nevertheless reinforces the usefulness of full syntactic information.
According to Carreras and Màrquez [3], for English, the state-of-the-art results reach an F1 measure of slightly over 83 using gold standard parse trees and about 77 with real parsing results. Results based on shallow syntactic information are around 60.
The usefulness of parse information for semantic role labelling would be especially interesting in the case of Chinese, given the flexibility in its syntax-semantics interface (e.g. the object after 吃 'eat' could refer to the Patient as in 吃蘋果 'eat apple', Location as in 吃食堂 'eat canteen', Duration as in 吃三年 'eat three years', etc.). In the absence of sophisticated parsing resources, however, we attempt to investigate how well one could simply use a set of parse-independent features and guess the likely headwords, so as to partially locate the candidate constituents to be labelled.
The Data
Materials
As mentioned in the introduction, we attempted to investigate the difference between labelling simple sentences and complex ones. For this purpose, sentences from primary school textbooks were taken as examples for simple data, while sentences from a large corpus of newspaper texts were taken as complex examples.
Two sets of primary school Chinese textbooks popularly used in Hong Kong were taken for reference. The two publishers were Keys Press [22] and Modern Education Research Society Ltd [23]. Texts for Primary One to Six were digitised, segmented into words, and annotated with parts-of-speech (POS). The two sets of textbooks amount to a text collection of about 165K character tokens and upon segmentation about 109K word tokens (about 15K word types). There were about 2,500 transitive verb types, with frequency ranging from 1 to 926.
The complex examples were taken from a subset of the LIVAC synchronous corpus 1 [13,18]. The subcorpus consists of newspaper texts from Hong Kong, including local news, international news, financial news, sports news, and entertainment news, collected in 1997-98. The texts were segmented into words and POS-tagged, amounting to about 1.8M character tokens and upon segmentation about 1M word tokens (about 47K word types). There were about 7,400 transitive verb types, with frequency ranging from 1 to just over 6,300.
Training and Testing Data
For the current study, a set of 41 transitive verbs common to the two corpora (hereafter referred to as textbook corpus and news corpus), with frequency over 10 and over 50 respectively, was sampled.
Sentences in the corpora containing the sampled verbs were extracted. Constituents corresponding to semantic roles with respect to the target verbs were annotated by a trained annotator, whose annotation was verified by another. In this study, we worked with a set of 11 predicate-independent abstract semantic roles. According to the Dictionary of Verbs in Contemporary Chinese (Xiandai Hanyu Dongci Dacidian, 現代漢語動詞大詞典) [14], our semantic roles include the necessary arguments for most verbs, such as Agent and Patient, or Goal and Location in some cases; and some optional arguments realised by adjuncts, such as Quantity, Instrument, and Source. Some examples of semantic roles with respect to a given predicate are shown in Fig. 1.
Altogether 980 sentences covering 41 verb types in the textbook corpus were annotated, resulting in 1,974 marked semantic roles (constituents); and 2,122 sentences covering 41 verb types in the news corpus were annotated, resulting in 4,933 marked constituents 2 . The role labelling system was trained on 90% of the sample sentences from the textbook corpus and the news corpus separately; and tested on the remaining 10% of the respective corpora.
Automatic Role Labelling
The automatic labelling was based on the statistical approach in Gildea and Jurafsky [6]. In Section 4.1, we will briefly mention the features employed in the training process. Then in Sections 4.2 and 4.3, we will explain our approach for locating headwords in candidate constituents associated with semantic roles, in the absence of parse information.
Training
In this study, our probability model was based mostly on parse-independent features extracted from the training sentences, namely:
Headword (head):
The headword from each constituent marked with a semantic role was identified. For example, in the second sentence in Fig. 1, 學校 (school) is the headword in the constituent corresponding to the Agent of the verb 舉行 (hold), and 比賽 (contest) is the headword of the noun phrase corresponding to the Patient.
Position (posit):
This feature shows whether the constituent being labelled appears before or after the target verb. In the first example in Fig. 1, the Experiencer and Time appear on the left of the target, while the Theme is on its right.
POS of headword (HPos):
Without features provided by the parse, such as phrase type or parse tree path, the POS of the headword of the labelled constituent could provide limited syntactic information.
Preposition (prep):
Certain semantic roles like Time and Location are often realised by prepositional phrases, so the preposition introducing the relevant constituents would be an informative feature.
Hence for automatic labelling, given the target verb t, the candidate constituent, and the above features, the role r which has the highest probability for P(r | head, posit, HPos, prep, t) will be assigned to that constituent. In this study, however, we are also testing with the unknown boundary condition where candidate constituents are not available in advance, hence we attempt to partially locate them by identifying their headwords to start with. Our approach is explained in the following sections.
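As a rough sketch of this labelling step (assuming constituent boundaries are known), the role assignment can be implemented as a relative-frequency lookup over feature tuples. The function names and the flat tuple representation below are ours, and a real system would add smoothing and back-off over less specific feature subsets:

```python
from collections import Counter, defaultdict

def train_role_model(training_tuples):
    """Collect counts of roles per feature tuple from annotated constituents.

    Each training tuple is (role, head, posit, HPos, prep, target_verb);
    this flat representation is a simplification for illustration.
    """
    counts = defaultdict(Counter)
    for role, head, posit, hpos, prep, verb in training_tuples:
        counts[(head, posit, hpos, prep, verb)][role] += 1
    return counts

def label_constituent(counts, head, posit, hpos, prep, verb):
    """Return argmax_r P(r | head, posit, HPos, prep, t), or None if unseen."""
    dist = counts.get((head, posit, hpos, prep, verb))
    if not dist:
        return None  # a real system would back off to less specific features
    role, freq = dist.most_common(1)[0]
    return role, freq / sum(dist.values())
```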
Locating Candidate Headwords
In the absence of parse information, and with constituent boundaries unknown, we attempt to partially locate the candidate constituents by trying to identify their corresponding headwords first. Sentences in our test data were segmented into words and POS-tagged. We thus divide the recognition process into two steps, locating the headword of a candidate constituent first, and then expanding from the headword to determine its boundaries.
Basically, if we consider every word in the same sentence as the target verb (both to its left and to its right) a potential headword for a candidate constituent, what we need to do is to find the most probable words in the sentence to match against individual semantic roles. We start with a feature set with more specific distributions, and back off to feature sets with less specific distributions. Hence in each round we look for \arg\max_r P(r \mid \text{feature set}) for every candidate word. Ties are resolved by giving priority to the word nearest to the target verb in the sentence.
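A minimal sketch of this back-off search follows. It assumes the per-round conditional distributions P(r | feature set) have already been estimated; the `feature_sets` and `prob` structures are hypothetical stand-ins for whatever representation an implementation uses:

```python
def locate_headwords(candidates, target_idx, feature_sets, prob):
    """Assign each semantic role to its most probable headword, with back-off.

    candidates: (index, word, POS) triples for potential headwords.
    feature_sets: list of feature extractors, most specific first.
    prob: prob[round][features] -> {role: P(role | features)}.
    """
    assigned = {}  # role -> (index, word)
    for rnd, extract in enumerate(feature_sets):
        best = {}  # role -> (probability, distance to verb, index, word)
        for idx, word, pos in candidates:
            dist = abs(idx - target_idx)
            for role, p in prob[rnd].get(extract(idx, word, pos), {}).items():
                if role in assigned:  # role already fixed in an earlier round
                    continue
                # higher probability wins; ties go to the word nearest the verb
                if role not in best or (-p, dist) < (-best[role][0], best[role][1]):
                    best[role] = (p, dist, idx, word)
        for role, (p, dist, idx, word) in best.items():
            assigned[role] = (idx, word)
    return assigned
```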
Constituent Boundary
Upon the identification of headwords for potential constituents, the next step is to expand from these headwords for constituent boundaries. Although we are not doing this step in the current study, it can potentially be done via some finite state techniques, or better still, with shallow syntactic processing like simple chunking if available.
The Experiment
Testing
The system was trained and tested on the textbook corpus and the news corpus respectively. The testing was done under the "known constituent" and "unknown constituent" conditions. The former essentially corresponds to the known-boundary condition in related studies; whereas in the unknown-constituent condition, which we will call the "headword location" condition hereafter, we tested our method of locating candidate headwords as explained above in Section 4.2. In this study, every noun, verb, adjective, pronoun, classifier, and number within the test sentence containing the target verb was considered a potential headword for a candidate constituent corresponding to some semantic role. The performance was measured in terms of precision (defined as the percentage of correct outputs among all outputs), recall (defined as the percentage of correct outputs among expected outputs), and the F1 score, which is the harmonic mean of precision and recall.
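For concreteness, these measures reduce to a few lines of code given counts of correct, produced, and expected role assignments (a sketch; the names are ours):

```python
def precision_recall_f1(num_correct, num_output, num_expected):
    """Precision, recall and F1 as defined above."""
    precision = num_correct / num_output if num_output else 0.0
    recall = num_correct / num_expected if num_expected else 0.0
    denom = precision + recall
    return precision, recall, (2 * precision * recall / denom) if denom else 0.0
```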
Results
The results are shown in Table 1, for testing on both the textbook corpus and the news corpus under the known constituent condition and the headword location condition. Under the known constituent condition, the results were good on both datasets, with an F1 score of about 90. This is comparable to, or even better than, the results reported in related studies for the known-boundary condition. The difference is that we did not use any parse information in the training, not even phrase type. Our results thus suggest that for Chinese, even without more complicated syntactic information, simple lexical information might already be useful in semantic role tagging.
Comparison of the known constituent condition with the headword location condition shows that performance for the latter dropped, as expected. However, the degradation was less serious with simple sentences than with complex ones, as is seen from the higher precision and recall for textbook data than for news data under the headword location condition. What is noteworthy here is that recall deteriorated less seriously than precision. In the case of news data, for instance, we were able to maintain over 50% recall but only obtained about 39% precision. The surprisingly low precision is attributed to a technical inadequacy in the way we break ties: we only eliminate multiple taggings of the same role for the same target verb when the duplicates appear on the same side of the target verb, but not when they appear on both sides. This should certainly be dealt with in future experiments. The differential degradation of performance between textbook data and news data also suggests the varied importance of constituent boundaries for simple sentences and complex ones, and hence possibly their varied requirements for full parse information in the semantic labelling task.
Discussion
According to Carreras and Màrquez [3], the state-of-the-art results for semantic role labelling systems based on shallow syntactic information are about 15 points lower than those with access to gold standard parse trees, i.e., around 60. Our experimental results for the headword location condition, with no syntactic information available at all, give F1 scores of 52.89 and 44.35 respectively for textbook data and news data. This further degradation in performance is nevertheless within expectation, but whether this is also a result of the difference between English and Chinese remains to be seen.
In response to the questions raised in the introduction, firstly, the results for the known constituent condition (F1 of 90.56 and 89.07 for textbook data and news data respectively) have shown that even if we do not use parse-dependent features such as governing category and parse tree path, results are not particularly affected. In other words, lexical features are already very useful as long as the constituent boundaries are given. Secondly, in the absence of parse information, the results of identifying the relevant headwords in order to partially locate candidate constituents were not as satisfactory as one would like to see. One possible way to improve the results, as suggested above, would be to improve the handling of ties. Other possibilities, including a class-based method, could also be used, as will be discussed below. Thirdly, results for news data degraded more seriously than textbook data from the known constituent condition to the headword location condition. This suggests that complex sentences in Chinese are more affected by the availability of full parse information. To a certain extent, this might be related to the relative flexibility in the syntax-semantics interface of Chinese; hence when a sentence gets more complicated, there might be more intervening constituents and the parse information would be useful to help identify the relevant ones in semantic role labelling.
In terms of future development, apart from improving the handling of ties in our method, as mentioned in the previous section, we plan to expand our work in several respects, the major part of which is on the generalization to unseen headwords and unseen predicates. As with other related studies, the examples available for training for each target verb are very limited; and the availability of training data is also insufficient in the sense that we cannot expect them to cover all target verb types. Hence it is very important to be able to generalize the process to unseen words and predicates. To this end, we will experiment with a semantic lexicon like Tongyici Cilin (同義詞詞林, a Chinese thesaurus) in both training and testing, which we expect to improve the overall performance.
Another area of interest is to look at the behaviour of near-synonymous predicates in the tagging process. Many predicates may be unseen in the training data, but while the probability estimation could be generalized from near-synonyms as suggested by a semantic lexicon, whether the similarity and subtle differences between near-synonyms with respect to the argument structure and the corresponding syntactic realisation could be distinguished would also be worth studying. Related to this is the possibility of augmenting the feature set with semantic features. Xue and Palmer [20], for instance, looked into new features such as syntactic frame, lexicalized constituent type, etc., and found that enriching the feature set improved the labelling performance.
Another direction of future work is on the location of constituent boundaries upon the identification of the headword. As mentioned earlier on, this could probably be tackled by some finite state techniques or with the help of simple chunkers.
Conclusion
The study reported in this paper has thus tackled the unknown constituent boundary condition in semantic role labelling for Chinese, by attempting to locate the corresponding headwords first. We experimented with both simple and complex data. Using only parse-independent features, our results on known boundary condition are comparable to those reported in related studies. Although the results for headword location condition were not as good as state-of-the-art performance with shallow syntactic information, we have nevertheless suggested some possible ways to improve the results. We have further observed that the influence of full syntactic information is more serious for complex data than simple data, which might be a consequence of the characteristic syntax-semantics interface of Chinese. As a next step, we plan to explore some class-based techniques for the task, with reference to existing semantic lexicons.
Fig. 2 shows an example illustrating the procedures for locating candidate headwords. The target verb is 發現 (discover). In the first round, using features head, posit, HPos, and t, 時候 (time) and 問題 (problem) were identified as Time and Patient respectively. In the fourth round, backing off with features posit and HPos, 我們 (we) was identified as a possible Agent. In this round a few other words were identified as potential Patients; however, since the Patient had already been located in a previous round, candidates arising in this round were not considered. So in the end the headwords identified for the test sentence are 時候 (Time), 我們 (Agent), and 問題 (Patient).

Fig. 2. Example illustrating the procedures for locating candidate headwords

Sentence: 溫習的時候,我們發現了許多平時沒有想到,或是未能解決的問題,於是就去問爸爸。 (During revision, we discover a lot of problems which we have not thought of or cannot solve, so we go and ask father.)

Candidate          Round 1   ...   Round 4   Final Result
溫習 (revision)                     Patient
時候 (time)        Time             ----      Time
我們 (we)                           Agent     Agent
平時 (normally)
想到 (think)                        Patient
能 (can)
解決 (solve)                        Patient
問題 (problem)     Patient          ----      Patient
去 (go)                             Patient
問 (ask)                            Patient
爸爸 (father)                       Patient
Fig. 1. Examples of semantic roles with respect to a given predicate

同學們 作文 時, 常常 感到 沒 什麼 可 寫
student(-pl) write-essay time always feel not anything can write
Experiencer Time Target Theme
Example: (Students always feel there is nothing to write about for their essays.)

下星期 學校 舉行 講故事 比賽
next week school hold tell-story contest
Time Agent Target Patient
Example: (Next week, the school will hold a story-telling contest.)
Table 1. Results on two datasets for known constituents and headword location

                      Textbook Data                 News Data
                      Precision  Recall  F1         Precision  Recall  F1
Known Constituent     93.85      87.50   90.56      90.49      87.70   89.07
Headword Location     46.12      61.98   52.89      38.52      52.25   44.35
22. Qisi Zhongguo Yuwen (啟思中國語文). Primary 1-6, 24 volumes, 2004. Hong Kong: Keys Press.
23. Xiandai Zhongguo Yuwen (現代中國語文). Primary 1-6, 24 volumes, 2004. Hong Kong: Modern Education Research Society Ltd.
http://www.livac.org
These figures only refer to the samples used in the current study. In fact over 35,000 sentences in the LIVAC corpus have been semantically annotated, covering about 1,500 verb types and about 80,000 constituents were marked.
Acknowledgements
1. Baldewein, U., Erk, K., Padó, S. and Prescher, D. (2004) Semantic Role Labelling With Chunk Sequences. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004), Boston, Massachusetts, pp.98-101.
2. Baker, C.F., Fillmore, C.J. and Lowe, J.B. (1998) The Berkeley FrameNet Project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL '98), Montreal, Quebec, Canada, pp.86-90.
3. Carreras, X. and Màrquez, L. (2004) Introduction to the CoNLL-2004 Shared Task: Semantic Role Labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004), Boston, Massachusetts, pp.89-97.
4. Chen, F-Y., Tsai, P-F., Chen, K-J. and Huang, C-R. (1999) Sinica Treebank (中文句結構樹資料庫的構建). Computational Linguistics and Chinese Language Processing, 4(2): 87-104.
5. Fellbaum, C., Palmer, M., Dang, H.T., Delfs, L. and Wolf, S. (2001) Manual and Automatic Semantic Annotation with WordNet. In Proceedings of the NAACL-01 SIGLEX Workshop on WordNet and Other Lexical Resources, Invited Talk, Pittsburg, PA.
6. Gildea, D. and Jurafsky, D. (2002) Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3): 245-288.
7. Gildea, D. and Palmer, M. (2002) The Necessity of Parsing for Predicate Argument Recognition. In Proceedings of the 40th Meeting of the Association for Computational Linguistics (ACL-02), Philadelphia, PA.
8. Gildea, D. and Hockenmaier, J. (2003) Identifying Semantic Roles Using Combinatory Categorial Grammar. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan.
9. Hacioglu, K., Pradhan, S., Ward, W., Martin, J.H. and Jurafsky, D. (2004) Semantic Role Labeling by Tagging Syntactic Chunks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004), Boston, Massachusetts, pp.110-113.
10. Higgins, D. (2004) A Transformation-based Approach to Argument Labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004), Boston, Massachusetts, pp.114-117.
11. Kingsbury, P. and Palmer, M. (2002) From TreeBank to PropBank. In Proceedings of the Third Conference on Language Resources and Evaluation (LREC-02), Las Palmas, Canary Islands, Spain.
12. Kwon, N., Fleischman, M. and Hovy, E. (2004) SENSEVAL Automatic Labeling of Semantic Roles using Maximum Entropy Models. In Proceedings of the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (SENSEVAL-3), Barcelona, Spain, pp.129-132.
13. Kwong, O.Y. and Tsou, B.K. (2003) Categorial Fluidity in Chinese and its Implications for Part-of-speech Tagging. In Proceedings of the Research Note Session of the 10th Conference of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary, pp.115-118.
14. Lin, X., Wang, L. and Sun, D. (1994) Dictionary of Verbs in Contemporary Chinese. Beijing Language and Culture University Press.
15. Litkowski, K.C. (2004) SENSEVAL-3 Task: Automatic Labeling of Semantic Roles. In Proceedings of the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (SENSEVAL-3), Barcelona, Spain, pp.9-12.
16. Moldovan, D., Girju, R., Olteanu, M. and Fortu, O. (2004) SVM Classification of FrameNet Semantic Roles. In Proceedings of the Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (SENSEVAL-3), Barcelona, Spain, pp.167-170.
17. Swier, R.S. and Stevenson, S. (2004) Unsupervised Semantic Role Labelling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, pp.95-102.
18. Tsou, B.K., Tsoi, W.F., Lai, T.B.Y., Hu, J. and Chan, S.W.K. (2000) LIVAC, A Chinese Synchronous Corpus, and Some Applications. In Proceedings of the ICCLC International Conference on Chinese Language Computing, Chicago, pp.233-238.
19. Williams, K., Dozier, C. and McCulloh, A. (2004) Learning Transformation Rules for Semantic Role Labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004), Boston, Massachusetts, pp.134-137.
20. Xue, N. and Palmer, M. (2004) Calibrating Features for Semantic Role Labeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, pp.88-94.
21. You, J-M. and Chen, K-J. (2004) Automatic Semantic Role Assignment for a Tree Structure. In Proceedings of the 3rd SigHAN Workshop on Chinese Language Processing, ACL-04, Barcelona, pp.109-115.
216,804,955 | Birds of a Feather Linked Together: A Discriminative Topic Model using Link-based Priors | A wide range of applications, from social media to scientific literature analysis, involve graphs in which documents are connected by links. We introduce a topic model for link prediction based on the intuition that linked documents will tend to have similar topic distributions, integrating a max-margin learning criterion and lexical term weights in the loss function. We validate our approach on the tweets from 2,000 Sina Weibo users and evaluate our model's reconstruction of the social network. | [
15702125,
11103989
] | Birds of a Feather Linked Together: A Discriminative Topic Model using Link-based Priors
Association for Computational Linguistics, September 2015.
Weiwei Yang (wwyang@cs.umd.edu), Computer Science, University of Maryland, College Park, MD
Jordan Boyd-Graber, Computer Science, University of Colorado, Boulder, CO
Philip Resnik (resnik@umd.edu), UMIACS, University of Maryland, College Park, MD
Birds of a Feather Linked Together: A Discriminative Topic Model using Link-based Priors
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, Association for Computational Linguistics, September 2015.
A wide range of applications, from social media to scientific literature analysis, involve graphs in which documents are connected by links. We introduce a topic model for link prediction based on the intuition that linked documents will tend to have similar topic distributions, integrating a max-margin learning criterion and lexical term weights in the loss function. We validate our approach on the tweets from 2,000 Sina Weibo users and evaluate our model's reconstruction of the social network.
Introduction
Many application areas for text analysis involve documents connected by links of one or more types-for example, analysis of scientific papers (citations, co-authorship), Web pages (hyperlinks), legislation (co-sponsorship, citations), and social media (followers, mentions, etc.). In this paper we work within the widely used framework of topic modeling (Blei et al., 2003, LDA) to develop a model that is simple and intuitive, but which identifies high quality topics while also accurately predicting link structure.
Our work here is inspired by the phenomenon of homophily, the tendency of people to associate with others who are like themselves (McPherson et al., 2001). As manifested in social networks, the intuition is that people who are associated with one another are likely to discuss similar topics, and vice versa. The new topic model we propose therefore takes association links into account so that a document's topic distribution is influenced by the topic distributions of its neighbors. Specifically, we propose a joint model that uses link structure to define clusters (cliques) of documents and, following the intuition that documents in the same cluster are likely to have similar topic distributions, assigns each cluster its own separate Dirichlet prior over the cluster's topic distribution. This use of priors is consistent with previous work that has shown document-topic priors to be useful in encoding various types of prior knowledge and improving topic modeling performance (Mimno and McCallum, 2008). We then use distributed representations to "seed" the topic representations before getting down to modeling the documents. Our joint objective function uses a discriminative, max-margin approach (Zhu et al., 2012;Zhu et al., 2014) to both model the contents of documents and produce good predictions of links; in addition, it improves prediction by including lexical terms in the decision function (Nguyen et al., 2013).
Our baseline for comparison is the Relational Topic Model (Chang and Blei, 2010, henceforth RTM), which jointly captures topics and binary link indicators in a style similar to supervised LDA (McAuliffe and Blei, 2008, sLDA), instead of modeling links alone, e.g., as in the Latent Multi-group Membership Graph model (Kim and Leskovec, 2012, LMMG). We also compare our approach with Daumé III (2009), who uses document links to create a Markov random topic field (MRTF). Daumé does not, however, look at link prediction, as his upstream model (Mimno and McCallum, 2008) only generates documents conditioned on links. In contrast, our downstream model allows the prediction of links, like RTM.
Our model's primary contribution is in its novel combination of a straightforward joint modeling approach, max-margin learning, and exploitation of lexical information in both topic seeding and regression, yielding a simple but effective model for topic-informed discriminative link prediction. Like other topic models which treat binary values "probabilistically", our model can convert binary link indicators into non-zero weights, with potential application to improving models like Volkova.

Our corpus is collected from Sina Weibo with three types of links between documents. We first conduct a reality check of our model against LDA and MRTF and then perform link prediction tasks. We demonstrate improvements in link prediction as measured by predictive link rank and provide both qualitative and quantitative perspectives on the improvements achieved by the model.
Discriminative Links from Topics
y_{d,d'} \mid z_d, z_{d'}, w_d, w_{d'} \sim \Psi(\cdot \mid z_d, z_{d'}, w_d, w_{d'}, \eta, \tau)
Step 1: Identifying birds of a feather. Prior to the generative process, given a training set of documents and document-to-document links, we begin by identifying small clusters or cliques using strongly connected components, which automatically determines the number of clusters from the link graph. Intuitively, documents in the same clique are likely to have similar topic distributions.
Therefore, each of the L cliques l (the "birds of a feather" of our title) is assigned a separate Dirichlet prior \pi_l over K topics.
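A minimal sketch of this preprocessing step, assuming a directed link graph and using networkx's strongly connected components; the function name and data layout are ours, not the paper's:

```python
import networkx as nx

def document_cliques(num_docs, links):
    """Map each document to a cluster id via strongly connected components.

    links: iterable of (d, d_prime) directed document-to-document links.
    Singleton documents form their own clusters, so the number of clusters
    L falls out of the link graph automatically.
    """
    graph = nx.DiGraph()
    graph.add_nodes_from(range(num_docs))
    graph.add_edges_from(links)
    cluster_of = {}
    for l, component in enumerate(nx.strongly_connected_components(graph)):
        for d in component:
            cluster_of[d] = l  # cluster l later gets its own Dirichlet prior pi_l
    return cluster_of
```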
Step 2a: Using seed words to improve topic quality. To improve topic quality, we identify seed words for the K topics using distributed lexical representations: the key idea is to complement the more global information captured in LDA-style topics with representations based on local contextual information. We cluster the most frequent words' word2vec representations (Mikolov et al., 2013) into K word-clusters using the k-means algorithm, based on the training corpus. 1 We then enforce a one-to-one association between these discovered word clusters and the K topics. For any word token w_{d,n} whose word type is in cluster k, the associated topic assignment z_{d,n} can only be k. To choose topic k's seed words, within its word-cluster we compute each word w_{k,i}'s skip-gram transition probability sum S_{k,i} to the other words as

S_{k,i} = \sum_{j=1, j \neq i}^{N_k} p(w_{k,j} \mid w_{k,i}),    (1)
where N_k denotes the number of words in topic k. We then select the three words with the highest sum of transition probabilities as the seed words for topic k. In the sampling process (Section 3), seed words are only assigned to their corresponding topics, similar to the use of hard constraints by Andrzejewski and Zhu (2009).
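The sketch below illustrates one plausible reading of this seed-selection step. It approximates the skip-gram transition p(w_j | w_i) with a softmax over word-vector inner products, which is our assumption (the normalization is not spelled out above), and restricts the sum in Equation 1 to the words of the same cluster:

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_seed_words(words, vectors, num_topics, seeds_per_topic=3):
    """Cluster word vectors with k-means and pick seeds by Equation (1)."""
    labels = KMeans(n_clusters=num_topics).fit_predict(vectors)
    # Softmax over inner products as a stand-in for p(w_j | w_i),
    # normalized over the whole (frequent-word) vocabulary.
    logits = vectors @ vectors.T
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    seeds = {}
    for k in range(num_topics):
        members = np.where(labels == k)[0]
        # S_{k,i}: sum of transitions from w_{k,i} to the other cluster words
        sums = probs[np.ix_(members, members)].sum(axis=1) - probs[members, members]
        top = members[np.argsort(-sums)[:seeds_per_topic]]
        seeds[k] = [words[i] for i in top]
    return seeds
```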
Steps 2b-3: Link regression parameters. Given two documents d and d', we want to predict whether they are linked by taking advantage of their topic patterns: the more similar two documents are, the more likely it is that they should be linked together. Like RTM, we will compute a regression in Step 5 using the topic distributions of d and d'; however, we follow Nguyen et al. (2013) by also including a document's word-level distribution as a regression input. 2 The regression value of documents d and d' is

R_{d,d'} = \eta^T (\bar{z}_d \circ \bar{z}_{d'}) + \tau^T (\bar{w}_d \circ \bar{w}_{d'}),    (2)

where \bar{z}_d = \frac{1}{N_d} \sum_n z_{d,n} and \bar{w}_d = \frac{1}{N_d} \sum_n w_{d,n}; \circ denotes the Hadamard product; \eta and \tau are the weight vectors for topic-based and lexically-based predictions, respectively.
Step 4: Generating documents. Documents are generated as in LDA, where each document's topic distribution θ is drawn from the cluster's topic prior (a parametric analog to the HDP of Teh et al. (2006)) and each word's topic assignment is drawn from the document's topic distribution (except for seed words, as described above).
Step 5: Generating links. Our model is a "downstream" supervised topic model, i.e., the prediction of the observable variable (here, document links) is informed by the documents' topic distributions, as in sLDA (Blei and McAuliffe, 2007). In contrast to Chang and Blei (2010), who use a sigmoid as their link prediction function \Psi, we instead use hinge loss: the probability \Psi that two documents d and d' are linked is

p(y_{d,d'} = 1 \mid z_d, z_{d'}, w_d, w_{d'}) = \exp(-2c \max(0, \zeta_{d,d'})),

where c is the regularization parameter. In the hinge loss function, \zeta_{d,d'} is

\zeta_{d,d'} = 1 - y_{d,d'} R_{d,d'}.    (3)
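Putting Equations 2 and 3 together, the link likelihood \Psi is easy to compute once the empirical distributions \bar{z} and \bar{w} are in hand; the sketch below uses our own variable names and assumes \eta and \tau have already been learned:

```python
import numpy as np

def link_probability(z_bar_d, z_bar_dp, w_bar_d, w_bar_dp, eta, tau, c, y=1):
    """Hinge-loss link likelihood from Equations (2)-(3)."""
    r = eta @ (z_bar_d * z_bar_dp) + tau @ (w_bar_d * w_bar_dp)  # Eq. (2)
    zeta = 1.0 - y * r                                           # Eq. (3)
    return np.exp(-2.0 * c * max(0.0, zeta))
```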
3 Posterior Inference

Sampling Topics. Following Polson and Scott (2011), by introducing an auxiliary variable \lambda_{d,d'}, we derive the conditional probability of a topic assignment

p(z_{d,n} = k \mid \mathbf{z}_{-d,n}, \mathbf{w}_{-d,n}, w_{d,n} = v) \propto \frac{N_{k,v}^{-d,n} + \beta}{N_{k,\cdot}^{-d,n} + V\beta} \times \left(N_{d,k}^{-d,n} + \alpha \pi_{l_d,k}^{-d,n}\right) \times \prod_{d'} \exp\left(-\frac{(c\zeta_{d,d'} + \lambda_{d,d'})^2}{2\lambda_{d,d'}}\right),    (4)

where N_{k,v} denotes the count of word v assigned to topic k; N_{d,k} is the number of tokens in document d that are assigned to topic k. 3 Marginal counts are denoted by \cdot; -d,n denotes that the count excludes token n in document d; d' ranges over the indexes of documents which are linked to document d; \pi_{l_d,k}^{-d,n} is estimated based on the maximal path assumption (Wallach, 2008):

\pi_{l_d,k}^{-d,n} = \frac{\sum_{d' \in S(l_d)} N_{d',k}^{-d,n} + \alpha'}{\sum_{d' \in S(l_d)} N_{d',\cdot}^{-d,n} + K\alpha'},    (5)

where S(l_d) denotes the cluster which contains document d (Step 1 in the generative process).
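A sketch of the resulting Gibbs update for a single token follows. The `counts` object is a hypothetical container for the usual LDA count tables with token (d, n) excluded; a full implementation would also recompute \zeta_{d,d'} with the candidate assignment z_{d,n} = k before evaluating the link factor:

```python
import numpy as np

def topic_weight(k, v, d, counts, pi, linked_docs, zeta, lam, alpha, beta, V, c):
    """Unnormalized p(z_{d,n} = k | ...) from Equation (4), as a sketch."""
    lda_term = ((counts.n_kv[k, v] + beta) / (counts.n_k[k] + V * beta)
                * (counts.n_dk[d, k] + alpha * pi[k]))
    link_term = 1.0
    for dp in linked_docs[d]:
        # zeta[d, dp] should reflect z_{d,n} = k in a full implementation
        link_term *= np.exp(-(c * zeta[d, dp] + lam[d, dp]) ** 2
                            / (2.0 * lam[d, dp]))
    return lda_term * link_term
```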
3 More details here and throughout this section appear in the supplementary materials.
Optimizing topic and lexical regression parameters. While topic regression parameters η and lexical regression parameters τ can be sampled (Zhu et al., 2014), the associated covariance matrix is huge (approximately 12K × 12K in our experiments). Instead, we optimize these parameters using L-BFGS.
Sampling auxiliary variables. The likelihood of the auxiliary variables \lambda follows a generalized inverse Gaussian distribution \mathcal{GIG}(\lambda_{d,d'}; \frac{1}{2}, 1, c^2\zeta_{d,d'}^2). Thus we sample \lambda_{d,d'}^{-1} from an inverse Gaussian distribution

p(\lambda_{d,d'}^{-1} \mid \mathbf{z}, \mathbf{w}, \eta, \tau) = \mathcal{IG}\left(\lambda_{d,d'}^{-1}; \frac{1}{c|\zeta_{d,d'}|}, 1\right).    (6)
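Since numpy's Wald distribution is exactly this inverse Gaussian, the auxiliary-variable update reduces to a one-line draw (a sketch; it assumes \zeta_{d,d'} is nonzero):

```python
import numpy as np

def sample_lambda(zeta_dd, c, rng=None):
    """Draw lambda_{d,d'} by sampling its inverse via Equation (6)."""
    rng = rng or np.random.default_rng()
    inv_lambda = rng.wald(1.0 / (c * abs(zeta_dd)), 1.0)  # IG(mean, scale)
    return 1.0 / inv_lambda
```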
4 Experimental Results
Dataset
We crawl data from Sina Weibo, the largest Chinese micro-blog platform. The dataset contains 2,000 randomly-selected verified users, each represented by a single document aggregating all the user's posts. We also crawl links between pairs of users when both are in our dataset. Links correspond to three types of interactions on Weibo: mentioning, retweeting and following. 4
Perplexity Results
As an initial reality check, we first apply a simplified version of our model which only uses user interactions for topic modeling and does not predict links. This permits a direct comparison of our model's performance against LDA and Markov random topic fields (Daumé III, 2009, MRTF) by evaluating perplexity.
We set \alpha = \alpha' = 15 and run all models in this and the following sections with 20 topics. The results are the average values of five independent runs. Following Daumé, in each run, for each document, 80% of its tokens are randomly selected for training and the remaining 20% are held out for testing. As the training corpus is generated randomly, seeding is not applied in this section. The results are given in Table 1, where I- denotes that the model incorporates user interactions.
The results confirm that our model outperforms both LDA and MRTF and that its use of user interactions holds promise.
Link Prediction Results
In this section, we apply our model to link prediction tasks and evaluate by predictive link rank (PLR). A document's PLR is the average rank, among all documents, of the documents to which it actually links; lower values of PLR are therefore better. Figure 2 breaks out the 5-fold cross-validation results and the distinct extensions of RTM.5 The results support the value of combining all three extensions in Lex-IS-MED-RTM, although for mentioning and retweeting, Lex-IS-MED-RTM and IS-RTM are quite close.
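For reference, PLR can be computed as in the sketch below (an illustration under assumed inputs: scores are the model's link scores for every candidate document, higher meaning more likely):

```python
import numpy as np

def predictive_link_rank(scores, true_links):
    """Average 1-based rank of a document's true links among all candidates."""
    order = np.argsort(-scores)                  # best-scored candidates first
    rank = np.empty(len(scores), dtype=int)
    rank[order] = np.arange(1, len(scores) + 1)
    return float(np.mean([rank[t] for t in true_links]))

# e.g. predictive_link_rank(np.array([0.9, 0.1, 0.4]), true_links=[2]) == 2.0
```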
Applying user interactions does not always produce improvements. This is because in our intrinsic evaluation, we assume that the links in the test set are not observable and cluster priors are not applied. However, according to the training performance (extrinsic evaluations still in progress), user interactions do benefit link prediction performance when links are partially available, e.g., suggesting more links based on observed links. In contrast, hinge loss and lexical term weights do not depend on metadata availability and generally produce improvements in link prediction performance.

5 IS- denotes that the model incorporates user interactions and seed words, Lex- means that lexical terms were included in the link probability function (Equation 3), and MED- denotes max-margin learning (Zhu et al., 2014; Zhu et al., 2012). Each type of link is applied separately; e.g., in Figure 2(a) results are based only on mentioning links, ignoring retweeting and following links.
Illustrative Example
We illustrate model behavior qualitatively by looking at two test set users, designated A and B. User A is a reporter who runs "We Media" on his account, sending news items to followers, and B is a consultant with a wide range of interests. Their tweets reveal that both are interested in social news, a topic emphasizing words like society, country, government, laws, leaders, political party, news, etc. Both often retweet news related to unfairness in society and local government scandals (government, police, leaders, party, policy, chief secretary). For example, User A retweeted a report that a person about to be executed was unable to take a photo with his family before his execution, writing I feel heartbroken. User B retweeted news that a mayor was fired and investigated because of a bribe; in his retweet, he expresses his dissatisfaction with what the mayor did when he was in power. In addition, User A follows new technology (smart phone, Apple, Samsung, software, hardware, etc.) and B is interested in food (snacks, noodles, wine, fish, etc.).
As ground truth, there is a mentioning link from A to B; Table 2 shows this link's PLR in the mentioning models, which generally improves with model sophistication. The mentioning tweet is a news item that is consistent with the model's characterization of the users' interests (particularly social news and technology): a Samsung Galaxy S4 exploded and caused a fire while charging.
Quantitative Analysis
Topic Quality. Automatic coherence detection (Lau et al., 2014) is an alternative to manual evaluation of topic quality (Chang et al., 2009). For each topic, the average pointwise mutual information (PMI) of its top n words, computed against a reference corpus, serves as a measure of topic coherence.7 Topic quality improves with user interactions and max-margin learning (Table 3). PMI drops when lexical terms are added to the link probability function, however. This is consistent with the role of lexical terms in the model; their purpose is to improve link prediction performance, not topic quality.
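A minimal sketch of this coherence score (our own; the reference corpus is represented as a list of token sets, and the smoothing constant is an assumption):

```python
import numpy as np
from itertools import combinations

def topic_pmi(top_words, ref_docs, eps=1e-12):
    """Average PMI over pairs of a topic's top words, using document
    co-occurrence frequencies in a reference corpus."""
    D = len(ref_docs)
    def p(*ws):
        return sum(1 for doc in ref_docs if all(w in doc for w in ws)) / D
    pairs = combinations(top_words, 2)
    return float(np.mean([np.log((p(a, b) + eps) / ((p(a) + eps) * (p(b) + eps)))
                          for a, b in pairs]))
```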
Average Regression Value. One way to assess the quality of link prediction is to compare the scores of (ground-truth) linked documents to documents in general. In Table 3, the Average Regression Values show this comparison as a ratio. The higher the ratio, the more linked document pairs differ from unlinked pairs, which means that linked documents are easier to distinguish. This ratio improves as RTM extensions are added, indicating better link modeling quality.

6 Numerically its proportion is consistently lower for User A, whose interests are more diverse.

7 We set n = 20 and use a reference corpus of 1,143,525 news items from Sogou Lab, comprising items from June to July 2012, http://www.sogou.com/labs/dl/ca.html. Each averages ∼347 tokens, using the same segmentation scheme as the experimental corpus.
In the SD/Avg row of Table 3, we also compute the ratio of the standard deviation to the mean of the regression values. The ratios given by the models with hinge loss are lower than those given by models without it. This means that the regression values given by the models with hinge loss are more concentrated around the average value, suggesting that these models can better identify linked pairs, even though the ratio of linked pairs' average regression value to all pairs' average value is lower.
Conclusions and Future Work
We introduce a new topic model that takes advantage of document links, incorporating link information straightforwardly by deriving clusters from the link graph and assigning each cluster a separate Dirichlet prior. We also take advantage of locally-derived distributed representations to "seed" the model's latent topics in an informed way, and we integrate max-margin prediction and lexical regression to improve link prediction quality. Our quantitative results show improvements in predictive link rank, and our qualitative and quantitative analysis illustrate that the model's behavior is intuitively plausible.
In future work, we plan to engage in further model analysis and comparison, to explore alterations to model structure, e.g. introducing hierarchical topic models, to use other clustering methods to obtain priors, and to explore the value of predicted links for downstream tasks such as friend recommendation (Pennacchiotti and Gurumurthy, 2011) and inference of user attributes (Volkova et al., 2014).
Figure 1: A graphical model of our model for two documents. The contribution of our model is the use of document clusters (π), the use of words (w) in the prediction of document links (y), and a max-margin objective.

... et al. (2014), who use neighbor relationships to improve prediction of user-level attributes.
Figure 1 is a two-document segment of our model, which has the following generative process:
1. For each related-document cluster l ∈ {1, . . . , L}: draw π_l ∼ Dir(α')
2. For each topic k ∈ {1, . . . , K}:
   (a) draw word distribution φ_k ∼ Dir(β)
   (b) draw topic regression parameter η_k ∼ N(0, ν^2)
3. For each word v ∈ {1, . . . , V}: draw lexical regression parameter τ_v ∼ N(0, ν^2)
4. For each document d ∈ {1, . . . , D}:
   (a) draw topic proportions θ_d ∼ Dir(α π_{l_d})
   (b) for each word t_{d,n} in document d:
      i. draw a topic assignment z_{d,n} ∼ Mult(θ_d)
      ii. draw a word t_{d,n} ∼ Mult(φ_{z_{d,n}})
5. For each linked pair of documents d and d': draw the binary link indicator y_{d,d'} ∼ Ψ(· | z_d, z_{d'}, w_d, w_{d'})
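A toy NumPy simulation of steps 1-4 (our own sketch with arbitrary small dimensions; step 5 is omitted because Ψ depends on the learned regression parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
L, K, V, D, N = 3, 5, 50, 10, 20                 # toy sizes
alpha_p, alpha, beta, nu = 15.0, 15.0, 0.01, 1.0
pi = rng.dirichlet(np.full(K, alpha_p), size=L)  # step 1: cluster priors
phi = rng.dirichlet(np.full(V, beta), size=K)    # step 2a: word distributions
eta = rng.normal(0.0, nu, size=K)                # step 2b: topic regression
tau = rng.normal(0.0, nu, size=V)                # step 3: lexical regression
l_d = rng.integers(L, size=D)                    # cluster of each document
docs = []
for d in range(D):                               # step 4
    theta = rng.dirichlet(alpha * pi[l_d[d]])    # 4a: topic proportions
    z = rng.choice(K, size=N, p=theta)           # 4b-i: topic assignments
    docs.append([rng.choice(V, p=phi[k]) for k in z])  # 4b-ii: words
```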
Figure 2: Lex-IS-MED-RTM, combining all three extensions, performs best on predicting mentioning and following links, although IS-RTM achieves a close value on mentioning links and even a slightly better value on retweeting links. User interactions (denoted by "I") sometimes bring down the performance, as cluster priors are not applied in this intrinsic evaluation.
Table 2: Data for Illustrative Example. Consistent with intuition, the prevalence of the social news topic also generally increases as the models grow more sophisticated.6

Table 3: Values for Quantitative Analysis

Model                        RTM      IS-RTM   Lex-IS-RTM  MED-RTM  IS-MED-RTM  Lex-IS-MED-RTM
Topic PMI                    1.186    1.224    1.216       1.214    1.294       1.229
Avg. Regression Values
  Linked Pairs               0.2403   0.3692   0.4031      0.7220   0.6321      0.7668
  All Pairs                  0.06636  0.07729  0.08020     0.2482   0.2041      0.2428
  Ratio                      3.621    4.777    5.026       2.909    3.097       3.158
SD/Avg                       0.9415   1.2081   1.2671      0.6364   0.7254      0.7353
1 In the experiment, seed words must appear at least 1,000 times.
2 Both approaches contrast with the links-only approach of Kim and Leskovec (2012).
4 We use ICTCLAS (Zhang et al., 2003) for segmentation. After stopword and low-frequency word removal, the vocabulary includes 12,257 words, with ∼755 tokens per document and 5,404 links.
Acknowledgements

We thank Hal Daumé III for providing his code. This work was supported in part by NSF award 1211153. Boyd-Graber is supported by NSF Grants CCF-1409287, IIS-1320538, and NCSE-1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.
David Andrzejewski and Xiaojin Zhu. 2009. Latent Dirichlet allocation with topic-in-set knowledge. In Conference of the North American Chapter of the Association for Computational Linguistics.

David M. Blei and Jon D. McAuliffe. 2007. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems.

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.

Jonathan Chang and David M. Blei. 2010. Hierarchical relational models for document networks. The Annals of Applied Statistics, pages 124-150.

Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-Graber, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Proceedings of Advances in Neural Information Processing Systems.

Hal Daumé III. 2009. Markov random topic fields. In Proceedings of the Association for Computational Linguistics.

Myunghwan Kim and Jure Leskovec. 2012. Latent multi-group membership graph model. In Proceedings of the International Conference of Machine Learning.

Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the Association for Computational Linguistics.

Jon D. McAuliffe and David M. Blei. 2008. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems.

Miller McPherson, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology, pages 415-444.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems.

David M. Mimno and Andrew McCallum. 2008. Topic models conditioned on arbitrary features with Dirichlet-multinomial regression. In Proceedings of Uncertainty in Artificial Intelligence.

Viet-An Nguyen, Jordan L. Boyd-Graber, and Philip Resnik. 2013. Lexical and hierarchical topic regression. In Proceedings of Advances in Neural Information Processing Systems.

Marco Pennacchiotti and Siva Gurumurthy. 2011. Investigating topic models for social media user recommendation. In Proceedings of the World Wide Web Conference.

Nicholas G. Polson and Steven L. Scott. 2011. Data augmentation for support vector machines. Bayesian Analysis, 6(1):1-23.

Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581.

Svitlana Volkova, Glen Coppersmith, and Benjamin Van Durme. 2014. Inferring user political preferences from streaming communications. In Proceedings of the Association for Computational Linguistics.

Hanna M. Wallach. 2008. Structured topic models for language. Ph.D. thesis, University of Cambridge.

Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. HHMM-based Chinese lexical analyzer ICTCLAS. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing - Volume 17.

Jun Zhu, Amr Ahmed, and Eric P. Xing. 2012. MedLDA: Maximum margin supervised topic models. Journal of Machine Learning Research, 13(1):2237-2278.

Jun Zhu, Ning Chen, Hugh Perkins, and Bo Zhang. 2014. Gibbs max-margin topic models with data augmentation. Journal of Machine Learning Research, 15(1). |
1,587,075 | New Approach to Frequency Dictionaries -Czech Example | Using the example of the recent edition of the Frequency Dictionary of Czech, we describe and explain some new general principles that should be followed to get better results from practical uses of frequency dictionaries, chiefly adopting average reduced frequency instead of absolute frequency for ordering items. The formula for calculating the average reduced frequency is presented, together with a brief explanation and examples clarifying the difference between the measures. Then the Frequency Dictionary of Czech and its parts are described. | [
12227075
] | New Approach to Frequency Dictionaries -Czech Example
Jaroslava Hlaváčová hlava@ufal.mff.cuni.cz
Institute of Formal and Applied Linguistics
Charles University
Malostranské nám. 25, 118 00 Prague, Czech Republic
New Approach to Frequency Dictionaries -Czech Example
Using the example of the recent edition of the Frequency Dictionary of Czech, we describe and explain some new general principles that should be followed to get better results from practical uses of frequency dictionaries, chiefly adopting average reduced frequency instead of absolute frequency for ordering items. The formula for calculating the average reduced frequency is presented, together with a brief explanation and examples clarifying the difference between the measures. Then the Frequency Dictionary of Czech and its parts are described.
Introduction
Frequency dictionaries are very popular for two main reasons:
• theoretical: they bring interesting insight into the vocabulary of the language, from the core (the most frequent words) to the periphery;
• practical: they can be used directly, especially for selecting entries for new monolingual or bilingual dictionaries of various kinds.
Frequency is one of the most popular characteristics of words. It is often the main criterion for lexicographers deciding whether or not to include a word in a dictionary. The frequency of a word is easy to calculate: it is the number of its occurrences in a text. If we compare the frequencies of all words from a given text, we immediately see which words are common. However, that result concerns only the text we used for our calculations, not the language as a whole. The frequency as a number of word occurrences depends on the text: not only on its length, but on its subject, its author(s), style and other properties. We can take a text collection containing various styles, authors and genres. If we could make the collection out of all existing texts, written as well as spoken, then we could calculate the real frequency of all words in the language. Such a task is of course impossible; we have to make do with a sample of texts: a language corpus.
The Corpus and its Treatment
The bigger the corpus, the more reliable facts about the language we can infer from it. However, the corpus size is not the only characteristic affecting the results. It also depends on the composition of the corpus, on the proportions of its individual constituents. If we, for instance, included only fiction in the corpus, we would probably not get special terms, not even the most common ones. On the other hand, including only technical reports or newspapers would skew the frequencies of more common words.
In other words, we need a representative corpus formed by a great variety of texts in order to cover the major part of language phenomena, for our purpose especially lexical ones. The basis of our dictionary was the Czech National Corpus, version SYN2000.1 Not only is it quite large (100 million word forms), but it includes a wide spectrum of different texts. They can be grouped into three main categories2:
1. fiction -15%;
2. expert texts -25%;
3. newspaper articles -60%.
The corpus SYN2000 is automatically lemmatized and morphologically tagged. Every word form from the corpus was assigned a unique basic form (lemma) and an appropriate morphological tag (Hajič, 2004; Hajič and Hladká, 1998). For the frequencies, we worked with lemmas, not with word forms. It would be interesting to calculate a frequency dictionary of word forms too, but a frequency dictionary of lemmas certainly has more practical applications.
Error Handling
The original corpus SYN2000 contains errors of several types, especially:
1. misspellings and typos;
2. morphological and disambiguation errors.
One possibility was to ignore them all and calculate the frequencies purely automatically. This would be very straightforward and would not demand any human intervention, but some numbers would be inaccurate. That is why we decided to make some "manual" corrections, according to the type of the error.
Misspellings and Typos
This type of error is very hard to discover. We would need to use a spellchecker, but it could not be done automatically, because there are quite a lot of words not included in any spellchecker dictionary that are still correct. A human would have to supervise the spellchecker, but there are approximately 2.5% of unrecognized words in the corpus, which is too many for any manual work. However, unrecognized words are mainly foreign names; the number of spelling errors is not very high and does not affect the frequency results seriously, so we left them untouched.
Morphological and Disambiguation Errors
There are some words in Czech with two or more possible spellings. Typical doublets are citron / citrón, komunizmus / komunismus. Moreover, we find in the corpus even incorrect spellings (dýchat / dejchat -English to breathe) and still want to recognize them under the same lemma as the correct one(s).
It was necessary to go through all these possibilities manually and merge them under the same lemma. In fact, as a side effect, this brought some hints for improving the basic morphological dictionary of Czech. Disambiguation errors were more serious. Czech has a lot of homonymous word forms that needed to be disambiguated. The disambiguation was done statistically, which naturally was not errorless. That is why we decided to check all the homonymous forms. We got them from the morphological dictionary. It was not possible to check and correct them manually, because some homonymous forms are very frequent; for instance the word form bez can be either a preposition (without) or the nominative / accusative of a noun (a bush, black elder), and their common frequency in the corpus is 85,541. We checked manually only a random sample of 200 occurrences of every homonymous form and counted the ratio of the possibilities. If the ratio was less than 5%, the less frequent alternative was not taken into account and all the occurrences of the word form were assigned to the more frequent lemma. If the ratio was higher, we added its numeric value as a note to the dictionary entry to warn the user that the calculated frequencies could be affected by the homonymy. We know that the results still are not fully correct, but they are probably better than without the manual changes. We will not go into more detail about this subject, because it is not entirely language independent; only languages with a similar degree of homonymy could take advantage of it. The detailed description of all the manual processing is in (Čermák and Křen, 2005). The final corrected version of the corpus SYN2000 became available to all users, so that they could use both the Dictionary and the Corpus as compatible data for their own research.
About Frequencies
Having the corpus, we can easily count the frequencies of all its words. If it is representative and big enough, we can trust the results more, but it will never overcome the unevenness of word distribution. There are always texts with an unusual concentration of a special word (a hero of a novel, a newly discovered species in an article of a popular journal, a name of an unknown village where something important took place, ...). Such a word then gets a much higher frequency in the corpus than would correspond to its frequency in the language. Lexicographers wanting to select entries for their (never unlimited) dictionaries know the problem very well. Especially towards the lower frequencies, the order has to be manually corrected. It always happens that some special words have a higher frequency in the corpus than in the language, and the lexicographers have to rely on their individual language experience and intuition (which, moreover, is never the same for two persons). In fact, they need the commonness of words rather than their frequency. That is the reason why we used for our dictionary not the (absolute) frequency as the primary criterion, but the average reduced frequency (Savický and Hlaváčová, 2002). It singles out those words that occur in (a few) clusters in the corpus: the average reduced frequency (ARF) of such words is much smaller than the ARF of words with the same frequency but an even distribution in the corpus. In this way we can better approach the concept of word commonness.
Average Reduced Frequency
We will present here the principles of ARF only briefly. The detailed description of its derivation can be found in (Savický and Hlaváčová, 2002). The corpus consists of so-called positions. Every position is occupied by one and only one word. We number the corpus positions 1 to N; thus N is the length of the whole corpus. Let us have a word with frequency f in our corpus. We will split the whole corpus into f segments of the same length N/f (for simplicity we can at the beginning suppose that N is divisible by f). If our word were spread evenly in the corpus, every segment would contain one and only one occurrence of the word. Usually, the situation is different: some segments contain more than one occurrence, others contain none. The number of segments occupied by at least one occurrence will be called the reduced frequency.
The reduced frequency has one bad property. Its value for a word occurring in a single small cluster is either 1 or 2, depending on the position of the cluster within the corpus: if the whole of the cluster is situated inside a segment, the reduced frequency is 1; if the border between segments falls in the middle of the cluster, the reduced frequency of the word is 2. To avoid this imperfection and make the measure more objective, we calculate the average reduced frequency as the arithmetic mean over all possible beginnings of the first segment. For this purpose we imagine the corpus not as a line segment but as a circle: after the last corpus position, the first one comes. Then we can move the beginning of the first segment along the whole circle and count reduced frequencies for every position. The average reduced frequency is calculated according to the following formula:
ARF = \frac{1}{v} \sum_{i=1}^{f} \min\{d_i, v\}
where v = N/f and d_i denotes the distance between two consecutive occurrences of the word in the corpus. Particularly, if n_1, n_2, ..., n_f are the positions where the word occurs, then d_i = n_i - n_{i-1} for every i = 2, ..., f, and d_1 = n_1 + (N - n_f), which is the distance between the last and the first occurrence of the word in the cyclic order of the corpus described above.
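A small Python sketch of the formula (ours, not connected to the dictionary's tooling); positions are the 1-based corpus positions n_1 < ... < n_f of the word:

```python
import numpy as np

def arf(positions, N):
    """Average reduced frequency of a word occurring at the given
    corpus positions, in a corpus of length N."""
    f = len(positions)
    v = N / f
    # cyclic gaps d_1..d_f, with d_1 = n_1 + (N - n_f)
    d = np.diff(positions, prepend=positions[-1] - N)
    return float(np.sum(np.minimum(d, v)) / v)

# a perfectly even word attains ARF = f: arf([1, 11, 21, 31], 40) == 4.0
# a tight cluster stays near 1:          arf([1, 2], 100) == 1.02
```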
Properties of the ARF
Though there is the word "frequency" in the name of the measure, the average reduced frequency can have (and usually has) a non-integer value. ARF takes values from the interval [1, f].
Only the words with absolute frequency 1 have the lowest possible ARF = 1.
Only words with an entirely even distribution within the corpus can reach the highest value of ARF, namely the value of the absolute frequency f. In practice, however, only words with absolute frequency 1 reach that value: there was no word with higher frequency in the Czech National Corpus that was distributed entirely evenly across the whole corpus. A word occurring in only one small cluster has an ARF slightly higher than 1; how great the difference between the two frequencies will be depends more on the length of the cluster than on its absolute frequency. A word occurring in two small clusters has an ARF slightly higher than 2, and similarly for the following small integers. The more evenly a word is distributed, the smaller the difference between the absolute and the average reduced frequencies.
In (Savický and Hlaváčová, 2002), three different measures overcoming the drawbacks of the absolute frequency are presented. Besides ARF, there are AWT, the average waiting time:

AWT = \frac{1}{2}\left(1 + \frac{1}{N} \sum_{i=1}^{f} d_i^2\right)

and ALD, the average logarithmic distance:

ALD = \frac{1}{N} \sum_{i=1}^{f} d_i \log_{10} d_i.
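These two measures admit equally short sketches (again ours), reusing the cyclic gaps d_i computed as in the arf() sketch above:

```python
import numpy as np

def cyclic_gaps(positions, N):
    return np.diff(positions, prepend=positions[-1] - N).astype(float)

def awt(positions, N):
    d = cyclic_gaps(positions, N)
    return 0.5 * (1.0 + np.sum(d ** 2) / N)

def ald(positions, N):
    d = cyclic_gaps(positions, N)
    return float(np.sum(d * np.log10(d)) / N)
```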
The ARF was chosen for two reasons:
1. this measure became part of the corpus manager Bonito3, which is mainly used by users of the CNC;
2. it is the most straightforward of the three measures.
Let us show the difference between the absolute and average reduced frequencies on examples from the dictionary. We will look at two words with the same frequency, 223. The first word is molekulový (in English molecular), the second one nahromadit (in English to accumulate).
Description of the Frequency Dictionary of Czech
Having explained the general principles, let us have a look at the dictionary itself (Čermák et al., 2004). It has two versions: an electronic one on a CD, and a paper one in a book.
The Book
The book consists of 5 lists:
1. Frequency Dictionary of Common Words (alphabetically ordered) -50,000 items
2. Frequency Dictionary of Common Words (ordered according to absolute frequency) -20,000 items
3. Frequency Dictionary of Common Words (ordered according to average reduced frequency) -20,000 items
4. Frequency Dictionary of Proper Names (ordered according to average reduced frequency) -2,000 items
5. Frequency Dictionary of Abbreviations (ordered according to average reduced frequency) -1,000 items
and 3 appendices, described below (delimiters, graphemes, and lexical cover of texts).
Alphabetic List
The most important and the largest is the first part, which includes the 50,000 most common Czech words. We will call it the central part of the dictionary in the rest of this contribution. It does not contain proper names; they were gathered in a special list (number 4 above). This basic list is alphabetically ordered. The words were selected from the corpus according to their average reduced frequency (ARF); that is, the dictionary contains the 50,000 most common words from the corpus SYN2000 according to ARF. We are convinced that this characteristic is much more appropriate for the words than the (absolute) frequency. However, for various comparisons, we have put the number expressing the absolute frequency (FRQ) into the list too. In fact, neither the frequency nor the ARF is important by itself.
What is important are the ranks according to both measures; they are, of course, incorporated into the list as well. The last thing calculated for every entry is its frequency in the three main text categories listed in the description of the corpus SYN2000. As their proportions in the corpus are not the same, the frequencies were normalized. The normalized frequency expresses the frequency that the word would have if all the genres were represented equally in the corpus (i.e., if one third of the corpus were fiction, one third expert texts and one third newspapers). For better comparison among different words, the normalized frequencies were converted into ratios (in %).
Let us have a look at the two entries from our previous examples. Though their absolute frequencies (FRQ) are the same (223), their average reduced frequencies (ARF) differ, and accordingly so do their respective ranks (see Table 1). The numbers in the last 3 columns of Table 1 mean that 99% of all the occurrences of the word molecular were found in the section of expert texts, 1% occurred in newspapers and 0% in fiction. The second example, to accumulate, demonstrates a more even representation across the three genres. It corresponds with the figures presented above.
It has the following implication: if somebody wanted to create a small pocket bilingual dictionary of, say, 20,000 entries, they would probably include the word to accumulate, but not the word molecular. This can be seen directly from the ARF rank.
word        English translation  rankARF  ARF  rankFRQ  FRQ  fiction (%)  expert (%)  newspapers (%)
molekulový  molecular            37 502   22   18 959   223  0            99           1
nahromadit  to accumulate        14 970   136  18 915   223  34           43           23

Table 1: Example of two words from the CNC with the same frequency 223.
Frequency Ordering
The next two lists (2 and 3) are ordered according to the respective frequencies, FRQ and ARF. For better orientation, the individual items contain not only the respective frequency but also both frequency ranks. The detailed information about the representation in the three genres is not included; it has to be looked up in the central part.
The third list, the one ordered according to ARF, is a subset of the central list. The same cannot be said about the second list, ordered according to the absolute frequency (FRQ). There are 15 words with a frequency rank below 20,000 but an ARF rank above 50,000. That is why those 15 words appear in the smaller second list but not in the central part of the dictionary. All of them are special terms from various fields (physics, computer science, biology, electrical engineering, ...). They are mainly foreign words; some of them almost do not need translation into English (e.g. suprematismus, repertorium, rezistor, heparin). We must admit that these words really do not belong among the 50,000 most common Czech words.
Proper Names and Abbreviations
The frequency dictionaries of proper names and abbreviations are both ordered according to the average reduced frequency, and for every item they contain the same information as the central part, including representation in the three genres. The ARF was especially important in the case of proper names, because fiction often features a hero with a huge absolute frequency in one novel who occurs nowhere else. Despite the high frequency of such words, they do not belong among the most common Czech proper names. The list of the most common proper names is not classified into categories: names of persons, towns, countries, companies and others are gathered in one list. Famous politicians and sportsmen entered the list just because of the large proportion of newspaper text in the corpus. In fact, this part of the dictionary is mainly a testimony of the corpus' time of origin. The same can be said about the dictionary of abbreviations. This was the main reason why we did not include them in the central list, as was the traditional practice in most older frequency dictionaries (see for instance the old Czech frequency dictionary (Jelínek et al., 1961)).
Delimiters
The ten most frequent delimiters are presented in the same manner as the central list, ordered according to ARF. From the list we can, for instance, infer that the delimiters "." and "," are used in all types of texts, while "?" and "!" are much more typical of fiction, and brackets of expert texts.
List of Graphemes
Only absolute frequencies were counted for the graphemes. It is interesting to compare the order of graphemes with the similar order calculated 20 years ago (Těšitelová, 1985) on the basis of a much smaller corpus (540,000 words). The two orders are similar, but not the same.
Lexical Cover of Texts
This small table confirms the well-known fact that even a small number of the most frequent words covers the majority of texts. For instance, the 10,000 most frequent words cover more than 91% of the whole corpus SYN2000.
For the calculation of this table the absolute frequency was used.
The CD
The CD contains three lists:
1. Common Words -50,000 items
2. Proper Names -2,000 items
3. Abbreviations -1,000 items
The content of each list is identical with its counterpart in the paper book. In addition, the CD is equipped with the browser EFES, which enables users to handle the data more effectively than is possible with the paper version. Of course it is possible to reorder the data according to any of the 8 items included in the lists: ARF, FRQ, rank ARF, rank FRQ, relative representation in the three main genres, and alphabetically. That is the reason why the two smaller lists (lists 2 and 3 in the listing of the previous section) are not presented separately on the CD. However, there is one tiny difference: those 15 words mentioned earlier (with an ARF rank above 50,000 and a frequency rank below 20,000) are not present on the CD, because they do not belong to the central list.
The main function of the browser is to enable searching the data. The simplest search is alphabetical. As opposed to the paper version, we can search not only by the beginnings of words, but also by a final or even an internal substring. Thus we can make, for instance, our own frequency dictionary of individual suffixes or roots. The only drawback is that it cannot deal with regular expressions. We can also search the lists according to other criteria and combine them into more complicated search conditions. It is possible to state intervals for the individual numeric categories. We can for instance find all the items with rankARF > 30,000 and rankFRQ < 10,000. In this way, we discover the words with very uneven distribution within the corpus. We can also include constraints on genre representation in our queries. The results can be stored on external media for future use.
Conclusion
We have presented the big project of the Frequency Dictionary of Czech. We have shown the main features that make it different from other similar projects; the decisions are justified in the text of this contribution. The uniqueness of the dictionary consists in using not absolute but average reduced frequency for ordering words. This overcomes the incidental unevenness of word distribution within the corpus that distorts the credibility of the results.

We are convinced that using a measure other than absolute frequency makes the frequency dictionary more appropriate for direct use in compiling other sorts of dictionaries or encyclopedias.
Acknowledgements
This work was supported by the grants 1ET101120503 and 1ET101120413 of the Grant Agency of the Academy of Sciences of the Czech Republic.
Figure 1: Distribution of all the occurrences of the word molekulový in the corpus SYN2000.
Figure 2: Distribution of all the occurrences of the word nahromadit in the corpus SYN2000.
1 http://ucnk.ff.cuni.cz/
2 All the categories are further divided into more subtle subcategories, but they were not taken into consideration for the frequency dictionary, because their number (several tens) is not appropriate for our purpose.
3 The pictures were taken from the corpus manager Bonito, which was developed by Pavel Rychlý at Masaryk University in Brno, Czech Republic.
J. Hajič and B. Hladká. 1998. Tagging inflective languages: Prediction of morphological categories for a rich, structured tagset. In Proc. ACL-Coling'98, pages 483-490. Montreal, Canada.

J. Hajič. 2004. Disambiguation of Rich Inflection (Computational Morphology of Czech). Karolinum, Praha. ISBN 80-246-0282-2.

J. Jelínek, J. V. Bečka, and M. Těšitelová. 1961. Frekvence slov, slovních druhů a tvarů v českém jazyce. Státní pedagogické nakladatelství, Praha.

P. Savický and J. Hlaváčová. 2002. Measures of word commonness. Journal of Quantitative Linguistics, 9:215-231.

M. Těšitelová. 1985. Kvantitativní charakteristiky současné češtiny. Academia, Praha.

F. Čermák and M. Křen. 2005. New generation corpus-based frequency dictionaries: The case of Czech. International Journal of Corpus Linguistics, 10:453-467.

F. Čermák et al. 2004. Frekvenční slovník češtiny. NLN, Praha. ISBN 80-7106-676-1. |
9,943,147 | Content Aggregation in Natural Language Hypertext Summarization of OLAP and Data Mining Discoveries | We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain-independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge. | [
2652169
] | Content Aggregation in Natural Language Hypertext Summarization of OLAP and Data Mining Discoveries
Jacques Robin
Centro de Informática (CIn), Universidade Federal de Pernambuco (UFPE), Caixa Postal 7851, 50732-970 Recife, Brazil

Eloi L. Favero
Departamento de Informática (DI), Universidade Federal do Pará (UFPA), 66075-900 Belém, Pará, Brazil
Content Aggregation in Natural Language Hypertext Summarization of OLAP and Data Mining Discoveries
We present a new approach to paratactic content aggregation in the context of generating hypertext summaries of OLAP and data mining discoveries. Two key properties make this approach innovative and interesting: (1) it encapsulates aggregation inside the sentence planning component, and (2) it relies on a domain-independent algorithm working on a data structure that abstracts from lexical and syntactic knowledge.
1 Research context: hypertext executive summary generation for intelligent decision-support

In this paper, we present a new approach to content aggregation in Natural Language Generation (NLG). This approach has been developed for the NLG system HYSSOP (HYpertext Summary System of On-line analytical Processing), which summarizes OLAP (On-Line Analytical Processing) and data mining discoveries in a hypertext report. HYSSOP is itself part of the Intelligent Decision-Support System (IDSS) MATRIKS (Multidimensional Analysis and Textual Reporting for Insight Knowledge Search), which aims to provide a comprehensive knowledge discovery environment through seamless integration of data warehousing, OLAP, data mining, expert system and NLG technologies.
The MATRIKS intelligent decision-support system
The architecture of MATRIKS is given in Fig. 1. It extends previous cutting-edge environments for Knowledge Discovery in Databases (KDD) such as DBMiner (Han et al. 1997) by the integration of:
• a data warehouse hypercube exploration expert system allowing automation and legacy of the expertise in dimensional data warehouse exploration strategies developed by human data analysts using OLAP queries and data mining tools;
• a hypertext executive summary generator reporting data hypercube exploration insights in the most concise and familiar way: a few web pages of natural language.
These two extensions allow an IDSS to be used directly by decision makers without constant mediation of a data analyst.
The HYSSOP natural language hypertext summary generator

To our knowledge, the development of HYSSOP is pioneering work in coupling OLAP and data mining with natural language generation (Fig. 2). We view such coupling as a synergetic fit with tremendous potential for a wide range of practical applications. In a nutshell1, while NLG is the only technology able to completely fulfill the reporting needs of OLAP and data mining, these two technologies are reciprocally the only ones able to completely fulfill the content determination needs of a key NLG application sub-class: textual summarization of quantitative data.

1 See Favero (2000) for further justification for this view, as well as for details on the motivation and technology underlying MATRIKS.
Generators that summarize large amounts of quantitative data in a short natural language text (such as ANA (Kukich 1988), GOSSIP (Carcagno and Iordanskaja 1993), PLANDoc (McKeown, Kukich and Shaw 1994), among others) generally perform content determination by relying on a fixed set of domain-dependent heuristic rules. Such an approach suffers from two severe limitations that prevent it from reporting the most interesting content from an underlying database:
• it does not scale up for analytical contexts with high dimensionality and which take into account the historical evolution of data through time; such complex contexts would require a combinatorially explosive number of summary content determination heuristic rules;
• it can only select facts whose class has been thought of ahead by the rule base author, while in most cases, it is its very unexpectedness that makes a fact interesting to report.
OLAP and data mining are the two technologies that emerged to tackle precisely these two issues: for OLAP, efficient search in a high-dimensionality, historical data search space, and for data mining, automatic discovery in such spaces of hitherto unsuspected regularities or singularities. In the MATRIKS architecture, heuristic rules are not used to define the content worth reporting in a data warehouse executive summary. Instead, they are used to guide the process of searching the warehouse for unexpected facts using OLAP and data mining operators.
A data warehouse hypercube exploration expert system encapsulates such rules in its knowledge base to perform content determination. An example output of such an expert system, and input to HYSSOP, is given in Fig. 3: the data cells selected for inclusion in the output textual summary are passed along with their OLAP context and the data mining annotations that justify their relevance (Sarawagi, Agrawal and Megiddo, 1998). One output generated by HYSSOP from this input is given in Fig. 4.
Fig. 4 -Example of HYSSOP front-page output
The 40% decrease in Diet Soda sales was very atypical, mostly due to the combination of the following facts:
• across the rest of the regions, the July to August average variation for that product was a 9% increase;
• over the rest of the year, the average monthly decrease in Eastern sales for that product was only 7%;
• across the rest of the product line, the Eastern sales variation from July to August was 2%.
Fig. 5 - Example of HYSSOP follow-up page output (behind the 40% front-page anchor link)
The architecture of HYSSOP is given in Fig. 2. HYSSOP is entirely implemented in LIFE (Ait-Kaci and Lincoln 1989), a language that extends Prolog with functional programming, arityless feature structure unification and hierarchical type constraint inheritance. For content realization, HYSSOP relies on feature structure unification. Lexicalization is inspired by the approach described in (Elhadad, McKeown and Robin 1997), while surface syntactic realization follows the approach described in (Favero and Robin 2000b). HYSSOP makes two innovative contributions to NLG research: one to hypertext content planning presented in (Favero and Robin 2000a) and one to content aggregation presented in the rest of this paper.
2 Research focus: content aggregation in natural language generation

A natural language generation system is traditionally decomposed into the following subtasks: content determination, discourse-level content organization, sentence-level content organization, lexical content realization and grammatical content realization. The first three subtasks together are often referred to as content planning, and the last two together as linguistic realization. This separation is now fairly standard and most implementations encapsulate each task in a separate module (Robin 1995; Reiter 1994). Another generation subtask that has recently received much attention is content aggregation. However, there is still no consensus on the exact scope of aggregation and on its precise relation to the five standard generation tasks listed above. To avoid ambiguity, we define aggregation here as: grouping several content units, sharing various semantic features, inside a single linguistic structure, in such a way that the shared features are maximally factored out and minimally repeated in the generated text. Defined as above, aggregation is essentially a key subtask of sentence planning. As such, aggregation choices are constrained by discourse planning decisions and they in turn constrain lexical choices.
In HYSSOP, aggregation is carried out by the sentence planner in three steps:
1. content factorization, which is performed on a tabular data structure called a Factorization Matrix (FM);
2. generation from the FM of a discourse tree representing the hypertext plan to pass down to the lexicalizer;
3. top-down traversal of the discourse tree to detect content units with shared features occurring in non-adjacent sentences and annotate them as anaphora. Such annotations are then used by the lexicalizer to choose the appropriate cue word to insert near or in place of the anaphoric item.
Content factorization in HYSSOP
The key properties of the factorization matrix that set it apart from previously proposed data structures on which to perform aggregation are that:
• it fully abstracts from lexical and syntactic information;
• it focuses on two types of information kept separate in most generators: (1) the semantic features of each sentence constituent (generally represented only before lexicalization), and (2) the linear precedence constraints between them (generally represented only late during syntactic realization);
• it visually captures the interaction between the two, which underlies the factorization phenomenon at the core of aggregation.
In HYSSOP, the sentence planner receives as input from the discourse planner an FM representing the yet unaggregated content to be conveyed, together with an ordered list of candidate semantic dimensions to consider for outermost factoring. The pseudo-code of HYSSOP's aggregation algorithm is given in Fig. 10. We now illustrate this algorithm on the input example FM that appears inside the bold sub-frame of the overall HYSSOP input given in Fig. 3. For this example, we assume that the discourse planner directive is to factor out first the exception dimension, followed by the product dimension, i.e., FactoringStrategy = [except, product]. This example illustrates the mixed-initiative choice of the aggregation strategy: part of it is dictated by the discourse planner to ensure that aggregation will not adversely affect the high-level textual organization that it carefully planned.
The remaining part, in our example factoring along the place and time dimensions, is left to the initiative of the sentence planner. The first step of HYSSOP's aggregation algorithm is to shift the priority dimension D of the factoring strategy to the second leftmost column of the FM. The second step is to sort the FM rows in (increasing or decreasing) order of their D cell values. The third step is to horizontally slice the FM into row groups with identical D cell values. The fourth step is to merge these identical cells and annotate the merged cell with the number of cells that it replaced. The FM resulting from these four first steps on the input FM inside the bold sub-frame of Fig. 3, using exception as the factoring dimension, is given in Fig. 6.
The fifth step consists of recursively calling the entire aggregation algorithm inside each row group, on the sub-FM to the right of D, using the remaining dimensions of the factoring strategy.
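Since Fig. 10's pseudo-code is not reproduced here, the following Python sketch (ours, not HYSSOP's LIFE code) illustrates the five steps on an FM represented as a list of dicts; the field names are invented for the example:

```python
def factor(rows, dim):
    """Steps 1-4: left-shift dim, sort rows on it, slice into row groups
    of equal value, and merge, annotating each merged cell with the
    number of cells it replaced."""
    groups = {}
    for row in sorted(rows, key=lambda r: str(r[dim])):
        groups.setdefault(row[dim], []).append(
            {k: w for k, w in row.items() if k != dim})
    return [(val, len(sub), sub) for val, sub in groups.items()]

def aggregate(rows, strategy):
    """Step 5: recurse inside each row group on the sub-FM to the right.
    Once the discourse planner's strategy is exhausted, fall back to the
    column with the fewest distinct values."""
    if len(rows) <= 1 or not rows[0]:
        return rows
    if strategy:
        dim, rest = strategy[0], strategy[1:]
    else:
        dim = min(rows[0], key=lambda c: len({str(r[c]) for r in rows}))
        rest = []
    return [(dim, val, n, aggregate(sub, rest))
            for val, n, sub in factor(rows, dim)]

cells = [{"except": "high", "product": "Diet Soda", "place": "East",
          "time": "Jul-Aug", "variation": "-40%"},
         {"except": "high", "product": "Birch Beer", "place": "national",
          "time": "Sep-Oct", "variation": "+42%"}]
plan = aggregate(cells, ["except", "product"])
```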
Let us now follow one such recursive call: the one on the sub-FM inside a bold sub-frame in Fig. 6 to the right of the exception column in the third row group. The result of the first four aggregation steps of this recursive call is given in Fig. 7. This time it is the product dimension that has been left-shifted and that provided the basis for row sorting, row grouping and cell merging. Further recursive calls are now triggered. These calls are different from the preceding ones, however, in that at this point all the input constraints provided by the discourse planner have already been satisfied. It is thus now up to the sentence planner to choose along which dimension to perform the next factorization step.
In the current implementation, the column with the lowest number of distinct values is always chosen. In our example, this translates as factoring along the time dimension for some row groups and along the space dimension for the others. The result of the recursive aggregation call on the sub-FM inside the bold frame of Fig. 7 is given in Fig. 8. In this case, factoring occurred along the time dimension. The fully aggregated FM resulting from all the recursive calls is given in Fig. 9. Note how the left to right embedding of its cells reflects exactly the left to right embedding of the phrases in the natural language summary of Fig. 4 generated from it.
Cue word generation in HYSSOP
Once content factorization is completed, the sentence planner builds, in two passes, the discourse tree that the lexicalizer expects as input. In the first pass, the sentence planner patterns the recursive structure of the tree (which itself prefigures the output text's linguistic constituent structure) after the left-to-right, narrowing embedding of sub-matrices inside the FM.
Fig. 9 -Final fully aggregated FM after all recursive calls
In the second pass, the sentence planner traverses this initial discourse tree to enrich it with anaphoric annotations that the lexicalizer needs to generate cue words such as "again", "both", "neither", "except", etc. A fragment of the sentence planner output discourse tree built from the aggregated FM of Fig. 9 is given in Fig. 12. The discourse tree spans horizontally, with its root to the left of the feature structure and its leaves to the right. Note in Fig. 12 the anaphoric annotations added in this pass.

A special class of aggregation-related cue phrases involves not only the sentence planner and the lexicalizer but also the discourse planner. One discourse strategy option that HYSSOP implements is to precede each aggregation group by a cue phrase explicitly mentioning the group's cardinal. An example summary front page generated using such a strategy is given in Fig. 11. The count annotations in the cell merging function of HYSSOP's aggregation algorithm are computed for that purpose. While the decision to use an explicit count discourse strategy lies within the discourse planner, the counts are computed by the sentence planner and their realization as cue phrases is carried out by the lexicalizer.
Related work in content aggregation
The main previous works on content aggregation are due to:
• (Dalianis 1995, 1996), whose ASTROGEN system generates natural language paraphrases of formal software specifications for validation purposes;
• (Huang and Fiedler 1997), whose PROVERB system generates natural language mathematical proofs from a theorem prover reasoning trace;
• (Robin and McKeown 1996), whose STREAK system generates basketball game summaries from a semantic network representing the key game statistics and their historical context;
• (Shaw 1998), whose CASPER discourse and sentence planner has been used both in the PLANDoc system, which generates telecommunication equipment installation plan documentation from an expert system trace, and in the MAGIC system, which generates reports from ICU measurements.
In this section, we briefly compare these research efforts with ours along four dimensions: (1) the definition of aggregation and the scope of the aggregation task implemented in the generator, (2) the type of representation the generator takes as input and the type of output text that it produces, (3) the generator's architecture and the localization of the aggregation task within it, and (4) the data structures and algorithms used to implement aggregation.
Definition of the aggregation task
The definition of aggregation that we gave at the beginning of the previous section is similar to those provided by Dalianis and Huang, although it focuses on common feature factorization to ensure that aggregation remains a proper subset of sentence planning. By viewing aggregation only as a process of combining clauses, Shaw's definition is more restrictive. In our view, aggregation is best handled prior to committing to specific syntactic categories, using the same abstract process, such as the algorithm of Fig. 10.
Input representation and generated text

A second characteristic that sets HYSSOP apart from other generators performing aggregation is the nature of its input: a set of data cells extracted from a dimensional data warehouse hypercube. In contrast, the other systems all take as input either a semantic network extracted from a knowledge base or a pre-linguistic representation of the text to generate, such as Meteer's text structure (Meteer 1992) or Jackendoff's semantic structure (Jackendoff 1985). Such inputs abstract away part of the natural language generation task and hide important issues that come up in real-life applications, for which raw data is often the only available input. In terms of output, HYSSOP differs from most other systems in that it generates hypertext instead of linear text. It thus tackles the content aggregation problem in a particularly demanding application requiring the generator to simultaneously start from raw data, produce hypertext output and enforce conciseness constraints.
Generation architecture and aggregation localization
While its overall architecture is a conventional pipeline, HYSSOP is unique in encapsulating all aggregation processing in the sentence planner and carrying it out entirely on a deep semantic representation. In contrast, most other systems distribute aggregation over several processing components and across several levels of internal representations: deep semantic, thematic and even surface syntactic for some of them.
Data structures and algorithms for aggregation
All previous approaches to aggregation relied on rules that included some domain-specific semantic or lexical information. In contrast, the aggregation algorithm used by HYSSOP is domain independent, since it relies only on (1) generic matrix row and column shuffling operations, and (2) a generic similarity measure between arbitrary data cells.
Conclusion
We presented a new approach to content aggregation in the context of a very challenging and practical generation application: summarizing OLAP and data mining discoveries as a few linked web pages of fluent and concise natural language. We believe that the key contribution of our work is to show the feasibility of performing effective paratactic aggregation, encapsulated within a single generation component (the sentence planner), using a domain-independent algorithm and a simple data structure, the factorization matrix, that captures the key structural and ordering constraints on paratactic aggregation, while completely abstracting from domain semantic idiosyncrasies as well as from lexical and syntactic details. This is a first success towards the development of a plug-in content aggregation component for text generation, reusable across application domains. In future work, we intend to empirically evaluate the summaries generated by HYSSOP.
Fig. 1 - The architecture of MATRIKS
Fig. 10 - HYSSOP's aggregation algorithm
Fig. 12 - Fragment of LIFE feature structure representing the discourse tree output of the sentence planner and input to the lexicalizer.
Such systems (e.g., PLANDoc (McKeown, Kukich and Shaw 1994), among others) generally perform content determination by relying on a fixed set of domain-dependent heuristic rules. Such an approach suffers from two severe limitations that prevent it from reporting the most interesting content from an underlying database:
o it does not scale up for analytical contexts with high dimensionality and which take into account the historical evolution of data through time; such complex contexts would require a combinatorially explosive number of summary content determination heuristic rules;
o it can only select facts whose class has been thought of ahead by the rule base author, while in most cases, it is its very unexpectedness that makes a fact interesting to report.
Fig. 2 - The architecture of HYSSOP (pipeline: data warehouse hypercube exploration, discourse planner, hypertext planner, sentence planner, grammatical realizer, web pages)
• Diet Soda sales' 19% increase in the Southern region from July to August, followed by its two opposite regional variations from August to September, +10% in the East but -17% in the West; • national Jolt Cola sales' +6% from August to September. To know what makes one of these variations unusual in the context of this year's sales, click on it. (The part inside the bold sub-frame is the input to the sentence planner.)
Last year, the most atypical sales variations from one month to the next occurred for:
• Birch Beer, with a 42% national increase from September to October;
• Diet Soda, with a 40% decrease in the Eastern region from July to August.
At the next level of idiosyncrasy came:
• Cola's Colorado sales, falling 40% from July to August and then a further 32% from September to October;
• again Diet Soda Eastern sales, falling 33% from September to October.
Less aberrant but still notably atypical were:
• again nationwide Birch Beer sales' -12% from June to July and -10% from November to December;
• Cola's 11% fall from July to August in the Central region and 30% dive in Wisconsin from August to September;
Note in Fig. 12 the cue word directive [anaph = [occur = 2nd, repeated = [product, region]]]. It indicates that this is the second mention in the text of a content unit with product = "Birch Beer" and region = nation. The lexicalizer uses this annotation to generate the cue word "again" before the second reference to "nationwide Birch Beer sales". Generating such cue words can be considered part of aggregation, since it makes the aggregation structures explicit to the reader and prevents ambiguities that may otherwise be introduced by aggressive content factorization.

Auxiliary functions of the aggregation algorithm of Fig. 10:
buildFactoringStrategy(Matrix): returns inside a list a pair (Dim, increasing) where Dim is the matrix's dimension (i.e., column) with the lowest number of distinct values.
leftShiftColumn(Matrix, Dim1): moves Dim1 to the second leftmost column, next to the cell id column.
sortRows(Matrix, Dim1, Order): sorts the Matrix's rows in order of their Dim1 cell value; Order specifies whether the order should be increasing or decreasing.
horizSlice(Matrix, Dim1): horizontally slices the Matrix into row groups with equal value along Dim1.
factor(Matrix, FactoringStrategy)
variables: Matrix = a factorization matrix
FactoringStrategy = a list of pairs (Dimension, Order) where Dimension ∈ dimensions(Matrix) and Order ∈ {increasing, decreasing}
RowGroups = list of sub-matrices of Matrix
begin
if FactoringStrategy = emptyList
then FactoringStrategy <- buildFactoringStrategy(Matrix);
(Dim1, Order1) <- first(FactoringStrategy);
RemainingFactoringStrategy <- rest(FactoringStrategy);
Matrix <- leftShiftColumn(Matrix, Dim1);
Matrix <- sortRows(Matrix, Dim1, Order1);
RowGroups <- horizSlice(Matrix, Dim1);
for each RowGroup in RowGroups do:
  RowGroup <- mergeCells(RowGroup, Dim1);
  (LeftSubMatrix, RightSubMatrix) <- cut(RowGroup, Dim1);
  FactoredRightSubMatrix <- factor(RightSubMatrix, RemainingFactoringStrategy);
  RowGroup <- paste(LeftSubMatrix, FactoredRightSubMatrix, Dim1);
  Matrix <- update(Matrix, RowGroup);
endfor;
return Matrix;
end.
mergeCells(RowGroup, Dim1): merges (by definition equal valued) cells of Dim1 in RowGroup.
cut(RowGroup, Dim1): cuts RowGroup into two sub-matrices, one to the left of Dim1 (including Dim1) and the other to the right of Dim1.
paste(LeftSubMatrix, FactoredRightSubMatrix, Dim1): pastes together left and right sub-matrices.
update(Matrix, RowGroup): identifies the rows RM of Matrix whose cell ids match those of RowGroup RG and substitutes those RM by RG inside Matrix.
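For readers who want to execute the factorization rather than read pseudocode, below is a minimal Python sketch of the same idea. It is our simplification, not HYSSOP's implementation: the factorization matrix is assumed to be a list of dicts keyed by dimension name, and nested (dimension, value, count, rest) tuples stand in for the merged cells, with the count slot playing the role of the count annotations used by the explicit-count discourse strategy.

from collections import OrderedDict

def build_factoring_strategy(rows, dims):
    # Pick the dimension (column) with the fewest distinct values, cf. Fig. 10.
    dim = min(dims, key=lambda d: len({r[d] for r in rows}))
    return [(dim, "increasing")]

def factor(rows, dims, strategy=None):
    """Recursively group rows that share a value along one dimension at a time.
    Returns a nested list of (dimension, value, count, substructure) tuples."""
    if not dims:
        return rows
    if not strategy:
        strategy = build_factoring_strategy(rows, dims)
    (dim, order), remaining_strategy = strategy[0], strategy[1:]
    rows = sorted(rows, key=lambda r: str(r[dim]),
                  reverse=(order == "decreasing"))        # sortRows
    groups = OrderedDict()
    for r in rows:                                        # horizSlice
        groups.setdefault(r[dim], []).append(r)
    remaining_dims = [d for d in dims if d != dim]
    return [(dim, value, len(group),                      # mergeCells + count
             factor(group, remaining_dims, list(remaining_strategy)))
            for value, group in groups.items()]

Calling factor on the data cells of the sales example with dims = ["product", "place", "time"] (hypothetical column names) yields the kind of common/distinct nesting shown in Fig. 12.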
Last year, there were 13 exceptions in the beverage product line.
The most striking was Birch Beer's 42% national fall from Sep to Oct.
The remaining exceptions, clustered around four products, were:
• Again, Birch Beer's sales accounting for two other national exceptions, both decreasing mild values:
  1. a 12% from Jun to Jul;
  2. a 10% from Nov to Dec;
• Cola's sales accounting for four exceptions:
  1. two medium in Colorado, a 40% from Jul to Aug and a 32% from Aug to Sep;
  2. two mild, an 11% in Wisconsin from Jul to Aug and a 30% in Central region from Aug to Sep;
• Diet Soda accounting for 5 exceptions:
  1. one strong, a 40% slump in Eastern region from Jul to Aug;
  2. one medium, a 33% slump in Eastern region from Sep to Oct;
  3. three mild: two increasing, a 10% in Eastern region from Aug to Sep and a 19% in Southern region from Jul to Aug; and one falling, a 17% in Western region from Aug to Sep;
• Finally, Jolt Cola's sales accounting for one mild exception, a 6% national fall from Aug to Sep.
Fig. 11 - HYSSOP's front page output using the discourse strategy with explicit counts
cat = aggr, level = 1, ngroup = 2, nmsg = 2
common = [exceptionality = high]  %% The most atypical sales variations from one month to the next occurred for:
distinct =
  [cat = msg, attr = [product = "Birch Beer", time = 9, place = nation, var = +42]]
    %% Birch Beer with a 42% national increase from Sep to Oct
  [cat = msg, attr = [product = "Diet Soda", time = 7, place = east, var = -40]]
    %% Diet Soda with a 40% decrease in the Eastern region from Jul to Aug
cat = aggr, level = 1, ngroup = 2, nmsg = 3
common = [exceptionality = medium]  %% At the next level of idiosyncrasy came:
distinct =
  [cat = aggr, level = 2, ngroup = 2, nmsg = 2,
   common = [product = Cola, place = Colorado]  %% Cola's sales
   distinct =
     [cat = msg, attr = [time = 7, var = -40]]  %% falling 40% from Jun to Jul
     [cat = msg, attr = [time = 9, var = -32]]]  %% and then a further 32% from Sep to Oct
  [cat = msg, attr = [product = "Diet Soda", time = 9, place = east, var = -33],
   anaph = [occur = 2nd, repeated = [product, place]]]
    %% again Diet Soda Eastern sales, falling 33% from Sep to Oct
cat = aggr, ...  %% Less aberrant but still notably atypical were: ...
Aït-Kaci H. and Lincoln P. (1989). LIFE - A natural language for natural language. T.A. Informations, 30(1-2):37-67, Association pour le Traitement Automatique des Langues, Paris, France.
Carcagno D. and Iordanskaja L. (1993). Content determination and text structuring: two interrelated processes. In H. Horacek (ed.), New Concepts in NLG: Planning, Realisation and Systems, pp. 10-26. Pinter Publishers, London.
Dalianis H. (1995). Aggregation, formal specification and natural language generation. In Proc. of the NLDB'95 First International Workshop on the Application of NL to Databases, pp. 135-149, Versailles, France.
Dalianis H. (1996). Aggregation as a subtask of text and sentence planning. In Proc. of the Florida AI Research Symposium, FLAIRS-96, Florida, pp. 1-5.
DBMiner (2000). http://db.cs.sfu.ca/DBMiner/index.html
Elhadad M., McKeown K. and Robin J. (1997). Floating constraints in lexical choice. Computational Linguistics, 23(2).
Favero E. L. (2000). Generating hypertext summaries of data mining discoveries in multidimensional databases. PhD Thesis, Centro de Informática, UFPE, Recife, Brazil.
Favero E. L. and Robin J. (2000a). Using OLAP and data mining for content planning in natural language generation. In Proc. of the 5th International Conference on Applications of Natural Language to Information Systems (NLDB'2000), 28-30 June, Versailles, France.
Favero E. L. and Robin J. (2000b). Implementing Functional Unification Grammars for text generation as Featured Definite Clause Grammars. Submitted to Natural Language Engineering.
Huang G. and Fiedler A. (1997).
Jackendoff R. (1985). Semantics and Cognition. MIT Press, Cambridge, MA.
Kukich K. (1988). Fluency in natural language reports. In McDonald D. and Bolc L. (eds.), Natural Language Generation Systems. Springer-Verlag.
McKeown K., Kukich K. and Shaw J. (1994). Practical issues in automatic document generation. In Proc. of ANLP'94, pp. 7-14, Stuttgart, October.
Meteer M. (1992). Expressibility and the problem of efficient text planning. Communication in Artificial Intelligence. Pinter Publishers, London.
Reiter E. (1994). Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible? In Proc. of the Seventh International Workshop on Natural Language Generation (INLGW-1994), pp. 163-170, Kennebunkport, Maine, USA.
Robin J. (1995). Revision-based generation of natural language summaries providing historical background: corpus-based analysis, design, implementation and evaluation. Ph.D. Thesis, CUCS-034-94, Columbia University, Computer Science Department, New York, USA. 357p.
Robin J. and McKeown K. (1996). Empirically designing and evaluating a new revision-based model for summary generation. Artificial Intelligence, 85(1-2).
Sarawagi S., Agrawal R. and Megiddo N. (1998). Discovery-driven exploration of OLAP data cubes. In Proc. Int. Conf. on Extending Database Technology (EDBT'98), March.
Shaw J. (1998). Segregatory coordination and ellipsis in text generation. In Proc. of the 17th COLING'98.
202,541,036 | Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation | Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -a formal model verification method. We modify the conventional loglikelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries. | [
11217889,
19204066,
1671874,
4956100,
3626819,
13694466,
6067240,
25422730,
6628106,
17730607,
3488815,
21698802,
990233,
1922134,
52962648,
1957433,
7228830
] | Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
November 3-7, 2019
Po-Sen Huang posenhuang@google.com
University College London
Robert Stanforth stanforth@google.com
University College London
Johannes Welbl j.welbl@cs.ucl.ac.uk
University College London
Chris Dyer cdyer@google.com
University College London
Dani Yogatama dyogatama@google.com
University College London
Sven Gowal sgowal@google.com
University College London
Krishnamurthy Dvijotham
University College London
Pushmeet Kohli pushmeet@google.com
University College London
† Deepmind
University College London
Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaNovember 3-7, 20194083
Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -a formal model verification method. We modify the conventional loglikelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries.
Introduction
‡ Work done during an internship at DeepMind. § Equal contribution.

Deep models have been shown to be vulnerable against adversarial input perturbations (Szegedy et al., 2013; Kurakin et al., 2016). Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work (Jia and Liang, 2017; Belinkov and Bisk, 2018; Ettinger et al., 2017) also exposed the vulnerabilities of neural NLP models, e.g. with small
character perturbations (Ebrahimi et al., 2018) or paraphrases (Ribeiro et al., 2018; Iyyer et al., 2018). These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models. Common attempts to mitigate the issue are adversarial training (Ebrahimi et al., 2018) and data augmentation (Belinkov and Bisk, 2018; Li et al., 2017), which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks (Carlini and Wagner, 2017; Athalye et al., 2018). Rather than continuing the race with adversaries, formal verification (Baier and Katoen, 2008; Barrett and Tinelli, 2018; Katz et al., 2017) offers a different approach: it aims at providing provable guarantees for a given model specification. In the case of adversarial robustness, such a specification can be formulated as prediction consistency under any altered - but semantically invariant - input change.
In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) (Socher et al., 2013) and AG News (Zhang et al., 2015). The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) (Gowal et al., 2018; Mirman et al., 2018; Dvijotham et al., 2018) to compute worst case bounds on specification satisfaction, as illustrated in Figure 1. Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy.
The contributions of this paper are twofold:
• To the best of our knowledge, this paper is the first to introduce verification and verifiable training for neural networks in natural language processing (§3).
• Through a series of experiments (§4), we demonstrate (a) the effectiveness of modeling input perturbations as a simplex and using simplex bounds with IBP for training and testing, (b) the weakness of adversarial training under exhaustive verification, (c) the effects of perturbation space on the performance of different methods, and (d) the impact of using GloVe and counter-fitted embeddings on the IBP verification bounds.
Related Work
Adversarial Examples in NLP. Creating adversarial examples for NLP systems requires identifying semantically invariant text transformations to define an input perturbation space. In this paper, given our specification, we study word- and character-level HotFlip attacks (Ebrahimi et al., 2018) - which consist of character and synonym replacements - on text classification tasks. We compare our verifiable approach to other defenses including adversarial training (Goodfellow et al., 2014) and data augmentation (Li et al., 2017; Belinkov and Bisk, 2018). Note that some existing adversarial perturbations, such as syntactically controlled paraphrasing (Iyyer et al., 2018), exploiting backtranslation systems (Ribeiro et al., 2018), or using targeted keyword attacks (Cheng et al., 2018), are beyond the specification in this paper.
Formal Verification of Neural Networks. Formal verification provides a provable guarantee that models are consistent with a specification for all possible model inputs. Previous work can be categorised into complete methods that use Mixed-Integer Programming (MIP) (Bunel et al., 2017; Cheng et al., 2017) or Satisfiability Modulo Theory (SMT) (Katz et al., 2017), and incomplete methods that solve a convex relaxation of the verification problem (Weng et al., 2018; Wong and Kolter, 2018; Dvijotham et al., 2018). Complete methods perform exhaustive enumeration to find the worst case. Hence, complete methods are expensive and difficult to scale, though they provide exact robustness bounds. Incomplete methods provide loose robustness bounds, but can be more scalable and used inside the training loop for training models to be robust and verifiable (Raghunathan et al., 2018; Wong and Kolter, 2018; Gowal et al., 2018). Our work is the first to extend incomplete verification to text classification, considering input perturbations on a simplex and minimising worst case bounds to adversarial attacks in text classification. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models remains an open challenge.
Representations of Combinatorial Spaces.
Word lattices and hypergraphs are data structures that have often been used to efficiently represent and process exponentially large numbers of sentences without exhaustively enumerating them. Applications include automatic speech recognition (ASR) output rescoring (Liu et al., 2016), machine translation of ASR outputs (Bertoldi et al., 2007), paraphrase variants (Onishi et al., 2010), and word segmentation alternatives (Dyer et al., 2008).
The specifications used to characterise the space of adversarial attacks are likewise a compact representation, and the algorithms discussed below operate on them without exhaustive enumeration.
Methodology
We assume a fixed initial vector representation $z_0$ of a given input sentence $z$¹ (e.g. the concatenation of pretrained word embeddings) and use a neural network model, i.e. a series of differentiable transformations $h_k$:

$z_k = h_k(z_{k-1}), \quad k = 1, \ldots, K$ (1)

where $z_k$ is the vector of activations in the $k$-th layer and the final output $z_K$ consists of the logits for each class. Typically each $h_k$ will be an affine transformation followed by an activation function (e.g. ReLU or sigmoid). The affine transformation can be a convolution (with the inputs and outputs having an implied 2D structure) of a vector of activations at each point in a sequence; in what follows these activations will be concatenated along the sequence to form a vector $z_k$.
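As a point of reference for the bound computations in the next subsections, here is a minimal numpy sketch of such a model; it is our illustration of the setup described above, not the authors' implementation, and the layer shapes are assumptions.

import numpy as np

class FeedForward:
    """z_k = h_k(z_{k-1}): an affine map followed by a monotonic activation;
    the final layer outputs the logits z_K without an activation."""

    def __init__(self, weights, biases):
        self.weights = weights  # list of (out_dim, in_dim) arrays
        self.biases = biases    # list of (out_dim,) arrays

    def forward(self, z0):
        z = z0
        for k, (W, b) in enumerate(zip(self.weights, self.biases)):
            z = W @ z + b
            if k < len(self.weights) - 1:
                z = np.maximum(z, 0.0)  # ReLU on hidden layers only
        return z  # logits z_K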
Verification
Verification is the process of examining whether the output of a model satisfies a given specification. Formally, this means establishing whether the following holds true for a given normal model input $x_0$: $\forall z_0 \in \mathcal{X}_{in}(x_0) : z_K \in \mathcal{X}_{out}$, where $\mathcal{X}_{out}$ characterizes a constraint on the outputs, and $\mathcal{X}_{in}(x_0)$ defines a neighbourhood of $x_0$ throughout which the constraint should be satisfied.

In our concrete use case, we consider a specification of robustness against adversarial attacks which are defined by bounded input perturbations (synonym flips of up to $\delta$ words, or character flips of up to $\delta$ characters) of the original sentence $x$. The attack space $\mathcal{X}_{in}(x_0)$ is the set of vector representations (embeddings) of all such perturbed sentences. Denoting by $z_{K,y}$ the logit of label $y$, we formulate the output constraint that for all classes $y$: $z_{K,y_{true}} \ge z_{K,y}$. This specification establishes that the prediction of all perturbed sentences $z_0 \in \mathcal{X}_{in}(x_0)$ should correspond to the correct label $y_{true}$. This specification may equivalently be formulated as a set of half-space constraints on the logits: for each class $y$

$(e_y - e_{y_{true}})^\top z_K \le 0 \quad \forall z_0 \in \mathcal{X}_{in}(x_0)$ (2)

where $e_i$ is a one-hot vector with 1 in the $i$-th position. In other words, the true class logit should be greater than or equal to those for all other classes $y$, which means the prediction remains constant.
Verification as Optimisation
Verifying the specification in Eq. (2) can be done by solving the following constrained optimisation problem to find the input that would most strongly violate it:

$\max_{z_0 \in \mathcal{X}_{in}(x_0)} c^\top z_K \quad \text{subject to} \quad z_k = h_k(z_{k-1}), \; k = 1, \ldots, K$ (3)

where $c$ is a vector with entries $c_y = 1$, $c_{y_{true}} = -1$ and 0 everywhere else. If the optimal value of the above optimisation problem is smaller than 0, then the specification in Eq. (2) is satisfied, otherwise a counter-example has been found. In our case, this corresponds to a successful adversarial attack.
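Solving (3) exactly is hard in general, but given elementwise bounds on the logits (such as the IBP bounds of Section 3.4), checking whether any counter-example can exist is immediate. A sketch, under the assumption that lower/upper bounds are already available:

import numpy as np

def is_verified(lower, upper, y_true):
    """Specification (2) holds if, for every y != y_true, the largest
    achievable value of z_{K,y} - z_{K,y_true} is negative."""
    worst_diff = upper - lower[y_true]  # upper-bounds c^T z_K for each class y
    worst_diff[y_true] = -np.inf        # ignore the true class itself
    return float(worst_diff.max()) < 0.0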
Modeling Input Perturbations using Simplices
In the interests of computational feasibility, we will actually attempt to verify the specification on a larger, but more tractable input perturbation space $\bar{\mathcal{X}}_{in} \supseteq \mathcal{X}_{in}$. Any data point that is verifiable on this larger input perturbation space is necessarily verifiable with respect to the original specification.

In the domain of image classification, $\mathcal{X}_{in}$ is often modeled as an $L_\infty$-ball, corresponding to input perturbations in which each pixel may be independently varied within a small interval. However, using such interval bounds is unsuitable for our situation of perturbations consisting of a small number of symbol substitutions. Although we could construct an axis-aligned bounding box $\bar{\mathcal{X}}_{in}$ in embedding space that encompasses all of $\mathcal{X}_{in}$, it would over-approximate the perturbation space to such an extent that it would contain perturbations where all symbols in the sentence have been substituted simultaneously.
To remedy this, we propose a tighter over-approximation in the form of a 'simplex' in embedding space. We first define this for the special case $\delta = 1$, in which $\mathcal{X}_{in} = \{x_0\} \cup \{p^{(m)}_0 : 1 \le m \le M\}$ consists of the representations of all $M$ sentences $p^{(m)}$ derived from $x$ by performing a single synonym (or character) substitution, together with the unperturbed sentence $x$ itself. In this case we define $\bar{\mathcal{X}}_{in}$ to be the convex hull $\mathcal{S}_1$ of $\mathcal{X}_{in}$. Note we are not considering contextual embeddings (Peters et al., 2018) here. Each 'vertex' $p^{(m)}_0$ is a sequence of embedding vectors that differs from $x_0$ at only one word (or character) position.

For a larger perturbation radius $\delta > 1$, the cardinality of $\mathcal{X}_{in}$ grows exponentially, so manipulating its convex hull becomes infeasible. However, dilating $\mathcal{S}_1$ centered at $x_0$, scaling it up by a factor of $\delta$, yields a simplex $\mathcal{S}_\delta$ with $M + 1$ vertices that contains $\mathcal{X}_{in}$.
More formally, we define a region in the input embedding space based on the $M$ 'elementary' perturbations $\{p^{(m)}_0 : m = 1, \ldots, M\}$ of $x_0$ defined earlier for the $\delta = 1$ case. For perturbations of up to $\delta$ substitutions, we define $\bar{\mathcal{X}}_{in}(x_0)$ as the convex hull of $\{z^{(m)}_0 : m = 0, \ldots, M\}$, where $z^{(0)}_0 = x_0$ and $z^{(m)}_0 = x_0 + \delta \, (p^{(m)}_0 - x_0)$. The convex hull is an over-approximation of $\mathcal{X}_{in}(x_0)$: it contains the representations of all sentences derived from $x$ by performing up to $\delta$ substitutions at distinct word (or character) positions.
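The vertex construction amounts to a few lines of array arithmetic. In the sketch below, x0_emb and perturbed_embs are assumed to be precomputed (concatenated) embedding vectors of the original sentence and of its M single-substitution variants:

import numpy as np

def simplex_vertices(x0_emb, perturbed_embs, delta):
    """z_0^(0) = x_0 and z_0^(m) = x_0 + delta * (p_0^(m) - x_0); the convex
    hull of these M+1 points over-approximates all sentences obtained by up
    to delta substitutions at distinct positions."""
    vertices = [x0_emb]
    for p in perturbed_embs:
        vertices.append(x0_emb + delta * (p - x0_emb))
    return np.stack(vertices)  # shape (M + 1, embedding_dim)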
Interval Bound Propagation
To estimate the optimal value of the problem (3), given an input $z_0$, we can propagate the upper/lower bounds on the activations $z_k$ of each layer using interval arithmetic (Gowal et al., 2018).

We begin by computing interval bounds on the first layer's activations. Recall that any input $z_0 \in \bar{\mathcal{X}}_{in}$ will lie within the convex hull of certain vertices $\{z^{(m)}_0 : m = 0, \ldots, M\}$. Then, assuming that the first layer $h_1$ is an affine transformation (e.g. linear or convolutional) followed by a monotonic activation function, the lower and upper bounds on the components $z_{1,i}$ of the first layer's activations $z_1$ are as follows:

$\underline{z}_{1,i}(\delta) = \min_{m=0,\ldots,M} e_i^\top h_1(z^{(m)}_0), \qquad \overline{z}_{1,i}(\delta) = \max_{m=0,\ldots,M} e_i^\top h_1(z^{(m)}_0)$ (4)

Note that these bounds are efficient to compute (by passing each perturbation $z^{(m)}_0$ through the first layer); in particular there is no need to compute the convex hull polytope.

For subsequent layers $k > 1$, the bounds on the components $z_{k,i}$ of $z_k$ are:

$\underline{z}_{k,i}(\delta) = \min_{\underline{z}_{k-1}(\delta) \le z_{k-1} \le \overline{z}_{k-1}(\delta)} e_i^\top h_k(z_{k-1}), \qquad \overline{z}_{k,i}(\delta) = \max_{\underline{z}_{k-1}(\delta) \le z_{k-1} \le \overline{z}_{k-1}(\delta)} e_i^\top h_k(z_{k-1})$ (5)

The above optimisation problems can be solved in closed form quickly for affine layers and monotonic activation functions, as illustrated in Gowal et al. (2018). Finally, the lower and upper bounds of the output logits $z_K$ can be used to construct an upper bound on the solution of (3):

$\max_{\underline{z}_K(\delta) \le z_K \le \overline{z}_K(\delta)} c^\top z_K$ (6)
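For the FeedForward sketch above, these closed forms are a few lines each: the first layer takes elementwise minima/maxima over the images of all simplex vertices (Eq. 4), and each later affine layer maps an interval [l, u] to an interval with center W(l + u)/2 + b and radius |W|(u - l)/2. Again this is our illustration, not the released implementation:

import numpy as np

def ibp_logit_bounds(model, vertices):
    """Propagate the simplex vertices (shape (M+1, d)) to elementwise
    lower/upper bounds on the logits z_K, following Eqs. (4)-(5)."""
    # First layer (Eq. 4): bound over the images of every vertex.
    pre = vertices @ model.weights[0].T + model.biases[0]
    lo, hi = pre.min(axis=0), pre.max(axis=0)
    # Subsequent layers (Eq. 5): monotonic activation, then interval affine.
    for W, b in zip(model.weights[1:], model.biases[1:]):
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotonic
        center = W @ (lo + hi) / 2.0 + b
        radius = np.abs(W) @ (hi - lo) / 2.0
        lo, hi = center - radius, center + radius
    return lo, hi  # feed these to is_verified(...) from Section 3.2

The cost is essentially two forward passes per input, which is what makes the bound usable inside the training loop below.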
Verifiable Training. The upper bound in (6) is fast to compute (it only requires two forward passes, for the upper and lower bounds, through the network). Hence, we can define a loss to optimise models such that the models are trained to be verifiable. Solving (6) is equivalent to finding the worst-case logit difference, and this is achieved when the logit of the true class is equal to its lower bound, and all other logits are equal to their upper bounds. Concretely, for each class $y \ne y_{true}$: $\hat{z}_{K,y}(\delta) = \overline{z}_{K,y}(\delta)$, and $\hat{z}_{K,y_{true}}(\delta) = \underline{z}_{K,y_{true}}(\delta)$. The training loss can then be formulated as

$L = \kappa \, \ell(z_K, y_{true}) + (1 - \kappa) \, \ell(\hat{z}_K(\delta), y_{true})$ (7)

where the first term is the classification loss $L_{normal}$ and the second the specification loss $L_{spec}$, $\ell$ is the cross-entropy loss, and $\kappa$ a hyperparameter that controls the relative weight between the two. If $\delta = 0$ then $z_K = \hat{z}_K(\delta)$, and thus $L$ reduces to a standard classification loss. Empirically, we found that a curriculum-based training, starting with $\kappa = 1$ and linearly decreasing to 0.25, is effective for verifiable training.
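Putting Eqs. (6) and (7) together, the specification loss is just a cross-entropy evaluated on the worst-case logit vector. A sketch (our illustration; the linear kappa schedule at the end is an assumption about how the curriculum could be implemented):

import numpy as np

def cross_entropy(logits, y):
    logits = logits - logits.max()  # shift for numerical stability
    return np.log(np.exp(logits).sum()) - logits[y]

def verifiable_loss(logits, lo, hi, y_true, kappa):
    """Eq. (7): kappa * L_normal + (1 - kappa) * L_spec, where L_spec scores
    the worst-case logits: true class at its lower bound, others at upper."""
    z_hat = hi.copy()
    z_hat[y_true] = lo[y_true]
    return kappa * cross_entropy(logits, y_true) \
        + (1.0 - kappa) * cross_entropy(z_hat, y_true)

# Example curriculum (assumed form): anneal kappa linearly from 1 to 0.25.
def kappa_schedule(step, total_steps):
    return max(0.25, 1.0 - 0.75 * step / total_steps)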
Experiments
We conduct verification experiments on two text classification datasets, Stanford Sentiment Treebank (SST) (Socher et al., 2013) and AG News corpus, processed in (Zhang et al., 2015). We focus on word-level and character-level experiments on SST and character-level experiments on AG News.
Our specification is that models should preserve their prediction against up to δ synonym substitutions or character typos, respectively.
A Motivating Example
We provide an example from Table 2 to highlight different evaluation metrics and training methods. Given a sentence, "you ' ve seen them a million times .", that is predicted correctly (called Nominal Accuracy²) by a classification model, we want to further examine whether the model is robust against character typos (e.g., up to δ = 3 typos) to this example. One way is to use some heuristic to search for a valid example with up to 3 typos that can change the prediction the most (called an adversarial example). We evaluate the model using this adversarial example and report the performance (called Adversarial Accuracy). However, even if the adversarial example is predicted correctly, one can still ask: is the model truly robust against any typos (up to 3) to this example? In order to have a certificate that the prediction will not change under any δ = 3 character typos (called verifiably robust), we could in theory exhaustively search over all possible cases and check whether any of the predictions is changed (called Oracle Accuracy). If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. To avoid this combinatorial growth, we can instead model all possible perturbations using the proposed simplex bounds and propagate the bounds through IBP at the cost of two forward passes. Following Eq. (3), we can check whether this example can be verified to be robust against all perturbations (called IBP-Verified Accuracy).
There are also a number of ways in which the training procedure can be enhanced to improve the verifiable robustness of a model against typos to the sentence. The baseline is to train the model with the original/normal sentence directly (called Normal Training). Another way is to randomly sample typo sentences among the 2,951 possible perturbations and add these sentences to the training data (called Data Augmentation Training). Yet another way is to find, at each training iteration, the adversarial example among the (subset of) 2,951 possible perturbations that can change the prediction the most; we then use the adversarial example alongside the training example (called Adversarial Training). Finally, as simplex bounds with IBP is efficient to run, we can train a model to be verifiable by minimising Eq. (7) (called Verifiable Training).
Baselines
In this section we detail our baseline models.
Adversarial Training. In adversarial training (Madry et al., 2018; Goodfellow et al., 2014), the goal is to optimise the following saddle point problem:

$\min_\theta \, \mathbb{E}_{(x_0, y)} \big[ \max_{z_0 \in \mathcal{X}_{in}(x_0)} \ell_\theta(z_0, y) \big]$ (8)

where the inner maximisation problem is to find an adversarial perturbation $z_0 \in \mathcal{X}_{in}(x_0)$ that can maximise the loss. In the inner maximisation problem, we use HotFlip (Ebrahimi et al., 2018) with perturbation budget δ to find the adversarial example. The outer minimisation problem aims to update model parameters such that the adversarial risk of (8) is minimised. To balance between adversarial robustness and nominal accuracy, we use an interpolation weight of 0.5 between the original cross-entropy loss and the adversarial risk.
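For contrast with the verifiable loss of Section 3.4, the interpolated adversarial objective can be sketched as follows, where hotflip_attack is a hypothetical placeholder for the HotFlip search of Ebrahimi et al. (2018) and cross_entropy is the helper defined earlier:

def adversarial_loss(model, x0_emb, y_true, delta, hotflip_attack):
    """0.5 * cross-entropy on the clean input + 0.5 * cross-entropy on the
    adversarial example found within perturbation budget delta."""
    z_adv = hotflip_attack(model, x0_emb, y_true, budget=delta)
    return 0.5 * cross_entropy(model.forward(x0_emb), y_true) \
        + 0.5 * cross_entropy(model.forward(z_adv), y_true)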
Data Augmentation Training. In the data augmentation setup, we randomly sample a valid perturbation z with perturbation budget δ from a normal input x, and minimise the cross-entropy loss given the perturbed sample z (denoted as the data augmentation loss). We also set the interpolation weight between the data augmentation loss and the original normal cross-entropy loss to 0.5.
Normal Training. In normal training, we use standard likelihood-based training on the normal training input x.
Setup
We use a shallow convolutional network with a small number of fully-connected layers for the SST and AG News experiments. The detailed model architectures and hyperparameter details are introduced in the supplementary material. Although we use shallow models for ease of verifiable training, our nominal accuracy is on par with previous work such as Socher et al. (2013).

For word-level experiments, we construct the synonym pairs using the PPDB database (Ganitkevitch et al., 2013) and filter the synonyms with fine-grained part-of-speech tags using Spacy (Honnibal and Montani, 2017). For character-level experiments, we use synthetic keyboard typos from Belinkov and Bisk (2018), and allow one possible alteration per character that is adjacent to it on an American keyboard. The allowable input perturbation space is much larger than for word-level synonym substitutions, as shown in Table 3.
Evaluation Metrics
We use the following four metrics to evaluate our models: i) test set accuracy (called Acc.), ii) adversarial test accuracy (called Adv. Acc.), which uses samples generated by HotFlip attacks on the original test examples, iii) verifiable accuracy under IBP verification (called IBP-verified), that is, the ratio of test samples for which IBP can verify that the specification is not violated, and iv) exhaustively verified accuracy (called Oracle), computed by enumerating all possible perturbations given the perturbation budget δ, where a sample is verifiably robust if the prediction is unchanged under all valid perturbations.

Results

Table 1 shows the results of IBP training and baseline models under δ = 3 and δ = 2 perturbations³ on SST and AG News, respectively. Figures 2 and 3 show the character- and word-level results with δ between 1 and 6 under four metrics on the SST test set; similar figures for SST word-level (adversarial training, data augmentation) models and the AG News dataset can be found in the supplementary material.

Oracle Accuracy and Adversarial Accuracy.
In Table 1, comparing adversarial accuracy with exhaustive verification accuracy (oracle), we observe that although adversarial training is effective at defending against HotFlip attacks (74.9 / 76.8 / 85.5%), the oracle adversarial accuracy under exhaustive testing (25.8 / 74.6 / 81.6%) is much lower at the SST-character / SST-word / AG-character levels, respectively. For illustration, we show some concrete adversarial examples from the HotFlip attack in Table 2. For some samples, even though the model is robust with respect to HotFlip attacks, its predictions are incorrect for stronger adversarial examples obtained using the exhaustive verification oracle. This underscores the need for verification, as robustness with respect to suboptimal adversarial attacks alone might give a false sense of security.

³ Note that the exhaustive oracle is not computationally feasible beyond δ = 2 on AG News.
Effectiveness of Simplex Bounds with IBP.
Rather than sampling individual points from the perturbation space, IBP training covers the full space at once. The resulting models achieve the highest exhaustively verified accuracy at the cost of only moderate deterioration in nominal accuracy (Table 1). At test time, IBP allows for constant-time verification with arbitrary δ, whereas exhaustive verification requires evaluation over an exponentially growing search space.
Figure 3: SST word-level models with normal and verifiable training objectives (trained at δ = 3) using GloVe and counter-fitted (CF) embeddings against different perturbation budgets in nominal accuracy, adversarial accuracy, exhaustively verified accuracy (Oracle), and IBP verified accuracy. Note that exhaustive verification is not scalable to perturbation budget 6 and beyond.

Perturbation Space Size. In Table 1, when the perturbation space is larger (SST character-level vs. SST word-level), (a) across models, there is a larger gap in adversarial accuracy and true robustness
(oracle); (b) the difference in oracle robustness between IBP and adversarial training is even larger (73.1% vs. 25.8% and 76.5% vs. 74.6%).
Perturbation Budget. In Figures 2 and 3, we compare normal training, adversarial training, data augmentation, and verifiable training models with four metrics under various perturbation budgets on the SST dataset. Overall, as the perturbation budget increases, the adversarial accuracy, oracle accuracy, and IBP-verified accuracy decrease. We can observe that even for large perturbation budgets, verifiably trained models are still able to verify a sizable number of samples. Again, although adversarial accuracy flattens for larger perturbation budgets in the word level experiments, oracle verification can further find counterexamples to change the prediction. Note that exhaustive verification becomes intractable with large perturbation sizes.
Computational Cost of Exhaustive Verification. The perturbation space in NLP problems is discrete and finite, and a valid option to verify the specification is to exhaustively generate predictions for all $z_0 \in \mathcal{X}_{in}(x_0)$, and then check if at least one does not match the correct label. Conversely, such an exhaustive (oracle) approach can also identify the strongest possible attack. But the size of $\mathcal{X}_{in}$ grows exponentially with δ, and exhaustive verification quickly becomes prohibitively expensive.
In Table 3, we show the maximum perturbation space size in the SST and AG News test set for different perturbation radii δ. This number grows exponentially as δ increases. To further illustrate this, Figure 4 shows the number of forward passes required to verify a given proportion of the SST test set for an IBP-trained model using exhaustive verification and IBP verification. IBP reaches verification levels comparable to an exhaustive verification oracle, but requires only two forward passes to verify any sample - one pass for computing the upper, and one for the lower bounds. Exhaustive verification, on the other hand, requires several orders of magnitude more forward passes, and there is a tail of samples with extremely large attack spaces.
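The counts in Table 3 can be reproduced by summing, over every way of choosing up to δ perturbable positions, the product of the per-position substitution counts. The sketch below does exactly that; note that it is itself combinatorial in δ, which is the point:

from itertools import combinations
from math import prod

def perturbation_space_size(subs_per_position, delta):
    """subs_per_position[i] = number of allowed substitutions at position i
    (keyboard neighbours of a character, or synonyms of a word)."""
    positions = [i for i, s in enumerate(subs_per_position) if s > 0]
    total = 1  # the unperturbed sentence itself
    for k in range(1, min(delta, len(positions)) + 1):
        for chosen in combinations(positions, k):
            total += prod(subs_per_position[i] for i in chosen)
    return total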
Counter-Fitted Embeddings
As shown in Figures 2 and 3a, although IBP can verify arbitrary networks in theory, the verification bound is very loose except for models trained to be IBP-verifiable. One possible reason is the potentially large volume of the perturbation simplex. Since representations of substitution words/characters are not necessarily close to those of synonyms/typos in embedding space, the vertices of the simplex could be far apart, and thus cover a large area in representation space. Therefore, when propagating the interval bounds through the network, the interval bounds become too loose and fail to verify most of the examples if the models are not specifically trained. To test this hypothesis, we follow Mrkšić et al. (2016) and use fine-tuned GloVe embeddings trained to respect linguistic constraints; these representations (called counter-fitted embeddings) force synonyms to be closer and antonyms to be farther apart using word pairs from the PPDB database (Ganitkevitch et al., 2013) and WordNet (Miller, 1995). We repeat the word-level experiments with these counter-fitted embeddings; Figures 3c and 3d show the experimental results. We observe that IBP verified accuracy is now substantially higher across models, especially for δ = 1, 2, 3. The examples which IBP can verify increase by up to 33.2% when using the counter-fitted embeddings (normal training, δ = 1). Moreover, adversarial and exhaustively verified accuracy are also improved, at the cost of a mild deterioration in nominal test accuracy. The IBP-trained model also further improves both its oracle accuracy and IBP verified accuracy. These results validate our hypothesis that reducing the simplex volume via soft linguistic constraints can provide even tighter bounds for IBP, resulting in larger proportions of verifiable samples.
Discussion
Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification. On the other hand, exhaustive verification is computationally very expensive. Our results show that using the proposed simplex bounds with IBP can verify a sizable amount of test samples, and can be considered a potent verification method in an NLP context. We note however two limitations within the scope of this work: i) limited model depth: we only investigated models with few layers. IBP bounds are likely to become looser as the number of layers increases. ii) limited model types: we only studied models with CNN and fully connected layers.
We focused on the HotFlip attack to showcase specification verification in the NLP context, with the goal of understanding factors that impact its effectiveness (e.g. the perturbation space volume, see Section 4.6). It is worth noting that symbol substitution is general enough to encompass other threat models such as lexical entailment perturbations (Glockner et al., 2018), and could potentially be extended to the addition of pre/postfixes (Jia and Liang, 2017; Wallace et al., 2019).
Interesting directions of future work include: tightening IBP bounds to allow applicability to deeper models, investigating bound propagation in other types of neural architectures (e.g. those based on recurrent networks or self-attention), and exploring other forms of specifications in NLP.
Conclusion
We introduced formal verification of text classification models against synonym and character flip perturbations. Through experiments, we demonstrated the effectiveness of the proposed simplex bounds with IBP both during training and testing, and found weaknesses of adversarial training compared with exhaustive verification. Verifiably trained models achieve the highest exhaustive verification accuracy on SST and AG News. IBP verifies models in constant time, which is exponentially more efficient than naive verification via exhaustive search.
Figure 1: Illustration of verification with the input simplex and Interval Bound Propagation. From the left, input perturbations define the extreme points of a simplex (in red, projected to 2D here) around the statement "great event" that is propagated through a model. At each layer, this shape deforms itself, but can be bounded by axis-parallel bounding boxes, which are propagated similarly. Finally, in logit space, we can compute an upper bound on the worst-case specification violation (e.g., prediction changes).
Figure 2: SST character-level models with different training objectives (trained at δ = 3) against different perturbation budgets in nominal accuracy, adversarial accuracy, exhaustively verified accuracy (Oracle), and IBP verified accuracy. Note that exhaustive verification is not scalable to perturbation budget 4 and beyond.
Figure 3 panels: (a) Normal Training (GloVe); (b) Verifiable Training (IBP) (GloVe); (c) Normal Training (CF); (d) Verifiable Training (IBP) (CF).
Table 1: Experimental results for changes of up to δ = 3 and δ = 2 symbols on the SST and AG News datasets, respectively.
Table 2: Pairs of original inputs and adversarial examples for SST sentiment classification found via an exhaustive verification oracle, but not found by the HotFlip attack (i.e., the HotFlip attack does not change the model prediction). The bold words/characters represent the flips found by the adversary that change the predictions.
Table 3: Maximum perturbation space size in the SST and AG News test set using word / character substitutions; this is the maximum number of forward passes per sentence to evaluate in exhaustive verification.
Figure 4: Verified accuracy vs. computation budget for exhaustive verification oracles on the SST character-level test set, for an IBP-trained model trained with δ = 3. Solid lines represent the number of forward passes required to verify a given proportion of the test set using exhaustive search. Dashed lines indicate verification levels achievable using IBP verification, which comes at small and constant cost, and is thus orders of magnitude more efficient.
¹ For brevity, we will refer both to the original symbol sequence and its corresponding vector representation with the same variable name, distinguishing them by styling.
² We use the term "nominal accuracy" for accuracy on unperturbed test inputs, as the accuracy under various adversarial perturbations is much lower.
Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML, pages 274-283.
Christel Baier and Joost-Pieter Katoen. 2008. Principles of Model Checking. MIT Press.
Clark Barrett and Cesare Tinelli. 2018. Satisfiability modulo theories. In Handbook of Model Checking, pages 305-343. Springer.
Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations.
Nicola Bertoldi, Richard Zens, and Marcello Federico. 2007. Speech translation by confusion network decoding. In Proc. ICASSP.
Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, and M. Pawan Kumar. 2017. Piecewise linear neural network verification: a comparative study. arXiv preprint arXiv:1711.00455.
Nicholas Carlini, Guy Katz, Clark Barrett, and David L. Dill. 2017. Ground-truth adversarial examples. arXiv preprint arXiv:1709.10207.
Nicholas Carlini and David Wagner. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3-14. ACM.
Chih-Hong Cheng, Georg Nührenberg, and Harald Ruess. 2017. Maximum resilience of artificial neural networks. In International Symposium on Automated Technology for Verification and Analysis, pages 251-268. Springer.
Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2Sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. CoRR, abs/1803.01128.
Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. 2018. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265.
Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice translation. In Proceedings of ACL-08: HLT, pages 1012-1020, Columbus, Ohio. Association for Computational Linguistics.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36. Association for Computational Linguistics.
Allyson Ettinger, Sudha Rao, Hal Daumé III, and Emily M. Bender. 2017. Towards linguistically generalizable NLP systems: A workshop and shared task. In Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems.
Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758-764, Atlanta, Georgia. Association for Computational Linguistics.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy A. Mann, and Pushmeet Kohli. 2018. On the effectiveness of interval bound propagation for training verifiably robust models. CoRR, abs/1810.12715.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Linguistics.
Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP).
Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. 2017. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97-117. Springer.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
Yitong Li, Trevor Cohn, and Timothy Baldwin. 2017. Robust training under linguistic adversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 21-27.
Xunying Liu, Xie Chen, Yongqiang Wang, Mark J. F. Gales, and Philip C. Woodland. 2016. Two efficient lattice rescoring methods using recurrent neural network language models. IEEE/ACM Trans. Audio, Speech & Language Processing, 24(8):1438-1449.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
WordNet: A lexical database for English. A George, Miller, Commun. ACM. 3811George A. Miller. 1995. WordNet: A lexical database for English. Commun. ACM, 38(11):39-41.
Differentiable abstract interpretation for provably robust neural networks. Matthew Mirman, Timon Gehr, Martin Vechev, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine Learning80Matthew Mirman, Timon Gehr, and Martin Vechev. 2018. Differentiable abstract interpretation for prov- ably robust neural networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 3578-3586.
Counter-fitting word vectors to linguistic constraints. Nikola Mrkšić, Diarmuidó Séaghdha, Blaise Thomson, Milica Gašić, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, Steve Young, 10.18653/v1/N16-1018Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSan Diego, CaliforniaAssociation for Computational LinguisticsNikola Mrkšić, DiarmuidÓ Séaghdha, Blaise Thom- son, Milica Gašić, Lina M. Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-148, San Diego, California. Association for Computational Linguis- tics.
Paraphrase lattice for statistical machine translation. Takashi Onishi, Masao Utiyama, Eiichiro Sumita, Proceedings of the ACL 2010 Conference Short Papers. the ACL 2010 Conference Short PapersUppsala, SwedenAssociation for Computational LinguisticsTakashi Onishi, Masao Utiyama, and Eiichiro Sumita. 2010. Paraphrase lattice for statistical machine translation. In Proceedings of the ACL 2010 Con- ference Short Papers, pages 1-5, Uppsala, Sweden. Association for Computational Linguistics.
GloVe: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, 10.3115/v1/D14-1162Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsJeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543. Association for Computational Linguistics.
Deep contextualized word representations. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, 10.18653/v1/N18-1202Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Certified defenses against adversarial examples. Aditi Raghunathan, Jacob Steinhardt, Percy Liang, International Conference on Learning Representations. Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. 2018. Certified defenses against adversarial exam- ples. In International Conference on Learning Rep- resentations.
Semantically equivalent adversarial rules for debugging NLP models. Sameer Marco Tulio Ribeiro, Carlos Singh, Guestrin, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational Linguistics1Long Papers). Association for Computational LinguisticsMarco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversar- ial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865. Association for Computational Lin- guistics.
Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms. Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin, 10.18653/v1/P18-1041Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaLong Papers1Association for Computational LinguisticsDinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. Baseline needs more love: On simple word-embedding-based models and associated pool- ing mechanisms. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440- 450, Melbourne, Australia. Association for Compu- tational Linguistics.
Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, Christopher Potts, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642. Association for Computational Linguistics.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, arXiv:1312.6199Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprintChristian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Robustness may be at odds with accuracy. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry, International Conference on Learning Representations. Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. 2019. Ro- bustness may be at odds with accuracy. In Interna- tional Conference on Learning Representations.
Adversarial risk and the dangers of evaluating against weak attacks. Jonathan Uesato, O' Brendan, Pushmeet Donoghue, Aron Kohli, Van Den Oord, ICML. Jonathan Uesato, Brendan O'Donoghue, Pushmeet Kohli, and Aron van den Oord. 2018. Adversarial risk and the dangers of evaluating against weak at- tacks. In ICML, pages 5032-5041.
Universal trigger sequences for attacking and analyzing NLP. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh, Empirical Methods in Natural Language Processing (EMNLP). Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gard- ner, and Sameer Singh. 2019. Universal trigger se- quences for attacking and analyzing NLP. In Em- pirical Methods in Natural Language Processing (EMNLP).
Formal security analysis of neural networks using symbolic intervals. Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana, 27th USENIX Security Symposium (USENIX Security 18). Baltimore, MD. USENIX AssociationShiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. 2018. Formal security analysis of neural networks using symbolic inter- vals. In 27th USENIX Security Symposium (USENIX Security 18), pages 1599-1614, Baltimore, MD. USENIX Association.
Towards fast computation of certified robustness for ReLU networks. Lily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, PMLRProceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningStockholm Sweden80Duane Boning, and Inderjit Dhillon. StockholmsmssanLily Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Luca Daniel, Duane Boning, and In- derjit Dhillon. 2018. Towards fast computation of certified robustness for ReLU networks. In Proceed- ings of the 35th International Conference on Ma- chine Learning, volume 80 of Proceedings of Ma- chine Learning Research, pages 5276-5285, Stock- holmsmssan, Stockholm Sweden. PMLR.
Provable defenses against adversarial examples via the convex outer adversarial polytope. Eric Wong, Zico Kolter, International Conference on Machine Learning. Eric Wong and Zico Kolter. 2018. Provable defenses against adversarial examples via the convex outer ad- versarial polytope. In International Conference on Machine Learning, pages 5283-5292.
Character-level convolutional networks for text classification. Xiang Zhang, Junbo Zhao, Yann Lecun, Advances in Neural Information Processing Systems. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in Neural Information Pro- cessing Systems, pages 649-657. |
63,423,276 | The FBK Participation in the WMT15 Automatic Post-editing Shared Task | In this paper, we describe the "FBK English-Spanish Automatic Post-editing (APE)" systems submitted to the APE shared task at the WMT 2015. We explore the most widely used statistical APE technique (monolingual) and its most significant variant (context-aware). In this exploration, we introduce some novel task-specific dense features through which we observe improvements over the default setup of these approaches. We show these features are useful to prune the phrase table in order to remove unreliable rules and help the decoder to select useful translation options during decoding. Our primary APE system submitted at this shared task performs significantly better than the standard APE baseline. | [
5810690,
4895939,
7397197
] | The FBK Participation in the WMT15 Automatic Post-editing Shared Task
Rajen Chatterjee chatterjee@fbk.eu, Marco Turchi, Matteo Negri
Fondazione Bruno Kessler
Proceedings of the Tenth Workshop on Statistical Machine Translation, Lisboa, Portugal, September 2015. Association for Computational Linguistics.
In this paper, we describe the "FBK English-Spanish Automatic Post-editing (APE)" systems submitted to the APE shared task at the WMT 2015. We explore the most widely used statistical APE technique (monolingual) and its most significant variant (context-aware). In this exploration, we introduce some novel task-specific dense features through which we observe improvements over the default setup of these approaches. We show these features are useful to prune the phrase table in order to remove unreliable rules and help the decoder to select useful translation options during decoding. Our primary APE system submitted at this shared task performs significantly better than the standard APE baseline.
Introduction
Over the last decade a lot of research has been carried out to mimic the human post-editing process in the field of Automatic Post-Editing (APE). The objective of APE is to learn how to correct machine translation (MT) errors by leveraging human post-editing feedback. The data generated by this feedback, in the form of post-edits, possesses an unprecedented wealth of knowledge about the dynamics (practical and cognitive) of the translation process, and APE leverages this knowledge to improve MT quality. The problem is appealing for several reasons. On one side, as shown by Parton et al. (2012), APE systems can improve MT output by exploiting information unavailable to the decoder, or by performing deeper text analysis that is too expensive at the decoding stage. On the other side, APE represents the only way to rectify errors in the "black-box" scenario where the MT system is unknown or its internal decoding information is not available.

The goal of the APE task is to challenge research groups to improve MT output quality using a dataset consisting of triplets of sentences (source, MT output, human post-edition). We face the "MT-as-black-box" scenario, so we neither have access to the MT engine nor to any decoding trace. The data for this pilot task belongs to the generic news domain, which leads to data sparseness, and the post-editions of the MT output were obtained through crowdsourcing, which makes them vulnerable to noise, making this task even more challenging.
To begin with, §2 discusses the statistical APE methods used to implement our APE systems, and §3 describes the data set available for this shared task and provides details of the experimental setup.

§4 presents our major contribution: it discusses the FBK-APE pipeline and shows that incorporating task-specific dense features can enhance APE systems. Our final submitted systems are reported in §5, followed by the conclusion in §6.
Statistical APE Methods
In this paper we examine the most widely used statistical phrase-based post-editing strategy proposed by Simard et al. (2007) and its most significant variant proposed by Béchara et al. (2011). We describe the two methods and their pros and cons in the following subsections.
2.1 APE-1 (Simard et al., 2007)
In this approach, APE systems are trained in the same way as a statistical machine translation (SMT) system. But, in contrast to SMT, which makes use of a source-target parallel corpus, APE uses the MT output and its corresponding human post-edits as the parallel corpus. One of the most important concepts missing from this "monolingual translation" is the inclusion of source information, which is incorporated in the next approach.
2.2 APE-2 (Béchara et al., 2011)
This technique is the most significant variant of (Simard et al., 2007); it introduces a new data representation that includes the source information along with the MT output on the source side of the parallel corpus. For each MT word f, the corresponding source word (or phrase) e is identified through word alignment and used to obtain a joint representation f#e. This results in a new intermediate language F#E that represents the new source side of the parallel data used to train the statistical APE system. This "context-aware" variant seems to be more precise but faces two potential problems. First, preserving the source context comes at the cost of a larger vocabulary size and, consequently, higher data sparseness that will eventually reduce the reliability of the translation rules being learned. Second, the joint representation f#e may be affected by word alignment errors, which may mislead the learning of translation options.
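To make the joint representation concrete, the following minimal sketch (our own illustration, not the authors' code; the pair-based alignment format is an assumption) builds the f#e source side from an MT sentence, its source sentence, and word alignments:

```python
def joint_representation(mt_tokens, src_tokens, alignments):
    """Build the context-aware f#e side of the APE-2 training data.

    alignments: set of (mt_index, src_index) pairs, e.g. from MGIZA++.
    Each MT word f is joined with its aligned source word(s) e as 'f#e';
    unaligned MT words are kept as plain 'f'.
    """
    joint = []
    for i, f in enumerate(mt_tokens):
        aligned = [src_tokens[j] for (k, j) in sorted(alignments) if k == i]
        joint.append(f + "#" + "_".join(aligned) if aligned else f)
    return " ".join(joint)

# Toy example: each MT (Spanish) word carries its aligned source (English) word.
print(joint_representation(
    ["la", "casa", "azul"], ["the", "blue", "house"],
    {(0, 0), (1, 2), (2, 1)}))
# -> la#the casa#house azul#blue
```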
Recently, Chatterjee et al. (2015) presented a systematic comparison of these two approaches over multiple language pairs and revealed that including source information in the form of the context-aware variant improves translation quality over the standard monolingual translation approach. They also showed that using the monolingual translation alignment to build the context-aware APE helps to mitigate the sparsity issue at the level of word alignment; for this reason, we use this configuration to implement the APE-2 method.
Data set and Experimental setup
Data: In this shared task we are provided with a tri-parallel corpus consisting of source (src), MT output (mt), and human post-edits (pe). While APE-1 uses only the last two elements of the triplet, all of them are used in the context-aware APE-2. To obtain the joint representation (f#e) in APE-2, the word alignment model is trained on the src-mt parallel portion of the training data. The training set consists of ∼11K triplets; we divide the development set into a dev set and a test set of 500 triplets each. Our evaluation is based on the performance achieved on this test set. We tokenize the data using the tokenizer available in the MOSES (Koehn et al., 2007) toolkit. Training and evaluation of our APE systems are performed on true-cased data.

Experiment Settings: To develop the APE systems we use the phrase-based statistical machine translation toolkit MOSES (Koehn et al., 2007). For all the experiments mentioned in this paper we use "grow-diag-final-and" as the alignment heuristic and the "msd-bidirectional-fe" heuristic for the reordering model. MGIZA++ (Gao and Vogel, 2008) is used for word alignment. The APE systems are tuned to optimize TER (Snover et al., 2006) with MERT (Och, 2003). We follow an incremental strategy to develop the APE systems: at each stage of the APE pipeline we find the best configuration of a component and then proceed to explore the next component. Our APE pipeline consists of several stages, namely language model selection, phrase table pruning, and feature design, as discussed in the following sections.

Evaluation Metric: We select TER (Snover et al., 2006) as our evaluation metric because it mimics the human post-editing effort by measuring the edit operations needed to translate the MT output into its human-revised version.

Apart from TER, we also compute the number of sentences modified 1 in the test set and then compute the precision as follows:

Precision = (Number of Sentences Improved) / (Number of Sentences Modified)

Baseline: Our baseline is the MT output as-is. Evaluated against the corresponding human post-edited corpus, it obtains a TER score of 23.10.
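As an illustration, the sentence-level bookkeeping behind this precision metric could look as follows (a sketch under the assumption that a sentence-level `ter` scoring function is available):

```python
def modified_precision(baseline_out, ape_out, references, ter):
    """Precision = improved sentences / modified sentences (footnote 1)."""
    modified = improved = 0
    for mt, ape, ref in zip(baseline_out, ape_out, references):
        t_mt, t_ape = ter(mt, ref), ter(ape, ref)
        if t_ape != t_mt:        # TER differs from the baseline: modified
            modified += 1
            if t_ape < t_mt:     # lower TER means the sentence improved
                improved += 1
    return improved / modified if modified else 0.0
```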
APE Pipeline
In this section we describe the various components that we explore at each stage of the pipeline. At each stage, we study the effect of several configurations of each component on both APE methods (APE-1 and APE-2).
Language Model Selection (APE-LM)
We use various data sets to train multiple language models, to see which of them has the highest impact on translation quality. All the LMs are 5-gram models with Kneser-Ney smoothing, trained using the IRSTLM toolkit (Federico et al., 2008). The data sets vary in quality and quantity as described below:
• LM1 contains only the training data (∼11K sentences) provided in this shared task. Although this data set contains few sentences for training a language model compared to the data typically used in MT, it is quite reliable because it is sampled from the same distribution as the test set.
Results of both APE systems are shown in Table 1. We notice that the performance of the APE systems does not vary much across the different LMs. This may come from the fact that the news commentary and news crawl data 2 do not resemble the shared task data well. For this reason, the in-domain LM1 is selected and used in the next stages.

2 http://www.statmt.org/wmt13/translation-task.html
Pruning Strategy (APE-LM1-Prun)
To remove unreliable translation rules generated from the data obtained through crowdsourcing, pruning strategies are investigated. First, we test the classic pruning technique by Johnson et al. (2007), which is based on significance testing of phrase pair co-occurrence in the parallel corpus. According to our experiments, this technique is too aggressive when applied to limited amounts of sparse data: nearly 5% of the phrase table is retained after pruning, with mostly self-rules (translation options that contain the same source and target phrase).
For this reason we develop a novel feature for pruning which measures the usefulness of a translation option present in the phrase table. For each translation option in the phrase table, all the parallel sentences are retrieved from the training set such that the source phrase of the translation option is present in the source sentence of the parallel corpus. We then substitute the target phrase of the translation option in the source sentence of the parallel corpus and then compute the TER score wrt. the corresponding target sentence. If TER increases then we increment the neg-count by 1, and if TER decreases we increment the pos-count by 1. Finally, we compute the neg-impact and the pos-impact as follows:
neg-impact = neg-count / (Number of Retrieved Sentences)

pos-impact = pos-count / (Number of Retrieved Sentences)

Once these ratios are computed for all translation options, we filter the phrase table by thresholding on the neg-impact to remove rules which are not useful (the higher the neg-impact, the less useful the rule): all translation options with neg-impact greater than or equal to the threshold value are filtered out. We apply this pruning strategy to both APE methods over various threshold values. Table 2 and Table 3 show the performance after pruning the APE-1-LM1 and APE-2-LM1 systems respectively. In Table 2, we observe that the TER scores for the various threshold values are very close to each other, so in order to select the best threshold value we base our decision on precision: for APE-1, we select the threshold value of 0.4, which shows the highest precision, namely APE-1-LM1-Prun0.4. For APE-2, it is evident from the results in Table 3 that the threshold value of 0.2 proves to be the best both in terms of TER score (a reduction of 1.13 points) and in terms of precision (APE-2-LM1-Prun0.2). These results suggest that our pruning technique has a larger impact on the APE-2 method compared to APE-1. This is motivated by the fact that the context-aware approach is affected by the data sparseness problem, resulting in a large number of unreliable translation options that can be removed from the phrase table.
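A minimal sketch of this pruning feature (our own illustration, not the released scripts; simple substring matching and the external `ter` function are simplifying assumptions):

```python
def impact_scores(option, train_pairs, ter):
    """Compute (neg-impact, pos-impact) for one phrase-table entry.

    option: (source_phrase, target_phrase) from the phrase table.
    train_pairs: list of (mt_sentence, post_edit) training pairs.
    ter: function returning the TER of a hypothesis vs. a reference.
    """
    src, tgt = option
    pos = neg = retrieved = 0
    for mt_sent, pe_sent in train_pairs:
        if src not in mt_sent:
            continue
        retrieved += 1
        before = ter(mt_sent, pe_sent)
        after = ter(mt_sent.replace(src, tgt), pe_sent)
        if after > before:       # TER increased: count against the rule
            neg += 1
        elif after < before:     # TER decreased: count in its favor
            pos += 1
    if retrieved == 0:
        return 0.0, 0.0
    return neg / retrieved, pos / retrieved

def prune(phrase_table, train_pairs, ter, threshold=0.2):
    # Keep only the options whose neg-impact is below the threshold.
    return [opt for opt in phrase_table
            if impact_scores(opt, train_pairs, ter)[0] < threshold]
```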
New Dense Features Design
The final stage of our APE pipeline is feature design. When a translation system is trained using Moses, it generates a translation model consisting of default dense features such as the phrase translation probabilities (direct and indirect) and the lexical translation probabilities (direct and indirect). In the task of automatic post-editing, where the source and target phrases are in the same language, we can leverage this information to provide the decoder with some useful insights. In this direction, we design four task-specific dense features to raise the "awareness" of the decoder.
• Similarity (f1):
This feature is quite similar to the one proposed in (Grundkiewicz and Junczys-Dowmunt, 2014); it measures the similarity between the source and target phrase of a translation option. The score for f1 is computed as follows:

f1_score = e^(1 - ter(s,t))

where ter measures the number of edit operations required to translate the source phrase s into the target phrase t, and is computed using TER (Snover et al., 2006).
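A one-line sketch of this feature (assuming a `ter` function that returns the edit rate between two phrases):

```python
import math

def similarity_feature(source_phrase, target_phrase, ter):
    # f1 = exp(1 - TER(s, t)); identical phrases get the maximum score e.
    return math.exp(1.0 - ter(source_phrase, target_phrase))
```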
• Reliability (f2.1 and f2.2):
We allow the model to learn the reliability of a translation option by providing it with statistics on the quality (in terms of HTER) of the parallel sentences used to learn that particular translation option: the better the quality, the higher the likelihood of learning reliable rules. For each translation option in the phrase table, all the parallel sentence pairs from the training data containing the source phrase in the machine-translated sentence of the pair and the target phrase in the post-edited sentence are retrieved along with their HTER scores. These scores are then used to compute the following two features:
Median (f2.1): the median of the HTER values of all the retrieved pairs.
Standard Deviation (f2.2): the standard deviation of the HTER values of all the retrieved pairs.
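A sketch of these two reliability features (again our own illustration; retrieval by substring matching and an external sentence-level `hter` function are assumptions):

```python
import statistics

def reliability_features(option, train_pairs, hter):
    """Median (f2.1) and standard deviation (f2.2) of the HTER scores of
    the training pairs that could have generated this translation option."""
    src, tgt = option
    scores = [hter(mt, pe) for mt, pe in train_pairs
              if src in mt and tgt in pe]
    if len(scores) < 2:
        return (scores[0] if scores else 0.0), 0.0
    return statistics.median(scores), statistics.stdev(scores)
```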
• Usefulness (f3):
As discussed in Section 4.2, we use pos-impact as a feature measuring the positive impact of a translation option over the training set: the higher the positive impact, the higher its usefulness.
We study the impact of the individual features when applied one at a time and when used all together. Table 4 and Table 5 show the performance of the various features for the APE-1-LM1-Prun0.4 and APE-2-LM1-Prun0.2 systems respectively. We observe, on this data set, that the use of these features preserves the APE performance in terms of TER score, while a slight improvement is observed in terms of precision for both APE systems, indicating their contribution to improving translation quality.
Final Submitted Systems
Our primary system is the best system in Table 5, i.e. APE-2-LM1-Prun0.2-f1, and our contrastive system is the best system in Table 4, i.e. APE-1-LM1-Prun0.4-f2.1-f2.2. According to the shared task evaluation report, the scores of our submitted systems are shown in Table 6. Although we could not beat the baseline (MT), we see a clear improvement over the APE baseline (Simard et al., 2007) thanks to the inclusion of our novel features and the use of the pruning strategy.
Conclusion
The APE shared task was challenging in many respects (black-box MT, generic news-domain data, crowdsourced post-editions). Though we were unable to beat the MT baseline, we gained some positive experience through this shared task. First, our primary APE system performed significantly better (a 0.61 TER reduction) than the standard APE baseline (Simard et al., 2007), as reported in Table 6. Second, our novel dense feature (neg-impact), used to prune the phrase table, yields a significant improvement in the context-aware APE performance.
Third, the other task-specific dense features, which measure the similarity and reliability of the translation options, help to improve the precision of our APE systems. To encourage the use of our features we have publicly released the scripts at https://bitbucket.org/turchmo/apeatfbk/src/master/papers/WMT2015/APE_2015_System_Scripts.zip.
Table 2: Performance (TER score) of the APE-1-LM1 after pruning at various threshold values

Threshold   TER     Number of sentences modified   Precision
0.8         23.90   88                             0.12
0.6         23.91   90                             0.13
0.4         23.98   94                             0.15
0.2         23.77   70                             0.12

Table 3: Performance (TER score) of the APE-2-LM1 after pruning at various threshold values

Threshold   TER     Number of sentences modified   Precision
0.8         24.29   130                            0.20
0.6         23.99   103                            0.18
0.4         23.66   70                             0.18
0.2         23.46   50                             0.22

Table 4: Performance (TER score) of the APE-1-LM1-Prun0.4 for different features

Features            TER     Number of sentences modified   Precision
f1                  23.87   81                             0.16
f2.1, f2.2          23.92   94                             0.19
f3                  23.88   82                             0.14
f1, f2.1, f2.2, f3  23.97   85                             0.12

Table 5: Performance (TER score) of the APE-2-LM1-Prun0.2 for different features

Features                TER     Number of sentences modified   Precision
f1                      23.50   52                             0.27
f2.1, f2.2              23.50   53                             0.20
f3.1                    23.52   59                             0.22
f1, f2.1, f2.2, f3.1    23.52   54                             0.19

Table 6: APE shared task evaluation score (TER)

1 For each sentence in the test set, if the TER score of the APE system is different from that of the baseline, we consider it a modified sentence.
Acknowledgements

This work has been partially supported by the EC-funded H2020 project QT21 (grant agreement no. 645452).
Hanna Béchara, Yanjun Ma, and Josef van Genabith. 2011. Statistical post-editing for a statistical MT system. In MT Summit, volume 13, pages 308-315.

Rajen Chatterjee, Marion Weller, Matteo Negri, and Marco Turchi. 2015. Exploring the Planet of the APEs: a Comparative Study of State-of-the-art Methods for MT Automatic Post-Editing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, Beijing, China.

Marcello Federico, Nicola Bertoldi, and Mauro Cettolo. 2008. IRSTLM: an open source toolkit for handling large scale language models. In Interspeech, pages 1618-1621.

Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Software Engineering, Testing, and Quality Assurance for Natural Language Processing, pages 49-57. Association for Computational Linguistics.

Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2014. The AMU system in the CoNLL-2014 shared task: Grammatical error correction by data-intensive and feature-rich statistical machine translation. CoNLL-2014, page 25.

John Howard Johnson, Joel Martin, George Foster, and Roland Kuhn. 2007. Improving translation quality by discarding most of the phrasetable.

Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180. Association for Computational Linguistics.

Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 160-167. Association for Computational Linguistics.

Kristen Parton, Nizar Habash, Kathleen McKeown, Gonzalo Iglesias, and Adrià de Gispert. 2012. Can automatic post-editing make MT more meaningful? Proceedings of EAMT, 12:111-118.

Michel Simard, Cyril Goutte, and Pierre Isabelle. 2007. Statistical Phrase-Based Post-Editing. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT), pages 508-515, Rochester, New York.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223-231. |
235,482,401 | A Preliminary Study on Deep Learning Neural Networks-based Multi-Model Sentiment Detection | [] | A Preliminary Study on Deep Learning Neural Networks-based Multi-Model Sentiment Detection
Tai-Rong Chen (陳泰融), Yuan-Fu Liao (廖元甫) yfliao@mail.ntut.edu.tw
Department of Electronic Engineering, National Taipei University of Technology (國立臺北科技大學電子工程系)
Chen-Ming Pan (潘振銘) chenming@cht.com.tw, Tzu-Hsiu Kuo (郭姿秀)
Telecommunication Laboratories, Chunghwa Telecom, Taoyuan, Taiwan
Matúš Pleva matus.pleva@tuke.sk, Daniel Hládek daniel.hladek@tuke.sk
Department of Electronics and Multimedia Communications, Technical University of Košice, Slovakia
A Preliminary Study on Deep Learning Neural Networks-based Multi-Model Sentiment Detection
Acknowledgements

This work was partly supported by the Slovak Research and Development Agency under contracts no. APVV SK-TW-2017-0005, APVV-15-0517 and APVV-15-0731, partly by the Cultural and Educational Grant Agency project KEGA 009TUKE-4/2019 and partly by the Scientific Grant Agency research project VEGA 1/0511/17, both financed by the Ministry of Education, Science, Research and Sport of the Slovak Republic, partly by the Taiwan Ministry of Science and Technology MOST-SRDA contracts No. 107-2911-I-027-501, 108-2911-I-027-501, 107-2221-E-027-102, 107-3011-F-027-003 and 108-2221-E-027-067, and partly by Telecommunication Laboratories, Chunghwa Telecom, Taoyuan, Taiwan, contract No. TL-108-D301.
Keywords: sentiment detection, CNN, BERT, multi-modal sentiment detection

1. Introduction

This paper presents a preliminary study of speech sentiment detection, targeting the sentiment detection task of the Fearless Steps Challenge. The Fearless Steps Challenge was organized to celebrate the 50th anniversary of the Moon landing program as
Steps Corpus 語料,支援各個競賽項目,提供大量的訓練資料及測試資料,因為此項比
賽主要是希望可以使用自然環境當中所錄製的資料庫進行比賽,所以 Fearless Steps
Corpus 的語音資料,是真實太空任務中,太空人與任務中心的通訊對話錄音。
我們會選擇參加此項競賽,主要是因為目前大部分可取得的情緒相關語料庫,大都
是由演員表演的,且語料都過於完美或是過於乾淨,導致在這些語料庫上所獲得的結果,
經由人工標註驗證,因此研究獲得的結果會更加有公信力。
傳統上針對語音情緒偵測,通常專注於先提取情緒的低階聲學特徵[3-14]。一些廣
泛使用的頻譜特徵是 Mel-Frequency Cepstral Coefficients(MFCC)[1]、線性預測倒譜係
數或是音高軌跡。然後再用高斯混合模型、支持向量機或是馬爾可夫模型進行情緒辨認。
而若想從語意來求取情緒特徵參數,則需要先有語音辨認器,將語音轉成逐字稿,再以
自然語言處理方式,例如以 word-to-vector 求取特徵向量,再以類神經網路進行情緒辨
認。然而,人類情感與聲學低階特徵的表現,實際上不見得一致。而若用逐字稿,則通
常會有語音辨認錯誤,影響最終判斷的情形。
In our preliminary experiments for the Fearless Steps Challenge, we found that a model built on audio alone, or one trained on text alone, was insufficient, mainly because the emotional cues in speech may show up simultaneously in the timbre, the tone, and the wording. In the competition we therefore not only extracted the latent emotion-related features from the audio and from the transcripts separately, but also fused the two feature sets with a multi-modal neural network, modeling the emotional cues of both the audio and the transcripts at the same time in order to improve the accuracy of sentiment detection.
This paper therefore proposes the multi-modal sentiment detection model shown in Figure 1. The main idea is to consider the acoustic and the semantic information contained in the speech signal at the same time, building a deep-neural-network-based multi-modal speech sentiment detection model to detect the emotional state conveyed in the speech signal. Concretely, we (1) use a Convolutional Neural Network (CNN) to automatically derive emotion features from the acoustic spectrum, and (2) use Bidirectional Encoder Representations from Transformers (BERT) to derive semantic emotion features from the speech transcripts. The two feature vectors are then fused to strengthen the sentiment detection performance of the system. Figure 1 shows our sentiment detection framework:

Figure 1: Architecture of the multi-modal neural network model
The system consists of three modules: (1) the raw speech signal is converted into an image-like spectrogram that serves as the input to the CNN [2], so that a deep CNN model pre-trained on large amounts of speech data can be used to extract high-level acoustic emotion features; (2) several consecutive transcript passages are modeled with a BERT model pre-trained on a large-scale text dataset, distilling high-level semantic emotion features; (3) the acoustic and semantic emotion features learned by the 2D-CNN and by BERT are integrated in a multi-modal fusion network. Finally, we take the output of the last hidden layer of the multi-modal network as the sentiment label of the segment.
2. Fearless Steps Challenge

2.1 Dataset

To evaluate the performance of the proposed models, we use the 100-hour database of full-mission radio communication recordings from NASA's Apollo program provided by the Fearless Steps Challenge: lift-off accounts for about 25 hours, the lunar landing for about 50 hours, and the lunar walk for about 25 hours. Because the mission phases differ, the speech activity density of the corpus varies throughout the mission, and the quality of the speech data also varies between 0 and 20 dB signal-to-noise ratio (SNR). To ensure that the data are distributed fairly across the training, evaluation, and development subsets, the Fearless Steps Challenge categorizes the data according to noise level and activity density.

The training subset provided by the Fearless Steps Challenge comes with manually transcribed transcripts and sentiment labels. The evaluation subset provides only automatically generated transcripts and sentiment labels. The test subset provides neither sentiment labels nor transcripts, so in this work the audio files must first be transcribed before testing. Since the Fearless Steps Challenge provides speech recordings of different mission scenarios, five channel scenarios are provided in total: Flight Director (FD), Mission Operations Control Room (MOCR), Guidance Navigation and Control (GNC), Network Controller (NTWK), and Electrical, Environmental and Consumables Manager (EECOM). Table 1 gives the distribution of speech time over the five scenarios for the different events.
Table 1: Total Speech Durations per Channel and Event

               ECOM  FD   GNC  MOCR  NTWK  Total
Lift Off       2.1   1.2  1.3  0.8   3.9   9.3
Lunar Landing  3.7   1.3  4.0  0.9   4.4   14.3
Lunar Walking  3.9   1.1  3.0  1.4   2.8   12.2
Total          9.7   3.6  8.3  3.1   11.1  35.8
To ensure the fairness of the Fearless Steps Challenge data, the organizers selected training and test data with comparable SNR, performing further selection based on silence duration and speech duration. Table 2 gives the mean SNR and the SNR standard deviation for the five different scenarios. Each of the five recording channels has a different SNR standard deviation; among them, Mission Operations Control Room (MOCR) has the highest standard deviation but also the lowest mean SNR, so the dynamic range of the noise fluctuates considerably in this recording scenario.

Table 2: Signal to Noise Ratio Statistics (dB SNR) per channel for Dev Data

                ECOM   FD     GNC    MOCR   NTWK
SNR (Mean)      13.32  14.67  14.91  5.07   10.68
SNR (Std. Dev)  7.40   10.51  11.96  12.60  11.17

During training, the Fearless Steps Challenge clusters the data by SNR; we then denoise each audio file according to its SNR standard deviation.
3. Multi-modal Sentiment Detection System

3.1 CNN-based acoustic emotion model

In the first stage of the proposed method, the input speech signal is framed and its spectrum is computed. We use a window size of 2 seconds with a frame shift of 10 ms in order to obtain enough training data. The signal is then transformed into the frequency domain, where the spectrum is filtered with a bank of Mel-frequency triangular filters to obtain the Mel-frequency filterbank parameters and, finally, the MFCCs. In this paper we use 20 filterbanks and 40 MFCCs for feature extraction, and the resulting MFCC matrix is fed into the CNN.

Figure 2: Sliding windows for sentiment detection
In our case, the CNN plays the important role of extracting acoustic emotion features from the spectrum of the speech signal. As shown in Figure 3, this paper feeds the MFCCs as a 2D input into a six-layer CNN with the basic [INPUT-CONV-RELU-POOL-CONV-RELU-POOL] architecture. The size of the CNN input is 40 * 32. To preserve as much of the information provided by the Fearless Steps Challenge as possible, we sample the training data of each sentiment instance with a sliding window. Finally, the complete CNN audio-recognition branch is integrated into the fused neural network architecture.

Figure 3: CNN Architecture for Sentiment Detection
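A minimal Keras sketch of such an acoustic branch (layer widths are illustrative assumptions; the paper specifies only the 40 * 32 input and the CONV-RELU-POOL pattern):

```python
from tensorflow.keras import layers, models

def build_audio_cnn(n_classes=4):
    # Input: 40 MFCC coefficients x 32 frames, one channel.
    return models.Sequential([
        layers.Input(shape=(40, 32, 1)),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        # Four outputs: positive, neutral, negative, non-sentiment/silence.
        layers.Dense(n_classes, activation="softmax"),
    ])
```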
3.2 BERT-based semantic emotion model

We use Google's BERT model. The input is the token sequence of a sentiment-bearing sentence [w1, w2, w3, ...]. Unlike other models, BERT adopts a simple approach: it randomly masks part of the input tokens and then predicts only those masked tokens. This procedure is called masked language modeling (masked LM, MLM); while training the bidirectional language model, a small fraction of the words is replaced with a mask token.

To feed in more emotion-dependent features, this paper likewise samples the text frame by frame with sliding windows, as illustrated in Figure 4: the 12 words before and the 12 words after the current word, 25 words in total, form one input sentence for BERT, and the window is shifted by a single word to obtain the next sentence. The resulting sentences are word-embedded and fed into the BERT model, as in Figure 4, and finally a dense layer followed by a softmax performs the classification.

Figure 4: BERT input text sampling and BERT cascaded with an LSTM
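A sketch of the sliding-window text sampling and classification head, written with the Hugging Face transformers API (our illustration; the pre-trained checkpoint and the use of the pooled output are assumptions):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(768, 4)   # positive / neutral / negative / non-sentiment

def text_windows(words, k=12):
    # 12 words before and 12 after the current word: 25-word windows,
    # shifted one word at a time.
    for i in range(len(words)):
        yield " ".join(words[max(0, i - k): i + k + 1])

def classify(window):
    enc = tokenizer(window, return_tensors="pt", truncation=True)
    with torch.no_grad():
        sent_vec = bert(**enc).pooler_output   # [1, 768] sentence vector
    return torch.softmax(head(sent_vec), dim=-1)
```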
3.3 Fused neural network emotion model

For the long-duration text and speech used in training and recognition, taken from the full-mission radio database of NASA's Apollo program provided by the Fearless Steps Challenge, we use a fused model to train the speech and the text jointly.

Feature-level fusion is the most common and straightforward approach: all extracted features are directly concatenated into a single high-dimensional feature vector, which is then used to train a single classifier for emotion recognition. Many previous works [15-19] have demonstrated the performance of feature-level fusion in emotion recognition tasks. However, because it merges the audio and text features in such a direct way, feature-level fusion cannot model complex relationships. In decision-level fusion, on the other hand, each input modality is modeled independently with its own emotion classifier, and the recognition results are then combined with algebraic rules such as "Max", "Min", "Sum", etc.; decision fusion has therefore also been adopted in emotion recognition. However, decision-level fusion cannot capture the correlations between the different modalities, since the modalities are treated as independent, and it thus does not match the nature of human emotional expression.

Model-level fusion, a compromise between feature-level and decision-level fusion, has also been used as a strong solution for emotion recognition. It aims to obtain a joint feature representation of the audio and text modalities, and its realization depends mainly on the fusion model used. For example, [4] adopts a multi-stream fused Hidden Markov Model (MFHMM) for model-level fusion, and [8] adopts error-weighted semi-coupled Markov model fusion for emotion recognition. For neural networks, model-level fusion is performed by first concatenating the feature representations from the hidden layers of the networks corresponding to the individual input modalities, and then adding extra hidden layers to learn a joint feature representation from the concatenated features. Existing model-level fusion methods still cannot effectively model the highly non-linear correlations between the audio and text modalities.
(一)訓練與測試語料
長時訓練及辨識的文字和語音由 Fearless Steps Challenge 所提供的美國宇航局阿波
羅計劃的全程無線電資料庫,包括 100 個小時。選擇的阿波羅 11 號任務主要分為三個
階段: (i)升空、 (ii)登月、 (iii)月球行走。為任務系統開發提供了 80 小時的音頻。
在這 80 個小時內,提供了 20 小時的經過人工驗證的答案。對於剩餘的 60 小時音頻,
提供 Baseline 系統生成的輸出答案,另外一組 20 小時將發布用於開放測試。
(二) 評估指標
The Fearless Steps Challenge scoring rules are as follows. Correctness is judged in units of 10 ms of audio. A score is awarded only when the detected label matches the reference annotation within the reference region, as shown in Figure 5; detections outside the reference scoring region are neither penalized nor credited, so only the truly matching time is counted. Each scoring region counts the matching answers per 10 ms frame (the lowest resolution of the labels).

Figure 5: Illustration of the scoring regions

The scoring formula is given below, where T_sys is the total correctly detected time of the system output and T_ref is the total time of the reference annotation; their ratio, expressed as a percentage, is the final figure used for ranking in the Fearless Steps Challenge:

Accuracy = T_sys / T_ref * 100%
5. Experiments and Competition Results

Our experiments train on the Fearless Steps Challenge corpus, which provides a Train data partition. Each audio file in the Train data is cut into training samples with sliding windows of size 2 s shifted by 10 ms. Below, we first run experiments on each part separately, discussing the CNN-based audio part and the BERT text part individually, and then discuss the multi-modal sentiment detection model.

In addition, Figure 6 shows the ranking of the results we submitted to the Fearless Steps Challenge after official scoring; we submitted 10 systems with different settings in total, which are explained step by step in the experiments below.

Figure 6: Official Fearless Steps Challenge ranking table
Experiment 1: acoustic and text sentiment detection

1. Acoustic CNN model

We first discuss the audio front-end and pre-processing of the multi-modal model on its own. The Fearless Steps Challenge answers fall into three classes, positive, neutral, and negative, while the evaluation metric additionally involves a Non-Sentiment part; during training we therefore also collect the Non-Sentiment portions of the test set with sliding windows.

In the audio-only tests we observe that, because the corpus audio is very noisy and the emotional dynamics of the dialog are not pronounced in most files, the accuracy for positive and negative is rather low. For neutral and silence, the confusion matrix in Figure 7 shows that silence has the highest accuracy, so the audio-only model is most effective there, while the neutral decisions remain somewhat inaccurate.

We trained the audio-only CNN neural network model (Figure 3) on the corpus by itself; its accuracy on the official Fearless Steps Challenge website is 44.07% (see the ranking in Table 5). Audio-only testing thus detects silence and neutral with a certain accuracy, but reaching the highest accuracy still requires the help of the text.
2. Text BERT model

Since the Fearless Steps Challenge test set provides no transcripts for the audio, we transcribe the test audio with Automatic Speech Recognition (ASR). ASR, however, only yields the start and end time of each word, so in the text pre-processing we use sliding windows: around the time of the word to be classified, a fixed range of words before and after it is taken to form the sentence, which is fed into the BERT model of Figure 4 for the text-only tests.

In the text tests we find that different text sampling ranges yield different accuracies; once the sampling range reaches 14 words before and after, the accuracy becomes stable, as shown in Table 4. With sampling ranges from 2 to 8 words, the sampled features are clearly insufficient and the accuracy does not improve noticeably, so the range was increased and tested from 9 to 22 words; a clear improvement in accuracy occurs between 8 and 9 words. This experiment confirms that the text sampling range has a definite effect on the accuracy of sentiment recognition.
Table 4: BERT model accuracy for different text sampling ranges

Txt Range  2      3      4      5      6      7      8      9      10     11     12
Acc        40.07  41.2   42.05  43.04  43.47  43.24  43.6   45.46  44.9   45.22  45.37

Txt Range  13     14     15     16     17     18     19     20     21     22     23
Acc        45.14  45.04  45.93  44.46  45.32  45.28  45.84  45.92  45.39  45.25  45.33
We trained the text-only BERT neural network model (Figure 4) on the corpus by itself; its accuracy on the official Fearless Steps Challenge website is 44.63% (see the ranking in Table 3). When recognizing emotion from text alone, the presence of emotion-related words has a definite effect on accuracy, but some scenarios cannot be handled by text alone; this paper therefore combines the two models in a multi-modal neural network.
Experiment 2: multi-modal sentiment detection

In the multi-modal experiments we use both the audio model and the text model for emotion recognition. Cross tests show that the audio model is relatively inaccurate for the positive and negative classes, while the text tests show that the text sampling range has a clear influence on the results. In the multi-modal model, the positive and negative cases that the audio cannot recognize are handled by the text model through emotion-related words, which helps to raise the model accuracy; the reasonable audio accuracy for silence and neutral in turn helps the multi-modal model to locate the boundaries of Non-Sentiment segments and to recognize neutral correctly. This paper therefore uses the fused neural network architecture to improve the model accuracy.
The corpus is divided into three classes, negative, neutral, and positive, which are to be distinguished. Since emotions change slowly while our sliding-window recognition makes the output change too quickly, we add a state-machine output mechanism for continuous recognition: a new label is committed only after it has been output a certain number of times in a row; otherwise the previous label continues to be output. For example, in Figure 8 the output is unstable and keeps jumping when no state machine is used, whereas with the state machine the output becomes close to stable.

The state diagram is shown in Figure 7. The default output is 0; only when the detector D outputs the transition value 1 three times in a row does the VAD output switch to 1. If the transition is interrupted or occurs fewer than 3 times, the VAD returns to its original value; otherwise the state switches to the new value. In other words, when a different value appears it is first put into a register while the previous value keeps being output, and only after the same new value has been obtained consecutively is the transition confirmed. This keeps the model output close to stable.

Figure 7: State machine diagram

Figure 8: Comparison before and after the state machine
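A sketch of this smoothing mechanism (our illustration; `register` corresponds to the "state machine register" count varied in the experiments below):

```python
class LabelSmoother:
    """Commit a new label only after it has been seen `register` times
    in a row; otherwise keep emitting the previous stable label."""
    def __init__(self, register=15, initial="neutral"):
        self.register = register
        self.stable = initial
        self.candidate, self.count = None, 0

    def step(self, label):
        if label == self.stable:
            self.candidate, self.count = None, 0
        elif label == self.candidate:
            self.count += 1
            if self.count >= self.register:
                self.stable, self.candidate, self.count = label, None, 0
        else:
            self.candidate, self.count = label, 1
        return self.stable
```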
With the help of the state machine, this experiment uses the multi-modal neural network architecture (Figure 5), cutting training data from both the text and the audio with sliding windows of size 2 s shifted by 10 ms. After 128 training epochs, the accuracy reaches 60.51%.

In the second experiment, the number of state-machine registers was changed so that the emotion output would not fluctuate too quickly; with the state-machine register set to 15, we obtain the highest accuracy of this paper, 73.11%, which is Rank 3 in the Sentiment Detection track of the Fearless Steps Challenge.
The approaches and settings of the different NTUT_sys systems in the official Fearless Steps Challenge ranking are explained below:

1. NTUT_sys1 uses Google speech recognition and cuts the audio into non-overlapping 15-second segments; audio-only testing with the neural network model of Figure 3. Official Fearless Steps Challenge accuracy: 39.49%.
2. NTUT_sys2 uses Google speech recognition and cuts the audio into non-overlapping 15-second segments; multi-modal neural network model as in Figure 1. Official accuracy: 60.51%.
3. NTUT_sys3 is a re-confirmation of NTUT_sys2's accuracy, resubmitted once more to the Fearless Steps Challenge; confirmed accuracy: 60.51%.
4. NTUT_sys4 is validated with sliding windows, with the frame set to 2 s and the shift to 10 ms; audio-only testing with the neural network model of Figure 3. Official accuracy: 44.07%.
5. NTUT_sys5 is validated with sliding windows, with the text sampling range adjusted to 14 words before and after; text-only testing with the neural network model of Figure 4. Official accuracy: 44.63%.
6. NTUT_sys6 is a re-confirmation of NTUT_sys4's accuracy, resubmitted once more to the Fearless Steps Challenge; confirmed accuracy: 44.07%.
7. NTUT_sys7 is a re-confirmation of NTUT_sys5's accuracy, resubmitted once more to the Fearless Steps Challenge; confirmed accuracy: 44.63%.
8. NTUT_sys8 returned the answers computed from the Fearless Steps Challenge Train data and is therefore not listed in the official ranking.
9. NTUT_sys9 is validated with sliding windows, using the multi-modal neural network for recognition, with the state-machine register set to 3. Official accuracy: 70.47%.
10. NTUT_sys10 is validated with sliding windows, using the multi-modal neural network for recognition, with the state-machine register set to 15. Official accuracy: 73.11%.

A state machine is used to smooth the jitter of the output, effectively resolving the instability produced at output time and improving accuracy. Finally, the official competition results show that our system reaches a sentiment detection accuracy of 73.11%, ranking third among the 20 results submitted by all teams; it not only surpasses the baseline reference system provided by the organizers (49.75%) but is also less than 1% behind the first place (74.07%).

References
[1] Y. Wang and L. Guan. An investigation of speech-based human emotion recognition, pp. 15-18, 2004.
[2] Z. Huang, M. Dong, Q. Mao, and Y. Zhan. Speech emotion recognition using CNN, pp. 801-804, 2014.
[3] Y. Wang and L. Guan. Recognizing human emotional state from audiovisual signals. IEEE Trans. Multimedia, vol. 10, no. 5, pp. 936-946, Aug. 2008.
[4] Z. Zeng, J. Tu, B. M. Pianfetti, and T. S. Huang. Audio-visual affective expression recognition through multistream fused HMM. IEEE Trans. Multimedia, vol. 10, no. 4, pp. 570-577, Jun. 2008.
[5] M. Mansoorizadeh and N. M. Charkari. Multimodal information fusion application to human emotion recognition from face and speech. Multimedia Tools Appl., vol. 49, no. 2, pp. 277-297, 2010.
[6] M. Glodek et al. Multiple classifier systems for the classification of audio-visual emotional states. In Affective Computing and Intelligent Interaction, Berlin, Germany: Springer, vol. 6975, pp. 359-368, 2011.
[7] M. Soleymani, M. Pantic, and T. Pun. Multimodal emotion recognition in response to videos. IEEE Trans. Affect. Comput., vol. 3, no. 2, pp. 211-223, Apr./Jun. 2012.
[8] J.-C. Lin, C.-H. Wu, and W.-L. Wei. Error weighted semi-coupled hidden Markov model for audio-visual emotion recognition. IEEE Trans. Multimedia, vol. 14, no. 1, pp. 142-156, Feb. 2012.
[9] J. Wagner, E. Andre, F. Lingenfelser, and J. Kim. Exploring fusion methods for multimodal emotion recognition with missing data. IEEE Trans. Affect. Comput., vol. 2, no. 4, pp. 206-218, Oct. 2011.
[10] A. Metallinou, M. Wöllmer, A. Katsamanis, F. Eyben, B. Schuller, and S. Narayanan. Context-sensitive learning for enhanced audiovisual emotion classification. IEEE Trans. Affect. Comput., vol. 3, no. 2, pp. 184-198, Apr./Jun. 2012.
[11] D. Gharavian, M. Bejani, and M. Sheikhan. Audio-visual emotion recognition using FCBF feature selection method and particle swarm optimization for fuzzy ARTMAP neural networks. Multimedia Tools Appl., vol. 76, no. 2, pp. 2331-2352, 2017.
[12] S. Zhalehpour, O. Onder, Z. Akhtar, and C. E. Erdem. BAUM-1: A spontaneous audio-visual face database of affective and mental states. IEEE Trans. Affect. Comput.
[13] R. R. Sarvestani and R. Boostani. FF-SKPCCA: Kernel probabilistic canonical correlation analysis. Appl. Intell., vol. 46, no. 2, pp. 438-454, 2017.
[14] M. Bejani, D. Gharavian, and N. M. Charkari. Audiovisual emotion recognition using ANOVA feature selection method and multi-classifier neural networks. Neural Comput. Appl., vol. 24, no. 2, pp. 399-412, 2014.
[15] Y. Wang and L. Guan. Recognizing human emotional state from audiovisual signals. IEEE Trans. Multimedia, vol. 10, no. 5, pp. 936-946, Aug. 2008.
[16] M. Mansoorizadeh and N. M. Charkari. Multimodal information fusion application to human emotion recognition from face and speech. Multimedia Tools Appl., vol. 49, no. 2, pp. 277-297, 2010.
[17] Y. Wang, L. Guan, and A. N. Venetsanopoulos. Kernel cross-modal factor analysis for information fusion with application to bimodal emotion recognition. IEEE Trans. Multimedia, vol. 14, no. 3, pp. 597-607, Jun. 2012.
[18] B. Schuller, R. Müller, B. Hörnler, A. Höthker, H. Konosu, and G. Rigoll. Audiovisual recognition of spontaneous interest within conversations. In Proc. 9th Int. Conf. Multimodal Interfaces (ICMI), pp. 30-37, 2007.
[19] C. Busso et al. Analysis of emotion recognition using facial expressions, speech and multimodal information. In Proc. 6th Int. Conf. Multimodal Interfaces (ICMI), pp. 205-211, 2004. |
|
15,680,042 | Extrinsic Evaluation of Dialog State Tracking and Predictive Metrics for Dialog Policy Optimization | During the recent Dialog State Tracking Challenge (DSTC), a fundamental question was raised: "Would better performance in dialog state tracking translate to better performance of the optimized policy by reinforcement learning?" Also, during the challenge system evaluation, another nontrivial question arose: "Which evaluation metric and schedule would best predict improvement in overall dialog performance?" This paper aims to answer these questions by applying an off-policy reinforcement learning method to the output of each challenge system. The results give a positive answer to the first question. Thus the effort to separately improve the performance of dialog state tracking as carried out in the DSTC may be justified. The answer to the second question also draws several insightful conclusions on the characteristics of different evaluation metrics and schedules. | [
10250499,
10079468,
1046547,
3083188,
1294169,
9457948
] | Extrinsic Evaluation of Dialog State Tracking and Predictive Metrics for Dialog Policy Optimization
Sungjin Lee sungjin.lee@cs.cmu.edu
Language Technologies Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Extrinsic Evaluation of Dialog State Tracking and Predictive Metrics for Dialog Policy Optimization
Proceedings of the SIGDIAL 2014 Conference, Philadelphia, U.S.A., 18-20 June 2014. Association for Computational Linguistics.
During the recent Dialog State Tracking Challenge (DSTC), a fundamental question was raised: "Would better performance in dialog state tracking translate to better performance of the optimized policy by reinforcement learning?" Also, during the challenge system evaluation, another nontrivial question arose: "Which evaluation metric and schedule would best predict improvement in overall dialog performance?" This paper aims to answer these questions by applying an off-policy reinforcement learning method to the output of each challenge system. The results give a positive answer to the first question. Thus the effort to separately improve the performance of dialog state tracking as carried out in the DSTC may be justified. The answer to the second question also draws several insightful conclusions on the characteristics of different evaluation metrics and schedules.
Introduction
Statistical approaches to spoken dialog management have proven very effective in gracefully dealing with noisy input due to Automatic Speech Recognition (ASR) and Spoken Language Understanding (SLU) errors (Lee, 2013). Most recent advances in statistical dialog modeling have been based on the Partially Observable Markov Decision Processes (POMDP) framework which provides a principled way for sequential action planning under uncertainty (Young et al., 2013). In this approach, the task of dialog management is generally decomposed into two subtasks, i.e., dialog state tracking and dialog policy learning. The aim of dialog state tracking is to accurately estimate the true dialog state from noisy observations by incorporating patterns between turns and external knowledge as a dialog unfolds (Fig. 1). The dialog policy learning process then strives to select an optimal system action given the estimated dialog state.

Many dialog state tracking algorithms have been developed. Few studies, however, have reported the strengths and weaknesses of each method. Thus the Dialog State Tracking Challenge (DSTC) was organized to advance state-of-the-art technologies for dialog state tracking by allowing for reliable comparisons between different approaches using the same datasets. Thanks to the DSTC, we now have a better understanding of effective models, features and training methods we can use to create a dialog state tracker that is not only of superior performance but also very robust to realistic mismatches between development and deployment environments (Lee and Eskenazi, 2013).
Despite the fruitful results, the challenge was largely limited to intrinsic evaluation, thus leaving an important question unanswered: "Would the improved performance in dialog state tracking carry over to dialog policy optimization?" Furthermore, there was no consensus on what and when to measure, resulting in a large set of metrics being evaluated with three different schedules. With this variety of metrics, it is not clear what the evaluation result means. Thus it is important to answer the question "Which metric best serves as a predictor of the improvement in dialog policy optimization?", since this is the ultimate goal in terms of end-to-end dialog performance. The aim of this paper is to answer these two questions via corpus-based experiments. Similar to the rationale behind the DSTC, the corpus-based design allows us to compare different trackers on the same data. We applied a sample-efficient off-policy reinforcement learning (RL) method to the outputs of each tracker so that we may examine the relationship between the performance of dialog state tracking and that of the optimized policy, as well as which metric shows the highest correlation with the performance of the optimized policy. This paper is structured as follows. Section 2 briefly describes the DSTC and the metrics adopted in the challenge. Section 3 elaborates on the extrinsic evaluation method based on off-policy RL. Section 4 presents the extrinsic evaluation results and discusses their implications for dialog state tracking evaluation metrics. Finally, Section 5 concludes with a brief summary and suggestions for future research.
DSTC Task and Evaluation Metrics
This section briefly describes the task for the DSTC and evaluation metrics. For more details, please refer to the DSTC manual.¹

¹http://research.microsoft.com/apps/pubs/?id=169024
Task Description
DSTC data is taken from several different spoken dialog systems which all provided bus schedule information for Pittsburgh, Pennsylvania, USA (Raux et al., 2005) as part of the Spoken Dialog Challenge (Black et al., 2011). There are 9 slots which are evaluated: route, from.desc, from.neighborhood, from.monument, to.desc, to.neighborhood, to.monument, date, and time. Since both marginal and joint representations of dialog states are important for deciding dialog actions, the challenge takes both into consideration. Each joint representation is an assignment of values to all slots. Thus there are 9 marginal outputs and 1 joint output in total, which are all evaluated separately.
The dialog tracker receives the SLU N-best hypotheses for each user turn, each with a confidence score. In general, there are a large number of values for each slot, and the coverage of N-best hypotheses is good, thus the challenge confines its determination of whether a goal has been reached to slots and values that have been observed in an SLU output. By exploiting this aspect, the task of a dialog state tracker is to generate a set of observed slot and value pairs, with a score between 0 and 1. The sum of all scores is restricted to sum to 1.0. Thus 1.0 minus the total score is defined as the score of a special value None that indicates the user's goal has not yet appeared on any SLU output.

Figure 1: An example of dialog state tracking for the Route slot. At each turn the system asks for the user's goal or attempts to confirm one of the hypotheses. The user's utterance is recognized to output an N-best list. The SLU module generates semantic inputs to the dialog manager by parsing the N-best hypotheses. Each SLU hypothesis receives a confidence score. From the current turn's SLU hypotheses and all previous ones thus far, the dialog state tracker computes a probability distribution over a set of dialog state hypotheses. Note that the number of hypotheses in a dialog state can be different from the number of SLU hypotheses, e.g., 3 and 5 respectively at turn t+1.
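To make this output representation concrete, here is a small illustrative sketch (not from the paper; the function name and slot values are invented) of how a tracker's scored hypotheses can be completed with the special None value so that the distribution sums to 1.0:

```python
def complete_with_none(hypothesis_scores):
    """Given tracker scores for observed slot-value pairs (each in [0, 1]),
    add the special value None so that the distribution sums to 1.0.

    `hypothesis_scores` maps a slot value (e.g. "61C") to its score.
    """
    total = sum(hypothesis_scores.values())
    if total > 1.0:  # a well-formed tracker output never exceeds 1.0
        raise ValueError("hypothesis scores must sum to at most 1.0")
    completed = dict(hypothesis_scores)
    completed[None] = 1.0 - total  # score that the goal has not yet appeared
    return completed

print(complete_with_none({"61C": 0.6, "28X": 0.1}))
# {'61C': 0.6, '28X': 0.1, None: 0.3}
```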
Evaluation Metrics
To evaluate tracker output, the correctness of each hypothesis is labeled at each turn. Then hypothesis scores and labels over the entire dialogs are collected to compute 11 metrics:
Accuracy measures the ratio of states under evaluation where the top hypothesis is correct. ROC.V1 computes the following quantity:

CA.V1(s) = TA(s) / N

where N is the total number of top hypotheses over the entire data and TA(s) denotes the number of correctly accepted top hypotheses with the threshold being set to s. Similarly FA denotes false-accepts and FR false-rejects. From these quantities, several metrics are derived. ROC.V1.EER computes FA.V1(s) where FA.V1(s) = FR.V1(s). The metrics ROC.V1.CA05, ROC.V1.CA10, and ROC.V1.CA20 compute CA.V1(s) when FA.V1(s) = 0.05, 0.10, and 0.20 respectively. These metrics measure the quality of the score via plotting accuracy with respect to false-accepts, so that they may reflect not only accuracy but also discrimination. ROC.V2 computes the conventional ROC quantity:

CA.V2(s) = TA(s) / (TA(s) + FR(s))
ROC.V2.CA05, ROC.V2.CA10, and ROC.V2.CA20 do the same as the V1 versions. These metrics measure the discrimination of the score for the top hypothesis independently of accuracy.
Note that Accuracy and ROC curves do not take into consideration non-top hypotheses while the following measures do.
L2 calculates the Euclidean distance between the vector consisting of the scores of all hypotheses and a zero vector with 1 in the position of the correct one. This measures the quality of tracker's output score as probability. AvgP indicates the averaged score of the correct hypothesis. Note that this measures the quality of the score of the correct hypothesis, ignoring the scores assigned to incorrect hypotheses. MRR denotes the mean reciprocal rank of the correct hypothesis. This measures the quality of rank instead of score.
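As a worked illustration of these definitions, the following sketch computes Accuracy, L2, AvgP and MRR for a single evaluated turn. The dictionary-based representation of tracker output is an assumption made for this example, not the challenge's file format:

```python
import math

def turn_metrics(scores, correct):
    """Per-turn tracker metrics, assuming `scores` maps each dialog-state
    hypothesis to its score and `correct` is the correct hypothesis."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    accuracy = 1.0 if ranked[0] == correct else 0.0
    # L2: distance between the score vector and the 0/1 indicator vector.
    l2 = math.sqrt(sum((scores[h] - (1.0 if h == correct else 0.0)) ** 2
                       for h in scores))
    avgp = scores.get(correct, 0.0)        # score of the correct hypothesis
    rank = ranked.index(correct) + 1 if correct in ranked else None
    mrr = 1.0 / rank if rank else 0.0      # reciprocal rank of the correct one
    return accuracy, l2, avgp, mrr

print(turn_metrics({"61C": 0.6, "28X": 0.1, None: 0.3}, "61C"))
```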
As far as evaluation schedule is concerned, there are three schedules for determining which turns to include in each evaluation.
Schedule 1: Include all turns. This schedule allows us to account for changes in concepts that are not in focus. But this makes across-concept comparison invalid, since different concepts appear at different times in a dialog.

Schedule 2: Include a turn for a given concept only if that concept either appears on the SLU N-best list in that turn, or if the system's action references that concept in that turn. Unlike schedule 1, this schedule makes comparisons across concepts valid but cannot account for changes in concepts which are not in focus.

Schedule 3: Include only the turn before the system starts over from the beginning, and the last turn of the dialog. This schedule does not consider what happens during a dialog.
Extrinsic Evaluation Using Off-Policy Reinforcement Learning
In this section, we present a corpus-based method for extrinsic evaluation of dialog state tracking. Thanks to the corpus-based design where outputs of various trackers with different characteristics are involved, it is possible to examine how the differences between trackers affect the performance of learned policies. The performance of a learned policy is measured by the expected return at the initial state of a dialog which is one of the common performance measures for episodic tasks.
Off-Policy RL on Fixed Data
To learn an optimal policy from fixed data, we applied a state-of-the-art kernelized off-policy RL method. Off-policy RL methods allow for optimization of a policy by observing how other policies behave. The policy used to control the system's behavior is called the Behavior policy. As far as a specific algorithm is concerned, we have adopted Least-Squares Temporal Difference (LSTD) (Bradtke and Barto, 1996) for policy evaluation and Least-Squares Policy Iteration (LSPI) (Lagoudakis and Parr, 2003) for policy learning. LSTD and LSPI are well known to be sample efficient, thus easily lending themselves to the application of RL to fixed data. LSPI is an instance of Approximate Policy Iteration, where an approximated action-state value function (a.k.a. Q function) is established for the current policy and an improved policy is formed by taking greedy actions with respect to the estimated Q function. The process of policy evaluation and improvement iterates until convergence. For value function approximation, in this work, we adopted the following linear approximation architecture:

Q^(s, a) = θᵀ φ(s, a)

where θ is the set of parameters, φ(s, a) an activation vector of basis functions, s a state and a an action. Given a policy π and a set of state transitions (s, a, r, s'), where r is the reward that the system would get from the environment by executing action a at state s, the approximated state-action value function Q^ is estimated by LSTD. The most important part of LSTD lies in the computation of the gradient of the temporal difference:
Δφ = φ(s, a) - γ φ(s', π(s'))

In LSPI, π(s) takes the form of a greedy policy:

π(s) = argmax over a of Q^(s, a)
It is, however, critical to take into consideration the inherent problem of insufficient exploration in fixed data to avoid overfitting (Henderson et al., 2008). Thus we confined the set of available actions at a given state to the ones that have an occurrence probability greater than some threshold ε:

π(s) = argmax over {a : P(a|s) > ε} of Q^(s, a)

The conditional probability P(a|s) can be easily estimated by any conventional classification method which provides posterior probabilities. In this study, we set ε to 0.1.
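The following is a minimal sketch of this procedure, assuming transitions of the form (s, a, r, s') and a user-supplied feature map phi; it is not the author's implementation, and the discount factor and the small ridge term are illustrative choices:

```python
import numpy as np

def lstd(transitions, phi, policy, n_features, gamma=0.95):
    """One LSTD policy-evaluation step over fixed transitions (s, a, r, s')."""
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for s, a, r, s_next in transitions:
        x = phi(s, a)
        x_next = phi(s_next, policy(s_next)) if s_next is not None else 0.0
        A += np.outer(x, x - gamma * x_next)  # gradient of the temporal difference
        b += x * r
    return np.linalg.solve(A + 1e-6 * np.eye(n_features), b)

def restricted_greedy(theta, phi, actions, action_prob, eps=0.1):
    """Greedy policy restricted to actions with occurrence probability > eps."""
    def policy(s):
        admissible = [a for a in actions if action_prob(a, s) > eps]
        return max(admissible, key=lambda a: theta @ phi(s, a))
    return policy
```

An LSPI loop would alternate `lstd` and `restricted_greedy` until the policy stops changing.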
State Representation and Basis Function
In order to make the process of policy optimization tractable, the belief state is normally mapped to an abstract space by only taking crucial information for dialog action selection, e.g., the beliefs of the top and second hypotheses for a concept. Similarly, the action space is also mapped into a smaller space by only taking the predicate of an action. In this work, the simplified state includes the following elements:
- The scores of the top hypothesis for each concept, with None excluded
- The scores of the second hypothesis for each concept, with None excluded
- The scores assigned to None for each concept
- Binary indicators for a concept if there are hypotheses other than None
- The values of the top hypothesis for each concept
- A binary indicator if the user affirms when the system asks a yes-no question for the next bus

It has been shown that the rapid learning speed of recent approaches is partly attributed to the use of kernels as basis functions (Gasic et al., 2010; Lee and Eskenazi, 2012). Thus, to make the best of the limited amount of data, we adopted a kernelized approach. Similar to previous studies, we used a product of kernel functions:
K(z, z') = K_c(x, x') · ∏ K_d(d_i, d_i')

where K_c(x, x') is responsible for the vector of continuous elements of a state and K_d(d_i, d_i') for each discrete element. For the continuous elements, we adopted Gaussian kernels:

K_c(x, x') = p · exp(-‖x - x'‖² / (2σ²))

where p governs the value at the center, σ controls the width of the kernel, and x represents the vector of continuous elements of a state. In the experiments, p and σ were set to 4 and 3, respectively. For a discrete element, we adopted a delta kernel:

K_d(d, d') = δ(d, d')

where δ(d, d') returns one if d = d', zero otherwise, and d represents a discrete element of a state.
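A sketch of this product kernel, under the assumption that a state-action pair is represented as a vector of continuous beliefs plus a tuple of discrete features (the variable names are invented, and the exact placement of p and σ is reconstructed from the garbled formula above):

```python
import numpy as np

def product_kernel(z1, z2, p=4.0, sigma=3.0):
    """Kernel over simplified state-action pairs: a Gaussian kernel on the
    continuous part times delta kernels on each discrete element.
    z = (x, d), with x a vector of beliefs and d a tuple of discrete features."""
    x1, d1 = z1
    x2, d2 = z2
    gauss = p * np.exp(-np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
                       / (2.0 * sigma ** 2))
    same_discrete = all(a == b for a, b in zip(d1, d2))  # product of deltas
    return gauss if same_discrete else 0.0

z_a = ((0.7, 0.2), ("61C", True))
z_b = ((0.6, 0.3), ("61C", True))
print(product_kernel(z_a, z_b))
```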
As the number of data points increases, kernelized approaches commonly encounter severe computational problems. To address this issue, it is necessary to limit the active kernel functions being used for value function approximation. This sparsification process has to find a sufficient number of kernels which keeps a good balance between computational tractability and approximation quality. We adopted a simple sparsification method which was commonly used in previous studies (Engel et al., 2004). The key intuition behind the sparsification method is that there is a mapping φ(z) to a Hilbert space in which the kernel function K(z, z') is represented as the inner product of φ(z) and φ(z') by Mercer's theorem. Thus the kernel-based representation of the Q function can be restated as a plain linear equation in the Hilbert space:

Q^(z) = Σ α_i K(z_i, z) = ⟨φ(z), Σ α_i φ(z_i)⟩

where z denotes a pair of state and action. The term Σ α_i φ(z_i) plays the role of the weight vector in the Hilbert space. Since this term takes the form of a linear combination, we can safely remove any linearly dependent φ(z_i) without changing the weighted sum by tuning the α_i. It is known that the linear dependence of φ(z_t) on the rest can be tested based on kernel functions as follows:

min over c of ‖ Σ c_j φ(z_j) - φ(z_t) ‖² = K(z_t, z_t) - k_tᵀ c ≤ ν    (1)

where (k_t)_j = K(z_j, z_t) and ν is a sparsification threshold. When equation 1 is satisfied, φ(z_t) can be safely removed from the set of basis functions. Thus the sparsity can be controlled by changing ν. It can be shown that the left-hand side of equation 1 is minimized when c = K̃⁻¹ k_t, where K̃ is the Gram matrix excluding z_t. In the experiments, ν was set to 3.
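A minimal sketch of this dictionary-construction test, following the approximate-linear-dependence idea of Engel et al. (2004); the small ridge term added to the Gram matrix is an implementation convenience, not part of the paper:

```python
import numpy as np

def build_dictionary(samples, kernel, nu):
    """Keep a sample as a basis point only if it cannot be approximated by
    the current dictionary within tolerance nu (equation (1))."""
    dictionary = []
    for z in samples:
        if not dictionary:
            dictionary.append(z)
            continue
        K = np.array([[kernel(zi, zj) for zj in dictionary] for zi in dictionary])
        k_t = np.array([kernel(zi, z) for zi in dictionary])
        c = np.linalg.solve(K + 1e-9 * np.eye(len(dictionary)), k_t)
        residual = kernel(z, z) - k_t @ c  # left-hand side of equation (1)
        if residual > nu:
            dictionary.append(z)
    return dictionary
```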
Reward Function
The reward function is defined following a common approach to form-filling, task-oriented systems:
- Every correct concept filled is rewarded 100
- Every incorrect concept filled is assigned -200
- Every empty concept is assigned -300 if the system terminated the session, -50 otherwise
- At every turn, -20 is assigned
The reward structure is carefully designed such that the RL algorithm cannot find a way to maximize the expected return without achieving the user goal.
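A small sketch of this reward computation, with a hypothetical per-dialog representation of which concepts ended up correct, incorrect or empty (the per-turn penalty of -20 would be added separately for each turn):

```python
def dialog_reward(filled, terminated):
    """Reward for one dialog outcome. `filled` maps each concept to
    'correct', 'incorrect' or 'empty'."""
    reward = 0
    for status in filled.values():
        if status == "correct":
            reward += 100
        elif status == "incorrect":
            reward += -200
        else:  # empty concept
            reward += -300 if terminated else -50
    return reward

print(dialog_reward({"route": "correct", "date": "empty"}, terminated=True))  # -200
```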
Experimental Setup
In order to see the relationship between the performance of dialog state tracking and that of the optimized policy, we applied the off-policy RL method presented in Section 3 to the outputs of each tracker for all four DSTC test datasets.² The summary statistics of the datasets are presented in Table 1. In addition, to quantify the impact of dialog state tracking on an end-to-end dialog, the performance of policies optimized by RL was compared with Behavior policies and another set of learned policies using supervised learning (SL). Note that Behavior policies were developed by experts in spoken dialog research.

²We took the entry from each team that achieved the highest ranks of that team in the largest number of evaluation metrics: entry2 for team3 and team6, entry3 for team8, entry4 for team9, and entry1 for the rest of the teams. We were not, however, able to process the tracker output of team2 due to its large size. This does not negatively impact the general results of this paper.

              # Dialogs            # Turns
          Training   Test     Training   Test
   DS1       274      312       2594     2168
   DS2       321      339       3394     2579
   DS3       277      286       2221     1988
   DS4       141      165       1060      979

Table 1: The DSTC test datasets (DS1-4) were evenly divided into two groups of datasets for off-policy RL training and test. To simplify the analysis, the dialogs that include startover and canthelp were excluded.

The use of a learned policy using supervised
learning (Hurtado et al., 2005) is also one of the common methods of spoken dialog system development. We exploited the SVM method with the same kernel functions as defined in Section 3.2 except that the action element is not included. The posterior probability of the SVM model was also used for handling the insufficient exploration problem (in Section 3.1).
Results and Discussion
The comparative results between RL, SL and Behavior policies are plotted in Fig. 2. Despite the relatively superior performance of SL policies over Behavior policies, the performance improvement is neither large nor constant. This confirms that Behavior policies are very strong baselines which were designed by expert researchers. RL policies, however, consistently outperformed Behavior as well as SL policies, with a large performance gap. This result indicates that the policies learned by the proposed off-policy RL method are a lot closer to optimal ones than the hand-crafted policies created by human experts. Given that many state features are derived from the belief state, the large improvement in performance implies that the estimated belief state is indeed a good summary representation of a state, maintaining the Markov property between states. The Markov property is a crucial property for RL methods to approach to the optimal policy. On the other hand, most of the dialog state trackers surpassed the baseline tracker (team0) in the performance of RL policies. This result assures that the better the performance in dialog state tracking, the better a policy we can learn in the policy optimization stage. Given these two results, we can strongly assert that dialog state tracking plays a key role in enhancing end-to-end dialog performance.
Another interesting result worth noticing is that the performance of RL policies does not exactly align with the accuracy measured at the end of a dialog (Schedule 3) which would have been the best metric if the task were a one-time classification (Fig. 2). This misalignment therefore supports the speculation that accuracy-schedule3 might not be the most appropriate metric for predicting the effect of dialog state tracking on end-to-end dialog performance. In order to better understand What To Measure and When To Measure to predict end-to-end dialog performance, a correlation analysis was carried out between the performance of RL policies and that of the dialog state tracking measured by different metrics and schedules. The correlations are listed in descending order in Fig. 3. This result reveals several interesting insights for different metrics.
First, metrics which are intended to measure the quality of a tracker's score (e.g., L2 and AvgP) are more correlated than other metrics. This tendency can be understood as a consequence of the sequential decision-making nature of a dialog task. A dialog system can always initiate an additional turn, unless the user terminates the session, to refine its belief state when there is no dominant hypothesis. Thus accurate estimation of the beliefs of all observed hypotheses is essential. This is why the evaluation of only the top hypothesis does not provide sufficient information.
Second, schedule1 and schedule3 showed a stronger correlation than schedule2. In fact, schedule2 was more preferred in previous studies since it allows for a valid comparison of different concepts (Williams, 2013). This result can be explained by the fact that the best system action is selected by considering all of the concepts together. For example, when the system moves the conversation focus from one concept to another, the beliefs of the concepts that are not in focus are as important as the concept in focus. Thus evaluating all concepts at the same time is more suitable for predicting the performance of a sequential decision-making task involving multiple concepts in its state.
Finally, metrics for evaluating discrimination quality (measured by ROC.V2) have little correlation with end-to-end dialog performance. In order to understand this relatively unexpected result, we need to give deep thought to how the scores of a hypothesis are distributed during the session. For example, the score of a true hypothesis usually starts from a small value due to the uncertainty of ASR output and gets bigger every time positive evidence is observed. The score of a false hypothesis usually stays small or medium. This leads to a situation where both true and false hypotheses are pretty much mixed in the zone of small and medium scores without significant discrimination. It is, however, very important for a metric to reveal a difference between true and false hypotheses before their scores fully arrive at sufficient certainty since most additional turns are planned for hypotheses with a small or medium score. Thus general metrics evaluating discrimination alone are hardly appropriate for a tracking problem where the score develops gradually. Furthermore, the choice of threshold (i.e. FA = 0.05, 0.10, 0.20) was made to consider relatively unimportant regions where the true hypothesis is likely to have a higher score, meaning that no further turns need to be planned.
Conclusion
In this paper, we have presented a corpus-based study that attempts to answer two fundamental questions which, so far, have not been rigorously addressed: "Would better performance in dialog state tracking translate to better performance of the optimized policy by RL?" and "Which evaluation metric and schedule would best predict improvement in overall dialog performance?" The result supports a positive answer to the first question. Thus the effort to separately improve the performance of dialog state tracking as carried out in the recently held DSTC may be justified. As a way to address the second question, the correlations of different metrics and schedules with the performance of optimized policies were computed. The results revealed several insightful conclusions: 1) metrics measuring score quality are more suitable for predicting the performance of an optimized policy; 2) evaluation of all concepts at the same time is more appropriate for predicting the performance of a sequential decision-making task involving multiple concepts in its state; 3) metrics evaluating only discrimination (e.g., ROC.V2) are inappropriate for a tracking problem where the score gradually develops. Interesting extensions of this work include finding a composite measure of conventional metrics to obtain a better predictor. A data-driven composition may tell us the relative empirical importance of each metric. In spite of several factors which generalize our conclusions, such as handling insufficient exploration, the use of separate test sets and various mismatches between test sets, it is still desirable to run the different policies in live tests in the future. Also, since the use of an approximate policy evaluation method (e.g. LSTD) can introduce systematic errors, more deliberate experimental setups will be designed for a future study: 1) the application of different RL algorithms for training and test datasets, and 2) further experiments on different datasets, e.g., the datasets for DSTC2 (Henderson et al., 2014). Although the state representation adopted in this work is quite common for most systems that use a POMDP model, different state representations could possibly reveal new insights.
Figure 2: The left vertical axis is associated with the performance plots of RL, SL and Behavior policies for each team. The right vertical axis measures the accuracies of each team's tracker at the end of a dialog (schedule 3).

Figure 3: The correlations of each combination of metric and schedule with the performance of optimized policies.
References

A. Black et al. 2011. Spoken dialog challenge 2010: Comparison of live and control test results. In Proceedings of SIGDIAL.

S. Bradtke and A. Barto. 1996. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22(1-3):33-57.

Y. Engel, S. Mannor and R. Meir. 2004. The kernel recursive least-squares algorithm. IEEE Transactions on Signal Processing, 52:2275-2285.

M. Gasic and S. Young. 2011. Effective handling of dialogue state in the hidden information state POMDP-based dialogue manager. ACM Transactions on Speech and Language Processing, 7(3).

M. Gasic, F. Jurcicek, S. Keizer, F. Mairesse, B. Thomson, K. Yu and S. Young. 2010. Gaussian processes for fast policy optimisation of POMDP-based dialogue managers. In Proceedings of SIGDIAL.

J. Henderson, O. Lemon and K. Georgila. 2008. Hybrid reinforcement/supervised learning of dialogue policies from fixed data sets. Computational Linguistics, 34(4):487-511.

M. Henderson, B. Thomson and J. Williams. 2014. The second Dialog State Tracking Challenge. In Proceedings of SIGDIAL.

L. Hurtado, D. Grial, E. Sanchis and E. Segarra. 2005. A stochastic approach to dialog management. In Proceedings of ASRU.

M. Lagoudakis and R. Parr. 2003. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149.

S. Lee. 2013. Structured discriminative model for dialog state tracking. In Proceedings of SIGDIAL.

S. Lee and M. Eskenazi. 2012. Incremental sparse Bayesian method for online dialog strategy learning. IEEE Journal of Selected Topics in Signal Processing, 6(8).

S. Lee and M. Eskenazi. 2013. Recipe for building robust spoken dialog state trackers: Dialog State Tracking Challenge system description. In Proceedings of SIGDIAL.

O. Pietquin, M. Geist, S. Chandramohan and H. Frezza-Buet. 2011. Sample efficient batch reinforcement learning for dialogue management optimization. ACM Transactions on Speech and Language Processing, 7(3).

O. Pietquin, M. Geist and S. Chandramohan. 2011. Sample efficient on-line learning of optimal dialogue policies with Kalman temporal differences. In Proceedings of IJCAI.

A. Raux, B. Langner, D. Bohus, A. W. Black and M. Eskenazi. 2005. Let's Go Public! Taking a spoken dialog system to the real world. In Proceedings of Interspeech.

J. Williams. 2013. Multi-domain learning and generalization in dialog state tracking. In Proceedings of SIGDIAL.

J. Williams, A. Raux, D. Ramachandran and A. Black. 2013. The Dialog State Tracking Challenge. In Proceedings of SIGDIAL.

S. Young, M. Gasic, B. Thomson and J. Williams. 2013. POMDP-based statistical spoken dialogue systems: A review. Proceedings of the IEEE, 101(5):1160-1179. |
253,628,237 | [] | Department of Data Science
Chunghwa Telecom Laboratories
Taoyuan, Taiwan
Soochow University
Taipei, Taiwan
The LSTM update is given by equations (1) to (6) below, where (1), (2) and (5) are the Input Gate, Forget Gate and Output Gate respectively, c_t is the memory and h_{t-1} is the previous hidden state:

i_t = σ(W_i · h_{t-1} + U_i · x_t + b_i)    (1)
f_t = σ(W_f · h_{t-1} + U_f · x_t + b_f)    (2)
c̃_t = tanh(W_c · h_{t-1} + U_c · x_t + b_c)    (3)
c_t = f_t × c_{t-1} + i_t × c̃_t    (4)
o_t = σ(W_o · h_{t-1} + U_o · x_t + b_o)    (5)
h_t = o_t × tanh(c_t)    (6)

The Forget Gate decides how much of the old memory to discard, while the Input Gate decides how much new information to take from the candidate memory c̃_t into c_t, which becomes the memory for the next step; the two gates therefore act independently. Because c_t can fall outside the range of minus one to one, it is normalized by tanh(c_t) before being multiplied by the Output Gate to form the new hidden state. The gate parameters, together with the current input x_t and the previous output h_{t-1}, determine the update. With these gating mechanisms, an LSTM can remember long-range information and avoids vanishing and exploding gradients. A BiLSTM is a bidirectional recurrent neural network (Schuster & Paliwal, 1997) used to learn the interdependencies in a sequence, giving it an ability similar to a hidden Markov model: the input, forget and output gate weights are trained to learn which information in the sequence should be attended to, and during training information from both ends of the sequence is used, with updates propagated in both directions (Graves & Schmidhuber, 2005). In other words, we predict using information from both the following and the preceding characters. Our task is not to predict the next character but to analyze the whole sentence, with each character carrying output information from both temporal directions, so a BiLSTM is the best choice for this task.
Conditional Random Field
The conditional random field (CRF) is widely used for all kinds of labeling problems; here it is used for entity labeling. Unlike other models, it directly models the conditional probability distribution of the state sequence (the entity label sequence Y) given the observation sequence (the tokenized sentence X), using the Hammersley-Clifford theorem, with the log-likelihood as the loss function. A basic conditional random field is defined as follows. Let X and Y be random variables and P(Y|X) the conditional probability distribution of Y given X. If the random variable Y forms a Markov random field represented by an undirected graph G = (V, E), that is, if

P(Y_v | X, Y_w, w ≠ v) = P(Y_v | X, Y_w, w ~ v)    (7)

holds for every vertex v, then the conditional distribution P(Y|X) is called a conditional random field, where w ~ v ranges over all vertices connected to v by an edge in G = (V, E), w ≠ v ranges over all vertices other than v, and Y_v, Y_w are the random variables associated with vertices v and w.

In practice, the linear-chain conditional random field is the most widely used form, and X and Y are generally assumed to have the same graph structure. It is defined as follows. Let X = (X_1, X_2, ..., X_n) and Y = (Y_1, Y_2, ..., Y_n) be linearly represented random variable sequences. If, given the sequence X, the conditional distribution P(Y|X) of the sequence Y satisfies the Markov property

P(Y_i | X, Y_1, ..., Y_{i-1}, Y_{i+1}, ..., Y_n) = P(Y_i | X, Y_{i-1}, Y_{i+1})    (8)

for i = 1, 2, ..., n (with only one side considered at i = 1 and i = n), then P(Y|X) is called a linear-chain conditional random field. Given that the random variable X takes the value x, the conditional probability that Y takes the value y has the form

P(y|x) = (1 / Z(x)) · exp( Σ_{i,k} λ_k t_k(y_{i-1}, y_i, x, i) + Σ_{i,l} μ_l s_l(y_i, x, i) )    (9)

which gives, for an input sequence x, the conditional probability of a predicted output sequence y, where Z(x) is a normalization factor and t_k and s_l are feature functions, i.e. binary functions taking the value 0 or 1:

Z(x) = Σ_y exp( Σ_{i,k} λ_k t_k(y_{i-1}, y_i, x, i) + Σ_{i,l} μ_l s_l(y_i, x, i) )    (10)

In other words, a feature function takes the value 1 when its feature condition is satisfied and 0 otherwise. t_k is a feature function defined on edges, called a transition feature, which depends on the current and the previous position:

t_k(y_{i-1}, y_i, x, i) = 1 if the feature condition holds, 0 otherwise    (11)
The other, s_l, is a feature function defined on nodes, called a state feature, which depends on the current position:

s_l(y_i, x, i) = 1 if the feature condition holds, 0 otherwise    (12)

λ_k and μ_l are the corresponding weights. The transition features and state features are then combined, trained with the log-likelihood objective, and the Viterbi algorithm is used to obtain the best result.
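As an illustration of how such a model is decoded at prediction time, here is a minimal NumPy sketch of Viterbi decoding for a linear-chain CRF; the random emission and transition scores and the three-tag set are placeholders, not the trained model:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best label sequence for a linear-chain CRF.
    emissions:   (T, K) state-feature scores per position and tag
    transitions: (K, K) transition-feature scores between consecutive tags"""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t]  # (K, K)
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t][best[-1]]))
    return best[::-1]

tags = ["O", "B-SYMP", "I-SYMP"]
emis = np.random.randn(4, 3)   # scores for a 4-character sentence
trans = np.random.randn(3, 3)
print([tags[i] for i in viterbi(emis, trans)])
```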
In this study, we submitted three named entity recognition models and applied them to the medical domain. The named entity categories are body (BODY), symptom (SYMP), medical instrument (INST), examination (EXAM), chemical (CHEM), disease (DISE), drug (DRUG), supplement (SUPP), treatment (TREAT) and time (TIME). The data are labeled in BIO format. For example, "肌肉" (muscle) is labeled "B-BODY" and "I-BODY", "咳嗽" (cough) is "B-SYMP" and "I-SYMP", and so on. Characters outside these categories are all labeled "O".
The data are divided into training data (train.json) with 28,161 sentences and test data (test.json) with 2,531 sentences and 7,305 named entities. Since the organizers ultimately provide a further 3,204 sentences as the final test set, we can use the HealthNER test data (test.json) as the validation set (dev) for our models.

3 Proposed Method
3.1 Embedding method
PyTorch's embedding mechanism for converting tokens to word vectors is a simple lookup table; such a module is typically used to store word embeddings and retrieve them by index. The input to the module is a list of indices and the output is the corresponding word embeddings. Its learnable weights are initialized from a normal distribution with mean 0 and variance 1. The input is a tensor of index values and the output is a tensor with the same leading dimensions as the input.

After obtaining the word vectors, we use self-defined special symbols as mask units: [UNK] for unknown tokens, [PAD] for padding, [START] for the beginning of text and [END] for the end of text, four special symbols in total. Each sentence is converted into a complete mask as described above, providing the annotations used as labels, and this serves as the input to the next step.
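A minimal PyTorch sketch of this lookup-table embedding with the four special symbols; the toy vocabulary and the maximum length are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary: the 4 special symbols plus characters seen in training.
vocab = {"[PAD]": 0, "[UNK]": 1, "[START]": 2, "[END]": 3, "肌": 4, "肉": 5}
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=210,
                         padding_idx=vocab["[PAD]"])  # weights ~ N(0, 1) by default

def encode(sentence, max_len=10):
    ids = [vocab["[START]"]] + [vocab.get(ch, vocab["[UNK]"]) for ch in sentence]
    ids.append(vocab["[END]"])
    ids += [vocab["[PAD]"]] * (max_len - len(ids))  # pad to equal length
    return torch.tensor(ids)

vectors = embedding(encode("肌肉"))
print(vectors.shape)  # torch.Size([10, 210])
```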
3.2 BERT and RoBERTa
BERT (Bidirectional Encoder Representations from Transformers) is a model that Google trained in an unsupervised manner on a large amount of unlabeled text. The training data come from a 2.5B-word Wikipedia corpus plus the 800M-word BookCorpus, with batches of 1024 sequences of length 128 or 256 sequences of length 512. BERT comes in two forms, BERT-Base (12-layer, 768-hidden, 12-head) and BERT-Large (24-layer, 1024-hidden, 16-head). BERT can perform analysis without labeled data or explanations. BERT is the first half of the Transformer, its encoder, and the attention mechanism is the core of the Transformer's front end: it mainly strengthens the semantic vectors, capturing the meaning a token contributes in different token combinations. Attention is therefore one of the main building blocks of BERT.

RoBERTa is one of the optimized models that followed BERT. Its optimization is mainly in performance, and it is used for classification and reading comprehension, with separate pretrained models for Chinese and for English. The English RoBERTa is trained mainly on Wikipedia and book corpora, while for Chinese we mainly use the RoBERTa-wwm-ext-large model released by the HIT-iFLYTEK joint laboratory (HFL) (Cui et al., 2020), which has been validated on the third-party Chinese benchmark CLUE. The CLUE benchmark contains six Chinese text classification datasets and three reading comprehension datasets, including the CMRC 2018 reading comprehension dataset released by HFL. On the current benchmark, the RoBERTa-wwm-ext-large model achieves the best overall results on both classification and reading comprehension tasks (Xu et al., 2020).
3.3 LSTM
LSTM was designed to address the shortcomings of RNNs, such as their inability to handle long sequences and temporal data accurately. An LSTM is composed of a memory cell and three gate structures: the Input Gate, the Forget Gate and the Output Gate. The Input Gate controls how much of the input enters, the Memory Cell stores values for use at the next step, the Output Gate emits the result, and the Forget Gate decides whether to keep or drop a feature. The idea of the LSTM is to feed the input through neural network layers to produce a result while remembering certain features along the way, and then to use this accumulated experience in later judgments; the gate equations are given in (1) to (6) above.
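A minimal sketch of a BiLSTM tagger of the kind described above (embedding dimension 210 as in RUN 1; the hidden size and the toy vocabulary size are illustrative choices, not the submitted configuration):

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Character embeddings -> BiLSTM -> per-token tag logits."""
    def __init__(self, vocab_size, num_tags, emb_dim=210, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)  # both directions concatenated

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)  # (batch, seq_len, num_tags)

# 21 tags = B/I for the 10 categories plus O.
logits = BiLSTMTagger(vocab_size=6000, num_tags=21)(torch.randint(0, 6000, (16, 250)))
print(logits.shape)  # torch.Size([16, 250, 21])
```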
4 Experimental Result
In this competition, the organizers allowed each team to submit three best prediction results. The following subsections describe the method and the relevant parameter settings used for each of our three submissions (RUNS).
4.1 BiLSTM+CRF (RUN 1)
RUN 1 uses the BiLSTM+CRF network model, which performs well on English NER. We use PyTorch's embedding mechanism to encode each Chinese character as a vector. For the parameter settings, Figure 1 shows that, with the other parameters fixed, a batch size of 32 gives a higher F1 score and recall than the other two values, so 32 is fixed for the embedding dim and max length experiments. Next, with batch size fixed at 32 and max length unchanged, an embedding dim of 210 gives a higher F1 score and recall than the other four values, so 210 is fixed for the max length experiment. Finally, with batch size 32 and embedding dim 210, a max length of 250 gives a higher F1 score and precision than the other two values, so 250 is chosen as the final max length. The best model therefore uses max length 250, batch size 32 and embedding dim 210. Measured on the final test set provided by the organizers, as shown in the BiLSTM+CRF (RUN 1) row of Table 1, the results are Accuracy 82.23%, Precision 55.96%, Recall 72.38% and F1 score 63.12%.
4.2 RoBERTa+BiLSTM + CRF (RUN 2)
RUN 2 uses the RoBERTa+BiLSTM+CRF model. We varied the sentence length and the batch size to decide which model trains to the best accuracy, using sentence lengths of 150, 200 and 250 characters and batch sizes of 16, 32 and 64. Our final model is trained with SGD (stochastic gradient descent) with a learning rate of 0.012 and weight decay of 1e-5, and a scheduler that decays the learning rate by a factor of 0.9 every two epochs. Measured on the final test set provided by the organizers, as shown in the RoBERTa+BiLSTM+CRF (RUN 2) row of Table 1, the results are Accuracy 91.56%, Precision 78.96%, Recall 78.21% and F1 score 78.58%.
4.3 BERT Token Classifier (RUN 3)
The difference between BERT for text classification and BERT for solving NER problems lies in how we set up the model's output. As shown in Figure 2, for a text classification problem we only use the embedding vector output of the special [CLS] token, whereas when BERT is used for the NER task we need to use the embedding vector outputs of all tokens. By classifying every token's embedding vector output, we can predict the named entity of each token.
Figure 1: Evaluation of the model with respect to batch size, dimension and length
Figure 2: BERT Token Classifier
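A minimal sketch of token classification with the CKIP pretrained model via the Hugging Face transformers API; the label list is reconstructed from the task description, and the classification head here is randomly initialized, so real use would require the fine-tuning described below:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# B/I tags for the 10 categories plus O (21 labels in total).
labels = ["O"] + [f"{p}-{c}" for c in
                  ["BODY", "SYMP", "INST", "EXAM", "CHEM",
                   "DISE", "DRUG", "SUPP", "TREAT", "TIME"] for p in ("B", "I")]

tokenizer = AutoTokenizer.from_pretrained("ckiplab/bert-base-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "ckiplab/bert-base-chinese", num_labels=len(labels))

inputs = tokenizer("常常咳嗽", return_tensors="pt")  # adds [CLS] and [SEP]
with torch.no_grad():
    logits = model(**inputs).logits  # one 21-way logit vector per token
print([labels[i] for i in logits.argmax(-1)[0].tolist()])
```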
RUN 3 uses the BERT Traditional Chinese pretrained model (ckiplab/bert-base-chinese) released by the Chinese Knowledge and Information Processing (CKIP) group at Academia Sinica (Yang and Ma, 2021), which produces a 768-dimensional output vector for every token of each training sentence. The output vectors are then fed into a linear classifier for classification. Before feeding the text into the model, however, we need to preprocess it: the input characters are converted to their corresponding IDs in the pretrained vocabulary, and special tokens ([CLS] and [SEP]) are added at the beginning and end of each sentence. Each sentence is then padded (PAD) to the same length; we set the maximum length to that of the longest sentence in the training set, 441, use batch_size = 16, and train with the AdamW optimizer. First, we trained the BERT_Based model on the train.json training data provided by the organizers and tested it on the test.json validation data. We found that the precision was only 69.55%, presumably because test.json contains new entities that do not appear in train.json. Drawing on the Data-Centric AI approach recently advocated by Andrew Ng, continuously improving data quality can improve a model's predictive ability. Since improving data quality is not a one-off task but a cycle of continuous improvement, we first trained a base model (BERT_Based) on the train.json data and then continued training a model (BERT_Cont) by fine-tuning the pretrained model with the test.json data added. Finally, we submitted this model's predictions on the organizers' test file as Run 3. The experimental results obtained against the golden answers provided by the organizers are shown in Table 1. Overall, the BERT_Cont model performs best, with Accuracy 93.10%, Precision 80.18%, Recall 78.30% and F1 score 79.23%, all higher than the other models.
5 Conclusion and future work
In the end, we used all 30,692 HealthNER sentences as the training and validation sets, and the 3,204 sentences provided by the organizers as the test set, to evaluate the three models. The experimental results show that RUN 1, which uses the BiLSTM+CRF network model, performs worst. RUN 2, which uses RoBERTa+BiLSTM+CRF with a Simplified Chinese pretrained model, already achieves good results. RUN 3, which uses the CKIP Traditional Chinese BERT Token Classifier, achieves the best system performance, with Accuracy 93.10%, Precision 80.18%, Recall 78.30% and F1 score 79.23%, all higher than the other models. This shows that, on this task, the pretrained-model approach outperforms the BiLSTM+CRF network model that performed well in the past. In future work, we plan to add a CRF layer on top of the Traditional Chinese BERT model to investigate whether performance can be further improved.
Model                                  Accuracy   Precision   Recall     F1
BiLSTM+CRF (RUN 1)                      82.23%     55.96%     72.38%    63.12%
RoBERTa+BiLSTM+CRF (RUN 2)              91.56%     78.96%     78.21%    78.58%
BERT_Based Token Classifier             91.75%     79.35%     76.24%    77.77%
BERT_Cont Token Classifier (RUN 3)      93.10%     80.18%     78.30%    79.23%

Table 1: Experimental results

References

Street Jr, Richard L. 2013. How clinician-patient communication contributes to health improvement: modeling pathways from talk to outcome. Patient Education and Counseling. 92(3): 286-291. https://doi.org/10.1016/j.pec.2013.05.004.

Weiner, Jonathan P. 2012. Doctor-patient communication in the e-health era. Israel Journal of Health Policy Research. 1(33): 1-7. https://doi.org/10.1186/2045-4015-1-33.
Azizi, S., Mustafa, B., Ryan, F., Beaver, Z., Freyberg, J., Deaton, J., Loh, A., Karthikesalingam, A., Kornblith, S., Chen, T., Vivek, N., and Norouzi, M.

Konam, S., and Rao, S. 2021. Abridge: A mission driven approach to machine learning for healthcare conversation. Journal of Commercial Biotechnology. 26(2): 62-66.

Patanwala, A. E., Sanders, A. B., Thomas, M. C., Acquisto, N. M., Weant, K. A., Baker, S. N., Merritt, E., and Erstad, B. L. 2012. A prospective, multicenter study of pharmacist activities resulting in medication error interception in the emergency department. Annals of Emergency Medicine. 59(5): 369-373. https://doi.org/10.1016/j.annemergmed.2011.11.013.

Huang, Z., Xu, W., and Yu, K. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991. https://doi.org/10.48550/arXiv.1508.01991.

Lee, L.-H., and Lu, Y. 2021. Multiple embeddings enhanced multi-graph neural networks for Chinese healthcare named entity recognition. IEEE Journal of Biomedical and Health Informatics. 25(7): 2801-2810. https://doi.org/10.1109/JBHI.2020.3048700.

Lee, L.-H., Chen, C.-Y., Yu, L.-C., and Tseng, Y.-H. 2022. Overview of the ROCLING 2022 shared task for Chinese healthcare named entity recognition. In Proceedings of the 34th Conference on Computational Linguistics and Speech Processing.

Cui, Y., Che, W., Liu, T., Qin, B., Wang, S., and Hu, G. 2020. Revisiting pre-trained models for Chinese natural language processing. arXiv preprint arXiv:2004.13922.

Xu, L., Hu, H., Zhang, X., Li, L., Cao, C., Li, Y., Xu, Y., Sun, K., Yu, D., Yu, C., Tian, Y., Dong, Q., Liu, W., Shi, B., Cui, Y., Li, J., Zeng, J., Wang, R., Xie, W., Li, Y., Patterson, Y., Tian, Z., Zhang, Y., Zhou, H., Liu, S., Zhao, Z., Zhao, Q., Yue, C., Zhang, X., Yang, Z., Richardson, K., and Lan, Z. 2020. CLUE: A Chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986. https://doi.org/10.48550/arXiv.2004.05986.

Schuster, M., and Paliwal, K. K. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing. 45(11): 2673-2681. https://doi.org/10.1109/78.650093.

Graves, A., and Schmidhuber, J. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks. 18(5-6): 602-610. https://doi.org/10.1016/j.neunet.2005.06.042.

Yang, Mu, and Ma, W.-Y. 2021. ckiplab/ckip-transformers. https://github.com/ckiplab/ckip-transformers |
||
6,396,023 | Free Indexation: Combinatorial Analysis and A Compositional Algorithm* | The principle known as 'free indexation' plays an important role in the determination of the referential properties of noun phrases in the principleand-parameters language framework. First, by investigating the combinatorics of free indexation, we show that the problem of enumerating all possible indexings requires exponential time. Secondly, we exhibit a provably optimal free indexation algorithm. | [
118033329
] | Free Indexation: Combinatorial Analysis and A Compositional Algorithm*
Sandiway Fong, internet: sandiway@ai.mit.edu
Artificial Intelligence Laboratory
MIT
545 Technology Square, Rm NE43-810
Cambridge, MA 02139
Free Indexation: Combinatorial Analysis and A Compositional Algorithm*
The principle known as 'free indexation' plays an important role in the determination of the referential properties of noun phrases in the principleand-parameters language framework. First, by investigating the combinatorics of free indexation, we show that the problem of enumerating all possible indexings requires exponential time. Secondly, we exhibit a provably optimal free indexation algorithm.
Introduction
In the principles-and-parameters model of language, the principle known as 'free indexation' plays an important part in the process of determining the referential properties of elements such as anaphors and pronominals. This paper addresses two issues. (1) We investigate the combinatorics of free indexation. By relating the problem to the n-set partitioning problem, we show that free indexation must produce an exponential number of referentially distinct phrase structures given a structure with n (independent) noun phrases. (2) We introduce an algorithm for free indexation that is defined compositionally on phrase structures. We show how the compositional nature of the algorithm makes it possible to incrementally interleave the computation of free indexation with phrase structure construction. Additionally, we prove the algorithm to be an 'optimal' procedure for free indexation. More precisely, by relating the compositional structure of the formulation to the combinatorial analysis, we show that the algorithm enumerates precisely all possible indexings, without duplicates.
Free Indexation
Consider the ambiguous sentence:
(1) John believes Bill will identify him

*The author would like to acknowledge Eric S. Ristad, whose interaction helped to motivate much of the analysis in this paper. Also, Robert C. Berwick, Michael B. Kashket, and Tanveer Syeda provided many useful comments on earlier drafts. This work is supported by an IBM Graduate Fellowship.
In (1), the pronominal "him" can be interpreted as being coreferential with "John", or with some other person not named in (1), but not with "Bill". We can represent these various cases by assigning indices to all noun phrases in a sentence together with the interpretation that two noun phrases are coreferential if and only if they are coindexed, that is, if they have the same index. Hence the following indexings represent the three coreference options for pronominal "him":¹
(2) a. John1 believes Bill2 will identify him1
    b. John1 believes Bill2 will identify him3
    c. *John1 believes Bill2 will identify him2
In the principles-and-parameters framework (Chomsky [3]), once indices have been assigned, general principles that state constraints on the locality of reference of pronominals and names (e.g. "John" and "Bill") will conspire to rule out the impossible interpretation (2c) while, at the same time, allow the other two (valid) interpretations. The process of assigning indices to noun phrases is known as "free indexation," which has the following general form:
(4) Assign indices freely to all noun phrases.²

In such theories, free indexation accounts for the fact that we have coreferential ambiguities in language. Other principles interact so as to limit the number of indexings generated by free indexation to those that are semantically well-formed.

¹Note that the indexing mechanism used above is too simplistic a framework to handle binding examples involving inclusion of reference such as: (3) a. We1 think that I1 will win b. We1 think that I2 will win c. *We1 like myself1 d. John told Bill that they should leave. Richer schemes that address some of these problems, for example, by representing indices as sets of numbers, have been proposed. See Lasnik [9] for a discussion on the limitations of, and alternatives to, simple indexation. Also, Higginbotham [7] has argued against coindexation (a symmetric relation), and in favour of directed links between elements (linking theory). In general, there will be twice as many possible 'linkings' as indexings for a given structure. However, note that the asymptotic results of Section 3 obtained for free indexation will also hold for linking theory.
In theory, since the indices are drawn from the set of natural numbers, there exists an infinite number of possible indexings for any sentence. However, we are only interested in those indexings that are distinct with respect to semantic interpretation. Since the interpretation of indices is concerned only with the equality (and inequality) of indices, there are only a finite number of semantically different indexings.³ For example, "John1 likes Mary2" and "John23 likes Mary4" are considered to be equivalent indexings. Note that the definition in (4) implies that "John believes Bill will identify him" has two other indexings (in addition to those in (2)):
(5) a. *John1 believes Bill1 will identify him1
    b. *John1 believes Bill1 will identify him2

In some versions of the theory, indices are only freely assigned to those noun phrases that have not been coindexed through a rule of movement (Move-α) (see Chomsky [3] (pg. 331)). For example, in "Who1 did John see [NP t]1?", the rule of movement effectively stipulates that "Who" and its trace noun phrase must be coreferential. In particular, this implies that free indexation must not assign different indices to "who" and its trace element. For the purposes of free indexation, we can essentially 'collapse' these two noun phrases, and treat them as if they were only one. Hence, this structure contains only two independent noun phrases.⁴
3 The Combinatorics of Free Indexation

In this section, we show that free indexation generates an exponential number of indexings in the number of independent noun phrases in a phrase structure. We achieve this result by observing that the problem of free indexation can be expressed in terms of a well-known combinatorial partitioning problem.

Consider the general problem of partitioning a set of n elements into m non-empty (disjoint) subsets. For example, a set of four elements {w, x, y, z} can be partitioned into two subsets in the following seven ways:

{w}{x, y, z}  {x}{w, y, z}  {y}{w, x, z}  {z}{w, x, y}
{w, x}{y, z}  {w, y}{x, z}  {w, z}{x, y}

The number of partitions obtained thus is usually represented using the notation {n m} (Knuth [8]). In general, the number of ways of partitioning n elements into m sets is given by the following recurrence (see Purdom & Brown [10] for a discussion of (6)):

(6)  {n m} = m · {n-1 m} + {n-1 m-1},  for n, m > 0

The number of ways of partitioning n elements into zero sets, {n 0}, is defined to be zero for n > 0 and one when n = 0. Similarly, {0 m}, the number of ways of partitioning zero elements into m sets, is zero for m > 0 and one when m = 0.

²The exact form of (4) varies according to different versions of the theory. For example, in Chomsky [4] (pg. 59), free indexation is restricted to apply to A-positions at the level of S-structure, and to A-positions at the level of logical form.
³In other words, there are only a finite number of equivalence classes on the relation 'same coreference relations hold.' This can easily be shown by induction on the number of indexed elements.
⁴Technically, "who" and its trace are said to form a chain. Hence, the structure in question contains two distinct chains.
We observe that the problem of free indexation may be expressed as the problem of assigning 1, 2, ..., n distinct indices to n noun phrases where n is the number of noun phrases in a sentence. Now, the general problem of assigning m distinct indices to n noun phrases is isomorphic to the problem of partitioning n elements into m non-empty disjoint subsets. The correspondence here is that each partitioned subset represents a set of noun phrases with the same index. Hence, the number of indexings for a sentence with n noun phrases is:

(7)  B_n = Σ over m = 1..n of {n m}

(The quantity in (7) is commonly known as Bell's Exponential Number B_n; see Berge [2].)
The recurrence relation in (6) has the following solution (Abramowitz [1]):

(8)  {n m} = (1/m!) · Σ over k = 0..m of (-1)^k · C(m, k) · (m-k)^n

Using (8), we can obtain a finite summation form for the number of indexings:

(9)  B_n = Σ over m = 1..n of (1/m!) · Σ over k = 0..m of (-1)^k · C(m, k) · (m-k)^n
It can also be shown (Graham [6]) that B_n is asymptotically equal to (10):

(10)  m_n^n · e^(m_n - n - 1)

where the quantity m_n is given by:

(11)  m_n · ln(m_n) = n - 1/2
That is, (10) is both an upper and lower bound on the number of indexings. More concretely, to provide some idea of how fast the number of possible indexings increases with the number of noun phrases in a phrase structure, the following table exhibits the values of (9) for the first dozen values of n:
   NPs   Indexings      NPs   Indexings
    1           1        7         877
    2           2        8        4140
    3           5        9       21147
    4          15       10      115975
    5          52       11      678570
    6         203       12     4123597
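These values can be reproduced directly from recurrences (6) and (7), as in the following sketch:

```python
def bell_numbers(n_max):
    """Bell numbers via recurrences (6) and (7): S[n][m] counts partitions of
    n elements into m non-empty subsets; B_n sums S[n][m] over m."""
    S = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    S[0][0] = 1
    for n in range(1, n_max + 1):
        for m in range(1, n + 1):
            S[n][m] = m * S[n - 1][m] + S[n - 1][m - 1]  # equation (6)
    return [sum(S[n][1:n + 1]) for n in range(1, n_max + 1)]

print(bell_numbers(12))
# [1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975, 678570, 4123597]
```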
A Compositional Algorithm
In this section, we will define a compositional algorithm for free indexation that provably enumerates all and only all the possible indexings predicted by the analysis of the previous section. The PO-PARSER is a parser based on a principles-and-parameters framework with a uniquely flexible architecture ([5]). In this parser, linguistic principles such as free indexation may be applied either incrementally as bottom-up phrase structure construction proceeds, or as a separate operation after the complete phrase structure for a sentence is recovered. The PO-PARSER was designed primarily as a tool for exploring how to organize linguistic principles for efficient processing. This freedom in principle application allows one to experiment with a wide variety of parser configurations.
Perhaps the most obvious algorithm for free indexation is, first, to simply collect all noun phrases occurring in a sentence into a list. Then, it is easy to obtain all the possible indexing combinations by taking each element in the list in turn, and optionally coindexing it with each element following it in the list. This simple scheme produces each possible indexing without any duplicates and works well in the case where free indexing applies after structure building has been completed.
The problem with the above scheme is that it is not flexible enough to deal with the case when free indexing is to be interleaved with phrase structure construction. Conceivably, one could repeatedly apply the algorithm to avoid missing possible indexings. However, this is very inefficient, that is, it involves much duplication of effort. Moreover, it may be necessary to introduce extra machinery to keep track of each assignment of indices in order to avoid the problem of producing duplicate indexings. Another alternative is to simply delay the operation until all noun phrases in the sentence have been parsed. (This is basically the same arrangement as in the non-interleaved case.) Unfortunately, this effectively blocks the interleaved application of other principles that are logically dependent on free indexation to assign indices. For example, this means that principles that deal with locality restrictions on the binding of anaphors and pronominals cannot be interleaved with structure building (despite the fact that these particular parser operations can be effectively interleaved).
An algorithm for free indexation that is defined compositionally on phrase structures can be effectively interleaved. That is, free indexing should be defined so that the indexings for a phrase are some function of the indexings of its sub-constituents. Then, coindexings can be computed incrementally for all individual phrases as they are built. Of course, a compositional algorithm can also be used in the non-interleaved case. Basically, the algorithm works by maintaining a set of indices at each sub-phrase of a parse tree.⁵ Each index set for a phrase represents the range of indices present in that phrase. For example, "Whoi did Johnj see ti?" has the phrase structure and index sets shown in Figure 1.
There are two separate tasks to be performed whenever two (or more) phrases combine to form a larger phrase.⁶ First, we must account for the possibility that elements in one phrase could be coindexed (cross-indexed) with elements from the other phrase. This is accomplished by allowing indices from one set to be (optionally) merged with distinct indices from the other set. For example, the phrases "[NP Johni]" and "[VP likes himj]" have index sets {i} and {j}, respectively. Free indexation must allow for the possibilities that "John" and "him" could be coindexed or maintain distinct indices. Cross-indexing accounts for this by optionally merging indices i and j. Hence, we obtain:

(12) a. Johni likes himi   (i merged with j)
     b. Johni likes himj   (i not merged with j)

Secondly, we must find the index set of the aggregate phrase. This is just the set union of the index sets of its sub-phrases after cross-indexation.
In the example, "John likes him", (12a) and (12b) have index sets {i} and {i, j}.
More precisely, let Ip be the set of all indices associated with the Binding Theory-relevant elements in phrase P.
Assume, without loss of generality, that phrase structures are binary branching.⁷ Consider a phrase P = [P X Y] with immediate constituents X and Y. Then:
1. Cross Indexing: Let ÎX represent those elements of IX which are not also members of IY, that is, (IX - IY). Similarly, let ÎY be (IY - IX).⁶

   (a) If either ÎX or ÎY are empty sets, then done.
   (b) Let x and y be members of ÎX and ÎY, respectively.
   (c) Either merge indices x and y or do nothing.
   (d) Repeat from step (1a) with ÎX - {x} in place of ÎX. Replace ÎY with ÎY - {y} if x and y have been merged.

2. Index Set Propagation: IP = IX ∪ IY.

⁶Some readers may realize that the algorithm must have an additional step in cases where the larger phrase itself may be indexed, for instance, as in [NPi [NPj John's ] mother]. In such cases, the third step is simply to merge the singleton set consisting of the index of the larger phrase with the result of cross-indexing in the first step.

The nondeterminism in step (1c) of cross-indexing will generate all and only all (i.e. without duplicates) the possible indexings. We will show this in two parts. First, we will argue that
1. Consider the definition of cross-indexing, ix represents those indices in X that do not appear in Y. (Similarly for iv.) Also, whenever two indices are merged in step (lb), they are 'removed' from ix and iv before the next iteration. Thus, in each iteration, z and y from step (lb) are 'new' indices that have not been merged with each other in a previous iteration. By induction on tree structures, it is easy to see that two distinct indices cannot be merged with each other more than once. Hence, the algorithm cannot generate duplicate indexings.
2. We now demonstrate why the algorithm generates exactly the correct number of indexings by means of a simple example. Without loss of generality, consider the right-branching phrase scheme shown in Figure 2. Observe that this recurrence relation has the same form as equation (6). Hence the algorithm generates exactly the same number of indexings as demanded by combinatorial analysis.
Conclusions
This paper has shown that free indexation produces an exponential number of indexings per phrase structure. This implies that all algorithms that compute free indexation, that is, assign indices, must also take at least exponential time. In this section, we will discuss whether it is possible for a principle-based parser to avoid the combinatorial 'blow-up' predicted by analysis. First, let us consider the question whether the 'full power' of the free indexing mechanism is necessary for natural languages. Alternatively, would it be possible to 'shortcut' the enumeration procedure, that is, to get away with producing fewer than B_n indexings? After all, it is not obvious that a sentence with a valid interpretation can be constructed for every possible indexing. However, it turns out (at least for small values of n; see Figures 5 and 6 below) that language makes use of every combination predicted by analysis. This implies that all parsers must be capable of producing every indexing, or else miss valid interpretations for some sentences.
There are B3 = 5 possible indexings for three noun phrases. Figure 5 contains example sentences for each possible indexing. 9 Similarly, there are fifteen possible indexings for four noun phrases. The corresponding examples are shown in Figure 6.
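The counts just quoted are the Bell numbers Bn. As a quick arithmetic check, here is a short sketch of ours (using the standard Bell-triangle construction, which is not part of the paper):

```python
# Bell numbers via the Bell triangle: each row starts with the last entry of
# the previous row, and each subsequent entry adds the entry above it.
def bell(n):
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[-1]

print([bell(n) for n in range(1, 5)])   # [1, 2, 5, 15] -> B3 = 5, B4 = 15
```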
Although it may be the case that a parser must be capable of producing every possible indexing, it does not necessarily follow that a parser must enumerate every indexing when parsing a particular sentence. In fact, for many cases, it is possible to avoid exhaustively exploring the search space of possibilities predicted by combinatorial analysis. To do this, basically we must know, a priori, what classes of indexings are impossible for a given sentence. By factoring in knowledge about restrictions on the locality of reference of the items to be indexed (i.e. binding principles), it is possible to explore the space of indexings in a controlled fashion. For example, although free indexation implies that there are five indexings for "John thought [S Tom forgave himself]", we can make use of the fact that "himself" must be coindexed with an element within the subordinate clause to avoid generating indexings in which "Tom" and "himself" are not coindexed.10 Note that the early elimination of ill-formed indexings depends crucially on a parser's ability to interleave binding principles with structure building. But, as discussed in Section 4, the interleaving of binding principles logically depends on the ability to interleave free indexation with structure building. Hence the importance of a formulation of free indexation, such as the one introduced in Section 4, which can be effectively interleaved.
4For expository reasons, we consider only pure indices. The actual algorithm keeps track of additional information, such as agreement features like person, number and gender, associated with each index. For example, irrespective of configuration, "Mary" and "him" can never have the same index.
[CP [NP whoi] [C' did [IP [NP Johnj] [VP see [NP ti]]]]]
(a) If either ĪX or ĪY are empty sets, then done.
(b) Let x and y be members of ĪX and ĪY, respectively.
(c) Either merge indices x and y or do nothing.
(d) Repeat from step (1a) with ĪX − {x} in place of ĪX. Replace ĪY with ĪY − {y} if x and y have been merged.
2. Index Set Propagation: IP = IX ∪ IY.
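To make the nondeterminism in steps (a)-(d) concrete, here is a minimal Python sketch (ours, not the paper's implementation) of the enumeration for a right-branching tree: NPs are folded in one at a time, and each fresh index is either merged with one existing index or left free, mirroring the choice in step (1c). An indexing is represented as a partition of the NPs, where NPs in the same block are coindexed.

```python
def enumerate_indexings(n):
    """Enumerate all indexings of n NPs over a right-branching tree."""
    partitions = [[{1}]]                       # indexings after the first NP
    for i in range(2, n + 1):
        nxt = []
        for blocks in partitions:
            nxt.append(blocks + [{i}])         # step (1c): do nothing, index stays free
            for k, b in enumerate(blocks):     # step (1c): merge with one existing index
                nxt.append(blocks[:k] + [b | {i}] + blocks[k + 1:])
        partitions = nxt
    return partitions

for n in range(1, 6):
    print(n, len(enumerate_indexings(n)))      # 1 2 5 15 52: the Bell numbers
```

Each partition is produced exactly once, matching the duplicate-freeness argument above.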
[Figure 4: Condensed decision tree.]
Now consider the decision tree shown in Figure 3 for computing the possible indexings of the right-branching tree in a bottom-up fashion. Each node in the tree represents the index set of the combined phrase depending on whether the noun phrase at the same level is cross-indexed or not. For example, {i} and {i, j} on the level corresponding to NPj are the two possible index sets for the phrase Pij. The path from the root to an index set contains arcs indicating what choices (either to coindex or to leave free) must have been made in order to build that index set. Next, let us just consider the cardinality of the index sets in the decision tree, and expand the tree one more level (for NPk) as shown in Figure 4. Informally speaking, observe that each decision tree node of cardinality i 'generates' i child nodes of cardinality i plus one child node of cardinality i + 1. Thus, at any given level, if the number of nodes of cardinality m is c_m, and the number of nodes of cardinality m − 1 is c_{m−1}, then at the next level down, there will be m·c_m + c_{m−1} nodes of cardinality m. Let c(n, m) denote the number of nodes at level n with cardinality m. Let the top level of the decision tree be level 1. Then:

$$c(n+1,\, m+1) = c(n, m) + (m+1)\, c(n, m+1) \qquad (13)$$

(To make the boundary cases match, just define c(0, 0) to be 1, and let c(0, m) = 0 and c(n, 0) = 0 for m > 0 and n > 0, respectively.)
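Tabulating the recurrence (13) with these boundary conditions (our verification sketch, not from the paper) and summing over all cardinalities m at a given level recovers the number of leaves of the decision tree, i.e. the Bell numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n, m):
    # boundary conditions: c(0, 0) = 1; c(0, m) = c(n, 0) = 0 for m, n > 0
    if n == 0 and m == 0:
        return 1
    if n == 0 or m == 0:
        return 0
    # recurrence (13), rewritten as c(n, m) = c(n-1, m-1) + m * c(n-1, m)
    return c(n - 1, m - 1) + m * c(n - 1, m)

for n in range(1, 6):
    print(n, sum(c(n, m) for m in range(1, n + 1)))   # 1 2 5 15 52
```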
9PRO is an empty (non-overt) noun phrase element.
(For the above example, the extra step is just to merge {i} with {j}.) For expository reasons, we will ignore such cases. Note that no loss of generality is implied, since a structure of the form [NPi [NPj ... α ...] ... β ...] can always be handled as [P1 [NPi] [P2 [NPj ... α ...] ... β ...]].
7The algorithm generalizes to n-ary branching using iteration. For example, a ternary branching structure such as [P X Y Z] would be handled in the same way as [P X [P' Y Z]].
8Note that ĪX and ĪY are defined purely for notational convenience. That is, the algorithm directly operates on the elements of IX and IY.
[Figure 2: Right-branching tree over NPi, NPj, NPk.]
[Figure 5: Example sentences for B3.
(111) John1 wanted PRO1 to forgive himself1.
(112) John1 wanted PRO1 to forgive him2.
(121) John1 wanted Mary2 to forgive him1.
(122) John1 wanted Mary2 to forgive herself2.
(123) John1 wanted Mary2 to forgive him3.]

[Figure 6: Example sentences for B4 — fifteen sentences of the form "John1 persuaded/wanted ...", one for each of the indexings (1111) through (1234), e.g. (1111) John1 persuaded himself1 that he1 should give himself1 up; (1234) John1 wanted Mary2 to tell Tom3 about Bill4.]
John" is coindexed with "Tom" and "himself", and (2) where "John" has a separate index. Similarly, if we make use of the fact that "Tom" cannot be coindexed with. John. 01°This leaves only two remaining indexings: (1) where. we can pare the list of indexings down to just one (the second case1°This leaves only two remaining indexings: (1) where "John" is coindexed with "Tom" and "himself", and (2) where "John" has a separate index. Similarly, if we make use of the fact that "Tom" cannot be coin- dexed with "John", we can pare the list of indexings down to just one (the second case). ii0
|
15,488,392 | Graph-based Semi-supervised Gene Mention Tagging | The rapidly growing biomedical literature has been a challenging target for natural language processing algorithms. One of the tasks these algorithms focus on is called named entity recognition (NER), often employed to tag gene mentions. Here we describe a new approach for this task, an approach that uses graph-based semi-supervised learning to train a Conditional Random Field (CRF) model. Benchmarking it on the BioCreative II Gene Mention tagging task, we achieved statistically significant improvements in F-measure over BANNER, a widely used biomedical NER system. We note that our tool is transductive and modular in nature, and can be integrated with other CRF-based supervised NER tools. | [
11311232,
10986188,
14000702,
13936575
] | Graph-based Semi-supervised Gene Mention Tagging
Golnar Sheikhshab (School of Computing Science, Simon Fraser University, Burnaby, BC, Canada)
Elizabeth Starks (Canada's Michael Smith Genome Sciences Centre, British Columbia Cancer Agency, Vancouver, BC, Canada)
Aly Karsan, akarsan@bcgsc.ca (Canada's Michael Smith Genome Sciences Centre, British Columbia Cancer Agency, Vancouver, BC, Canada)
Anoop Sarkar (School of Computing Science, Simon Fraser University, Burnaby, BC, Canada)
Inanc Birol, ibirol@bcgsc.ca (School of Computing Science, Simon Fraser University, Burnaby, BC, Canada; Canada's Michael Smith Genome Sciences Centre, British Columbia Cancer Agency, Vancouver, BC, Canada)
Graph-based Semi-supervised Gene Mention Tagging
Proceedings of the 15th Workshop on Biomedical Natural Language Processing, Berlin, Germany, August 12
The rapidly growing biomedical literature has been a challenging target for natural language processing algorithms. One of the tasks these algorithms focus on is called named entity recognition (NER), often employed to tag gene mentions. Here we describe a new approach for this task, an approach that uses graph-based semi-supervised learning to train a Conditional Random Field (CRF) model. Benchmarking it on the BioCreative II Gene Mention tagging task, we achieved statistically significant improvements in F-measure over BANNER, a widely used biomedical NER system. We note that our tool is transductive and modular in nature, and can be integrated with other CRF-based supervised NER tools.
Introduction
Detecting biomedical named entities such as genes and proteins is one of the first steps in many natural language processing systems that analyze biomedical text. Finding relations between entities and expanding knowledge bases are examples of research that depends heavily on the accuracy of gene and protein mention tagging.
Named entity recognition is typically modelled as a sequence tagging problem (Sha and Pereira, 2003). One of the most commonly used models for sequence tagging is the Conditional Random Field (CRF) (Lafferty et al., 2001; Sha and Pereira, 2003).
Many popular and best-performing biomedical named entity recognition systems, such as BANNER (Leaman et al., 2008), Gimli (Campos et al., 2013) and BANNER-CHEMDNER (Munkhdalai et al., 2015), use CRF as their core machine learning model built on the MALLET toolkit (McCallum, 2002).
Inspired by the success of graph-based semi-supervised learning methods in other NLP tasks (Subramanya et al., 2010; Zhu et al., 2003; Subramanya and Bilmes, 2009; Alexandrescu and Kirchhoff, 2009; Liu et al., 2012; Saluja et al., 2014; Tamura et al., 2012; Talukdar et al., 2008; Das and Petrov, 2011), we integrated the graph-based semi-supervised algorithm of Subramanya et al. (2010) and adapted their approach to improve on the results from BANNER. We show that our approach achieves a statistically significant improvement in terms of F-measure on the BioCreative II dataset for gene mention tagging.
Semi-supervised learning for gene mention tagging is not without precedent. There have been several semi-supervised approaches for the gene mention task, and they have always been more successful than fully supervised approaches (Jiao et al., 2006; Ando, 2007; Campos et al., 2013; Munkhdalai et al., 2015).
Ando (2007) used a semi-supervised approach, Alternating Structure Optimization (ASO), in the BioCreative II gene mention shared task, along with other extensions such as using a lexicon or combining several classifiers. ASO ranked first among all competitors in the 2007 shared task competition. Ando reported the usage of unlabeled data as the most useful part of his system, improving the F-measure of the baseline by 2.09 points, where the complete (winning) system had a total improvement of 3.23 points over the baseline CRF (Ando, 2007). Jiao et al. (2006) used conditional entropy over the unlabeled data combined with the conditional likelihood over the labeled data in the objective function of the CRF (Jiao et al., 2006). Munkhdalai et al. (2015) trained word representations using Brown clustering (Brown et al., 1992) and word2vec (Mikolov et al., 2013) on MEDLINE and PMC document collections and used them as features along with traditional features in a CRF. Like many of these approaches, we also use unlabeled data to augment our baseline CRF model. In all these previous studies, the unlabelled data was orders of magnitude larger than the labelled data and distinct from the test data.
In this paper we take a transductive approach and use the test set as our unlabelled data. Moreover, our approach is orthogonal to all these approaches and can be used to augment many of them. This approach can be easily implemented as a post-processing step in any system that uses a CRF model. Examples of such systems include Gimli (Campos et al., 2013) and BANNER-CHEMDNER (Munkhdalai et al., 2015). These tools have achieved the highest F-scores in the literature after ASO (Ando, 2007). Our approach relies on the extraction of label distributions from the CRF and augments the decoding algorithm to incorporate the new information about gene mentions from the graph-based learning approach we describe in this paper.
Method
Like many previous studies (Leaman et al., 2008; Munkhdalai et al., 2015; Campos et al., 2013), we formulate the gene mention tagging problem as a word-level sequence prediction problem, where the labels for each word in the input are Gene-Beginning, Gene-Inside, or Outside (not a gene). This representation is called IOB (for inside-outside-beginning). We applied a graph-based semi-supervised learning (SSL) approach, previously shown to be effective on a similar labelling task, part-of-speech tagging (Subramanya et al., 2010), to gene mention tagging. In graph-based SSL, a graph is constructed to represent partially labelled data. Each node in the graph represents a single word-level gene mention tagging decision, and the edges between the nodes represent similarity between the nodes. The goal is to associate probability distributions over the IOB tags with all vertices. Label distributions for vertices that appear in labelled data are estimated based on the reference labels and propagate to vertices for unlabelled data in the graph. These label distributions are combined with the CRF decoding algorithm used for labelling the test data. Graph-based SSL is categorized into inductive and transductive approaches. In inductive settings (e.g. Subramanya et al. (2010)), a model is trained and can be used as-is for unseen data. In transductive settings, however, the unlabelled data includes the test data. We took a transductive approach, constructing our graph on the union of the train set and test set as labelled and unlabelled data.
Since the graph is the cornerstone of the algorithm, let us describe its construction and usage before the overall algorithm.
Graph Construction
We use the following steps for constructing the graph for the gene mention tagging task adapted from the graph construction for part-of-speech tagging described in Subramanya et al. (2010):
1. Each vertex represents a 3-gram type and the middle word of this 3-gram is the word which is tagged as a gene mention using the IOB tags. The label distribution for this middle word is learned during graph propagation and subsequently combined with the CRF model at test time.
2. A vertex is represented by a vector of pointwise mutual information values between feature instances and its 3-gram type.
3. Edge weights represent the similarity between vertices and are obtained by computing the cosine similarity of feature vectors of their two end vertices.
4. For each vertex only the K nearest neighbours are kept (default = 10).
We considered several feature sets, namely contextual features (Table 1), simplified contextual features (Table 2), all features from the base CRF model, and the most informative features from the base CRF model. We picked the simplified contextual features based on preliminary results using cross-validation on our development set. To represent a vertex v with 3-gram w−1 w0 w1, we look at all occurrences of its 3-gram in the text, consider the larger context w−2 w−1 w0 w1 w2, and get the lemmas of these words. v is represented by a vector of point-wise mutual information values between all possible feature instances (e.g. all possible lemmas for w−2) and w−1 w0 w1. We eliminated extremely frequent features (default > 10,000) to reduce the time complexity of graph construction. This should not affect the structure of the graph substantially, because the point-wise mutual information between a feature and any given vertex decreases as the frequency of the feature increases, leaving extremely frequent features with relatively small weights.
Table 1: Complete set of contextual features.
Description                  Feature
3-gram + Context             w−2 w−1 w0 w1 w2
3-gram                       w−1 w0 w1
Left Context                 w−1 w−2
Right Context                w1 w2
Center Word                  w0
Trigram − Center Word        w−1 w1
Left Word + Right Context    w−1 w1 w2
Left Context + Right Word    w−2 w−1 w1
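A naive version of this construction can be sketched as follows (our code, not the authors'; `vertex_features` is an assumed input mapping each 3-gram type to a Counter of feature-instance counts gathered over all of its occurrences). The quadratic neighbour search is exactly the bottleneck discussed in the scalability note later in the paper.

```python
import math
from collections import Counter

def build_graph(vertex_features, K=10, max_feature_freq=10_000):
    feat_total = Counter()
    for feats in vertex_features.values():
        feat_total.update(feats)
    grand_total = sum(feat_total.values())

    # PMI-weighted feature vector per vertex, dropping extremely frequent features
    vectors = {}
    for v, feats in vertex_features.items():
        v_total = sum(feats.values())
        vectors[v] = {
            f: math.log((n / grand_total)
                        / ((v_total / grand_total) * (feat_total[f] / grand_total)))
            for f, n in feats.items() if feat_total[f] <= max_feature_freq
        }

    def cosine(a, b):
        num = sum(a[f] * b[f] for f in a.keys() & b.keys())
        den = (math.sqrt(sum(x * x for x in a.values()))
               * math.sqrt(sum(x * x for x in b.values())))
        return num / den if den else 0.0

    graph = {}                      # vertex -> list of (neighbour, similarity)
    verts = list(vectors)
    for u in verts:                 # naive O(|V|^2) neighbour search
        sims = sorted(((cosine(vectors[u], vectors[v]), v)
                       for v in verts if v != u), reverse=True)
        graph[u] = [(v, s) for s, v in sims[:K]]   # keep the K nearest neighbours
    return graph
```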
Graph Propagation
In graph propagation, we associate any given vertex u with a label distribution X_u that represents how likely we think each label is for that vertex. The goal of graph-based SSL is to propagate existing knowledge about the labels through the graph. The initial knowledge about graph nodes is provided by the labeled data and potentially some prior knowledge. Figure 1 shows how graph propagation can assign label distributions to unlabelled vertices and change the label distributions coming from labelled data.
Propagation is accomplished by optimizing an objective function over the label distributions at each node in the graph. The objective function consists of three types of constraints:
1. For any labeled vertex u, the associated label distribution $X_u$ should be close to the reference distribution $\hat{X}_u$ (obtained from labeled data); 2. Adjacent vertices u and k should have similar label distributions $X_u$ and $X_k$;
3. The label distributions of all vertices should comply with the prior knowledge, if such knowledge exists, or be uniformly distributed, otherwise.
The following objective function represents these three components:
$$C(X) = \sum_{u \in L} \lVert X_u - \hat{X}_u \rVert_2^2 + \mu \sum_{u \in V} \sum_{k \in N(u)} w_{u,k}\, \lVert X_u - X_k \rVert_2^2 + \nu \sum_{u \in V} \lVert X_u - U \rVert_2^2 \qquad (1)$$
where u and k are vertices in the graph, L is the set of labelled vertices, V is the set of all vertices, N(u) is the set of neighbours of u, U is the uniform distribution over all labels, and µ and ν are weight constants for constraints 2 and 3, respectively. We used the Euclidean distance as the distance metric.
While the first two terms in the objective function and their corresponding constraints make intuitive sense, the uniformity constraint needs further explanation. The rationale behind using distance from the uniform distribution is to avoid preferring a label over others in the absence of strong evidence.
The objective function is optimized using stochastic gradient descent. We implement the optimization algorithm for this as described in Subramanya et al. (2010):
$$X_i^{(m)}(y) = \frac{\gamma_i(y)}{\kappa_i}$$
$$\gamma_i(y) = \hat{X}_i(y)\,\delta(i \in L) + \mu \sum_{k \in N(i)} w_{i,k}\, X_k^{(m-1)}(y) + \nu\,\frac{1}{Y} \qquad (2)$$
$$\kappa_i = \delta(i \in L) + \nu + \mu \sum_{k \in N(i)} w_{i,k}$$

$X_i^{(m)}$ and $X_i^{(m-1)}$ denote the label distributions of vertex $i$ in iterations $m$ and $m-1$, respectively; $\delta(i \in L)$ is 1 if and only if $i$ is a labeled vertex, and $Y$ is the number of labels.
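The update is straightforward to implement. A minimal sketch of ours (not the authors' code), where `graph` maps each vertex to a list of (neighbour, weight) pairs, `ref` holds the reference distributions for labelled vertices, and every distribution is a dict over the IOB labels:

```python
def propagate(graph, ref, labels=("I", "O", "B"), mu=1e-6, nu=1e-4, iters=2):
    uniform = 1.0 / len(labels)
    # initialise: reference distribution if labelled, uniform otherwise
    X = {v: dict(ref[v]) if v in ref else {y: uniform for y in labels}
         for v in graph}
    for _ in range(iters):
        X_new = {}
        for i, nbrs in graph.items():
            w_sum = sum(w for _, w in nbrs)
            kappa = (1.0 if i in ref else 0.0) + nu + mu * w_sum
            X_new[i] = {}
            for y in labels:
                gamma = ref[i][y] if i in ref else 0.0
                gamma += mu * sum(w * X[j][y] for j, w in nbrs)
                gamma += nu * uniform
                X_new[i][y] = gamma / kappa
        X = X_new
    return X
```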
Overall algorithm
Once propagated the label distributions through the graph, we would need to combine what we learned in the graph with the tagging results from the CRF model. For that we use a self-training algorithm, shown in Figure 2.
On an input of a partially-labeled corpus, we first train a CRF model in a supervised fashion on the labeled data (crf-train, line 1); we then use this trained CRF model to assign label probability distributions to each word in the entire (labeled + unlabeled) corpus (posterior decode, line 4). As a result, each n-gram token in the corpus has a label distribution (the posteriors). For each n-gram type u (a vertex in the graph), we find all instances (n-gram tokens) of u and average over the label distributions of these instances to get a label distribution for u (token to type, line 5). Next, we perform graph-propagation (i.e. we optimize the objective function in equation 1) to learn the label distributions for all vertices. Finally, we linearly interpolate the trained CRF model and the label distributions from the graph:
$$X_{\text{int}}(t) = \alpha\, X_{\text{CRF}}(t) + (1 - \alpha)\, X_{\text{Graph}}(t) \qquad (3)$$
where t is a 3-gram token in a specific sentence, X_CRF(t) denotes the posterior probability from the CRF model for the middle word in t, X_Graph(t) denotes the label distribution of the 3-gram type t after graph propagation, and α ∈ [0, 1] is the mixture parameter between the CRF and graph models. The best label for all words in the entire corpus is then found using Viterbi decoding for the CRF using X_int instead of X_CRF (viterbi-decode, line 7). Viterbi decoding provides us with the best label for every n-gram token in the unlabeled corpus, which implies that our labeled set has grown to include the unlabeled corpus. We re-train the CRF on this expanded training set (crf-train, line 8), and iterate until convergence.
Note that the steps indicated by lines 1, 4, and 8 work on the corpus whereas graph propagation in line 6 works on the graph. So, the step in line 5 takes us from corpus to the graph, and the step in line 7 takes us back from the graph to the corpus.
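The full loop can be sketched at a high level as follows (our pseudocode-style sketch: `crf_train`, `posterior_decode`, `viterbi_decode`, `token_to_type`, and `trigram_type` are hypothetical placeholders standing in for the corresponding BANNER/MALLET operations, not real APIs; `propagate` is the graph-propagation sketch above):

```python
LABELS = ("I", "O", "B")

def semi_supervised_tagger(labeled, unlabeled, graph, alpha=0.02, outer_iters=1):
    crf = crf_train(labeled)                                  # line 1
    corpus = labeled + unlabeled
    for _ in range(outer_iters):
        posteriors = posterior_decode(crf, corpus)            # line 4
        type_dists = token_to_type(posteriors)                # line 5: average token
                                                              # posteriors per 3-gram type
        graph_dists = propagate(graph, type_dists)            # line 6: optimize eq. (1)

        def interpolated(token):                              # eq. (3)
            g = graph_dists[trigram_type(token)]
            return {y: alpha * posteriors[token][y] + (1 - alpha) * g[y]
                    for y in LABELS}

        corpus = viterbi_decode(crf, corpus, interpolated)    # line 7
        crf = crf_train(corpus)                               # line 8
    return crf
```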
Integration with BANNER
BANNER (Leaman et al., 2008) is a well-known open-source biomedical named entity recognizer that is widely used. Many studies have used BANNER for gene mention tagging (Hakala et al., 2015; Pyysalo et al., 2015; Lee et al., 2014; Leaman et al., 2013), and many have cited it as a biomedical NER system with good performance (Dai et al., 2015; Krallinger et al., 2015; Luo et al., 2016; Gonzalez et al., 2016; Hebbring et al., 2015). BANNER uses CRF as its machine learning core, and we used it as our base CRF in lines 1 and 8 in Figure 2. We also modified BANNER's source code in order to extract the posterior probabilities from the underlying MALLET CRF model (line 4). These probabilities were used in lines 5 through 7 in Figure 2. Furthermore, the lemmas we used as features in our graph construction (see Section 2.1) came from BANNER's lemmatizer.

[Table 3: Graph-based SSL improves BANNER by increasing the precision; one recovered row reads precision 84.93, recall 88.28, F-measure 86.57.]
BANNER also does some post-processing: it discards all the mentions that contain unmatched brackets. We ran our method with and without this post-processing step and verified its utility in our approach as well.
Experiments
We show improvements over BANNER on the dataset of the BioCreative II Gene Mention Tagging Task. This data set contains 15,000 training sentences and 5,000 test sentences. Annotations are given by the starting character index and finishing character index of the gene in the sentence (space characters are ignored). Some sentences have alternative annotations presented in a separate file.
The upper part of Table 3 shows the results of BANNER, Graph-Based SSL without post-processing, and Graph-Based SSL with post-processing. The hyper-parameters of Graph-Based SSL were chosen by cross-validation over different train/test splits, with different hyper-parameters tested for each split (α = 0.02, µ = 10−6, ν = 10−4, and number of iterations = 2). Table 3 shows that the improvement we get in F-measure is due to better precision, which is further boosted by dropping the candidates with unmatched parentheses (which is our only post-processing step).
The lower part of Table 3 puts our method in context. Although our method is competitive with these best-performing methods in the literature, it has not outperformed any of them other than BANNER. Its precision, however, is better than all other methods with the exception of Gimli. It would be interesting to integrate the graph-based approach with the ones with CRF as their machine learning core (BANNER-CHEMDNER, Gimli, and the approach of ) to further test the utility of the graph approach.
Qualitative analysis
To understand the differences between BANNER and the graph propagation results, a human domain expert compared the errors occurring in their respective outputs. Table 4 shows the number of these errors as well as some examples. These examples illustrate two important observations. First, there are examples of categories more general than genes among both false positives and false negatives for both systems. For example, Kinase is a functional group of proteins; POZ/Zn, Ig-like domain, and SH2 are protein domains; and E3 ubiquitin and NF-kappaB are gene families. Anecdotal evidence suggests that this is due to the presence of similar annotations in the training/test data set. For example, the bZIP protein, a protein family, and Ig-like domain, a gene/protein functional domain, were both annotated as genes. This calls for a better gene mention corpus annotated according to more recent gene annotation guidelines. Second, there are some hard-to-explain false positives in BANNER. Examples include Ann Arbor, a city in Michigan, SAS GLM, a type of statistical test, and 1.6-kb cDNA, a molecular length. Our graph-based approach has eliminated these false positives.
Cross validation study
We conducted extensive cross-validation experiments using different train and test splits in order to explore the hyper-parameter values and to Figure 4: The same points as in Figure 3 shown as the difference from the Banner scores for the same train/test split. The origin in this graph is the BANNER score. Each cluster of points in Figure 3 becomes a line in this graph. detect trends in the values that were optimal for this task. The results show that graph-propagation consistently improves results over BANNER. Figures 3 and 4 were created by running graphpropagation over different train and test splits with different hyper-parameter values for each split. For each train/test split, we show only the Pareto optimal points (for each choice of hyperparameters we include it in the graph only if the performance is not dominated by another choice in both recall and precision). Figure 3 illustrates two points: 1) the precision and recall for the different Pareto optimal points for each train/test split is very similar, and 2) overall the different train/test splits have similar precision and recall values. Figure 4 shows the performance for each train/test split shown as the difference from the BANNER scores for that split. It shows that the precision scores of graph-propagation is always better than the BANNER baseline, while recall is sometimes worse. The F-scores for all train/test splits and for all Pareto optimal points in each split is always better than the BANNER baseline.
We can collect useful statistics about which hyper-parameter values are the most useful in graph-propagation in this task from the extensive set of experiments described above: for different train/test splits and for each split with different hyper-parameter values. Figure 5 shows the number of times different hyper-parameter values have appeared in the set of Pareto optimal points over all the train/test splits.
The hyper-parameter α (see equation 3) controls the interpolation between the BANNER posterior probability over labels and the label distribution from the graph-propagation step. Higher α values would prefer BANNER over graph-propagation. Figure 5 shows that smaller α values are preferred, which implies that the label distribution produced through graph-propagation is found to be more useful than the label distribution produced by BANNER. We also investigated the two extreme cases of α = 0 (only graph) and α = 1.0 (only BANNER followed by an extra Viterbi decoding step), and observed that both of these options were worse than the BANNER baseline.
In equation (1) higher ν values keep the label distribution at each vertex of the graph closer to the uniform distribution. Higher µ values would allow adjacent vertices to have a greater influence on the label distribution at the vertex. Figure 5 shows that, in our experiments, graph-propagation is sensitive to the values of µ. Lower µ values appear in Pareto optimal points more often. On the other hand, Figure 5 shows that graph-propagation is not as sensitive to different values of ν as long as it is not too high (10 −1 ). This might be due to our setting, where about 73% of vertices are labelled.
We looked for strong correlations between ν values, µ values, and number of iterations in graph propagation and found none.
Finally, for different iteration numbers of graph-propagation, we collected the frequency with which each number appeared in the Pareto optimal results. One iteration of graph-propagation produced 68 Pareto optimal points, two iterations produced 198 points, and three iterations produced 120 points in our experiments. This shows that having more than one iteration of graph-propagation can improve the results.
Our algorithm (Figure 2) has two levels of iterations. One outer iteration (the while loop) and one inner iteration in graph propagation. The numbers mentioned above refer to this inner iteration. All our results reported are for one outer iteration only. Our experiments in this paper were in a transductive setting where the graph was constructed over the test and training data. For this reason we did not experiment extensively with more than one outer iteration. In future work, we plan to experiment with increasing the amount of unlabeled data, and in this setting explore increasing the number of outer iterations.
A note on scalability
The most time-consuming step in our approach was graph construction, where the bottleneck is computing the edge weights between all possible vertex pairs. We experimented with a naive algorithm, where for every vertex pair the values of feature vectors for shared features were considered and the cosine similarity was computed. We also implemented a variation on it, where the similarities between all pairs sharing a specific feature instance were computed, and the contributions of individual feature instances were summed to give the final similarity between any given pair. The first algorithm was too slow, as expected, due to its O(|V|^2) time complexity; the second one was too slow due to high-frequency features. This is an important issue, since the graph needs to be constructed for our approach to work on a new dataset.
Apart from the graph construction, the graph-based approach is as scalable as a CRF if a labeled train set is available for the new domain, as the CRF only needs to be trained on the new labelled set. If we wish to adapt the method to a domain where there is no labelled data in the target domain, there is no need for any training.

[Figure 5: The number of times specific hyper-parameter values α, µ and ν appeared in Pareto optimal points over all train/test splits.]
Conclusion and future directions
Our results show that propagating labels from 3-grams present in the training set to 3-grams appearing only in the test set can significantly improve BANNER, a well-known and frequently used biomedical named entity recognition system, on the gene mention tagging task. Our cross-validation study shows the robustness of this improvement. We also presented a qualitative comparison by a human domain expert. Our ideas for future work are categorized into three groups:
1. Adding more unlabelled data: The only unlabelled data we included in the graph were the test data. Since the success of semi-supervised learning methods is usually due to huge amounts of unlabelled data, we plan to use many more PubMed abstracts to construct the graph. This, however, will be challenging because graph construction can be time-consuming, as it was in our case due to high-frequency features.
2. Constructing a better graph: The contextual features we used to construct our graph are only one of the feature sets that have been shown useful in the gene mention tagging task. Other feature sets include orthographic features, contextual features learnt from neural networks, and features from parse trees. These features may also prove useful in constructing a graph that represents the similarity between gene mentions. Also, we can preprocess the raw sentences to collapse some collocations into one word so that the middle word in the 3-gram vertices is more meaningful.
3. Improving the latest approach: Although BANNER is one of the most frequently used biomedical named entity recognition systems, it is not the one with the best performance ever. Previous approaches have improved on BANNER in a variety of ways, including semi-supervised learning. In particular, Munkhdalai et al. (2015) achieved an F-measure of 87.04 by including word representations learnt from massive unlabelled data as features. We plan to test our approach on their freely available system.
Figure 2: Iterative semi-supervised training of CRF with label distributions from the graph (Subramanya et al., 2010).

Figure 1: Neighbours of one vertex before and after propagation. I, O, B stand for Inside-gene, Outside-gene, Beginning-gene.
Table 2: Simplified set of contextual features.
Description           Feature
Left Context Word     w−2
Left Word             w−1
Center Word           w0
Right Word            w1
Right Context Word    w2
Table 4: Qualitative comparison by a human domain expert between BANNER and Graph Propagation. FN: false negatives. FP: false positives.

Figure 3: Precision and recall for different train/test splits and hyper-parameter choices. Each color represents a single train/test split. We include only the Pareto optimal points for each split.
Acknowledgements
The authors thank the funding organizations, Genome Canada, British Columbia Cancer Foundation, and Genome British Columbia for their partial support. The research was also partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC RGPIN 262313 and RGPAS 446348) to the fourth author.
References
Andrei Alexandrescu and Katrin Kirchhoff. 2009. Graph-based learning for statistical machine translation. In NAACL 2009.
Rie Kubota Ando. 2007. BioCreative II gene mention tagging system at IBM Watson. In Proceedings of the Second BioCreative Challenge Evaluation Workshop, volume 23, pages 101-103. Centro Nacional de Investigaciones Oncologicas (CNIO), Madrid, Spain.
Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.
David Campos, Sérgio Matos, and José Luís Oliveira. 2013. Gimli: open source and high-performance biomedical name recognition. BMC Bioinformatics, 14(1):54.
Hong-Jie Dai, Po-Ting Lai, Yung-Chun Chang, and Richard Tzong-Han Tsai. 2015. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization. Journal of Cheminformatics, 7(S1):1-10.
Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 600-609. Association for Computational Linguistics.
Graciela H. Gonzalez, Tasnia Tahsin, Britton C. Goodale, Anna C. Greene, and Casey S. Greene. 2016. Recent advances and emerging applications in text and data mining for biomedical discovery. Briefings in Bioinformatics, 17(1):33-42.
Kai Hakala, Sofie Van Landeghem, Tapio Salakoski, Yves Van de Peer, and Filip Ginter. 2015. Application of the EVEX resource to event extraction and network construction: Shared task entry and result analysis. BMC Bioinformatics, 16(Suppl 16):S3.
Scott J. Hebbring, Majid Rastegar-Mojarad, Zhan Ye, John Mayer, Crystal Jacobson, and Simon Lin. 2015. Application of clinical text data for phenome-wide association studies (PheWASs). Bioinformatics, 31(12):1981-1987.
Han-Shen Huang, Yu-Shi Lin, Kuan-Ting Lin, Cheng-Ju Kuo, Yu-Ming Chang, Bo-Hou Yang, I-Fang Chung, and Chun-Nan Hsu. 2007. High-recall gene mention recognition by unification of multiple backward parsing models. In Proceedings of the Second BioCreative Challenge Evaluation Workshop, volume 23, pages 109-111. Centro Nacional de Investigaciones Oncologicas (CNIO), Madrid, Spain.
Feng Jiao, Shaojun Wang, Chi-Hoon Lee, Russell Greiner, and Dale Schuurmans. 2006. Semi-supervised conditional random fields for improved sequence segmentation and labeling. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 209-216. Association for Computational Linguistics.
Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M. Lowe, et al. 2015. The CHEMDNER corpus of chemicals and drugs and its annotation principles. Journal of Cheminformatics, 7(S1):1-17.
Cheng-Ju Kuo, Yu-Ming Chang, Han-Shen Huang, Kuan-Ting Lin, Bo-Hou Yang, Yu-Shi Lin, Chun-Nan Hsu, and I-Fang Chung. 2007. Rich feature set, unification of bidirectional parsing and dictionary filtering for high f-score gene mention tagging. In Proceedings of the Second BioCreative Challenge Evaluation Workshop, volume 23, pages 105-107. Centro Nacional de Investigaciones Oncologicas (CNIO), Madrid, Spain.
John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.
Robert Leaman, Graciela Gonzalez, et al. 2008. BANNER: an executable survey of advances in biomedical named entity recognition. In Pacific Symposium on Biocomputing, volume 13, pages 652-663.
Robert Leaman, Rezarta Islamaj Dogan, and Zhiyong Lu. 2013. DNorm: disease name normalization with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.
Robert Leaman, Chih-Hsuan Wei, and Zhiyong Lu. 2015. tmChem: a high performance approach for chemical named entity recognition and normalization. Journal of Cheminformatics, 7(S1):S3.
Hee-Jin Lee, Tien Cuong Dang, Hyunju Lee, and Jong C. Park. 2014. OncoSearch: cancer gene search engine with literature evidence. Nucleic Acids Research, page gku368.
Gang Li, Karen E. Ross, Cecilia N. Arighi, Yifan Peng, Cathy H. Wu, and K. Vijay-Shanker. 2015. miRTex: A text mining system for miRNA-gene relation extraction. PLoS Computational Biology, 11(9):e1004391.
Shujie Liu, Chi-Ho Li, Mu Li, and Ming Zhou. 2012. Learning translation consensus with structured label propagation. In ACL 2012.
Yuan Luo, Özlem Uzuner, and Peter Szolovits. 2016. Bridging semantics and syntax with graph algorithms: state-of-the-art of extracting biomedical relations. Briefings in Bioinformatics, page bbw001.
Andrew Kachites McCallum. 2002. MALLET: A machine learning for language toolkit. http://mallet.cs.umass.edu.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Tsendsuren Munkhdalai, Meijing Li, Khuyagbaatar Batsuren, Hyeon Park, Nak Choi, and Keun Ho Ryu. 2015. Incorporating domain knowledge in chemical and biomedical named entity recognition with word representations. Journal of Cheminformatics, 7(S1):S9.
Sampo Pyysalo, Tomoko Ohta, Rafal Rak, Andrew Rowley, Hong-Woo Chun, Sung-Jae Jung, Sung-Pil Choi, Jun'ichi Tsujii, and Sophia Ananiadou. 2015. Overview of the cancer genetics and pathway curation tasks of BioNLP Shared Task 2013. BMC Bioinformatics, 16(Suppl 10):S2.
Avneesh Saluja, Hany Hassan, Kristina Toutanova, and Chris Quirk. 2014. Graph-based semi-supervised learning of translation models from monolingual data. In ACL 2014.
Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In NAACL.
Amarnag Subramanya and Jeff A. Bilmes. 2009. Entropic graph regularization in non-parametric semi-supervised classification. In Advances in Neural Information Processing Systems, pages 1803-1811.
Amarnag Subramanya, Slav Petrov, and Fernando Pereira. 2010. Efficient graph-based semi-supervised learning of structured tagging models. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 167-176. Association for Computational Linguistics.
Partha Pratim Talukdar, Joseph Reisinger, Marius Paşca, Deepak Ravichandran, Rahul Bhagat, and Fernando Pereira. 2008. Weakly-supervised acquisition of labeled class instances using graph random walks. In EMNLP 2008.
Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2012. Bilingual lexicon extraction from comparable corpora using label propagation. In EMNLP-CoNLL 2012.
Xiaojin Zhu, Zoubin Ghahramani, John Lafferty, et al. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, volume 3, pages 912-919. |
252,819,268 | EM-PERSONA: EMotion-assisted Deep Neural Framework for PERSONAlity Subtyping from Suicide Notes | The World Health Organization has emphasised the need to step up suicide prevention efforts to meet the United Nations' Sustainable Development Goal target of 2030 (Goal 3: Good health and well-being). We address the challenging task of personality subtyping from suicide notes. Most research on personality subtyping has relied on statistical analysis and feature engineering. Moreover, state-of-the-art transformer models have received relatively less attention in the automated personality subtyping problem. We develop a novel EMotion-assisted PERSONAlity Detection Framework (EM-PERSONA). We annotate the benchmark CEASE-v2.0 suicide notes dataset with personality traits across four dichotomies: Introversion (I)-Extraversion (E), Intuition (N)-Sensing (S), Thinking (T)-Feeling (F), Judging (J)-Perceiving (P). Our proposed method outperforms all baselines on comprehensive evaluation using multiple state-of-the-art systems. Across the four dichotomies, EM-PERSONA improved accuracy by 2.04%, 3.69%, 4.52%, and 3.42%, respectively, over the highest-performing single-task systems. | [
196189733,
52967399,
218516872,
218977418,
6628106,
11212020,
11336213,
6857205,
1957433,
23583643
] | EM-PERSONA: EMotion-assisted Deep Neural Framework for PERSONAlity Subtyping from Suicide Notes
Soumitra Ghosh, Dhirendra Kumar Maurya, Asif Ekbal (Department of Computer Science and Engineering, IIT Patna, India)
Pushpak Bhattacharyya (Department of Computer Science and Engineering, IIT Bombay, India)
EM-PERSONA: EMotion-assisted Deep Neural Framework for PERSONAlity Subtyping from Suicide Notes
Proceedings of the 29th International Conference on Computational Linguistics, pages 1098-1105, October 12-17, 2022
The World Health Organization has emphasised the need to step up suicide prevention efforts to meet the United Nations' Sustainable Development Goal target of 2030 (Goal 3: Good health and well-being). We address the challenging task of personality subtyping from suicide notes. Most research on personality subtyping has relied on statistical analysis and feature engineering. Moreover, state-of-the-art transformer models have received relatively less attention in the automated personality subtyping problem. We develop a novel EMotion-assisted PERSONAlity Detection Framework (EM-PERSONA). We annotate the benchmark CEASE-v2.0 suicide notes dataset with personality traits across four dichotomies: Introversion (I)-Extraversion (E), Intuition (N)-Sensing (S), Thinking (T)-Feeling (F), Judging (J)-Perceiving (P). Our proposed method outperforms all baselines on comprehensive evaluation using multiple state-of-the-art systems. Across the four dichotomies, EM-PERSONA improved accuracy by 2.04%, 3.69%, 4.52%, and 3.42%, respectively, over the highest-performing single-task systems.
Introduction
Suicide continues to be one of the significant causes of death worldwide (Ghosh et al., 2020). Given the significance of personality as a basis for understanding psychopathology (Krueger and Tackett, 2006) and the variability in risk factors associated with suicide, subtyping patients based on their personalities can provide greater specificity than simple comparisons of suicidal to non-suicidal individuals (Ortigo et al., 2009). Pompili et al. (2008) showed that emotions such as anger, aggressiveness, anxiety, and sadness were associated with personality traits of individuals who attempted suicide.
We quote a few excerpts from suicide notes (SNs) and online personality posts (PPs) in Table 1 to show how personality discussions on public forums and genuine SNs generally follow similar language patterns. We computed cosine similarity (CosSim) scores between SNs (from the CEASE-v2.0 corpus (Ghosh et al., 2022)) and PPs (from the MBTI dataset1) and observed an alarming number of SNs having a considerable amount of word-based similarity with generic PPs. The results are shown in Table 2: we observed CosSim scores over 0.6, 0.5, and 0.4 for 12, 39, and 113 SNs, respectively, with respect to the PPs in the MBTI dataset. Our primary contributions are two-fold: we present a novel corpus of suicide notes annotated with personality traits across four dichotomies, and we develop an end-to-end multi-task emotion-assisted system for simultaneous detection of these traits from suicide notes.

[Table 1 (recovered excerpts): "They just do whatever the fuck they want and justify it later." PP-1: "They can do anything they like to make any law they like. He is an ugly stupid faggot and we should kill him." PP-2: "They're oppressing you, kill them all!"]
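The similarity computation mentioned above can be sketched as follows (assuming scikit-learn and TF-IDF weighting; the paper does not name its exact vectorization, so both are illustrative choices):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_note_post_similarity(notes, posts):
    vec = TfidfVectorizer()
    M = vec.fit_transform(notes + posts)
    sims = cosine_similarity(M[:len(notes)], M[len(notes):])
    return sims.max(axis=1)          # best-matching post per suicide note

# toy usage with placeholder strings, not the actual corpora
scores = max_note_post_similarity(
    ["they do whatever they want and i cannot go on"],
    ["they just do whatever they want and justify it later"])
print((scores > 0.6).sum(), (scores > 0.5).sum(), (scores > 0.4).sum())
```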
Related Work
The Myers-Briggs Type Indicator (MBTI) (Myers, 1962), based on psychiatrist Carl Jung's ideas, is a popular personality metric that employs four dichotomies as indicators of personality traits: Introversion (I) / Extraversion (E), Intuition (N) / Sensing (S), Thinking (T) / Feeling (F), Judging (J) / Perceiving (P). Another popular model is the Big Five (Goldberg, 1993), which produces very specific and individual results; this makes it tedious to draw general insights and advice from test results, and makes the practical application of the knowledge very difficult, especially when the data is scarce (as in the case of a suicide note corpus). Another issue with the Big Five personality traits is that the categories are too wide and absolute to offer any meaningful insight. Humans are adaptive beings that adjust to their environment: in situations where we are around close friends, for instance, we could be more open, whilst in unfamiliar settings we might be less open.
Artificial intelligence (AI) is already playing a crucial role in mental healthcare (chatbot: WoeBot, virtual assistant: Ellie (D'Alfonso, 2020)) in handling the increased demand for services, stretched workloads, high costs of treatment, and the stigma associated with mental illness (Gamble, 2020). More recently, personality detection studies using computational methods (Mehta et al., 2020; Yang et al., 2021; Ren et al., 2021) have gained traction, especially transformer-based (Vaswani et al., 2017) pre-trained language models. However, the existing suicide note corpora (Ghosh et al., 2020, 2022) are annotated at the sentence level, and existing studies do not exploit the emotional content inherent in them. This motivated us to devise an approach for utilizing the sentence-level information inherent in the existing datasets and address the closely associated tasks at the document level. Moreover, statistical analysis (Ji et al., 2021) and feature engineering (Bharadwaj et al., 2018) have been used in the bulk of the studies on this topic. Furthermore, most of the research on personality subtyping has been on domains like essays (Big Five dataset (Pennebaker and King, 1999)) and social media (MBTI dataset), and none on the domain of suicide. This is the first attempt, to our knowledge, to identify personality subtypes of individuals who have completed suicide.
Personality traits annotation
Three annotators3, sufficiently acquainted with labeling tasks and familiar with the concepts of personality profile identification, annotated each suicide note. To assist in understanding the annotation task, the annotators were provided with ample instances for each personality class from the highly popular Myers Briggs Personality Type Test Dataset (MBTI dataset). The annotation task is performed across four dichotomies: I or E, N or S, F or T, and J or P. For a suicide note, annotators categorised a personality trait as Unclear (U) if they could not evaluate the correct class owing to a lack of relevant/sufficient information. The final labels were obtained through a majority voting approach over the labels assigned by the three annotators.
The distribution of annotated suicide notes over the various personality trait classes is shown in Table 3. As multiple raters are involved in the annotation process, we employ the Fleiss' kappa (κ) (Spitzer et al., 1967) measure to compute the agreement among the annotators. We obtain an average kappa agreement of 0.61 over the four personality dichotomies, indicating substantial agreement among the annotators. The score also indicates the difficulty of perceiving and synthesizing clinical ideas and labeling such tasks. The annotators faced relatively more difficulty with the labels I-E and J-P than with N-S and F-T, which is also reflected in the attained κ scores. In both the MBTI and our annotated dataset, we observe that certain classes such as Introversion and Sensing are over-represented while Extroversion and Intuition are relatively under-represented.
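For concreteness, the agreement measure used above can be computed as in the following sketch (our code with a toy input; the paper reports an average κ of 0.61 over the four dichotomies):

```python
import numpy as np

def fleiss_kappa(table):
    # table[i][j] = number of annotators assigning category j to note i
    table = np.asarray(table, dtype=float)
    n = table.sum(axis=1)[0]                   # raters per item (here, 3)
    p_j = table.sum(axis=0) / table.sum()      # overall category proportions
    P_i = ((table ** 2).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()
    return (P_bar - P_e) / (1 - P_e)

# e.g. two notes rated on one dichotomy with categories (I, E, Unclear)
print(fleiss_kappa([[3, 0, 0], [2, 1, 0]]))
```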
Task Definition
Given a suicide note ($N$) with each sentence annotated with an emotion class⁴, classify the author of the note into one of the two categories for each of the following personality dichotomies: (I/E), (N/S), (F/T), (J/P). Let $N^m_n = (s^m_1, s^m_2, \ldots, s^m_n)$ denote a suicide note with $n$ sentences ($s$), and let $(e^m_1, e^m_2, \ldots, e^m_n)$ be the corresponding sentence-level emotion ($e$) labels in the $m$-th note. The objective is to maximise the value of the following function:
$$\operatorname{argmax}_{\theta}\;\prod_{i=0}^{m} P\left(y^i_{I\text{-}E},\,y^i_{N\text{-}S},\,y^i_{F\text{-}T},\,y^i_{J\text{-}P} \,\middle|\, s^i_n, s^i_{n-1}, \ldots, s^i_1;\,\theta\right) \qquad (1)$$
where $y$ denotes the output labels, $P$ the probability (likelihood) function, and $\theta$ the model parameters to be optimized.
EMotion-assisted Deep Neural Framework for PERSONAlity Subtyping (EM-PERSONA)
The EM-PERSONA system takes a suicide note document as input and categorizes the author of the note into four personality classes: I/E, N/S, F/T and J/P. Each training instance comprises a suicide note document that is encoded using the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) encoder into a contextualized document representation ($\Omega$). The individual sentences of the same note are processed in parallel by four convolutional and max-pool layers (Conv Max Pool) with region sizes ($k$) of 1, 2, 3 and 4 and 50 filters each, which generate sentence-level feature representations ($sr_i$). We use convolutional neural networks (CNNs) as they are easier to parallelise, faster to train than recurrent neural networks, and effective for short sentences (Hu et al., 2018; Wang et al., 2021) (the average sentence length in the CEASE corpus is 15). Sentence-level word vectors are fetched from pre-trained GloVe (Pennington et al., 2014) embeddings.
To produce contextualized sentence representations ($\phi_i$), we apply additive attention (Bahdanau et al., 2015) between the sentence representations ($sr_i$) and the contextualized document representation ($\Omega$). The attention mechanism can be realized through the following equations:
$$\gamma = W_3^{T}\tanh(W_1\Omega + W_2\,sr^c_i) \qquad (2)$$
$$\alpha_i = \frac{\exp(\gamma(\Omega\,sr^c_i))}{\sum_{j=1}^{c}\exp(\gamma(\Omega\,sr^c_j))} \qquad (3)$$
$$\phi_i = \sum_{t=1}^{c}\alpha_t\,sr^c_t \qquad (4)$$
where W 1 , W 2 , W 3 are the learnable weight matrices, tanh is a non-linear function and c is the sentence length in words.
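To make Eqs. (2)-(4) concrete, here is a minimal NumPy sketch of the additive-attention computation; the dimensions and the random tensors are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 64                         # shared hidden size (assumption)
c = 15                         # sentence length in words
omega = np.random.randn(d)     # contextualised document vector (Omega)
sr = np.random.randn(c, d)     # word-level sentence features sr^c_i

W1, W2 = np.random.randn(d, d), np.random.randn(d, d)
w3 = np.random.randn(d)

gamma = np.tanh(sr @ W2.T + omega @ W1.T) @ w3   # Eq. (2): score per word
alpha = softmax(gamma)                           # Eq. (3): attention weights
phi = (alpha[:, None] * sr).sum(axis=0)          # Eq. (4): weighted sum
```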
The Conv + Max Pool outputs are also passed through sentence-specific dense layers and corresponding output layers with softmax activation to generate emotion classes ($EO_i$). The intermediate emotion-aware sentence-specific dense representations are added ($\circledcirc$) to the corresponding $\phi_i$ and passed through a linear layer to produce abstract emotion-aware sentence representations ($\omega_i$).
$$\omega_i = \mathrm{Dense}(\phi_i \circledcirc \mathrm{Dense}_i(sr_i)) \qquad (5)$$
The emotion-aware sentence representations ($\omega_i$) are concatenated ($\oplus$) and passed through a bidirectional gated recurrent unit (BiGRU) (Cho et al., 2014) layer of 100 units to learn the contextual information. We apply multi-head self-attention (Vaswani et al., 2017) (self-attn) to attend to different parts of the BiGRU output and produce a contextualized emotion-aware document representation ($\delta$), which is then pooled globally.
$$\delta = \mathrm{BiGRU}(\omega_1 \oplus \omega_2 \oplus \ldots \oplus \omega_n) \qquad (6)$$
$$\Delta = \mathrm{Pooling}(\mathrm{TransEnc}(\delta)) \qquad (7)$$
The pre-trained BERT language model allows us to produce general contextual representations while dealing with a small supervised dataset, avoiding the need to train all the parameters from scratch. We concatenate $\Omega$ with the pooling layer output $\Delta$ and pass the result to four task-specific dense layers, followed by output dense layers with softmax activation, to get the output probabilities $p^m_t$ over the four personality trait variables:
$$p^m_t = \mathrm{softmax}(W_t(\mathrm{Dense}_t(\Delta \oplus \Omega)) + b_t) \qquad (8)$$
where $W$ and $b$ are learnable weight and bias matrices and $t$ ranges over the four personality subtyping tasks: I-E, N-S, F-T and J-P.
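A possible Keras rendering of the four output heads in Eq. (8) is sketched below; the 100-unit task-specific dense layers follow the appendix, while the input dimensions and the exact wiring are our assumptions.

```python
import tensorflow as tf

delta = tf.keras.Input(shape=(400,), name="delta")   # pooled self-attention output
omega = tf.keras.Input(shape=(768,), name="omega")   # BERT document representation
merged = tf.keras.layers.Concatenate()([delta, omega])

outputs = []
for task in ["IE", "NS", "FT", "JP"]:
    h = tf.keras.layers.Dense(100, activation="relu", name=f"dense_{task}")(merged)
    outputs.append(tf.keras.layers.Dense(2, activation="softmax",
                                         name=f"out_{task}")(h))

heads = tf.keras.Model(inputs=[delta, omega], outputs=outputs)
```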
Computation of loss
The model is trained by summing the document-level cross-entropy losses of the four personality subtasks and the cross-entropy losses for the sentence-level emotion classification task:
$$\Lambda = \sum_{t=1}^{4}\alpha_t\,\lambda^{PT}_t \;+\; \sum_{q=1}^{n}\beta_q\,\lambda^{ER}_q \qquad (9)$$
where $\lambda$ denotes the categorical cross-entropy loss and $t$ ranges over the four personality trait (PT) detection tasks. $\alpha$ and $\beta$ are the loss weights for the personality trait detection tasks and the emotion recognition (ER) tasks. We limit our experiments to a uniform task weighting, i.e., $\alpha_t$ and $\beta_q$ are both 1.
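A minimal sketch of the joint objective in Eq. (9), with uniform weights ($\alpha = \beta = 1$) as in the paper; the tensors passed in are placeholders.

```python
import tensorflow as tf

cce = tf.keras.losses.CategoricalCrossentropy()

def total_loss(pt_true, pt_pred, er_true, er_pred):
    # pt_*: lists of 4 (label, prediction) tensors, one per dichotomy;
    # er_*: lists of n tensors, one per sentence-level emotion output
    loss = sum(cce(y, p) for y, p in zip(pt_true, pt_pred))   # personality
    loss += sum(cce(y, p) for y, p in zip(er_true, er_pred))  # emotion
    return loss
```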
Experiments and Results
In this section, we discuss the experiments performed and the results and analysis.
Experimental Setup
We evaluate EM-PERSONA against five state-of-the-art systems: Hierarchical Attention Networks (HAN) (Yang et al., 2016), Convolutional Neural Network + Context Long Short-Term Memory (CNN+cLSTM) (Poria et al., 2017), BERT-Base (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and MT-BERT (Peng et al., 2020). We perform 10-fold cross-validation on the personality-annotated CEASE-v2.0 dataset and use the macro-F1 metric to evaluate our approach against the baselines, as a class imbalance problem persists in the dataset. We discuss the details of the baselines and the hyperparameters for our experiments in Sections A.1 and A.2 of the Appendix.
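The evaluation protocol can be mirrored with scikit-learn; the loop below is a hypothetical sketch (`build_model`, `X` and `y` are placeholders) of 10-fold cross-validation with macro-F1.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def cross_validate(build_model, X, y, folds=10):
    """X, y: NumPy arrays; build_model: factory returning a fresh classifier."""
    scores = []
    for train, test in StratifiedKFold(n_splits=folds, shuffle=True).split(X, y):
        model = build_model()
        model.fit(X[train], y[train])
        scores.append(f1_score(y[test], model.predict(X[test]), average="macro"))
    return np.mean(scores)
```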
Results and Discussion
Table 4 shows that the proposed EM-PERSONA system considerably outperforms all baseline systems, with improvements of 2.04, 3.69, 4.52, and 3.42 points over the best-performing single-task systems on the four personality subtasks, respectively. The low F1 scores on the J-P trait for the HAN and CNN+cLSTM single-task baselines align with past research (Lima and de Castro, 2019; Yamada et al., 2019), where predictions on the J/P dichotomy consistently underperform compared to the other dichotomies. This is not the case for the language models BERT and RoBERTa or the multitask systems (MT-BERT and EM-PERSONA), which produce comparable scores across all dichotomies, showing the effectiveness of transformer-based systems and indicating that the correlations among the various personality traits can be effectively exploited when all the tasks are learned jointly. The commendable performance of the EM-PERSONA approach indicates that emotion information plays a crucial role in perceiving the personality traits of an individual through text-based analysis.
Ablation study: To test the impact of the emotion-assisting setup, we remove the emotion-specific dense layers in EM-PERSONA and observe a notable drop in scores across all the personality subtasks (shown in Table 4).
Qualitative Analysis: The first example in Table 5 shows the effectiveness of learning the various personality tasks jointly, as both multitask systems, MT-BERT and EM-PERSONA, correctly classified all the traits. In the second example, the EM-PERSONA system uses the emotion information in the note to classify all the traits correctly, unlike the MT-BERT system, which could only classify two personality traits correctly. Error Analysis: The last two examples in Table 5 show sample predictions from the MT-BERT baseline and our proposed EM-PERSONA system where the models fail to classify the output classes correctly. The relevance of knowing emotion information when attempting to identify various personality traits can be seen in the third example: here, unlike the MT-BERT system, which fails to identify a single personality trait correctly, the EM-PERSONA system makes correct predictions on two of the four personality traits. Rigorous analysis of instances where both multitask systems had difficulty producing correct predictions (as in example 4) indicates that the models have a relatively harder time differentiating between I-E and J-P than N-S and F-T.
Test for Significance: We ran the experiments five times and conducted a Student's t-test at the 5% significance level to show that the scores obtained by the proposed system did not arise by chance. We obtain p-values of 0.039, 0.041, 0.013, and 0.009 when compared with the best-performing baselines for each task, indicating that the obtained scores are statistically significant.
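As an illustration of this test, the snippet below applies SciPy's Student's t-test to five hypothetical runs of two systems; the scores are invented.

```python
from scipy import stats

em_persona = [61.2, 60.8, 61.5, 60.9, 61.1]   # hypothetical F1 over 5 runs
baseline   = [58.9, 59.3, 58.7, 59.0, 59.1]

t, p = stats.ttest_ind(em_persona, baseline)
print(f"p = {p:.3f}; significant at the 5% level: {p < 0.05}")
```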
Conclusion
Our study focuses on artificial intelligence's assistive role, emphasising that cognitive technology is designed to enhance human intelligence rather than replace it. The proposed method is developed to serve practitioners (computer-aided diagnosis and learning) and individuals (self-monitoring) in their combined effort toward a low-profile, first-hand evaluation of personality. The findings of this study imply that (1) present state-of-the-art methods, both conversational and document-encoding methods in general, fail to comprehend personality information in suicide notes to a substantial extent; (2) to improve overall system performance on document-level tasks (such as depression, perceived burdensomeness, and thwarted belongingness), sentence-level information (such as temporal orientation, sentiment, and emotion) can be incorporated into document representations produced by existing transformer architectures; and (3) large, balanced corpora annotated with personality traits are required to obtain solid findings, and the introduced resource can facilitate related studies. Identifying key subgroups of people with suicidal inclinations will help us better understand risk factors and therapies based on subtypes.
In future work, we want to address the two major limitations of our study. First, personality traits are not so simple that they can be squeezed into fixed binary categories across four dimensions, as examined in this study. Second, the short context length problem may be addressed by testing with much bigger datasets than the one used in this work.
Ethical Consideration
Our resource creation utilizes publicly available CEASE-v2.0 (Ghosh et al., 2022) benchmark suicide notes dataset. We followed the data usage restrictions and did not violate any copyright issues.
This study was also evaluated and approved by our Institutional Review Board (IRB). The data is available at https://www.iitp.ac.in/~ai-nlp-ml/resources.html#EMPERSONA.
Figure 1: Architecture of the EMotion-assisted deep neural framework for PERSONAlity Subtyping.
Table 5: Sample predictions by the MT-BERT and EM-PERSONA systems over various categories. BL: baseline MT-BERT, PP: proposed EM-PERSONA, PC: partially correct, FC: fully correct, IC: fully incorrect. I: Introversion, E: Extraversion, N: Intuition, S: Sensing, T: Thinking, F: Feeling, J: Judging, P: Perceiving.
Table 1: Sample excerpts from a couple of SNs and PPs.
Table 2: Cosine similarity scores between suicide notes and personality posts.
Table 3: Data distribution over various personality traits.
Table 4: Scores from 10-fold cross-validation experiments. Values in bold are the maximum scores attained.
¹ https://www.kaggle.com/datasnaek/mbti-type
Dataset
We consider the benchmark CEASE-v2.0 dataset² (Ghosh et al., 2022), a fine-grained emotion-annotated suicide notes corpus containing 4,932 sentences from suicide notes, each annotated independently (without any contextual information) with multi-label emotions from 15 fine-grained emotion classes.
² Dataset sourced from: https://www.iitp.ac.in/~ai-nlp-ml/resources.html
Methodology
Here, we discuss the proposed EMotion-assisted deep neural framework for PERSONAlity Subtyping (EM-PERSONA). The overall architecture is illustrated in Fig. 1.
³ All three are doctoral researchers: one from the computer science discipline and two from the computational linguistics discipline.
For simplicity, we consider only the predominant emotion (Emo1) from CEASE-v2.0.
⁵ Reader caution is suggested since the test cases given are from genuine suicide notes and may be deemed sensitive.
⁶ We experimented with epochs of 4, 6, and 8 and learning rates of 2e-5 and 3e-5.
Acknowledgement
Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, implemented by Digital India Corporation (formerly Media Lab Asia).

A Appendix

A.1 Baselines
The following baseline methods are considered for the comprehensive evaluation of our proposed system.
• Hierarchical Attention Network (HAN) (Yang et al., 2016): The attention mechanism in HAN takes into consideration the hierarchical structure of texts and identifies the most relevant words in a sentence and the most relevant sentences in a document, while taking contextual information into account.
• CNN+cLSTM (Poria et al., 2017): A CNN is used to extract textual features from utterances, after which a cLSTM is used to learn contextual information.
• BERT (Devlin et al., 2019): We experiment with the base version of the state-of-the-art BERT language model by developing four single-task binary BERT classifiers (one for each personality trait variable).
• RoBERTa (Liu et al., 2019): This is an optimized version of BERT, trained with more computing power and data than BERT, and known to outperform BERT on many downstream tasks. Similar to BERT, we develop four single-task RoBERTa classifiers.
• MT-BERT (Peng et al., 2020): We build a multitask (MT) variant of BERT based on the architecture proposed by Peng et al. (2020) for our four personality subtypes.

A.2 Experimental Setting
We set the sequence length to 15 and the context length to 13, the average sentence and context lengths in the CEASE-v2.0 dataset. The experiments are run on an NVIDIA GeForce RTX 2080 Ti GPU. We experiment with the base versions of BERT and RoBERTa imported from the TensorFlow Hub (https://www.tensorflow.org/hub) library. For maximum utilization of the GPU, and considering the small size of the dataset, we run the MT-BERT and EM-PERSONA systems with a batch size of 2. The Adam optimizer (Kingma and Ba, 2015) is used to train EM-PERSONA by minimizing the cross-entropy losses. Through grid search, we set the learning rates to 3e-5 and 2e-5 for the MT-BERT and EM-PERSONA systems respectively⁶. We observe empirically that setting higher epochs causes the models to overfit; hence we set the number of epochs to 3. We use ReLU activation on all dense layers (except the output dense layers), followed by a dropout (Srivastava et al., 2014) of 25% to prevent overfitting. We employ five self-attention heads for the self-attention layer, with embedding dimensions of 200 and feed-forward dimensions of 400. Each task-specific dense layer has 100 neurons, whereas intermediate dense layers contain 200 neurons. To account for the non-determinism of TensorFlow GPU operations, we report F1 scores averaged across five 10-fold cross-validation runs.
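For convenience, the A.2 hyperparameters can be gathered in a single configuration object; the values below are those stated in the text, while the dict structure itself is merely an illustrative convenience.

```python
CONFIG = {
    "sequence_length": 15,
    "context_length": 13,
    "optimizer": "adam",                                   # Kingma and Ba (2015)
    "learning_rate": {"MT-BERT": 3e-5, "EM-PERSONA": 2e-5},
    "batch_size": 2,
    "epochs": 3,
    "dropout": 0.25,
    "self_attention_heads": 5,
    "embedding_dim": 200,
    "feed_forward_dim": 400,
    "task_dense_units": 100,
    "intermediate_dense_units": 200,
    "cv_folds": 10,
    "runs_averaged": 5,
}
```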
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA.
Srilakshmi Bharadwaj, Srinidhi Sridhar, Rahul Choudhary, and Ramamoorthy Srinath. 2018. Persona traits identification based on Myers-Briggs Type Indicator (MBTI) - a text classification approach. In 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI 2018), Bangalore, India, pages 1076-1082. IEEE.
Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, pages 103-111.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, Minneapolis, MN, USA, Volume 1 (Long and Short Papers), pages 4171-4186.
Simon D'Alfonso. 2020. AI in mental health. Current Opinion in Psychology, 36:112-117.
Alyson Gamble. 2020. Artificial intelligence and mobile apps for mental healthcare: a social informatics perspective. Aslib Journal of Information Management, 72(4):509-523.
Soumitra Ghosh, Asif Ekbal, and Pushpak Bhattacharyya. 2020. CEASE, a corpus of emotion annotated suicide notes in English. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), Marseille, France, pages 1618-1626.
Soumitra Ghosh, Asif Ekbal, and Pushpak Bhattacharyya. 2022. A multitask framework to detect depression, sentiment and multi-label emotion from suicide notes. Cognitive Computation, 14(1):110-129.
Lewis R. Goldberg. 1993. The structure of phenotypic personality traits. American Psychologist, 48(1):26.
Yibo Hu, Yang Li, Tao Yang, and Quan Pan. 2018. Short text classification with a convolutional neural networks based method. In 15th International Conference on Control, Automation, Robotics and Vision (ICARCV 2018), Singapore, pages 1432-1435. IEEE.
Shaoxiong Ji, Shirui Pan, Xue Li, Erik Cambria, Guodong Long, and Zi Huang. 2021. Suicidal ideation detection: A review of machine learning methods and applications. IEEE Transactions on Computational Social Systems, 8(1):214-226.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA.
Robert F. Krueger and Jennifer L. Tackett. 2006. Personality and Psychopathology. Guilford Press.
Ana Carolina E. S. Lima and Leandro Nunes de Castro. 2019. TECLA: A temperament and psychological type prediction framework from Twitter data. PLOS ONE, 14(3):e0212844.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Yash Mehta, Samin Fatehi, Amirmohammad Kazameini, Clemens Stachl, Erik Cambria, and Sauleh Eetemadi. 2020. Bottom-up and top-down: Predicting personality with psycholinguistic and language model features. In 20th IEEE International Conference on Data Mining (ICDM 2020), Sorrento, Italy, pages 1184-1189. IEEE.
Isabel Briggs Myers. 1962. The Myers-Briggs Type Indicator: Manual (1962).
Kile M. Ortigo, Drew Westen, and Rebekah Bradley. 2009. Personality subtypes of suicidal adults. The Journal of Nervous and Mental Disease, 197(9):687.
Yifan Peng, Qingyu Chen, and Zhiyong Lu. 2020. An empirical study of multi-task learning on BERT for biomedical text mining. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing (BioNLP 2020), Online, pages 205-214.
James W. Pennebaker and Laura A. King. 1999. Linguistic styles: language use as an individual difference. Journal of Personality and Social Psychology, 77(6):1296.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), Doha, Qatar, pages 1532-1543.
Maurizio Pompili, Zoltán Rihmer, Hagop S. Akiskal, Marco Innamorati, Paolo Iliceto, Kareen K. Akiskal, David Lester, Valentina Narciso, Stefano Ferracuti, Roberto Tatarelli, et al. 2008. Temperament and personality dimensions in suicidal and nonsuicidal psychiatric inpatients. Psychopathology, 41(5):313-321.
Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-dependent sentiment analysis in user-generated videos. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vancouver, Canada, Volume 1: Long Papers, pages 873-883.
Zhancheng Ren, Qiang Shen, Xiaolei Diao, and Hao Xu. 2021. A sentiment-aware deep learning approach for personality detection from text. Information Processing & Management, 58(3):102532.
Robert L. Spitzer, Jacob Cohen, Joseph L. Fleiss, and Jean Endicott. 1967. Quantification of agreement in psychiatric diagnosis: A new approach. Archives of General Psychiatry, 17(1):83-87.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, pages 5998-6008.
Haitao Wang, Keke Tian, Zhengjiang Wu, and Lei Wang. 2021. A short text classification method based on convolutional neural network and semantic extension. International Journal of Computational Intelligence Systems, 14(1):367-375.
Kosuke Yamada, Ryohei Sasano, and Koichi Takeda. 2019. Incorporating textual information on user behavior for personality prediction. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Student Research Workshop, Florence, Italy, pages 177-182.
Feifan Yang, Xiaojun Quan, Yunyi Yang, and Jianxing Yu. 2021. Multi-document transformer for personality detection. In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), Virtual Event, pages 14221-14229.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL HLT 2016, San Diego, California, USA, pages 1480-1489. |
219,301,180 | [] | Dialect MT: A Case Study between Cantonese and Mandarin
Xiaoheng Zhang (ctxzhang@polyu.edu.hk)
Dept. of Chinese & Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Dialect MT: A Case Study between Cantonese and Mandarin
Machine Translation (MT) need not be confined to inter-language activities. In this paper, we discuss inter-dialect MT in general and Cantonese-Mandarin MT in particular. Mandarin and Cantonese are two most important dialects of Chinese. The former is the national lingua franca and the latter is the most influential dialect in South China, Hong Kong and overseas. The difference in between is such that mutual intelligibility is impossible. This paper presents, from a computational point of view, a comparative study of Mandarin and Cantonese at the three aspects of sound systems, grammar rules and vocabulary contents, followed by a discussion of the design and implementation of a dialect MT system between them.
Introduction
Automatic Machine Translation (MT) between different languages, such as English, Chinese and Japanese, has been an attractive but extremely difficult research area. Over forty years of MT history has seen only limited practical translation systems developed or commercialized, in spite of the considerable development in computer science and linguistic studies. High quality machine translation between two languages requires deep understanding of the intended meaning of the source language sentences, which in turn involves disambiguation reasoning based on intelligent searches and proper uses of a great amount of relevant knowledge, including common sense (Nirenburg et al., 1992). The task is so demanding that some researchers are looking more seriously at machine-aided human translation as an alternative way to achieve automatic machine translation (Martin, 1997a, 1997b).
Translation or interpretation is not necessarily an inter-language activity. In many cases, it happens among dialects within a single language. Similarly, MT can be inter-dialect as well. In fact, automatic translation or interpretation seems much more practical and achievable here, since inter-dialect difference is much less serious than inter-language difference. Inter-dialect MT¹ also represents a promising market, especially in China. In the following sections we will discuss inter-dialect MT with special emphasis on the pair of Chinese Cantonese and Chinese Mandarin.
Dialects and Chinese Dialects
Dialects of a language are that language's systematic variations, developed when people of a common language are separated geographically and socially. Among this group of dialects, normally one serves as the lingua franca, namely, the common language medium for communication among speakers of different dialects. Inter-dialect differences exist in pronunciation, vocabulary and syntactic rules. However, they are usually insignificant in comparison with the similarities the dialects share. It has been declared that dialects of one language are mutually intelligible (Fromkin and Rodman 1993, p. 276).
Nevertheless, this is not true of the situation in China. There are seven major Chinese dialects: the Northern Dialect (with Mandarin as its standard version), Cantonese, Wu, Min, Hakka, Xiang and Gan (Yuan, 1989), which for the most part are mutually unintelligible, and inter-dialect translation is often found indispensable for successful communication, especially between Cantonese, the most popular and the most influential dialect in South China and overseas, and Mandarin, the lingua franca of China.
¹ In this paper, MT refers to both computer-based translation and interpretation.
Linguistic Consideration of Dialect MT
Most differences among the dialects of a language are found in their sound inventories and phonological systems. Words with similar written forms are often pronounced differently in different dialects. For example, the same Chinese word 香港 (Hong Kong) is pronounced xiang1gang3 in Mandarin, but hoeng1gong2 in Cantonese. There are also lexical differences, although dialects share most of their words. Different dialects may use different words to refer to the same thing. For example, the word "umbrella" is 雨伞 (yu3san3) in Mandarin, and 遮 (ze1) in Cantonese. Differences in syntactic structure are less common, but they are linguistically more complicated and computationally more challenging. For example, the positions of some adverbs may vary from dialect to dialect. To express "You go first", we have Mandarin:
ni3 xian1 zou3 (1)
you first go
A word entry in the bi-dialect dictionary may take a form such as:
word(n, yu3san3, ze1). %umbrella
where the word entry flag "word" is followed by three arguments: the part of speech and the corresponding words (in Chinese characters and pinyin) in Mandarin and in Cantonese. English comments are marked with "%".
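A Python counterpart of such dictionary entries might look as follows; the tuple layout is our own illustrative choice, with the two example word pairs taken from the paper.

```python
LEXICON = [
    # (part of speech, (Mandarin chars, pinyin), (Cantonese chars, pinyin))
    ("n", ("雨伞", "yu3san3"), ("遮", "ze1")),    # umbrella
    ("v", ("走",  "zou3"),     ("行", "hang4")),  # go, walk
]

def cantonese_for(mandarin_pinyin):
    """Return the Cantonese entries matching a Mandarin pinyin form."""
    return [cant for _, (_, man), cant in LEXICON if man == mandarin_pinyin]

print(cantonese_for("yu3san3"))   # [('遮', 'ze1')]
```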
Morphologically, there are some useful rules for word formation. For example, in Mandarin, the prefixes gong1 (公) and xiong2 (雄) are used for male animals, and mu3 (母) and ci2 (雌) for female animals; but in most southern China dialects, the corresponding morphemes are often used as suffixes instead. The problems caused by syntactic differences can be tackled with linguistic rules; for example, the rules below can be used for Cantonese-Mandarin MT of the previous example sentences:
Rule 1: NP xian1 VP <--> NP VP sin1
        (NP first VP <--> NP VP first)
Rule 2: bi3 NP ADJP <--> ADJP gwo3 NP
        (bi3 / gwo3: "than")
Rule 3: gei3 (%give) O_person O_thing <--> bei2 (%give) O_thing O_person
Inter-dialect syntactic differences largely exist in word order, so the key task for MT is to decide what part(s) of the source sentence should be moved, and to where. It seems unlikely for words to be moved over long distances, because dialects normally live in spoken, short sentences.
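Rule 1 can be sketched as a simple token-sequence rewrite; the toy POS lookup below stands in for a real analyzer, and only the word-order change is modelled (lexical substitution comes later in the pipeline).

```python
POS = {"nei5": "NP", "hang4": "VP"}   # toy part-of-speech lookup

def rule1_c2m(tokens):
    # Cantonese "NP VP sin1" -> Mandarin order "NP sin1 VP";
    # the words stay Cantonese here, substitution is a later step
    if len(tokens) == 3 and tokens[2] == "sin1" \
            and POS.get(tokens[0]) == "NP" and POS.get(tokens[1]) == "VP":
        return [tokens[0], "sin1", tokens[1]]
    return tokens

print(rule1_c2m(["nei5", "hang4", "sin1"]))   # ['nei5', 'sin1', 'hang4']
```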
Another problem to be considered is whether dialect MT should be direct or indirect, i.e., should there be an intermediate language/dialect? Indirect MT with the lingua franca as the intermediate representation medium seems promising. The advantage is twofold: (a) it is good for multi-dialect MT; (b) it is more useful and practical, as a lingua franca is the common and most influential dialect in the family, and maybe the only one with a complete written system.
Still another problem is the form of the source and target dialects for the MT program. Most MT systems nowadays translate between written languages; others are attempting speech-to-speech translation. For dialect MT, translation between written sentences is not that attractive, because the dialects of a language virtually share a common written system. On the other hand, speech-to-speech translation involves speech recognition and speech generation, which is a challenging research area by itself. It is worthwhile to take a middle way: translation at the level of phonetic symbols. There are at least three major reasons: (a) the largest difference among dialects exists in sound systems; (b) phonetic symbol translation is a prerequisite for speech translation; (c) some dialect words can only be represented in sound. In our case, pinyin has been selected to represent both input and output sentences, because in China pinyin is the most popular tool for learning dialects and for inputting Chinese characters to computers.
Chinese pinyin schemes, both for Mandarin and for ordinary dialects, are romanized, i.e., they use virtually only English letters, to the convenience of computer processing.
Of course, pinyin-to-pinyin translation is more difficult than translation between written words in Chinese block characters, because the former involves linguistic analysis at all three levels of sound systems, grammar rules and vocabulary contents, instead of two.
The Problem of Ambiguities
Ambiguity is always the most crucial and the most challenging problem for MT. Since inter-dialect differences mostly exist in words, both in pronunciation and in characters, our discussion will concentrate on word disambiguation for Cantonese-Mandarin MT. In the Cantonese vocabulary, there are about seven to eight thousand dialect words (including idioms and fixed phrases), i.e., those words with character forms different from any Mandarin word, or with meanings different from the Mandarin words of similar forms. These dialect words account for about one third of the total Cantonese vocabulary. In spoken Cantonese the frequency of use of Cantonese dialect words is close to 50 percent (Li et al., 1995, p. 236). Translation at the sound or pinyin level also has to deal with another kind of ambiguity: the homophones of a word in the source dialect may not have their counterpart synonyms in the target dialect pronounced as homophones as well. For example, the words for "banana" and "intersection" are both pronounced xiang1jiao1 in Mandarin, but in Cantonese they are pronounced hoeng1ziu1 and soeng1gaau1 respectively, though their written characters remain unchanged.
To tackle these ambiguities, we employ the techniques of hierarchical phrase analysis (Zhang and Lu, 1997) and word collocation processing (Sinclair, 1991), both rule-based and corpus-based. Briefly speaking, the hierarchical phrase analysis method first tries to resolve a word ambiguity in the context of the smallest phrase containing the ambiguous word(s), then in the next layer of embedding phrase if needed, and so on. As a result, the problem is solved within the minimally sufficient context. To further facilitate the work, a large number of commonly used phrases and phrase schemes are being collected into the dictionary. Furthermore, interaction between the users and the MT system should be allowed for difficult disambiguation (Martin, 1997a).
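As a toy illustration of collocation-based disambiguation, the function below picks between the two Mandarin readings of Cantonese hang4 using a small, invented collocation list.

```python
COLLOCATES_HANG2 = {"yi1", "pai2"}   # invented collocates of hang2 "row"

def disambiguate_hang4(tokens, i):
    # inspect up to two words to the left and right of position i
    window = set(tokens[max(0, i - 2):i] + tokens[i + 1:i + 3])
    return "hang2" if window & COLLOCATES_HANG2 else "zou3"

print(disambiguate_hang4(["nei5", "sin1", "hang4"], 2))   # zou3
```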
System Design and Implementation
A rudimentary design of a Cantonese-Mandarin dialect MT system has been made, as shown in Figure 1. The system takes Cantonese pinyin sentences as input and generates Mandarin sentences in Hanyu Pinyin and in Chinese characters. The translation is roughly done in three steps: syntax conversion, word disambiguation and source-target word substitution. The knowledge bases include linguistic rules, a word collocation list and a bi-dialect MT dictionary.
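The three steps can be sketched end to end on the running example; every component below is a toy stand-in for the system's actual Prolog/Java modules.

```python
def translate_c2m(tokens):
    # 1. syntax conversion: Rule 1 moves the final sin1 before the verb
    if tokens[-1:] == ["sin1"]:
        tokens = [tokens[0], "sin1"] + tokens[1:-1]
    # 2. word disambiguation: hang4 -> zou3 (go) in this context
    tokens = ["zou3" if t == "hang4" else t for t in tokens]
    # 3. source-target substitution via the bi-dialect dictionary
    subst = {"nei5": "ni3", "sin1": "xian1"}
    return [subst.get(t, t) for t in tokens]

print(translate_c2m(["nei5", "hang4", "sin1"]))   # ['ni3', 'xian1', 'zou3']
```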
A simplified example will make the basic ideas clearer. Suppose the example word entries and transformational rules in Section 2 are included in the MT system's knowledge base. Example sentence (2) in Cantonese, i.e.,
nei5 hang4 sin1 你行先 (2)
you go first
is given as input for the system to translate into Mandarin. Because the input sentence contains the time adverb sin1 (first), it is, according to the grammar rules, syntactically different from its counterpart in Mandarin. Following the flowchart, the Cantonese pinyin sentence is converted into a Mandarin structure. Rule 1 in the knowledge base is applied, producing
nei5 sin1 hang4
you first go
Then the dictionary is accessed. The Cantonese word hang4 (行) corresponds to two Mandarin words, i.e., 走 (vi. go, walk) and 行 (n. row); the dictionary gives the corresponding Mandarin words in pinyin and in characters.
Similarly, with transformational rules 1-3, a more complicated Cantonese sentence like
gou1gwo3 wo3 ge3 yan4 bei2 cin4 keoi5 sin1
tall-more me PART person give money him first
can be correctly translated into Mandarin:
bi3 wo3 gao1 de ren2 xian1 gei3 ta1 qian2
than me tall PART persons first give him money
"Those who are taller than me will give him some money first."
We are in the process of implementing an inter-dialect MT prototype, called CPC, for translation between Cantonese and Putonghua (i.e., Mandarin), in both directions: Cantonese-to-Putonghua and Putonghua-to-Cantonese. Input and output sentences are in pinyin or Chinese characters. The programming languages used are Prolog and Java. We are doing Cantonese-to-Putonghua first, based on the design. At its current state, we have built a Cantonese-Mandarin bi-dialect dictionary of about 3,000 words and phrases based on some well-established books (e.g., Zeng, 1984; Mai and Tang, 1997) (when completed, there will be around 10,000 word entries), and a handful of rules. A Cantonese-Mandarin dialect corpus is also being built. The program can process sentences of a number of typical patterns. The funded project has two immediate purposes: to facilitate language communication and to help Hong Kong students write standard Mandarin Chinese.
Conclusion
Compared with inter-language MT, inter-dialect MT is much more manageable, both linguistically and technically. Though generally ignored, the development of inter-dialect MT systems is both rewarding and more feasible. The present paper discusses the design and implementation of dialect MT systems at the pinyin and character levels, with special attention to Chinese Mandarin and Cantonese. When supported by modern technology for multimedia communication on the Internet and the WWW, dialect MT systems will produce even greater benefits (Zhang and Lau, 1996).
Nonetheless, the research reported in this paper can only be regarded as an initial exploratory step into a new and exciting research area. There is ample room for further research and discussion, especially in word disambiguation and syntax analysis. We should also note that the grammars of ordinary dialects are normally less well described than those of lingua francas.
"~:"(ci2) female animals. But in most southern China dialects, the suffixes "~-}/~i" and "~/~" are often used instead.
Figure 1: A design for Cantonese-Mandarin MT.
Because of historical reasons, Hong Kong Cantonese is linguistically more distant from Mandarin than in other regions of Mainland China. One can easily spot Cantonese dialect articles in Hong Kong newspapers which are totally unintelligible to Mandarin speakers, while Mandarin articles are easily understood by Cantonese speakers. To translate a Cantonese article into Mandarin, the primary task is to deal with the Cantonese dialect words, especially those that do not have semantically equivalent counterparts in the target dialect. For example, the Mandarin 橘 (ju2, orange) has a much larger coverage than the Cantonese 桔 (gwat1). In addition to the Cantonese 桔, the Mandarin 橘 also includes the fruits Cantonese refers to as 柑 (gam1) and 橙 (caang2). On the other hand, the Cantonese word semantically covers the Mandarin …
Acknowledgements
The research is funded by Hong Kong Polytechnic University, under the project account number 0353 131 A3 720.
Fromkin V. and Rodman R. (1993) An Introduction to Language (5th edition). Harcourt Brace Jovanovich College Publishers, Orlando, Florida, USA, p. 276.
Li X., Huang J., Shi Q., Mai Y. and Chen D. (1995) Guangzhou Fangyan Yanjiu (Research in Cantonese Dialect). Guangdong People's Press, Guangzhou, China, p. 236.
LICASS (Language Institute, the Chinese Academy of Social Sciences) (1996) Xiandai Hanyu Cidian (Contemporary Chinese Dictionary). Commercial Press, Beijing, China.
LSHK (1997) Yueyu Pinyin Zibiao (The Chinese Character List with Cantonese Pinyin). Linguistic Society of Hong Kong, Hong Kong.
Mai Y. and Tang B. (1997) Shiyong Guangzhouhua Fenlei Cidian (A Practical Semantically-Classified Dictionary of Cantonese). Guangdong People's Press, Guangzhou, China.
Martin K. (1997a) The proper place of men and machines in language translation. Machine Translation, 1-2/12, pp. 3-23.
Martin K. (1997b) It's still the proper place. Machine Translation, 1-2/12, pp. 35-38.
Nirenburg S., Carbonell J., Tomita M. and Goodman K. (1992) Machine Translation: A Knowledge-Based Approach. Morgan Kaufmann Publishers, San Mateo, California, USA.
Sinclair J. (1991) Corpus, Concordance and Collocation. Collins, London, UK.
Yuan J. (1989) Hanyu Fangyan Gaiyao (Introduction to Chinese Dialects). Wenzi Gaige Press, Beijing, China.
Zeng Z. F. (1984) Guangzhouhua-Putonghua Kouyu Duiyi Shouce (A Translation Manual of Cantonese/Mandarin Spoken Words and Phrases). Joint Publishing, Hong Kong.
Zhang X. and Lau C. F. (1996) Chinese inter-dialect machine translation on the Web. In "Collaboration via the Virtual Orient Express: Proceedings of the Asia-Pacific World Wide Web Conference", S. Mak, F. Castro & J. Bacon-Shone, ed., Hong Kong University, pp. 419-429.
Zhang X. and Lu F. (1997) Intelligent Chinese pinyin-character conversion based on phrase analysis and dynamic semantic collocation. In "Language Engineering", L. Chen and Q. Yuan, ed., Tsinghua University Press, Beijing, China, pp. 389-395. |
61,218,567 | Détection de la cohésion lexicale par voisinage distributionnel : application à la segmentation thématique | The present work takes place within the VOILADIS project (Lexical neighborhood for discourse analysis), whose purpose is to exploit lexical cohesion markers in the study of various discursive phenomena. We want to show the relevance of a distribution-based lexical resource to locate interesting relations between lexical items in a text. We call neighbors lexical items that share a significant number of syntactic contexts in a given corpus. In order to evaluate the usefulness of such a resource, we address the task of topical segmentation of text, which generally makes use of some kind of lexical relations. We discuss here the importance of the particular resource used for the task of text segmentation. Using a system inspired by (Hearst, 1997), we show that lexical neighbors provide better results than a classical resource. Keywords: lexical cohesion, lexical resources, distributional analysis, text segmentation. | [
15754496,
8574660,
2384391,
6048999,
170558160,
39184340,
1049
] | Détection de la cohésion lexicale par voisinage distributionnel : application à la segmentation thématique
2009. Senlis, June 24-26, 2009
Clémentine Adam adam@univ-tlse2.fr
CLLE
Université de Toulouse & CNRS
François Morlane-Hondère morlanehondere@gmail.com
CLLE
Université de Toulouse & CNRS
Détection de la cohésion lexicale par voisinage distributionnel : application à la segmentation thématique
2009. Senlis, June 24-26, 2009
Keywords: lexical cohesion, lexical resources, distributional analysis, text segmentation
Clémentine Adam, François Morlane-Hondère
Abstract. The present work takes place within the VOILADIS project (Lexical neighborhood for discourse analysis), whose purpose is to exploit lexical cohesion markers in the study of various discursive phenomena. We want to show the relevance of a distribution-based lexical resource to locate interesting relations between lexical items in a text. We call neighbors lexical items that share a significant number of syntactic contexts in a given corpus. In order to evaluate the usefulness of such a resource, we address the task of topical segmentation of text, which generally makes use of some kind of lexical relations. We discuss here the importance of the particular resource used for the task of text segmentation. Using a system inspired by (Hearst, 1997), we show that lexical neighbors provide better results than a classical resource.
Keywords: lexical cohesion, lexical resources, distributional analysis, text segmentation.
Introduction
The study of discourse structure is a field of linguistics that has generated much work since the sixties and currently enjoys renewed interest due to the issues it raises for natural language processing. Automating the identification of discourse structures could indeed have a significant impact on any NLP application requiring a view of texts that goes beyond the simple "bag of words": text mining, automatic summarization, intra-document navigation, etc. (Péry-Woodley & Scott, 2006). Discourse analysis rests on the observation that a text is not a mere succession of sentences, but a coherent whole. This coherence, an intrinsic property of texts, is reflected by observables known as cohesion markers. The concept of cohesion encompasses all the phenomena that link the sentences of a text together, thus contributing to its texture (Halliday & Hasan, 1976). The cohesive devices classically considered, following Halliday & Hasan, are reference, substitution, ellipsis, conjunction and, above all, lexical cohesion, which is recognized as the main vector of texture (Hoey, 1991).
The VOILADIS project¹ (VOIsinage Lexical pour l'Analyse du DIScours — lexical neighborhood for discourse analysis), within which this study takes place, aims at using lexical cues to bring discourse phenomena to light, with automation in view. This field is still little explored: lexical cohesion remains under-exploited on the application side because it is difficult to grasp, and therefore difficult to detect automatically. Indeed, it generally resides in non-classical relations that lexicons do not record (Morris & Hirst, 2004).
Within the VOILADIS project, the resource we mobilize is a database of distributional neighbors: the automatic distributional analysis of large corpora makes it possible to bring together words with similar contexts of occurrence, which tend to be linked by a semantic relation that often goes beyond traditional classifications. This method also provides a resource that truly reflects the relations operating in a given text, in the sense that the distributional database is above all the product of the syntactic analysis of the corpus. The long-term objective of the VOILADIS project, still in its exploratory phase, is to assess the contribution of such a resource to different approaches to discourse.
As a first step, we turned to an application that has made particular use of lexical cues: topic segmentation. This rather empirical approach to discourse aims at identifying text segments, blocks that are homogeneous with respect to their topic. This task is among the most dependent on cohesion phenomena, in the sense that topical zones are defined only by the fact that they happen to be particularly cohesive, which suggests that the more effective the detection of lexical relations, the better the zones will be delimited. To evaluate the relevance of our distributional neighbor database for detecting lexical cohesion links, we therefore chose to measure its contribution to an automatic segmentation system.
In the remainder of this article, we discuss the dependence of topic segmentation on taking lexical cohesion into account, whether in a basic or more refined way. We then describe the neighbor database we used and, more broadly, the method that produced it, and discuss a priori its relevance for capturing varied lexical relations. Finally, we report the approach we followed to demonstrate the contribution of this resource to a topic segmentation system.
2 Segmentation thématique et cohésion lexicale Le but de la segmentation thématique est d'effectuer le pavage d'un texte en segments consécutifs censés présenter une homogénéité du point de vue de leurs thèmes. Cette tâche peut permettre d'améliorer les performances de diverses applications : recherche d'information - (Callan et al., 1992), entre autres, a montré qu'un système de recherche d'information gagne à indexer des unités inférieures au document -, résumé automatique (Brunn et al., 2001), extraction d'information, etc.
Many algorithms have been developed for topic segmentation; they can be roughly grouped into two families (Hernandez, 2004): (a) those that traverse the text linearly with a sliding observation window, proceeding bottom-up, and (b) those that compute a similarity matrix over all the units of the text before deciding where to place the breaks, proceeding top-down (Malioutov & Barzilay, 2006). The best-known representatives of these two families are, respectively, the TextTiling algorithm of (Hearst, 1997) and the C99 algorithm of (Choi, 2000).
Despite the variety of approaches, the differences between algorithms do not appear to be crucial. What matters above all are the cues used to measure the similarity between segmentation units, whether at the local or the global level. To measure the "strength" of lexical cohesion between two stretches of text, one relies on the number of (possibly weighted) links between the lexical units they contain. These links can be of various kinds.
Many topic segmentation systems restrict themselves to lexical repetition links, that is, repetitions of forms, truncated forms or lemmas (Hearst, 1997; Choi, 2000); one extension is to also take repeated n-grams into account and give them a higher weight (Beeferman et al., 1997). The drawback of these approaches is that the scores are then based on a very small number of occurrences, so that many links contributing to cohesion are ignored. To overcome this problem, an external resource is needed. As far as we know, this solution is always reported as improving system performance, to the point that authors sometimes put more emphasis on the resource they use than on the originality of their algorithm. The resources involved vary greatly with respect to the lexical relations they make detectable.
Some use a generic resource built from a dictionary or a thesaurus (Kozima, 1993; Lin et al., 2004; Morris & Hirst, 2004). Synonymy links, and at best links instantiating other classical relations such as hypernymy and antonymy, can then be taken into account. Others rely on resources built from corpora (Choi et al., 2001; Bolshakov & Gelbukh, 2001; Ferret, 2002). The methods used to build these resources vary, but they always revolve around the extraction of collocations or co-occurrences. For instance, latent semantic analysis (LSA) assesses the semantic proximity of word pairs from their co-occurrences within the same sentences, paragraphs or texts across a corpus (Choi et al., 2001).
Authors advocating corpus-based approaches generally report better performance. Indeed, the lexical links involved in lexical cohesion go well beyond the traditional relations. Classically, following (Halliday & Hasan, 1976), one distinguishes between:
- reiteration relations, which cover lexical repetition, resumption by a synonym or a hypernym, and even other classical paradigmatic relations such as antonymy and meronymy;
- so-called collocation relations, which associate words that tend to appear together but do not fall under reiteration. These are not necessarily syntagmatic relations; the term "collocation" is taken here in a broader sense.
Studies (Morris & Hirst, 2004) have shown that, for readers, the relations most relevant to identifying discourse structures are in most cases relations that escape the traditional typologies. When it comes to interpreting a text, relations such as synonymy, antonymy, etc. give way to "non-classical" relations, which are harder to define because they depend more on the words between which they hold (chien/aboyer, abeille/miel, etc.). Building a resource from a corpus thus makes it possible to capture more varied relations, which are also more relevant to discourse structure. Our work follows this line. But we do not limit ourselves to extracting collocations or co-occurrences: the originality of our resource is that it is not confined to syntagmatic relations. Built through the distributional analysis of a corpus, it rests on richer linguistic information, capable of bringing out paradigmatic relations.
3 Our resource: distributional neighbors

The neighbor base used in this study was generated by the Upéry program (Bourigault, 2002) from a corpus made up of all the articles of the French-language version of Wikipédia, i.e. more than 470,000 articles amounting to 194 million words². These data were first processed with the Syntex parser (Bourigault, 2007), which relies on dependency analysis to produce a representation of the text particularly well suited to the distributional method. The process that yields a base of distributional neighbors from a corpus has three steps:
1. the corpus is tagged with TreeTagger³;
2. it is then processed by Syntex, which extracts the syntactic relations as <governor, relation, dependent> triples. For instance, in the sentence Pierre mange un biscuit, the program identifies the triples <manger, suj, Pierre> and <manger, obj, biscuit>. When the syntactic dependency goes through a preposition, the preposition takes the place of the relation within the triple (biscuit au chocolat is represented as <biscuit, à, chocolat>);
3. to ease their processing by the Upéry software, the triples are recast as <predicate, argument> pairs, where the predicate is the governor with the relation attached to it and the argument is the dependent (<biscuit, à, chocolat> becomes <biscuit_à, chocolat>). This formalization enables a twofold comparison: predicates sharing the same arguments are brought together, and so are arguments sharing the same predicates.

Concretely, in our neighbor base, the predicate manger_obj is brought close to se nourrir_de through the arguments pousse, bourgeon, crustacé, etc., and the argument biscuit is brought close to sucre through the predicates fabrique_de, marque_de, production_de, etc. During this step, the program assigns each pair of neighbors a proximity score indicating how similar the distributions of the two words are. This score is obtained with Lin's measure (Lin, 1998) and is computed in two steps. The first consists in computing the quantity of information (QI) of each predicate and argument, that is, the ratio of the number of arguments/predicates with which it combines in the corpus to the total number of arguments/predicates with which it could combine. The second brings arguments/predicates together on the basis of the predicates/arguments they share: the QI of the syntactic co-occurrents common to the two neighbors (multiplied by 2) is divided by the sum of the respective QIs of the two neighbors.
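This two-step computation can be illustrated with a short sketch. The Python code below is a minimal rendering of a Lin-style similarity, not the actual Upéry implementation; the function names, the toy counts, and the use of negative log relative frequency as a stand-in for the quantity of information are our own assumptions.

```python
import math

def lin_similarity(contexts_a, contexts_b, context_counts, total):
    """Lin-style similarity between two words from their shared contexts.

    contexts_a / contexts_b: sets of syntactic contexts (e.g. predicates)
    observed with each word; context_counts: corpus frequency of each
    context; total: sum of all context frequencies. The information
    carried by a set of contexts is the sum of -log relative frequencies.
    """
    def info(contexts):
        return sum(-math.log(context_counts[c] / total) for c in contexts)

    denom = info(contexts_a) + info(contexts_b)
    return 2.0 * info(contexts_a & contexts_b) / denom if denom else 0.0

# Toy example: two arguments described by the predicates they occur with.
counts = {"fabrique_de": 40, "marque_de": 25, "production_de": 10,
          "couleur_de": 300, "idée_de": 500}
total = sum(counts.values())
biscuit = {"fabrique_de", "marque_de", "production_de", "couleur_de"}
sucre = {"fabrique_de", "marque_de", "production_de"}
print(round(lin_similarity(biscuit, sucre, counts, total), 3))
```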
This method has the advantage of allowing cross-categorial pairings, which are sorely lacking in the resources currently available. For instance, the verbal predicate manger_obj is also associated with nominal predicates such as bouillon_de, cuisson_de, recette_de or plat_de through shared arguments such as viande, poulet, poisson, spaghetti, etc.
The segmentation algorithm

To segment the texts of the corpus, we opted for a linear, sliding-window approach in the manner of (Hearst, 1997). This approach is not necessarily the best-performing one, but we preferred it for its ease of implementation; our aim here is not raw performance but a comparison between resources, so what mattered above all was to apply the same algorithm with each resource, whatever that algorithm might be.
Our basic unit is the sentence; the observation window used to compute similarity scores spans 6 units (the parameter value recommended by (Hearst, 1997)). Thus, at the end of each sentence, a score based on the links between two blocks of three sentences is computed (Figure 4). The resulting score curve is then smoothed. All valleys are located and their depths computed. A valley is a point on the curve surrounded by points with higher values (i.e., a local minimum). To determine a valley's depth, we climb on both sides of the point under consideration as long as we encounter higher values; the depth is the mean of the two rises thus measured (to the left and to the right of the point). Valleys whose depth exceeds the standard deviation from the mean are taken to correspond to the text's breaks. These breaks, located between two sentences, are then snapped to the nearest paragraph boundary, which yields the final segmented text.
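The valley-detection step can be sketched as follows. This is our own minimal Python rendering, not the authors' implementation; the function names, the smoothing width, and the exact boundary threshold (the text's criterion admits more than one reading) are assumptions.

```python
import numpy as np

def detect_breaks(scores, smooth_width=3):
    """Locate topic breaks as deep valleys in a cohesion-score curve.

    scores: one cohesion score per inter-sentence position.
    """
    kernel = np.ones(smooth_width) / smooth_width
    curve = np.convolve(np.asarray(scores, dtype=float), kernel, mode="same")

    valleys = []
    for i in range(1, len(curve) - 1):
        if curve[i - 1] > curve[i] < curve[i + 1]:       # local minimum
            l = i
            while l > 0 and curve[l - 1] > curve[l]:     # climb left while rising
                l -= 1
            r = i
            while r < len(curve) - 1 and curve[r + 1] > curve[r]:  # climb right
                r += 1
            depth = ((curve[l] - curve[i]) + (curve[r] - curve[i])) / 2
            valleys.append((i, depth))

    if not valleys:
        return []
    depths = np.array([d for _, d in valleys])
    # One reading of the criterion: keep valleys whose depth deviates
    # from the mean depth by more than one standard deviation.
    threshold = depths.mean() + depths.std()
    return [pos for pos, d in valleys if d > threshold]

# Smoothing disabled for this tiny example; the deep valley at index 7 is kept.
print(detect_breaks([5, 4.8, 5, 4.7, 5, 4.9, 5, 1, 5, 4.8, 5], smooth_width=1))
```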
Decisions regarding evaluation

Evaluating a topic segmentation system is tricky. Many problems arise, which can roughly be reduced to two questions: (a) Which reference? (b) Which evaluation measure?
(a) Evaluating automatic segmentation requires comparing it to a reference segmentation. Some rely on manual annotation for this, but they generally report very low inter-annotator agreement. Others choose to concatenate sequences taken from different texts, the topic breaks then being the breaks between texts. This position raises an obvious circularity problem: one manufactures the very object one postulates. For this experiment, we decided to use the positions of section titles as reference breaks.
(b) The usual precision and recall scores are not suited to evaluating a topic segmentation system. In particular, they cannot reflect the fact that a break close to a reference break is better than a distant one. Other scores have been proposed, the most widely used being the Pk measure (Beeferman et al., 1999) and WindowDiff (Pevzner & Hearst, 2002). The Pk measure counts the number of times two words taken at a distance k fall in the same segment both in the reference and in the hypothesis. The WindowDiff measure computes the difference in the number of breaks within a sliding window. We report our results under both measures.
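For concreteness, the following Python sketch implements both measures as we understand them from the cited papers. This is a minimal version; the convention of setting k to half the mean reference segment length is an assumption carried over from common practice.

```python
def _window(ref, k):
    """Default window size: half the mean reference segment length."""
    if k is None:
        n_segments = sum(ref) + 1
        k = max(1, round(len(ref) / (2 * n_segments)))
    return k

def p_k(ref, hyp, k=None):
    """Pk (Beeferman et al., 1999). ref/hyp are 0/1 lists where 1 means a
    break follows unit i. Counts how often the two segmentations disagree
    on whether units i and i+k lie in the same segment."""
    k = _window(ref, k)
    n = len(ref)
    disagree = sum((sum(ref[i:i + k]) == 0) != (sum(hyp[i:i + k]) == 0)
                   for i in range(n - k))
    return disagree / (n - k)

def window_diff(ref, hyp, k=None):
    """WindowDiff (Pevzner & Hearst, 2002): compares the number of breaks
    inside a sliding window of size k."""
    k = _window(ref, k)
    n = len(ref)
    errors = sum(sum(ref[i:i + k]) != sum(hyp[i:i + k])
                 for i in range(n - k))
    return errors / (n - k)

ref = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
hyp = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0]
print(p_k(ref, hyp), window_diff(ref, hyp))   # lower is better for both
```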
Results

Table 1 reports the results obtained by the segmentation algorithm on the corpus described above, according to the cohesive links taken into account (repetition, synonymy or distributional neighborhood). The scores shown are the means of the scores obtained for each text. Note that with the Pk and WindowDiff measures, a lower score reflects better performance. To put these results in perspective, we also report the results obtained with randomly placed breaks, the number of reference breaks being approximately known⁶. These results are respectable given the difficulty of the task: each approach yields a segmentation significantly better than chance.

6. For each text, a variation of ±3 breaks relative to the reference is allowed.
Overall, the results observed with the different types of cohesive links are fairly close. The use of neighborhood links nevertheless appears justified, since it yields the best performance in this experiment, whereas synonyms barely stand out from the basic repetition-based approach. This confirms that the neighbors allow a finer detection of lexical cohesion, at least by the yardstick we chose: the performance of a topic segmentation system.
Conclusions and perspectives
The objective of this study was to show the relevance of distributional neighborhood for detecting lexical cohesion. To that end, we involved the neighbors recorded in our resource in a topic segmentation system. The results show a significant contribution from this resource. We have thus seen that a resource obtained through distributional analysis has advantages that traditional resources lack. This experiment deserves to be pursued further; in particular, we would like to compare the neighbors against a more similar resource, for instance collocations, and it would also be interesting to study ways of combining resources.
As stated in the introduction, topic segmentation is not an end in itself for us. If our goal here was to locate cohesive zones, it was above all to compare different methods for detecting lexical links. Indeed, the VOILADIS project is committed to a discourse analysis that goes beyond mere topical partitioning. Relying on a resource obtained through distributional analysis, we hope to develop a method for detecting lexical cohesion links effective enough to capture specific features in the discourse workings of texts, in the spirit of the topic openings and topic closings that (Hoey, 1991) identifies based on the part of the text toward which cohesive links point.
FIGURE 1 - Repetition links
FIGURE 2 - Synonymy links

In the excerpt shown, repetition yields only 3 links where synonymy and neighborhood detect 7 and 8 links respectively. Repetition is nonetheless, in this example, the only method that weaves cohesive links between proper nouns (here, Danube). The synonymy links mostly concern adjectives, with grand, haut and long, all synonyms of one another. The synonymy link between s'étendent and contiennent is hard to interpret, at least in this context: here we touch the limits of a resource built in abstracto, in the sense that the lexical relations it records are in no way tailored to a text type or a particular corpus, unlike the neighbor base, which is built dynamically. In this example, the neighborhood relations link words that Dicosyn lists as synonyms (plaine/vallée) or as co-hyponyms (nord/sud-est/sud-ouest), but also words whose relations are harder to categorize, such as pays/frontière or frontière/nord. These last relations are the most valuable to us, since detecting them is one of the distinctive strengths of our method. Even if one might consider the first pair, pays/frontière, to instantiate a meronymy relation, it is hard to put a name on the frontière/nord relation. Insofar as these two words belong to the same semantic field (that of geography) and their mode of connection escapes any classification, we can regard this as a case of collocation in the sense of (Halliday & Hasan, 1976). Independently of the performance figures presented in this article, we can already see that the neighbors are of obvious interest, in that they bring to light links that no classical resource would detect.

5. For each of the 30 texts of n paragraphs, there are n − 1 possible breaks.
FIGURE 3 - Distributional neighborhood links
FIGURE 4 - Representation of the score computation with a sliding window

The score computed is $S = \log \frac{N_{\text{links}}}{N_{\text{possible links}}}$. The number of links $N_{\text{links}}$ is the number of pairs of words judged similar that straddle the two blocks of three sentences. The number of possible links $N_{\text{possible links}}$ is the product of the numbers of linkable words in each block (i.e., nouns, adjectives and verbs).
1. A project of the PRES Toulouse coordinated by Cécile Fabre, involving researchers from the IRIT (LiLac team) and CLLE-ERSS (TAL and S'caladis groups) laboratories.
2. This is a version of Wikipedia dated April 2007. Its processing, as well as the creation of the neighbor base, is the work of Franck Sajous (CLLE-ERSS).
3. University of Stuttgart (www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/).
The resulting base contains about four million pairs, which exhibit very heterogeneous relations: studies such as (Bourigault & Galy, 2005) or (Fabre & Bourigault, 2006) have shown that it is difficult to derive a typology of the extracted neighborhood relations; this is a consequence of applying distributional analysis methods to a non-specialized corpus, which displays less redundancy and therefore weaker syntactic restrictions. In return, this resource offers the possibility of capturing a wide range of semantic proximity relations, provided filters are applied to the neighborhood scores.

4 Distributional neighbors and topic segmentation

The goal of this experiment is to show the relevance of distributional neighborhood for detecting lexical cohesion links, based on the results of a topic segmentation system. To this end, we implemented a segmentation algorithm inspired by TextTiling (Hearst, 1997) and based solely on lexical links. We submitted the same corpus to the system, each time specifying different links: only lexical repetition links at first; then synonymy links identified with a dictionary of synonyms⁴; and finally, distributional neighborhood links. In the rest of this section we describe the successive steps of this experiment: corpus construction, projection of the lexical links, application of the topic segmentation algorithm, and evaluation.
Corpus

The corpus we use consists of 30 articles from the online encyclopedia Wikipedia (in the version used to build the base of distributional neighbors). These articles all deal with places: countries (e.g., Danemark) or cities (Salzbourg). Indeed, according to our observations, in this category of articles the different sections generally correspond to different "topics" (history, geography, culture, etc.). This justifies our use of titles as reference breaks when evaluating the resulting segmentation. The corpus is divided into 1584 paragraphs (hence 1584 − 30 = 1554 possible breaks⁵) and contains 302 section titles (hence 302 reference breaks).
Projection of cohesive links onto the corpus

We link pairs of words in the corpus without weighting the links between them. Only links reaching beyond the sentence are taken into account. For our first baseline, all repetitions of noun, verb and adjective lemmas are marked. For the second, all the synonym pairs recorded in our dictionary are projected. To project the neighbors, we had to set several thresholds (empirically), owing to the sheer size of the resource: the pairs projected are those whose Lin score exceeds 0.25 and in which each member of the pair is among the 15 best neighbors of the other. Figures 1, 2 and 3 visualize the links obtained with the three approaches described, for an excerpt from the article Slovaquie. One can observe the scarcity of the repetition links (see the discussion accompanying Figures 1-3).
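This filtering step can be sketched as follows. The data structures and function name below are hypothetical; only the two thresholds (Lin score above 0.25, mutual top-15 neighbors) come from the text, and the beyond-the-sentence restriction is omitted for brevity.

```python
def project_neighbor_links(lemmas, lin_score, ranked_neighbors,
                           min_score=0.25, top_k=15):
    """Return the lemma pairs to link as distributional neighbors.

    lin_score: dict mapping a frozenset({a, b}) to its Lin score;
    ranked_neighbors: dict mapping a lemma to its neighbors, best first.
    A pair is kept if its score exceeds min_score and each member is
    among the top_k neighbors of the other.
    """
    links = []
    for i, a in enumerate(lemmas):
        for b in lemmas[i + 1:]:
            score = lin_score.get(frozenset((a, b)), 0.0)
            mutual = (a in ranked_neighbors.get(b, [])[:top_k]
                      and b in ranked_neighbors.get(a, [])[:top_k])
            if score > min_score and mutual:
                links.append((a, b))
    return links

scores = {frozenset(("plaine", "vallée")): 0.31}
neighbors = {"plaine": ["vallée"], "vallée": ["plaine"]}
print(project_neighbor_links(["plaine", "rivière", "vallée"], scores, neighbors))
```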
TABLE 1 - Topic segmentation performance according to the links taken into account

Links taken into account | Pk    | WindowDiff
Random                   | 0.436 | 0.452
Repetition               | 0.353 | 0.359
Synonymy                 | 0.349 | 0.358
Neighborhood             | 0.329 | 0.336
4. The Dicosyn dictionary. Developed at CRISCO (Université de Caen), it gathers the synonyms found in seven classical dictionaries, namely the Bailly, the Benac, the Du Chazaud, the Guizot, the Lafaye, the Larousse and the Robert. It has about 49,000 entries covering 396,000 synonymy relations and can be queried online at: http://www.crisco.unicaen.fr/cgi-bin/cherches.cgi
PÉRY-WOODLEY & SCOTT, Eds. (2006). Discours et Document : traitements automatiques. Numéro thématique, volume TAL 47(2).
BEEFERMAN D., BERGER A. & LAFFERTY J. (1997). Text segmentation using exponential models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, p. 35-46, Providence.
BEEFERMAN D., BERGER A. & LAFFERTY J. (1999). Statistical models for text segmentation. Mach. Learn., 34(1-3), 177-210.
BOLSHAKOV I. A. & GELBUKH A. (2001). Text segmentation into paragraphs based on local text cohesion. In TSD '01: Proceedings of the 4th International Conference on Text, Speech and Dialogue, p. 158-166, Zelezna Ruda.
BOURIGAULT D. (2002). UPERY : un outil d'analyse distributionnelle étendue pour la construction d'ontologies à partir de corpus. In Actes de la 9e conférence sur le Traitement Automatique de la Langue Naturelle, Nancy.
BOURIGAULT D. (2007). Un analyseur syntaxique opérationnel : SYNTEX. Habilitation à diriger des recherches, Université Toulouse II - Le Mirail.
BOURIGAULT D. & GALY E. (2005). Analyse distributionnelle de corpus de langue générale et synonymie. In 4es Journées de la linguistique de corpus, p. 163-174, Lorient.
BRUNN M., CHALI Y. & PINCHAK C. J. (2001). Text summarization using lexical chains. In Proceedings of the Document Understanding Conference (DUC 2001), p. 135-140, Nouvelle Orléans.
CALLAN J. P., CROFT W. B. & HARDING S. M. (1992). The INQUERY retrieval system. In Proceedings of the Third International Conference on Database and Expert Systems Applications, p. 78-83.
CHOI F. Y. Y. (2000). Advances in domain independent linear text segmentation. In Proceedings of the First Conference on the North American Chapter of the Association for Computational Linguistics, p. 26-33, San Francisco.
CHOI F. Y. Y., WIEMER-HASTINGS P. & MOORE J. (2001). Latent semantic analysis for text segmentation. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, p. 109-117, Pittsburgh.
FABRE C. & BOURIGAULT D. (2006). Extraction de relations sémantiques entre noms et verbes au-delà des liens morphologiques. In Actes de la 13e conférence sur le Traitement Automatique de la Langue Naturelle, Louvain.
FERRET O. (2002). Segmenter et structurer thématiquement des textes par l'utilisation conjointe de collocations et de la récurrence lexicale. In Actes de TALN 2002, p. 155-165, Nancy.
HALLIDAY M. A. K. & HASAN R. (1976). Cohesion in English. Longman (Londres).
HEARST M. A. (1997). TextTiling: segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1), 33-64.
HERNANDEZ N. (2004). Description et détection automatique de structures de textes. PhD thesis, Université Paris-Sud.
HOEY M. (1991). Patterns of lexis in text. Oxford University Press (Oxford).
KOZIMA H. (1993). Text segmentation based on similarity between words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, p. 286-288, Columbus.
LIN D. (1998). An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning, p. 296-304, Madison.
LIN M., NUNAMAKER JR. J. F., CHAU M. & CHEN H. (2004). Segmentation of lecture videos based on text: a method combining multiple linguistic features. In Proceedings of the 37th Annual Hawaii International Conference on System Sciences, Hawaii.
MALIOUTOV I. & BARZILAY R. (2006). Minimum cut model for spoken lecture segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (ACL-44), p. 25-32, Morristown, NJ, USA: Association for Computational Linguistics.
MORRIS J. & HIRST G. (2004). Non-classical lexical semantic relations. In Proceedings of the HLT Workshop on Computational Lexical Semantics, p. 46-51, Boston.
PEVZNER L. & HEARST M. A. (2002). A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28, 1-19.
222,176,890 | Attention is Not Only a Weight: Analyzing Transformers with Vector Norms | Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing. Hence, attention is being extensively studied to investigate various linguistic capabilities of Transformers, focusing on analyzing the parallels between attention weights and specific linguistic phenomena. This paper shows that attention weights alone are only one of the two factors that determine the output of attention and proposes a norm-based analysis that incorporates the second factor, the norm of the transformed input vectors. The findings of our norm-based analyses of BERT and a Transformer-based neural machine translation system include the following: (i) contrary to previous studies, BERT pays poor attention to special tokens, and (ii) reasonable word alignment can be extracted from attention mechanisms of Transformer. These findings provide insights into the inner workings of Transformers. | [
196176486,
52967399,
208100714,
195584234,
8476273,
174799346,
5284722,
67855860,
201645145,
184486746,
202888986,
5219389,
195477534
] | Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
November 16-20, 2020
Goro Kobayashi
Tohoku University
Tatsuki Kuribayashi kuribayashi@ecei.tohoku.ac.jp
Tohoku University
Langsmith Inc.
RIKEN
Sho Yokoi yokoi@ecei.tohoku.ac.jp
Tohoku University
Kentaro Inui inui@ecei.tohoku.ac.jp
Tohoku University
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing
Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing. Hence, attention is being extensively studied to investigate various linguistic capabilities of Transformers, focusing on analyzing the parallels between attention weights and specific linguistic phenomena. This paper shows that attention weights alone are only one of the two factors that determine the output of attention and proposes a norm-based analysis that incorporates the second factor, the norm of the transformed input vectors. The findings of our norm-based analyses of BERT and a Transformer-based neural machine translation system include the following: (i) contrary to previous studies, BERT pays poor attention to special tokens, and (ii) reasonable word alignment can be extracted from attention mechanisms of Transformer. These findings provide insights into the inner workings of Transformers.
Introduction
Transformers (Vaswani et al., 2017; Devlin et al., 2019; Yang et al., 2019; Lan et al., 2020) have improved the state-of-the-art in a wide range of natural language processing tasks. The success of the models has not yet been sufficiently explained; hence, substantial research has focused on assessing the linguistic capabilities of these models (Rogers et al., 2020; Clark et al., 2019).
One of the main features of Transformers is that they utilize an attention mechanism without the use of recurrent or convolutional layers. The attention mechanism computes an output vector by accumulating relevant information from a sequence of input vectors. Specifically, it assigns attention weights (i.e., relevance) to each input, and sums up the input vectors based on their weights. The analysis of correlations between attention weights and various linguistic phenomena (i.e., weight-based analysis) is a prominent research area (Clark et al., 2019; Kovaleva et al., 2019; Reif et al., 2019; Lin et al., 2019; Mareček and Rosa, 2019; Htut et al., 2019; Raganato and Tiedemann, 2018; Tang et al., 2018).
This paper first shows that weight-based analysis is insufficient for analyzing the attention mechanism. Weight-based analysis is a common approach that analyzes the attention mechanism by simply tracking attention weights. The attention mechanism can be expressed as a weighted sum of linearly transformed vectors (Section 2.2); however, weight-based analysis ignores the effect of the transformed vectors. We propose a norm-based analysis that incorporates this previously ignored factor (Section 3). In this analysis, we measure the norms (lengths) of the vectors that are summed to compute the output vector of the attention mechanism.
Using the norm-based analysis of BERT (Section 4), we interpreted the internal workings of the model in more detail than with the weight-based analysis. For example, weight-based analyses (Clark et al., 2019; Kovaleva et al., 2019) report that specific tokens, such as periods, commas, and special tokens (e.g., the separator token [SEP]), tend to have high attention weights. However, our norm-based analysis found that the information collected from the vectors corresponding to special tokens is considerably less than the weight-based analysis suggests, and that the large attention weights of these vectors are canceled by other factors. Additionally, we found that BERT controls the level of contribution from frequent, less informative words by controlling the norms of their vectors.
In the analysis of a Transformer-based NMT system (Section 5), we reinvestigated how accurate word alignment can be extracted from the source-target attention. The weight-based results of Li et al. (2019), Ding et al. (2019), and Zenkel et al. (2019) have empirically shown that word alignments induced by the source-target attention of Transformer-based NMT systems are noisy. Our experiments show that more accurate alignments can be extracted by focusing on the vector norms.
The contributions of this study are as follows:
• We propose a novel method of analyzing attention mechanisms based on vector norms (norm-based analysis). The method considers both the attention weights and a previously ignored factor, the norm of the transformed vector.
• Our norm-based analysis of BERT reveals that (i) the attention mechanisms pay considerably less attention to special tokens than observations based solely on attention weights (weight-based analysis) suggest, and (ii) the attention mechanisms tend to discount frequent words.
• Our norm-based analysis of a Transformer-based NMT system reveals that reasonable word alignment can be extracted from the source-target attention, in contrast to previous results from the weight-based analysis.
The code for our experiments is publicly available.¹

2 Background
Attention mechanism
Attention is a core component of Transformers, which consist of several layers, each containing multiple attentions ("heads"). We focused on analyzing the inner workings of these heads.
As illustrated in Figure 1, each attention head gathers relevant information from the input vectors. A vector is updated through vector transformations, attention weights, and a summation of vectors. Mathematically, attention computes each output vector $y_i \in \mathbb{R}^d$ from the corresponding pre-update vector $\tilde{y}_i \in \mathbb{R}^d$ and a sequence of input vectors $X = \{x_1, \dots, x_n\} \subseteq \mathbb{R}^d$:
$$y_i = \Bigl(\sum_{j=1}^{n} \alpha_{i,j}\, v(x_j)\Bigr) W^O \qquad (1)$$
$$\alpha_{i,j} := \underset{x_j \in X}{\mathrm{softmax}}\left(\frac{q(\tilde{y}_i)\, k(x_j)^{\top}}{\sqrt{d'}}\right) \in \mathbb{R}, \qquad (2)$$
Figure 1: Overview of the attention mechanism in Transformers. The sizes of the colored circles illustrate the value of the scalar or the norm of the corresponding vector.

where $\alpha_{i,j}$ is the attention weight assigned to the token $x_j$ for computing $y_i$, and $q(\cdot)$, $k(\cdot)$, and $v(\cdot)$ are the query, key, and value transformations, respectively:
$$q(\tilde{y}_i) := \tilde{y}_i W^Q + b^Q \qquad W^Q \in \mathbb{R}^{d \times d'},\; b^Q \in \mathbb{R}^{d'}$$
$$k(x_j) := x_j W^K + b^K \qquad W^K \in \mathbb{R}^{d \times d'},\; b^K \in \mathbb{R}^{d'}$$
$$v(x_j) := x_j W^V + b^V \qquad W^V \in \mathbb{R}^{d \times d'},\; b^V \in \mathbb{R}^{d'}.$$
Attention gathers the value vectors $v(x_j)$ based on the attention weights and then applies the matrix $W^O \in \mathbb{R}^{d' \times d}$ (Figure 1).²

2. Boldface letters such as $x$ denote row (not column) vectors, following the notation of Vaswani et al. (2017).
In self-attention, the input vectors $X$ and the pre-update vector $\tilde{y}_i$ are the previous layer's output representations. In source-target attention, $X$ corresponds to the representations of the encoder, and the vector $\tilde{y}_i$ (and the updated vector $y_i$) corresponds to the vector of the $i$-th input token of the decoder.
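As a concrete reference point, the following NumPy sketch computes one head exactly as in Equations 1 and 2. This is a toy re-implementation with our own variable names, not the code released with the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_head(X, Y_pre, Wq, bq, Wk, bk, Wv, bv, Wo):
    """One attention head (Equations 1-2), row-vector convention.

    X: (n, d) input vectors; Y_pre: (m, d) pre-update vectors.
    Returns alpha (m, n) and the output vectors (m, d).
    """
    d_head = Wq.shape[1]
    q = Y_pre @ Wq + bq                          # queries (m, d')
    k = X @ Wk + bk                              # keys    (n, d')
    v = X @ Wv + bv                              # values  (n, d')
    alpha = softmax(q @ k.T / np.sqrt(d_head))   # Eq. 2,  (m, n)
    return alpha, (alpha @ v) @ Wo               # Eq. 1,  (m, d)

rng = np.random.default_rng(0)
d, d_head, n = 8, 4, 5
X = rng.normal(size=(n, d))
alpha, out = attention_head(
    X, X,
    rng.normal(size=(d, d_head)), np.zeros(d_head),
    rng.normal(size=(d, d_head)), np.zeros(d_head),
    rng.normal(size=(d, d_head)), np.zeros(d_head),
    rng.normal(size=(d_head, d)),
)
print(alpha.shape, out.shape)   # (5, 5) (5, 8)
```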
Attention is a weighted sum of vectors
With a simple reformulation, one can observe that the attention mechanism computes the weighted sum of the transformed input vectors. Because of the linearity of the matrix product, we can rewrite Equation 1 as
$$y_i = \sum_{j=1}^{n} \alpha_{i,j} f(x_j) \qquad (3)$$
$$f(x) := (x W^V + b^V)\, W^O. \qquad (4)$$

Figure 2: Attention viewed as a weighted sum of vectors: input vectors, transformed vectors $f(x)$, attention weights $\alpha$, weighted vectors $\alpha f(x)$, and the output vector.
Equation 3 shows that the attention mechanism first transforms each input vector $x$ to generate $f(x)$, computes the attention weights $\alpha$, and then computes the sum of the weighted vectors $\alpha f(x)$ (see Figure 2).
Problems encountered in weight-based analysis
The attention mechanism is designed to update representations by gathering relevant information from the input vectors. Prior studies have analyzed attention by focusing on the attention weights to ascertain which input vectors contribute to the output (weight-based analysis) (Clark et al., 2019; Kovaleva et al., 2019; Reif et al., 2019; Lin et al., 2019; Mareček and Rosa, 2019; Htut et al., 2019; Raganato and Tiedemann, 2018; Tang et al., 2018). Analyses based solely on attention weights rest on the assumption that the larger the attention weight of an input vector, the higher its contribution to the output. However, this assumption disregards the magnitudes of the transformed vectors. Figure 2 illustrates the problem of neglecting the effect of $f(x_j)$. Suppose the transformed vector $f(x_1)$ for input $x_1$ is very small ($\|f(x_1)\| \approx 0$), while its attention weight $\alpha_{i,1}$ is considerably large. The small vector $\alpha_{i,1} f(x_1)$ then contributes little to the output vector $y_i$, because $y_i$ is the sum of the vectors $\alpha f(x)$, in which a larger vector contributes more. Conversely, a large $\alpha_{i,3} f(x_3)$ can dominate the output $y_i$. In this case, considering only the attention weights leads to the wrong interpretation that input vector $x_1$ contributes highly to output $y_i$, when in fact $x_1$ has hardly any effect on $y_i$.
Analyses based on attention weights have not provided clear results in some cases. For example, Clark et al. (2019) reported that input vectors for separator tokens [SEP] tend to receive remarkably large attention weights in BERT, while changing the magnitudes of these weights does not affect the masked-token prediction of BERT. Such results can be attributed to the aforementioned issue of focusing only on attention weights.
3 Proposal: norm as a degree of attention

As described in Section 2.3, analyzing the attention mechanism with attention weights alone neglects the effect of the transformed vector $f(x_j)$, which, as we show later, has a significant impact.
Herein, we propose measuring the norm of the weighted transformed vector, $\|\alpha f(x)\|$, given by Equation 3, to analyze the behavior of the attention mechanism.³ Unlike previous studies, we analyze the behaviors of the norms $\|\alpha f(x)\|$ and $\|f(x)\|$ together with $\alpha$ to gain more in-depth insights into the functioning of attention. We call the proposed method of analyzing the attention mechanism norm-based analysis, and the method that analyzes only the attention weights weight-based analysis.
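In code, the analysis amounts to one extra norm computation on top of the quantities any head already produces. The sketch below is our own minimal NumPy rendering; it returns the matrix of contributions $\|\alpha_{i,j} f(x_j)\|$, which, since $\alpha_{i,j}$ is a nonnegative scalar, equals $\alpha_{i,j}\|f(x_j)\|$.

```python
import numpy as np

def norm_based_contributions(X, alpha, Wv, bv, Wo):
    """||alpha_{i,j} f(x_j)|| for every (i, j), per Equation 3.

    X: (n, d) inputs; alpha: (m, n) attention weights;
    f(x) = (x W_V + b_V) W_O as in Equation 4.
    """
    fx = (X @ Wv + bv) @ Wo                  # f(x_j), shape (n, d)
    fx_norms = np.linalg.norm(fx, axis=-1)   # ||f(x_j)||, shape (n,)
    return alpha * fx_norms[None, :]         # ||alpha f(x)||, shape (m, n)

rng = np.random.default_rng(0)
X, alpha = rng.normal(size=(4, 8)), np.full((4, 4), 0.25)
print(norm_based_contributions(X, alpha, rng.normal(size=(8, 3)),
                               np.zeros(3), rng.normal(size=(3, 8))).shape)
```

Comparing this matrix with $\alpha$ itself directly exposes the cases sketched in Figure 2, where a large weight is paired with a small transformed vector.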
In Sections 4 and 5, we provide insights into the workings of Transformers using the norm-based analysis. Appendix A explains that our norm-based analysis can also be effectively applied to an entire multi-head attention mechanism.
Experiments: BERT
First, we show that the previously ignored transformed-vector norm affects the analysis of attention in BERT (Section 4.1). Applying our norm-based analysis, we re-examine the previous reports on BERT obtained by weight-based analysis (Section 4.2). Next, we demonstrate the previously overlooked properties of BERT (Section 4.3).
Does $\|f(x)\|$ have an impact?
We first analyzed the coefficient of variation (CV)⁶ of the previously ignored factor, $\|f(x)\|$, to demonstrate the degree to which $\|\alpha f(x)\|$ differs from the weight $\alpha$. We computed the CV of $\|f(x)\|$ over all the example data for each head. Table 1 shows that the average CV is 0.22; typically, the value of the norm $\|f(x)\|$ varies from 0.78 to 1.22 times its average. Thus, the dispersion of $\|f(x)\|$ creates a gap between the weight $\alpha$ and $\|\alpha f(x)\|$, which motivated us to consider $\|f(x)\|$ in the attention analysis. Appendix B presents the detailed results.

6. The coefficient of variation (CV) is a standardized (scale-invariant) measure of dispersion, defined as the ratio of the standard deviation σ to the mean µ: CV := σ/µ.
Re-examining previous observation
In this section, applying our norm-based analysis, we reinvestigate the previous observation of Clark et al. (2019), who analyzed BERT using the weight-based analysis.
Settings: First, all the data were fed into BERT⁴, and the weight $\alpha$ and $\|\alpha f(x)\|$ were collected from each head.⁵ Following Clark et al. (2019), we report the results for the following token categories: (i) [CLS], (ii) [SEP], (iii) periods and commas, and (iv) the other tokens.

Results: The weight-based analysis showed that specific tokens, namely [CLS], [SEP], and punctuation, have remarkably large attention weights (Figure 3a), which is consistent with the report of Clark et al. (2019). In contrast, our norm-based analysis demonstrated that the contributions of the vectors corresponding to these tokens were generally small (Figure 3b). This result demonstrates that the size of the transformed vector, $\|f(x)\|$, plays a considerable role in controlling the amount of information obtained from specific tokens. Clark et al. (2019) hypothesized that if the necessary information is not present in the input vectors, BERT assigns large weights to [SEP], which appears in every input sequence, to avoid incorporating any additional information via attention.⁷ Clark et al. (2019) called this operation no-operation (no-op). However, it is unclear whether assigning large attention weights to [SEP] actually realizes the operation of collecting little information from the input sequence.

4. We used the PyTorch implementation of BERT-base (uncased) released at https://github.com/huggingface/transformers.
5. https://github.com/clarkkev/attention-analysis
Our norm-based analysis demonstrates that the amount of information from the vectors corresponding to [SEP] is small (Figure 3b). This result supports the interpretation that BERT conducts "no-op," in which attention to [SEP] is considered a signal that does not collect anything. Additionally, we hope that our norm-based analysis can provide a better interpretation of other existing findings.
Analysis (the relationship between $\alpha$ and $\|f(x)\|$): It remains unclear how attention collects only a little information while assigning a high attention weight to a specific token such as [SEP].
Here, we demonstrate an interesting trend: $\alpha$ and $\|f(x)\|$ cancel each other out on these tokens.⁸ Table 2 shows the Spearman rank correlation coefficient between $\alpha$ and $\|f(x)\|$ for the vectors in each category. The weight $\alpha$ and the norm $\|f(x)\|$ are negatively correlated for [CLS], [SEP], periods, and commas. This cancellation allows the model to collect only a little information even with large weights. Figure 4 illustrates the contrast between $\alpha$ and $\|f(x)\|$ for [SEP] in each head. For most of the heads, $\alpha$ and $\|f(x)\|$ clearly negate each other's magnitudes. A similar trend was observed for [CLS], periods, and commas. Conversely, no significant trend was observed for the other tokens (see Appendix D.3). Figure 5 shows 1% randomly selected pairs of $\alpha$ and $\|f(x)\|$ in each word category. Even when the same weight $\alpha$ is assigned, $\|f(x)\|$ can vary, suggesting that $\alpha$ and $\|f(x)\|$ play different roles in attention.
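The correlation statistic behind Table 2 can be reproduced with a few lines of SciPy; the arrays below are fabricated solely to show the cancellation pattern.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-occurrence measurements for one token category at one head.
alphas = np.array([0.92, 0.75, 0.61, 0.40, 0.22, 0.10])
fx_norms = np.array([0.15, 0.30, 0.42, 0.66, 0.81, 0.97])

rho, _ = spearmanr(alphas, fx_norms)
print(f"Spearman rho = {rho:.2f}")   # close to -1: alpha and ||f(x)|| cancel out
```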
Relation between frequency and $\|f(x)\|$
In the previous section, we demonstrated that $\|f(x)\|$ corresponding to specific tokens (e.g., [SEP]) is small. Given the high frequencies⁹ of these word types¹⁰, we hypothesized that BERT controls the contributions of highly frequent, less informative words by adjusting the norm of $f(x)$.
Settings: First, all the data were fed into the model. Then, for each input token $t$, we collected the weight $\alpha$ and $\|f(x)\|$. We averaged $\alpha$ and $\|f(x)\|$ over all the heads for each $t$ to analyze the trend of the entire model. Let $r(\cdot)$ be a function that returns the frequency rank of a given word.¹¹ We analyzed the relationship of $r(t)$ with $\alpha$ and $\|f(x)\|$.
Results:
The Spearman rank correlation coefficient between the frequency rank $r(t)$ and $\|f(x)\|$ was 0.75, indicating a strong positive correlation. In contrast, the Spearman rank correlation coefficient showed no correlation ($\rho = 0.06$) between $r(t)$ and $\alpha$.¹² Visualizations of these relationships are shown in Appendix D.4. These results demonstrate that the self-attentions in BERT reduce the information from highly frequent words by adjusting $\|f(x)\|$, not $\alpha$. This frequency-based effect is consistent with the intuition that highly frequent words, such as stop words, are unlikely to play an important role in solving the pre-training tasks (masked-token prediction and next-sentence prediction).
Experiments: Transformer for NMT
Additionally, we analyzed the source-target attention in a Transformer-based NMT system. One major research topic in the NMT field is whether NMT systems internally capture word alignment between source and target texts, and if so, how word alignment can be extracted from black-box NMT systems. Li et al. (2019), Ding et al. (2019), and Zenkel et al. (2019) empirically showed, using the weight-based method, that the word alignment induced by the attention of the Transformer is noisy. In this section, we analyze the source-target attention using the vector norms $\|\alpha f(x)\|$ and demonstrate that clean alignments can be extracted from it. Word alignment can be used to provide rich information to the users of NMT systems (Ding et al., 2019).

Settings: Following Zenkel et al. (2019), we trained a Transformer-based NMT system for German-to-English translation on the Europarl v7 corpus.¹³ Next, we extracted word alignments from $\alpha$ and $\|\alpha f(x)\|$ under the forced decoding setup. Finally, we evaluated the derived alignments using the alignment error rate (AER) (Och and Ney, 2000). A low AER score indicates that the extracted word alignments are close to the reference. We used the gold alignment dataset provided by Vilar et al. (2006).¹⁴ Experiments were performed with five random seeds, and the average AER scores are reported. The experimental settings are detailed in Appendix E.
Alignment extraction from attention
Figure 6: An example of the behavior of the source-target attention in an NMT system (German-to-English). Attention heads in the earlier layers focus on the source word "ein," aligned with the input word "a," while those in the later layers focus on the source word "Schüler," aligned with the output word "student."

Weights or norms: A typical alignment extraction method uses attention weights (Li et al., 2019; Ding et al., 2019; Zenkel et al., 2019). Specifically, given a source-target sentence pair $\{s_1, \dots, s_J\}$ and $\{t_1, \dots, t_I\}$, word alignment is estimated by finding the source word $s_j$ that has the highest weight when generating a target word $t_i$. We call this method weight-based alignment extraction. In contrast, we propose a norm-based alignment extraction method that extracts word alignments based on $\|\alpha f(x)\|$ instead of $\alpha$. Formally, in these methods, the source word $s_j$ with the highest attention weight or norm during the generation of the target word $t_i$ is extracted as the word aligned with $t_i$:
$$\operatorname*{argmax}_{s_j} \alpha_{i,j} \quad \text{or} \quad \operatorname*{argmax}_{s_j} \|\alpha_{i,j} f(x_j)\|. \qquad (5)$$
In Section 5.2, following Li et al. (2019), we analyze the word alignments that we obtained from each layer by integrating H heads within the same layer:
$$\operatorname*{argmax}_{s_j} \sum_{h=1}^{H} \alpha^{h}_{i,j} \quad \text{or} \quad \operatorname*{argmax}_{s_j} \Bigl\|\sum_{h=1}^{H} \alpha^{h}_{i,j} f^{h}(x_j)\Bigr\|,$$
where $f^{h}(x_j)$ and $\alpha^{h}_{i,j}$ are the transformed vector and the attention weight at the $h$-th head, respectively.
Alignment with input or output word: In our preliminary experiments (Appendix E.3), we observed that the behavior of the source-target attention of the decoder differs between the earlier and later layers. As shown in Figure 6, when decoding the word $t_{i+1}$ with the input $t_i$, attention heads in the earlier layers assign large weights or norms to the $s_j$ corresponding to the input $t_i$ ("a"), whereas those in the later layers assign large values to the $s_j$ corresponding to the output word $t_{i+1}$ ("student").
Based on this observation, we explored two settings for the alignment extraction methods: alignment with output (AWO) and alignment with input (AWI). The AWO setting refers to the approach introduced in Equation 5: alignments $(s_j, t_i)$ are extracted by selecting the source word $s_j$ that gains the highest weight (norm) when outputting the target word $t_i$.
In the AWI setting, alignments $(s_j, t_i)$ are extracted by selecting the source word $s_j$ that gains the highest weight (norm) when the word $t_i$ is input (i.e., when predicting the word $t_{i+1}$). Formally, alignment in the AWI setting is calculated as follows:
$$\operatorname*{argmax}_{s_j} \alpha_{i+1,j} \quad \text{or} \quad \operatorname*{argmax}_{s_j} \|\alpha_{i+1,j} f(x_j)\|. \qquad (6)$$
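Both extraction rules reduce to an argmax over one row of a contribution matrix. The sketch below is our own code, not the authors' release; indices follow the equations, with row i holding the attention paid while target position i is processed, and it covers a single head or an already-aggregated layer.

```python
import numpy as np

def extract_alignment(contrib, setting="AWI"):
    """Word alignment from an (I, J) source-target contribution matrix.

    contrib[i, j] is either alpha_{i,j} (weight-based) or
    ||alpha_{i,j} f(x_j)|| (norm-based).
    AWO aligns t_i with argmax_j contrib[i, j]      (Eq. 5);
    AWI aligns t_i with argmax_j contrib[i + 1, j]  (Eq. 6),
    so the last target word receives no link in the AWI setting.
    """
    I, _ = contrib.shape
    pairs = []
    for i in range(I):
        row = i + 1 if setting == "AWI" else i
        if row < I:
            pairs.append((int(np.argmax(contrib[row])), i))  # (source, target)
    return pairs

contrib = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1],
                    [0.2, 0.3, 0.5]])
print(extract_alignment(contrib, "AWO"), extract_alignment(contrib, "AWI"))
```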
Comparative experiments
We compared the quality of the alignments obtained by six methods: weight-based and norm-based extraction, each in the AWO and AWI settings, a gradient-based method, and an existing word aligner, fast_align. We report the best and averaged AER scores across the layers. In addition, we report the AER score at the head and at the layer with the highest average $\|\alpha f(x)\|$ in the norm-based extraction.¹⁵ The settings are detailed in Appendix E.2. The AER scores of each method are listed in Table 3. The results show that the word alignments extracted using the proposed norm-based approach are more reasonable than those extracted using the weight-based approach. Additionally, better word alignments were extracted in the AWI setting than in the AWO setting. The alignment extracted using the layer with the highest average $\|\alpha f(x)\|$ in the AWI setting is better than the gradient-based method and competitive with the existing word aligner fast_align.¹⁶ These results show that much clearer word alignments can be extracted from a Transformer-based NMT system than reported by existing research. The primary reason behind the differences between the results of the weight- and norm-based methods is analogous to the finding discussed in Section 4.2: while some specific tokens, such as </s>, the special token for the end of the sentence, tended to obtain heavy attention weights, their transformed vectors were adjusted to be smaller, as shown in Figure 7.

15. The average $\|\alpha f(x)\|$ of a layer was determined by the sum of the average $\|\alpha f(x)\|$ at each head in the layer.
16. Even at the head with the highest average $\|\alpha f(x)\|$. Although the average score of the five seeds in the AWI setting was 35.5, four of them achieved scores ranging from 23.6 to 25.7; the score was 77.5 for the remaining seed.
Table 3: AER (±SD) of each alignment extraction method.
Relationship between norms and alignment quality
We further analyze the relationship between $\|\alpha f(x)\|$ and the AER scores at the head level. Figures 8a and 8b show the AER scores of the alignments obtained by the norm-based extraction at each head in the AWI and AWO settings, respectively. Figure 8c shows the average $\|\alpha f(x)\|$ at each head. A small $\|\alpha f(x)\|$ implies that $\alpha$ and $\|f(x)\|$ tend to cancel out in that head.
Comparing Figures 8a and 8c, the average $\|\alpha f(x)\|$ and the AER scores in the AWI setting are negatively correlated. This is consistent with Table 3, where the head or the layer with the highest average $\|\alpha f(x)\|$ provides clean alignments in the AWI setting. This result suggests that Transformer-based NMT systems may rely on specific heads that align source and target tokens. It is also consistent with existing reports that pruning some attention heads in Transformers does not degrade performance and can even improve it (Michel et al., 2019; Kovaleva et al., 2019). In contrast, in the AWO setting (Figures 8b and 8c), no such negative correlation is observed; rather, a positive correlation is observed (Spearman's $\rho$ is 0.56, and Pearson's $r$ is 0.55). Indeed, in the AWO setting, the alignments extracted from the head/layer with the highest $\|\alpha f(x)\|$ are considerably worse than those from the other settings in Table 3. Investigating the reason for these contrasting results is left for future work. In Appendix F, we also present the results of a model with a different number of heads.
6 Related work

6.1 Probing of Transformers

Transformers are used for many NLP tasks. Many studies have probed their inner workings to understand the mechanisms underlying their success (Rogers et al., 2020; Clark et al., 2019).
There are mainly two probing perspectives for investigating these models; they differ in whether the target of the analysis is the per-token level or the token-to-token interactions. The present study is closely related to the latter group; we have provided insights into the token-to-token attention in Transformer-based systems.
Analyzing the token-to-token interaction
Two types of methods are mainly used to analyze the token-to-token interactions in Transformers: one is to track the attention weights, and the other is to check the gradient of the output with respect to the input of the attention mechanisms. Brunner et al. (2020) introduced "effective attention," which upgrades the weight-based analysis. Their proposal is similar to ours in that they exclude from the analysis those attention weights that, once the transformation $f$ and the input $x$ are taken into account, do not affect the output. However, our proposal differs from theirs in several respects. Specifically, we aim to analyze the behavior of the whole attention mechanism more accurately, whereas they aim to make the attention weights more accurate. Furthermore, the effectiveness of their approach depends on the length of the input sequence, whereas ours has no such limitation (see Appendix G). Additionally, we incorporate the scaling effects of $f$ and $x$, whereas Brunner et al. (2020) considered only the binary effect, i.e., whether or not a weight is canceled.
Gradient-based analysis:
In the gradient analysis, the contribution of the input to the output of the attention mechanism is calculated using the norm of the gradient matrix between the input and the output vector (Pascual et al., 2020). Intuitively, such gradient-based methods measure the change in the output vector with respect to perturbations of the input vector. Estimating the contribution of $a$ to $b = ka$ by computing the gradient $\partial b / \partial a$ ($= k$) is analogous to estimating the contribution of $x$ to $y = \alpha f(x)$ by observing only the attention weight $\alpha$.¹⁷ The two approaches share the same kind of problem; that is, both ignore the magnitude of the input, $a$ or $f(x)$.

17. For simplicity, we consider a linear example: $b = ka$. We are aware that there is a gap between the two examples in terms of linearity. Further exploration of the connection to the gradient-based method is needed.
Conclusions and future work
This paper showed that attention weights alone are only one of two factors that determine the output of attention. We proposed the incorporation of another factor, the transformed input vectors. Using our norm-based method, we provided a more detailed interpretation of the inner workings of Transformers, compared to the studies using the weight-based analysis. We hope that this paper will inspire researchers to have a broader view of the possible methodological choices for analyzing the behavior of Transformer-based models.
We believe that these findings can provide insights not only into the interpretation of the behaviors of black-box NLP systems but also into the development of more sophisticated Transformer-based systems. One possible direction is to design an attention mechanism that can explicitly collect almost no information from an input sequence, a function that current systems achieve implicitly by exploiting the [SEP] token.
In future work, we plan to apply our norm-based analysis to attention in other models, such as fine-tuned BERT, RoBERTa (Liu et al., 2019), and ALBERT (Lan et al., 2020). Furthermore, we expect to extend the scope of the analysis from the attention mechanism to the entire Transformer architecture to better understand the inner workings and linguistic capabilities of the current powerful systems in NLP.
A Multi-head attention and the norm-based analysis
Our norm-based analysis is applicable to the analysis of the multi-head attention mechanism implemented in Transformers. The i-th output of the multi-head attention mechanism, $y_i^{\mathrm{integrated}}$, is calculated as follows:
$$y_i^{\mathrm{integrated}} = \sum_{h} y_i^h \qquad (7)$$
$$y_i^h = \sum_{j=1}^{n} \alpha^h_{i,j} f^h(x_j) \qquad (8)$$
$$f^h(x) := \left( x W^{V,h} + b^{V,h} \right) W^{O,h}, \qquad (9)$$
where $\alpha^h_{i,j}$, $W^{V,h}$, $b^{V,h}$, and $W^{O,h}$ are the same as $\alpha_{i,j}$, $W^V$, $b^V$, and $W^O$ in Equations 3 and 4 for each head $h$, respectively, and $n$ is the number of input vectors (tokens). Equation 7 can be rewritten as follows:
$$y_i^{\mathrm{integrated}} = \sum_{j=1}^{n} \sum_{h} \alpha^h_{i,j} f^h(x_j) \qquad (10)$$
As shown in Equation 10, the multi-head attention mechanism is also linearly decomposable, and one can analyze the flow of information from the j-th vector to the i-th vector by measuring $\left\| \sum_h \alpha^h_{i,j} f^h(x_j) \right\|$. In Section 5, we actually used this norm to extract the alignments from each layer's multi-head attention.
The output of the multi-head attention mechanism is calculated via the sum of the outputs of all the heads and a bias $b^O \in \mathbb{R}^d$. Because adding a fixed vector is irrelevant to the token-to-token interaction that we aim to investigate, we omitted $b^O$ in our analysis.
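To make the decomposition in Equation 10 concrete, the following NumPy sketch (not the authors' released implementation; shapes and variable names are illustrative) computes the norm-based contribution ‖Σ_h α^h_{i,j} f^h(x_j)‖ for every token pair in one multi-head attention layer, and checks that summing the per-pair contributions over j recovers the integrated output.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_heads = 5, 8, 2      # tokens, model dim d, number of heads h
dh = d // n_heads            # per-head dim d'

x = rng.normal(size=(n, d))                  # input vectors x_j
alpha = rng.random(size=(n_heads, n, n))
alpha /= alpha.sum(axis=-1, keepdims=True)   # each row is a softmax-like distribution
W_V = rng.normal(size=(n_heads, d, dh))      # W^{V,h}
b_V = rng.normal(size=(n_heads, dh))         # b^{V,h}
W_O = rng.normal(size=(n_heads, dh, d))      # per-head slices W^{O,h}

# f^h(x_j) = (x_j W^{V,h} + b^{V,h}) W^{O,h}, shape (h, n, d)
fx = np.einsum('nd,hde->hne', x, W_V) + b_V[:, None, :]
fx = np.einsum('hne,hed->hnd', fx, W_O)

# Per-pair summed contributions: sum_h alpha^h_{i,j} f^h(x_j), shape (n, n, d)
contrib = np.einsum('hij,hjd->ijd', alpha, fx)

# Norm-based contribution of token j to token i (Equation 10)
norms = np.linalg.norm(contrib, axis=-1)     # shape (n, n)

# Sanity check: summing over j recovers y_i^integrated (up to the omitted b^O)
y_integrated = np.einsum('hij,hjd->id', alpha, fx)
assert np.allclose(contrib.sum(axis=1), y_integrated)
print(norms.round(2))
```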
B The source of the dispersion of ‖f(x)‖
As described in Section 4.1, ‖f(x)‖ exhibits dispersion; however, it remains unclear whether this dispersion is attributable to x or to f. Hence, we checked the dispersion of ‖x‖ and the scaling effect of the transformation f.
Dispersion of ‖x‖: First, we checked the coefficient of variation (CV) of ‖x‖. Table 4 shows that the average CV is 0.12, which is less than that of ‖f(x)‖ (0.22). The value of ‖x‖ typically varies between 0.88 and 1.12 times its average. The layer normalization (Ba et al., 2016) applied at the end of the previous layer should have a large impact on the variance of ‖x‖.
Scaling effects of f: Second, we investigated the scaling effect of the transformation f on the norm of the input. Because the affine transformation f: ℝ^d → ℝ^d can be considered a linear transformation ℝ^{d+1} → ℝ^{d+1} (Appendix C), the singular values of f can be regarded as its scaling effect. Figure 9 shows the singular values of f in randomly selected heads in BERT, displayed in descending order from left to right. In each head, there is a difference of at least a factor of 1.8 between the maximum and minimum singular values. This spread is larger than that of ‖x‖, which typically varies between 0.88 and 1.12 times its average. These results suggest that the dispersion of ‖f(x)‖ is primarily attributable to the scaling effect of f.
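The two checks above can be reproduced with a few lines of NumPy; the following is a sketch under random weights (the actual BERT parameters are not loaded here). The CV of ‖x‖ is the standard deviation of the input norms divided by their mean, and the scaling effect of f is read off the singular values of its augmented linear map (Appendix C).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, dh = 100, 8, 4
x = rng.normal(size=(n, d))                  # stand-ins for layer inputs

# (1) Coefficient of variation of ||x||
norms = np.linalg.norm(x, axis=-1)
print(f"CV of ||x||: {norms.std() / norms.mean():.2f}")

# (2) Scaling effect of f(x) = (x W_V + b_V) W_O via singular values
W_V, b_V = rng.normal(size=(d, dh)), rng.normal(size=(dh,))
W_O = rng.normal(size=(dh, d))

# Augment f into a single linear map on R^{d+1} (Appendix C) and take its SVD
W_V_tilde = np.block([[W_V, np.zeros((d, 1))],
                      [b_V[None, :], np.ones((1, 1))]])
W_O_tilde = np.block([[W_O, np.zeros((dh, 1))],
                      [np.zeros((1, d)), np.ones((1, 1))]])
sing = np.linalg.svd(W_V_tilde @ W_O_tilde, compute_uv=False)
print("singular values (descending):", sing.round(2))
```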
C Affine transformation as linear transformation
The affine transformation f: ℝ^d → ℝ^d in Equation 4 can be viewed as a linear transformation f̃: ℝ^{d+1} → ℝ^{d+1}. Given $\tilde{x} := (x, 1) \in \mathbb{R}^{d+1}$, where 1 is concatenated to the end of each input vector $x \in \mathbb{R}^d$, the affine transformation f can be viewed as:
$$\tilde{f}(\tilde{x}) = \tilde{x}\,\widetilde{W}^V \widetilde{W}^O \qquad (11)$$
$$\widetilde{W}^V := \begin{pmatrix} W^V & \mathbf{0} \\ b^V & 1 \end{pmatrix} \in \mathbb{R}^{(d+1)\times(d'+1)} \qquad (12)$$
$$\widetilde{W}^O := \begin{pmatrix} W^O & \mathbf{0} \\ \mathbf{0} & 1 \end{pmatrix} \in \mathbb{R}^{(d'+1)\times(d+1)}. \qquad (13)$$
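A quick numerical check of Equations 11-13 (a sketch with random matrices; the dimensions are arbitrary) confirms that the augmented linear map reproduces the affine f:

```python
import numpy as np

rng = np.random.default_rng(1)
d, dh = 6, 3
x = rng.normal(size=(d,))
W_V, b_V = rng.normal(size=(d, dh)), rng.normal(size=(dh,))
W_O = rng.normal(size=(dh, d))

f_x = (x @ W_V + b_V) @ W_O                  # affine map, Equation 4

# Augmented matrices from Equations 12 and 13
W_V_tilde = np.block([[W_V, np.zeros((d, 1))],
                      [b_V[None, :], np.ones((1, 1))]])
W_O_tilde = np.block([[W_O, np.zeros((dh, 1))],
                      [np.zeros((1, d)), np.ones((1, 1))]])

x_tilde = np.append(x, 1.0)                  # x with 1 concatenated (Appendix C)
out = x_tilde @ W_V_tilde @ W_O_tilde
# First d coordinates equal f(x); the last coordinate carries the constant 1
assert np.allclose(out[:-1], f_x) and np.isclose(out[-1], 1.0)
```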
D Details on Sections 4.2 and 4.3
We describe the detailed experimental setup presented in Sections 4.2 and 4.3.
D.1 Notations
Table 4: Mean (µ), standard deviation (σ), coefficient of variance (CV), and maximum and minimum values of ‖x‖; the former three are averaged over all the layers.

Figure 9: Singular values of f at randomly selected heads in each layer. We use layer-head number to denote a particular attention head; the singular values are displayed in descending order.

The dataset consists of several sequences: Data = (s₁, ..., s_{|Data|}). Each sequence consists of several tokens, s_p = (t^p_1, ..., t^p_{|s_p|}), where t^p_q is the q-th token in the p-th sequence. For simplicity, we define the following functions:
$$\mathrm{Weight}(p, q, \ell, h) = \frac{1}{|s_p|} \sum_{i=1}^{|s_p|} \alpha^{\ell,h}_{p,i,q}$$
$$\mathrm{Norm}(p, q, \ell, h) = \left\| f^{\ell,h}(x_{p,q}) \right\|$$
$$\mathrm{WNorm}(p, q, \ell, h) = \frac{1}{|s_p|} \sum_{i=1}^{|s_p|} \left\| \alpha^{\ell,h}_{p,i,q} f^{\ell,h}(x_{p,q}) \right\|,$$
where $\alpha^{\ell,h}_{p,i,q}$ is the attention weight assigned from the i-th pre-update vector to the q-th input vector in the p-th sequence; $h$ and $\ell$ denote that the score is obtained from the h-th head of the ℓ-th layer.
$x_{p,q}$ denotes the input vector for token $t^p_q$ in the ℓ-th layer, and $f^{\ell,h}(x_{p,q})$ is the transformed vector for $x_{p,q}$ in the h-th head of the ℓ-th layer.
Next, the vocabulary V of BERT is divided into the following four categories:
$$A = \{\texttt{[CLS]}\},\quad B = \{\texttt{[SEP]}\},\quad C = \{\text{``.''},\ \text{``,''}\},\quad D = V \setminus (A \cup B \cup C). \qquad (14)$$
Let T(Z, p) be a function that returns all tokens $t^p_q$ belonging to category Z in the p-th sequence. To formally describe our experiments, several functions are defined as follows. Note that we analyzed a model with 12 heads in each layer. The LayerW(·) and LayerWN(·) functions are used to analyze the average behavior of the heads in a layer.
$$\mathrm{MeanN}(Z, \ell, h, p) = \frac{1}{|T(Z, p)|} \sum_{t^p_q \in T(Z,p)} \mathrm{Norm}(p, q, \ell, h)$$
$$\mathrm{SumW}(Z, \ell, h, p) = \sum_{t^p_q \in T(Z,p)} \mathrm{Weight}(p, q, \ell, h)$$
$$\mathrm{SumWN}(Z, \ell, h, p) = \sum_{t^p_q \in T(Z,p)} \mathrm{WNorm}(p, q, \ell, h)$$

The remaining functions referenced below (HeadW(·), HeadN(·), LayerW(·), and LayerWN(·)) aggregate these per-sequence scores over the data and, for the layer-level functions, over the 12 heads of a layer.
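As an illustration, here is a minimal sketch of the per-sequence statistics Weight, Norm, and WNorm for a single head (the variable shapes are hypothetical; this is not the released analysis code):

```python
import numpy as np

def weight(alpha):
    """Weight(p, q, l, h): attention to token q, averaged over queries i."""
    return alpha.mean(axis=0)              # alpha: (n, n), rows = i, cols = q

def norm(fx):
    """Norm(p, q, l, h) = ||f(x_q)||."""
    return np.linalg.norm(fx, axis=-1)     # fx: (n, d)

def wnorm(alpha, fx):
    """WNorm(p, q, l, h) = mean_i ||alpha_{i,q} f(x_q)||.
    Since alpha >= 0 after softmax, this equals mean_i alpha_{i,q} * ||f(x_q)||."""
    return (alpha * norm(fx)[None, :]).mean(axis=0)

rng = np.random.default_rng(0)
n, d = 6, 8
alpha = rng.random((n, n)); alpha /= alpha.sum(-1, keepdims=True)
fx = rng.normal(size=(n, d))
print(weight(alpha).round(2), wnorm(alpha, fx).round(2))
```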
D.2 Experimental setup for Section 4.2
In Figure 3, the results of each layer are reported for each category. In Figures 3a and 3b, the values for each category Z were calculated using LayerW(Z, ℓ) and LayerWN(Z, ℓ), respectively. In Figure 4, α and ‖f(x)‖ in the h-th head of the ℓ-th layer were calculated using HeadW(Z, ℓ, h) and HeadN(Z, ℓ, h), respectively.
The scores reported in Table 2 are the Spearman rank correlation coefficients r between Weight(p, q, ℓ, h) and WNorm(p, q, ℓ, h). We calculated r using all the pairs of Weight(p, q, ℓ, h) and WNorm(p, q, ℓ, h) for the possible values of p, q, ℓ, and h. In Figure 5, each plot corresponds to a pair of Weight(p, q, ℓ, h) and WNorm(p, q, ℓ, h), where the combination (p, q, ℓ, h) was randomly determined.

D.3 Visualizations of α and ‖f(x)‖ for each word category

As described in Section 4.2, α and ‖f(x)‖ for the [SEP] token canceled each other out in almost all heads (Figure 4). Here, we show the trends for the other categories: B, C, and D in Equation 14. Figures 10, 11, and 12 show the trends of α and ‖f(x)‖ for category B (the [CLS] token), C (periods and commas), and D (other tokens), respectively. The values in these figures were calculated as described in Appendix D.2. Figures 10 and 11 show that the trends for categories B and C were analogous to those for the [SEP] token; the large α was canceled by the small ‖f(x)‖. However, category D does not exhibit this negative correlation between α and ‖f(x)‖. In each heatmap of ‖f(x)‖, the color scale is determined by the maximum value of ‖f(x)‖ in the category.
We also reported the relationship between α and ‖f(x)‖ in Section 4.2 (Figure 5). Figure 13 shows these results separated by word category, providing a clearer display.
D.4 Experimental setup and visualizations for Section 4.3
In Section 4.3, we analyzed the relationship between word frequency and ‖f(x)‖. To formally describe our experiments, we further define the following functions:
$$\mathrm{AvgW}(p, q) = \frac{1}{12 \cdot 12} \sum_{\ell=1}^{12} \sum_{h=1}^{12} \mathrm{Weight}(p, q, \ell, h)$$
$$\mathrm{AvgN}(p, q) = \frac{1}{12 \cdot 12} \sum_{\ell=1}^{12} \sum_{h=1}^{12} \mathrm{Norm}(p, q, \ell, h)$$

Note that we analyzed a model comprising 12 layers, each with 12 attention heads. Let r(·) be a function that returns the frequency rank of a given word. We first calculated the Spearman rank correlation coefficient between r(t^p_q) and AvgW(p, q). The score was 0.06, which suggests that there is no relationship between α and the frequency rank of the word. Then, we calculated the Spearman rank correlation coefficient between r(t^p_q) and AvgN(p, q). The score was 0.75, which suggests a strong correlation between ‖f(x)‖ and the frequency rank of the word; Figure 14 shows these results.

Figure 12: α and ‖f(x)‖ corresponding to other tokens, averaged over all the input text. (a) α. (b) ‖f(x)‖.
In addition, the results for the word frequency, instead of the frequency rank, are shown in Figure 15. c(·) denotes a function that returns the frequency of a given word in the training dataset of BERT. We reproduced the dataset because it is not released.
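The rank-correlation computation itself is a one-liner with SciPy; here is a hedged sketch with dummy ranks and scores (in the actual experiment, r(·) and AvgN come from the reproduced BERT training data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_tokens = 1000
freq_rank = rng.integers(1, 30000, size=n_tokens)    # r(t^p_q): dummy frequency ranks
# Dummy AvgN values that decay with rank, plus noise, to mimic the observed trend
avg_n = 1.0 / np.sqrt(freq_rank) + rng.normal(scale=0.01, size=n_tokens)

rho, _ = spearmanr(freq_rank, avg_n)
print(f"Spearman rho between frequency rank and AvgN: {rho:.2f}")
```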
E Details on Section 5

E.1 Hyperparameters and training settings
We used the Transformer (Vaswani et al., 2017) NMT model implemented in fairseq (Ott et al., 2019) for the experiments. Table 5 shows the hyperparameters of the model, which were the same as those used by Ding et al. (2019). We used the model with the highest BLEU score on the development set for our experiments.

Figure 14: Relationship between frequency rank r(t^p_q) and AvgW(p, q), and between r(t^p_q) and AvgN(p, q).
We conducted the data preprocessing following the method of Zenkel et al. (2019) and Ding et al. (2019) (scripts: https://github.com/lilt/alignment-scripts). All the words in the training data of the NMT systems were split into subword units using byte-pair encoding (BPE; Sennrich et al., 2016) with 10k merge operations. Following Ding et al. (2019), the last 1000 instances of the training data were used as the development data.
E.2 Settings of the word alignment extraction
First, we applied the BPE model that was used to split the training data of the NMT systems to the evaluation data used for calculating the AER scores. Next, we extracted the scores α and ‖αf(x)‖ for each subword in the evaluation data in the force-decoding setup. The gold alignments are annotated at the word level, not the subword level. To calculate word-level alignment scores, the α and ‖αf(x)‖ scores of the subwords were merged: averaged along the target tokens in the gold data, then summed along the source tokens in the gold data. These operations were the same as in Li et al. (2019).

Figure 15: Relationship between frequency count c(t^p_q) and AvgW(p, q), and between c(t^p_q) and AvgN(p, q).
In existing studies, </s>, the special token for the end of the sentence, was probably removed when calculating word alignments. We included </s> as an alignment target and considered alignments to </s> as no alignment. In other words, if the model aligns a certain word with </s>, we assume that the model has decided that the word is not aligned to any word.
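Here is a minimal sketch of this merging and of the </s> convention (the grouping lists and indices are illustrative; this is not the exact script of Li et al. (2019)): subword scores are averaged over each target word's subwords, summed over each source word's subwords, and alignments that fall on </s> are treated as no alignment.

```python
import numpy as np

def merge_subword_scores(scores, src_groups, tgt_groups):
    """scores: (n_tgt_sub, n_src_sub) matrix of alpha or ||alpha f(x)|| values.
    src_groups / tgt_groups: lists of subword-index lists, one list per word."""
    # Average over the subwords of each target word ...
    tgt_merged = np.stack([scores[idx].mean(axis=0) for idx in tgt_groups])
    # ... then sum over the subwords of each source word.
    return np.stack([tgt_merged[:, idx].sum(axis=1) for idx in src_groups], axis=1)

# Toy example: 3 target subwords forming 2 words, 4 source subwords forming 3
# "words"; the last source word stands in for </s>.
scores = np.random.default_rng(0).random((3, 4))
word_scores = merge_subword_scores(scores, [[0], [1], [2, 3]], [[0], [1, 2]])

best_src = word_scores.argmax(axis=1)   # most-attended source word per target word
EOS = 2                                 # index of the </s> "word" (assumed here)
alignments = {(t, s) for t, s in enumerate(best_src) if s != EOS}
print(alignments)
```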
E.3 Layer-wise analysis
We preliminarily investigated how the source-target attention in a Transformer-based NMT system behaves depending on the layer. Tang et al. (2018) have reported that the attention behaves differently depending on the layer. The AER scores in the AWI and AWO settings were calculated for each layer (Figure 16). In the AWO setting, the AER scores tend to be better in the latter layers than in the earlier layers (Figure 16a). In contrast, in the AWI setting, the AER scores tend to be better in the earlier layers than in the latter layers (Figure 16b).
These results suggest that the earlier and latter layers focus on the source word that is aligned with the input and the output target word, respectively (as shown in Figure 6). Furthermore, we believe it is a convincing result that cleaner word alignments are extracted in the AWI setting than in the AWO setting (Figure 16), because the AWI setting is more advantageous: while the decoder may fail to predict the correct output words, the input words are perfectly accurate owing to teacher forcing.

Figure 16: Layer-wise AER scores. Each value is the average of five random seeds. The closer the extracted word alignment is to the reference, the lower the AER score and the lighter the color. (a) AWO setting. (b) AWI setting.
E.4 Alignments in different layers

Figures 17 to 22 show additional examples of the alignments extracted from the different layers of the NMT system. Note that the color scale in each heatmap is determined by the maximum value in each figure. One can observe that while the attention weights α are biased towards </s>, the norms ‖αf(x)‖ corresponding to that token are small.
F Word alignment experiments on different settings
To verify whether the results obtained in Section 5 are reproducible in different settings, we conducted an additional experiment using a model with a different number of attention heads. Specifically, we used a model with eight attention heads in both the encoder and decoder. Table 6 shows the AER scores of the 8-head model. As with the results obtained by the 4-head model, word alignments extracted using the proposed norm-based approach were more reasonable than those extracted using the weight-based approach, and better word alignments were extracted in the AWI setting than in the AWO setting. Furthermore, the alignments extracted using the head or the layer with the highest average ‖αf(x)‖ in the AWI setting are competitive with one of the existing word aligners, fast_align. With respect to the weight-based extraction, the scores obtained using the 8-head model were worse than those obtained using the 4-head model. This may be owing to the increase in the number of heads that do not capture reasonable alignments.

Figures 23a and 23b show the AER scores of the alignments obtained by the norm-based extraction at each head on one out of five seeds. Figure 23c shows the average of ‖αf(x)‖ at each head. As with the results obtained by the 4-head model, the heads with low (i.e., better) AER scores in the AWI setting tended to have high ‖αf(x)‖ (the Spearman rank and Pearson correlation coefficients between the AER scores and the averaged ‖αf(x)‖ among the 6×8 heads are −0.26 and −0.50). In contrast, in the AWO setting, such a negative correlation is not observed; rather, a positive correlation is observed (Spearman's ρ is 0.40 and Pearson's r is 0.40). Additionally, following Appendix E.3, the AER scores for both the AWI and AWO settings were calculated for each layer (Figure 24). As with the 4-head model (Appendix E.3), the latter layers correspond to the AWO setting and the earlier layers correspond to the AWI setting in the 8-head model.

Figure 17: Examples of the reference alignment and the extracted patterns by each method in layer 1. (a) Reference. (b) Attention-weights. (c) Vector-norms (ours). Word pairs with a green frame show the word with the highest weight or norm. The vertical axis represents the input source word in the decoder, and the pairs with a green frame are extracted as alignments in the AWI setting. Note that pairs that contain </s> are not extracted.

Figure 22: Examples of the reference alignment and the extracted patterns by each method in layer 6. (a) Attention-weights. (b) Vector-norms.

Table 6: Results on a model trained with the same settings as described in Appendix E.1, except that the number of attention heads in the encoder and decoder is 8. Each value is the average of five random seeds.

Methods                                                          AER   ±SD
Transformer, attention-based approach
  Alignment with output (AWO) setting
    Weight-based: layer mean                                     70.4  0.6
    Weight-based: best layer (layer 4 or 5)                      49.3  1.2
    Norm-based (ours): layer mean                                63.2  0.7
    Norm-based (ours): best layer (layer 5)                      43.4  0.8
    Norm-based (ours): head with the highest average ‖αf(x)‖     87.2  0.6
    Norm-based (ours): layer with the highest average ‖αf(x)‖    83.7  2.2
  Alignment with input (AWI) setting
    Weight-based: layer mean                                     76.6  1.7
    Weight-based: best layer (layer 2 or 3)                      38.7  8.9
    Norm-based (ours): layer mean                                59.9  1.0
    Norm-based (ours): best layer (layer 2 or 3)                 26.3  1.9
    Norm-based (ours): head with the highest average ‖αf(x)‖     24.9  1.7
    Norm-based (ours): layer with the highest average ‖αf(x)‖    26.5  1.9
Word aligner
  fast_align (from Zenkel et al., 2019)                          28.4  -
  GIZA++ (from Zenkel et al., 2019)                              21.0  -
G Comparison with effective attention (Brunner et al., 2020)

In this section, we discuss the difference between our approach and "effective attention" (Brunner et al., 2020), which is an enhanced version of the weight-based analysis. Effective attention excludes from the attention weight matrix A the components that do not affect the output owing to the application of the transformation f and the input x. The output-irrelevant components are derived from the null space of the matrix T, which is the stack of f(x). Figure 25a shows the Pearson correlation coefficient between the raw attention weights and the effective attention. Since the dimension of the null space of the matrix T depends on the length of the input sequence, as shown in Figure 25a, the effective attention and the raw attention weights are identical for short input sequences. Figure 25b shows the Pearson correlation coefficient between the raw attention weights and our norm-based method.
Since we incorporate the scaling effects of f and x, which include the canceling effect, our proposed method ‖αf(x)‖ differs from the raw attention weights regardless of whether the input sequence is long or short.
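The length dependence can be checked directly: the left null space of T ∈ ℝ^{n×d'} is nontrivial only when the number of tokens n exceeds rank(T) ≤ d'. Here is a sketch with random matrices standing in for the stacked f(x) vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d_v = 64                                 # per-head value dimension d' (illustrative)
for n in (16, 64, 256):                  # input sequence lengths
    T = rng.normal(size=(n, d_v))        # stack of f(x_j) vectors
    null_dim = n - np.linalg.matrix_rank(T)   # dim of the left null space of T
    # With null_dim == 0, effective attention equals the raw weights;
    # only for n > d_v can components of A be removed without changing A @ T.
    print(f"n={n:3d}: dim null(T) = {null_dim}")
```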
Figure 2: Overview of the attention mechanism based on Equation 3. It computes the output vector by summing the weighted vectors; vectors with larger norms have higher contributions. The sizes of the colored circles illustrate the value of the scalar or the norm of the corresponding vector.

Figure 3: Each point corresponds to the averaged α or ‖αf(x)‖ for a word category in a given layer. Note that, in each layer, the sum of α among all the categories is 1. The x-axis denotes the index of the layers.
Figure 4: The higher the value of averaged α or ‖f(x)‖ for [SEP] tokens in a given head, the darker its cell.

Figure 5: Relationship between α and ‖f(x)‖. Each plot corresponds to a pair of α_{i,j} and ‖f(x_j)‖ in one of the attention heads, colored by the word category of x_j. Visualizations by category are shown in Appendix D.3.
Experimental procedure: Following Zenkel et al. (2019) and Ding et al. (2019), we compared:
• norm-based extraction with the AWO/AWI settings
• weight-based extraction with the AWO/AWI settings (Li et al., 2019; Zenkel et al., 2019; Ding et al., 2019)
• gradient-based extraction (Ding et al., 2019)
• existing word aligners (Och and Ney, 2003; Dyer et al., 2013)
Figure 7: Examples of the reference and extracted alignments using each method in layer 2 (the best layer) in the AWI setting on one out of five seeds. (a) Reference. (b) α. (c) ‖αf(x)‖. Two misalignments in the weight-based extraction were resolved in the norm-based analysis (alignments with the green frame). Examples of the extracted alignments in all the layers are shown in Appendix E.4.
Figure 8: AER scores and averaged ‖αf(x)‖ in each head on one out of five seeds. The closer the extracted word alignment is to the reference, the lower the AER score and the lighter the color. The larger the averaged ‖αf(x)‖, the darker the color.
Figure 10: α and ‖f(x)‖ corresponding to the [CLS] token, averaged over all the input text. (a) α. (b) ‖f(x)‖.

Figure 11: α and ‖f(x)‖ corresponding to periods and commas, averaged over all the input text.

Figure 13: Relationship between α and ‖f(x)‖ for each category. (a) [CLS]. (b) [SEP]. (c) Periods and commas. (d) Other tokens.
Figure 18: Examples of the reference alignment and the extracted patterns by each method in layer 2. (a) Attention-weights. (b) Vector-norms.

Figure 19: Examples of the reference alignment and the extracted patterns by each method in layer 3.

Figure 20: Examples of the reference alignment and the extracted patterns by each method in layer 4. (a) Attention-weights. (b) Vector-norms.

Figure 21: Examples of the reference alignment and the extracted patterns by each method in layer 5.

Figure 23: AER scores and averaged ‖αf(x)‖ for each head in a model with 8 heads. (a) AER in the AWO setting. (b) AER in the AWI setting. (c) Averaged ‖αf(x)‖.

Figure 24: Layer-wise AER scores. Each value is the average of five random seeds. The closer the extracted word alignment is to the reference, the lower the AER score and the lighter the color.

Figure 25: Each point represents the Pearson correlation coefficient of the raw attention and each method against token length. (a) Effective attention. (b) ‖αf(x)‖.
Table 1: Mean (µ), standard deviation (σ), coefficient of variance (CV), and maximum and minimum values of ‖f(x)‖. In the last row, the former three are averaged over all the heads.
General settings: Following the previous studies (Clark et al., 2019; Kovaleva et al., 2019; Reif et al., 2019; Lin et al., 2019; Htut et al., 2019), we used the pre-trained BERT-base, with 12 layers, each containing 12 attention heads. We used the data provided by Clark et al. (2019) for the analysis. The data contains 992 sequences extracted from Wikipedia, where each sequence consists of two consecutive paragraphs in the form [CLS] paragraph1 [SEP] paragraph2 [SEP]. Each sequence consists of up to 128 tokens, with an average of 122 tokens.
Table 2: Spearman rank correlation coefficient between α and ‖f(x)‖ in each token category.
Table 3: AER scores with different methods for German-to-English translation. The closer the extracted word alignment is to the reference, the lower the AER score. The "layer mean" denotes the average of the AER scores across all layers. Each value is the average of five random seeds.
Table 5: Hyperparameters of the NMT model.
https://github.com/gorokoba560/norm-analysis-of-transformer
Whether a bias b is added to calculate the query, key, and value vectors depends on the implementation. W^O ∈ ℝ^{d'×d} in Equation 1 corresponds to the part of W^O ∈ ℝ^{hd'×d} introduced in Vaswani et al. (2017) that is applied to each head, where h is the number of heads and hd' = d holds.
We use the standard Euclidean norm.
Note that the attention mechanism has the constraint that the sum of the attention weights becomes 1.0 (see Equation 2).
Note that for any positive scalar λ ∈ ℝ and vector x ∈ ℝ^d, ‖λx‖ = λ‖x‖.
The frequency ranks of the words [CLS], [SEP], period, and comma, out of approximately 30,000 words, are 50, 28, 2, and 3, respectively.
We call a word type a "word"; each instance of a word is called a "token."
We counted the frequency of each word type by reproducing the training data of BERT.
The Spearman rank correlation coefficient without special tokens, periods, and commas was 0.28 for the attention weights and 0.69 for the norms.
http://www.statmt.org/europarl/v7
https://www-i6.informatik.rwth-aachen.de/goldAlignment/
Acknowledgments

We would like to thank the anonymous reviewers of EMNLP 2020 and the ACL 2020 Student Research Workshop (SRW), and the SRW mentor Junjie Hu, for their insightful comments. We also thank the members of the Tohoku NLP Laboratory for helpful comments. This work was supported by JSPS KAKENHI Grant Number JP19H04162. This work was also partially supported by a Bilateral Joint Research Program between the RIKEN AIP Center and Tohoku University.
References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer Normalization. arXiv preprint arXiv:1607.06450.

Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2020. On Identifiability in Transformers. In 8th International Conference on Learning Representations (ICLR).

Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look At? An Analysis of BERT's Attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171-4186.

Shuoyang Ding, Hainan Xu, and Philipp Koehn. 2019. Saliency-driven Word Alignment Interpretation for Neural Machine Translation. In Proceedings of the 4th Conference on Machine Translation (WMT), pages 1-12.

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 644-648.

Yoav Goldberg. 2019. Assessing BERT's Syntactic Abilities. arXiv preprint arXiv:1901.05287.

Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do Attention Heads in BERT Track Syntactic Dependencies? arXiv preprint arXiv:1911.12246.

Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 3543-3556.

Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What Does BERT Learn about the Structure of Language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3651-3657.

Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4364-4373.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations (ICLR).

Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the Word Alignment from Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1293-1303.

Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open Sesame: Getting Inside BERT's Linguistic Knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.

David Mareček and Rudolf Rosa. 2019. From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 263-275.

Paul Michel, Omer Levy, and Graham Neubig. 2019. Are Sixteen Heads Really Better than One? In Advances in Neural Information Processing Systems 32 (NIPS), pages 14014-14024.

Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440-447.

Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53.

Damian Pascual, Gino Brunner, and Roger Wattenhofer. 2020. Telling BERT's full story: from Local Attention to Global Aggregation. arXiv preprint arXiv:2004.05916.

Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, and Zachary C. Lipton. 2020. Learning to Deceive with Attention-Based Explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).

Alessandro Raganato and Jörg Tiedemann. 2018. An Analysis of Encoder Representations in Transformer-Based Machine Translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287-297.

Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and Measuring the Geometry of BERT. In Advances in Neural Information Processing Systems 32 (NIPS), pages 8594-8603.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A Primer in BERTology: What we know about how BERT works. arXiv preprint arXiv:2002.12327.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1715-1725.

Sofia Serrano and Noah A. Smith. 2019. Is Attention Interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2931-2951.

Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2018. An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation. In Proceedings of the 3rd Conference on Machine Translation (WMT): Research Papers, pages 26-35.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations (ICLR).

Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention Interpretability Across NLP Tasks. arXiv preprint arXiv:1909.11218.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30 (NIPS), pages 5998-6008.

David Vilar, Maja Popović, and Hermann Ney. 2006. AER: Do we need to "improve" our alignments? In International Workshop on Spoken Language Translation (IWSLT) 2006, pages 205-212.

Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5307-5315.

Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not Explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural Information Processing Systems 32 (NIPS), pages 1-18.

Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding Interpretable Attention to Neural Translation Models Improves Word Alignment. arXiv preprint arXiv:1901.11359.
34,456,947 | Walk Thru Text and Keys Walk Thru Text for Information Extraction Key for Template Element Key for Template Relation Key for Scenario Template Walk Thru Text for Named Entity Key for Named Entity Walk Thru Text for Coreference Key for Coreference Template Element Key | [] | Walk Thru Text and Keys Walk Thru Text for Information Extraction Key for Template Element Key for Template Relation Key for Scenario Template Walk Thru Text for Named Entity Key for Named Entity Walk Thru Text for Coreference Key for Coreference Template Element Key
Walk Thru Text and Keys Walk Thru Text for Information Extraction Key for Template Element Key for Template Relation Key for Scenario Template Walk Thru Text for Named Entity Key for Named Entity Walk Thru Text for Coreference Key for Coreference Template Element Key
Last Modified November 1998
Template Relation Key
<TEMPLATE-9602140509-1> := DOC_NR: "9602140509" <ENTITY-9602140509-1> := ENT_NAME: "New York Times News Service" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-2> := ENT_NAME: "Bloomberg Business News" "Bloomberg" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-3> := ENT_NAME: "Intelsat" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "a global supplier of international satellite communication services" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-4> := ENT_NAME: "News Corp." "News Corporation" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-5> := ENT_NAME: "Tele-Communications Inc." "TCI" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-6> := ENT_NAME: "China Great Wall Industry Corp." ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-7> := ENT_NAME: "Loral Corp." ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-8> := ENT_NAME: "PanAmSat Corp." ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "a private, Greenwich, Connecticut, company" / "a satellite provider for the TV project" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-9> := ENT_NAME: "Grupo Televisa SA" "Televisa" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "Mexico's biggest broadcaster" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-10> := ENT_NAME: "Organizacoes Globo" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "Brazil's largest media company" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-11> := ENT_NAME: "ING Barings" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-12> := ENT_NAME: "Arianespace" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-13> := ENT_NAME: "International Space Brokers Group" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "this consortium" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-14> := ENT_NAME: "International Technology Underwriters" "International Technology" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "one insurer in this consortium" / "80 percent owned by Paris insurer Axa SA and 20 percent by Prudential Reinsurance Holdings Inc. of Newark, New Jersey" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-15> := ENT_NAME: "Axa SA" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "Paris insurer" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-16> := ENT_NAME: "Prudential Reinsurance Holdings Inc." ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-17> := ENT_NAME: "Space Transportation Association" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_OTHER <ENTITY-9602140509-18> := ENT_NAME: "National Aeronautics and Space Administration" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_GOVT <ENTITY-9602140509-19> := ENT_NAME: "Lockheed Martin Corp." "Lockheed" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-20> := ENT_NAME: "Lockheed Space and Strategic Missiles" ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO OBJ_STATUS: OPTIONAL COMMENT: "part of larger org mentioned" <ENTITY-9602140509-21> := ENT_NAME: "McDonnell Douglas Corp." ENT_TYPE: ORGANIZATION ENT_CATEGORY: ORG_CO <ENTITY-9602140509-22> := ENT_NAME: "Bloomberg Information Television" ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "a unit of Bloomberg L.P., the parent of Bloomberg Business News" / "a unit of Bloomberg L.P." ENT_CATEGORY: ORG_CO COMMENT: "parent org mentioned" <ENTITY-9602140509-23> := ENT_NAME: "Bloomberg L.P." 
ENT_TYPE: ORGANIZATION ENT_DESCRIPTOR: "the parent of Bloomberg Business News" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-24> := ENT_NAME: "Liza McDonald" ENT_TYPE: PERSON ENT_CATEGORY: PER_CIV <ENTITY-9602140509-25> := ENT_NAME: "Todd Blecher" ENT_TYPE: PERSON ENT_CATEGORY: PER_CIV <ENTITY-9602140509-26> := ENT_NAME: "Rupert Murdoch" ENT_TYPE: PERSON ENT_CATEGORY: PER_CIV <ENTITY-9602140509-27> := ENT_NAME: "Irving Goldstein" "Goldstein" ENT_TYPE: PERSON ENT_DESCRIPTOR: "director general and chief executive of Intelsat" ENT_CATEGORY: PER_CIV <ENTITY-9602140509-28> := ENT_NAME: "Howard J. Rubenstein" ENT_TYPE: PERSON ENT_DESCRIPTOR: "company spokesman" ENT_CATEGORY: PER_CIV COMMENT: "News Corporation" <ENTITY-9602140509-29> := ENT_NAME: "Shayne McGuire" "McGuire" ENT_TYPE: PERSON ENT_DESCRIPTOR: "an analyst at ING Barings in Mexico City" ENT_CATEGORY: PER_CIV <ENTITY-9602140509-30> := ENT_NAME: "Rick Hauck" ENT_TYPE: PERSON ENT_DESCRIPTOR: "Its chief executive" / "chief executive" / "former space shuttle astronaut" ENT_CATEGORY: PER_CIV COMMENT: "International Technology Underwriters" <ENTITY-9602140509-31> := ENT_NAME: "Eric Stallmer" ENT_TYPE: PERSON ENT_DESCRIPTOR: "spokesman for the Space Transportation Association of Arlington, Virginia" / "spokesman for the Space Transportation Association of Arlington, Virginia, which represents U.S. rocket makers who compete with the Chinese" ENT_CATEGORY: PER_CIV <ENTITY-9602140509-32> := ENT_NAME: "Virnell Bruce" ENT_TYPE: PERSON ENT_DESCRIPTOR: "spokeswoman for Lockheed Space and Strategic Missiles in Bethesda, Maryland" ENT_CATEGORY: PER_CIV <ENTITY-9602140509-33> := ENT_TYPE: PERSON ENT_DESCRIPTOR: "company spokesman" ENT_CATEGORY: PER_CIV COMMENT: "Bloomberg Information Television" <ENTITY-9602140509-34> := ENT_NAME: "Long March 3B" ENT_TYPE: ARTIFACT ENT_DESCRIPTOR: "A Chinese rocket carrying an Intelsat satellite" / "A Chinese rocket" / "Long March 3B rocket" / "Long March 3B rocket for today's failed launch" / "Long March 3B rocket for today's failed launch of a satellite built by Loral Corp. of New York for Intelsat" ENT_CATEGORY: ART_AIR COMMENT: "too many logically possible alternatives" <ENTITY-9602140509-35> := ENT_TYPE: ARTIFACT ENT_DESCRIPTOR: "an Intelsat satellite" / "spacecraft" / "satellite built by Loral Corp. of New York for Intelsat" / "satellite built by Loral Corp. of New York" / "satellite built by Loral Corp." / "satellite" / "one of three satellites to be used for a new direct-to-home subscription-based television service in Latin America scheduled to begin in May" / "the satellite destroyed today" <p> Evangelista said the Chinese-built Long March rocket veered off course and was destroyed after it failed to reach orbit. Intelsat was using the Long March rocket for the first time to launch one of its satellites. Intelsat currently has 23 satellites in orbit. <p> A spokesman for News Corp., Howard Rubenstein, said the accident would not hinder the group's plans to offer 150 channels of entertainment, news and sports programming to viewers in Latin America and the Caribbean. <p>`N ews Corp. has a number of other real options and will disclose them shortly,'' Rubinstein said in a statement. <p> Grupo Televisa and Globo plan to offer national and local programming in Spanish and Portuguese. Initially, the venture's partners said they planned to invest $500 million. <p> But a similar explosion last year delayed the plans of several American media companies to offer a package of satellite television services in Asia. 
Viacom, Time Warner's Home Box Office and Turner Broadcasting System were among the companies that had leased space on an Apstar 2 satellite to beam MTV, CNN and other channels throughout Asia. <p> After the rocket carrying that satellite exploded, media analysts said the companies had to settle for space on a series of regional satellites, which had less reach than the Apstar 2 would have offered. <p> News Corp. actually benefited from that accident. In 1993, the company had purchased a controlling stake in a rival Asian satellite service, Star TV. With his biggest competitors unable to enter the Asian market, Murdoch was able to build Star TV into the dominant programming service. <p> A spokeswoman for Tele-Communications, LaRae Marsik, said the partners in the Latin American venture intended to begin service by the end of 1996. When the companies announced their plans last November, they said they planned to be in business by May. <p> Ms. Marsik said Tele-Communications and its partners had a back-up plan, which could include leasing space on another satellite, but she declined to offer details. ``It is an unfortunate incident,'' she said, ``but it is not a make-it-or-break-it event for us.'' <p> Jessica Reif, a media analyst at Merrill Lynch & Co., said, ``If they can get up and running with exclusive programming within six months, it doesn't set the venture back that far.'' <p> Hughes Electronics, a subsidiary of the General Motors Corp., is starting its own satellite broadcast service in Latin America. Ms. Reif said that venture, which is based on Hughes's DirecTV service in the United States, would benefit if the explosion delayed the Murdoch-led venture. </TEXT> <TRAILER> NYT-02-14-96 2029EST </TRAILER> </DOC> Last Modified November 1998http://www.muc.saic.com/proceedings/walkthru_ne_text.html Copyright 1998 Named Entity Key <DOC> <DOCID> nyt960214.0704 </DOCID> <STORYID cat=f pri=u> A4479 </STORYID> <SLUG fv=taf-z> BC-<ENAMEX TYPE="PERSON">MURDOCH</ENAMEX>-SATELLITE-NYT </SLUG> <DATE> <TIMEX TYPE="DATE">02-14</TIMEX> </DATE> <NWORDS> 0608 </NWORDS> <PREAMBLE> BC-<ENAMEX TYPE="PERSON">MURDOCH</ENAMEX>-SATELLITE-NYT <ENAMEX TYPE="PERSON">MURDOCH</ENAMEX> SATELLITE FOR LATIN PROGRAMMING EXPLODES ON TAKEOFF (kd) By <ENAMEX TYPE="PERSON">MARK LANDLER</ENAMEX> c.<TIMEX TYPE="DATE">1996</TIMEX> <ENAMEX TYPE="ORGANIZATION">N.Y. Times News Service</ENAMEX> </PREAMBLE> <TEXT> <p> A Chinese rocket carrying a television satellite exploded seconds after launch <TIMEX TYPE="DATE">Wednesday</TIMEX>, dealing a potential blow to <ENAMEX TYPE="PERSON">Rupert Murdoch</ENAMEX>'s ambitions to offer satellite programming in <ENAMEX TYPE="LOCATION">Latin America</ENAMEX>. <p> <ENAMEX TYPE="PERSON">Murdoch</ENAMEX>'s <ENAMEX TYPE="ORGANIZATION">News Corp.</ENAMEX> is one of four media companies in a partnership that had leased space on the <ENAMEX TYPE="ORGANIZATION">Intelsat</ENAMEX> satellite to offer the Latin American service. The other partners are <ENAMEX TYPE="ORGANIZATION">Tele-Communications Inc.</ENAMEX>, the nation's largest cable operator; <ENAMEX TYPE="ORGANIZATION">Grupo Televisa SA</ENAMEX>, the Mexican broadcaster and publisher, and the giant Brazilian media conglomerate <ENAMEX TYPE="ORGANIZATION">Globo</ENAMEX>. <p> <ENAMEX TYPE="PERSON">Llennel Evangelista</ENAMEX>, a spokesman for <ENAMEX TYPE="ORGANIZATION">Intelsat</ENAMEX>, a global satellite consortium based in <ENAMEX TYPE="LOCATION">Washington</ENAMEX>, said the accident occurred at <TIMEX TYPE="TIME">2 p.m. 
EST</TIMEX> <TIMEX TYPE="DATE">Wednesday</TIMEX>, or <TIMEX TYPE="TIME">early Thursday morning</TIMEX> at the <ENAMEX TYPE="LOCATION">Xichang</ENAMEX> launch site in <ENAMEX TYPE="LOCATION">Sichuan Province</ENAMEX> in southwestern <ENAMEX TYPE="LOCATION">China</ENAMEX>. ``We have no details on what caused the accident,'' he said. <p> <ENAMEX TYPE="PERSON">Evangelista</ENAMEX> said the Chinese-built Long March rocket veered off course and was destroyed after it failed to reach orbit. <ENAMEX TYPE="ORGANIZATION">Intelsat</ENAMEX> was using the Long March rocket for the first time to launch one of its satellites. <ENAMEX TYPE="ORGANIZATION">Intelsat</ENAMEX> currently has 23 satellites in orbit. <p> A spokesman for <ENAMEX TYPE="ORGANIZATION">News Corp.</ENAMEX>, <ENAMEX TYPE="PERSON">Howard Rubenstein</ENAMEX>, said the accident would not hinder the group's plans to offer 150 channels of entertainment, news and sports programming to viewers in <ENAMEX TYPE="LOCATION">Latin America</ENAMEX> and the <ENAMEX TYPE="LOCATION">Caribbean</ENAMEX>. <p>`< ENAMEX TYPE="ORGANIZATION">News Corp.</ENAMEX> has a number of other real options and will disclose them shortly,'' <ENAMEX TYPE="PERSON">Rubinstein</ENAMEX> said in a statement. <p> <ENAMEX TYPE="ORGANIZATION">Grupo Televisa</ENAMEX> and <ENAMEX TYPE="ORGANIZATION">Globo</ENAMEX> plan to offer national and local programming in Spanish and Portuguese. Initially, the venture's partners said they planned to invest <NUMEX TYPE="MONEY">$500 million</NUMEX>. <p> But a similar explosion <TIMEX TYPE="DATE">last year</TIMEX> delayed the plans of several American media companies to offer a package of satellite television services in <ENAMEX TYPE="LOCATION">Asia</ENAMEX>. <ENAMEX TYPE="ORGANIZATION">Viacom</ENAMEX>, <ENAMEX TYPE="ORGANIZATION">Time Warner</ENAMEX>'s <ENAMEX TYPE="ORGANIZATION">Home Box Office</ENAMEX> and <ENAMEX TYPE="ORGANIZATION">Turner Broadcasting System</ENAMEX> were among the companies that had leased space on an Apstar 2 satellite to beam MTV, CNN and other channels throughout <ENAMEX TYPE="LOCATION">Asia</ENAMEX>. <p> After the rocket carrying that satellite exploded, media analysts said the companies had to settle for space on a series of regional satellites, which had less reach than the Apstar 2 would have offered. <p> <ENAMEX TYPE="ORGANIZATION">News Corp.</ENAMEX> actually benefited from that accident. In <TIMEX TYPE="DATE">1993</TIMEX>, the company had purchased a controlling stake in a rival Asian satellite service, <ENAMEX TYPE="ORGANIZATION" STATUS="OPT">Star TV</ENAMEX>. With his biggest competitors unable to enter the Asian market, <ENAMEX TYPE="PERSON">Murdoch</ENAMEX> was able to build <ENAMEX TYPE="ORGANIZATION" STATUS="OPT">Star TV</ENAMEX> into the dominant programming service. <p> A spokeswoman for <ENAMEX TYPE="ORGANIZATION">Tele-Communications</ENAMEX>, <ENAMEX TYPE="PERSON">LaRae Marsik</ENAMEX>, said the partners in the Latin American venture intended to begin service by <TIMEX TYPE="DATE">the end of 1996</TIMEX>. When the companies announced their plans <TIMEX TYPE="DATE">last November</TIMEX>, they said they planned to be in business by <TIMEX TYPE="DATE">May</TIMEX>. <p> Ms. <ENAMEX TYPE="PERSON">Marsik</ENAMEX> said <ENAMEX TYPE="ORGANIZATION">Tele-Communications</ENAMEX> and its partners had a back-up plan, which could include leasing space on another satellite, but she declined to offer details. 
``It is an unfortunate incident,'' she said, ``but it is not a make-it-or-break-it event for us.'' <p> <ENAMEX TYPE="PERSON">Jessica Reif</ENAMEX>, a media analyst at <ENAMEX TYPE="ORGANIZATION">Merrill Lynch & Co.</ENAMEX>, said, ``If they can get up and running with exclusive programming <TIMEX TYPE="DATE" STATUS="OPT">within six months</TIMEX>, it doesn't set the venture back that far.'' <p> <ENAMEX TYPE="ORGANIZATION">Hughes Electronics</ENAMEX>, a subsidiary of the <ENAMEX TYPE="ORGANIZATION">General Motors Corp.</ENAMEX>, is starting its own satellite broadcast service in <ENAMEX TYPE="LOCATION">Latin America</ENAMEX>. Ms. <ENAMEX TYPE="PERSON">Reif</ENAMEX> said that venture, which is based on <ENAMEX TYPE="ORGANIZATION">Hughes</ENAMEX>'s <ENAMEX TYPE="ORGANIZATION" STATUS="OPT">DirecTV</ENAMEX> service in the <ENAMEX TYPE="LOCATION">United States</ENAMEX>, would benefit if the explosion delayed the <ENAMEX TYPE="PERSON">Murdoch</ENAMEX>-led venture. </TEXT> <TRAILER> NYT-<TIMEX TYPE="DATE">02-14-96</TIMEX> <TIMEX TYPE="TIME">2029EST</TIMEX> </TRAILER> </DOC> Hughes expects Galaxy VIII(I) will bring in $30 million in revenue in its first year and $58 million each year for the following 11 years, according to filings at the FCC. <p> GE Americom filed its cost and revenue assumptions confidentially at the agency. Its plan calls for two satellites and a spare. <p> The plans are significant, said Scott Blake Harris, former FCC international bureau chief, as ``yet another indication of the health and strength of the U.S. satellite industry.'' <p> The airwaves to be allocated are currently used by the National Aeronautics and Space Administration for its tracking and data relay system. The system, among other things, monitors the Space Shuttle, helps to retrieve satellites, and relays communications between ground stations and low-orbiting spacecraft including the Shuttle. Those functions are likely to be slowly shifted to another slice of spectrum, while the airwaves they've historically used are turned over, in part, to satellite services such as the ones planned by GE and GM. Other companies that support the allocation and may use it include Lockheed Martin Corp.'s Loral Space and Communications, International Private Satellite Partners/Orion Atlantic Capital Corp., and Comsat Corp. <p> No opposing comments on the allocation were filed at the agency. <p> The spectrum shift comes at Hughes' initiative. The company asked the FCC in March of 1995 to fix an imbalance in the uplink and downlink airwaves available to fixed satellite services so that the spectrum could be more effectively used. <p>`T he downlink bands are not paired with any uplink bands,'' the company wrote. Indeed, for 1000 megahertz allocated for satellite downlinks, or transmissions from satellites to earth stations, the agency had only set aside 500 megahertz for uplinks. That's meant that half of the downlink capacity has been unusable, because no corresponding uplink airwaves existed. <p>`I t is . . . critical to the competitiveness of the United States satellite industry, both at home and abroad, that the commission allocate'' more airwaves for fixed satellite uplinks, Hughes said. <p> A similar plan was set by the International Telecommunications Union at the World Administrative Radio Conference in 1992, and adopted at the same meeting in 1995. <p> The plan hadn't yet been implemented in the U.S. because interference with NASA's radar functions hadn't been worked out. 
<p> Also at Thursday's meeting, the FCC plans to formalize the process public utility companies use to become certified as telecommunications providers. </TEXT> <TRAILER> NYT-09-10-96 1604EDT </TRAILER> </DOC>
9602140509-7> := LOCALE: "Central America" LOCALE_TYPE: REGION COUNTRY: "Central America" <LOCATION-9602140509-8> :"northern Argentina not included because it would be a province; perhaps both should be optional" : <ENTITY-9602140509-2> <EMPLOYEE_OF-9602140509-2> := PERSON: <ENTITY-9602140509-25> ORGANIZATION: <ENTITY-9602140509-2> <EMPLOYEE_OF-9602140509-3> :: <LOCATION-9602140509-18> ORGANIZATION: <ENTITY-9602140509-16> <LOCATION_OF-9602140509-11> := LOCATION: <LOCATION-9602140509-19> ORGANIZATION: <ENTITY-9602140509-17> : "a global supplier of international satellite communication services" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-4> := ENT_NAME: "News Corp." ENT_DESCRIPTOR: "Brazil's largest media company" ENT_CATEGORY: ORG_CO <ENTITY-9602140509-34> := ENT_NAME: "Long March 3B" ENT_TYPE: ARTIFACT ENT_DESCRIPTOR: "A Chinese rocket carrying an Intelsat satellite" / "A Chinese rocket" / "Long March 3B rocket" / "Long March 3B rocket for today's failed launch" / "Long March 3B rocket for today's failed launch of a satellite built by Loral Corp. of New York for Intelsat" ENT_CATEGORY: ART_AIR COMMENT: "too many logically possible alternatives" <ENTITY-9602140509-35> := ENT_TYPE: ARTIFACT ENT_DESCRIPTOR: "an Intelsat satellite" / "spacecraft" / "satellite built by Loral Corp. of New York for Intelsat" / "satellite built by Loral Corp. of New York" / "satellite built by Loral Corp." / "satellite" / "one of three satellites to be used for a new direct-to-home subscription-based television service in Latin America scheduled to begin in May" / "the satellite destroyed today" ENT_CATEGORY: ART_AIR <ENTITY-9602140509-36> := ENT_TYPE: ARTIFACT ENT_DESCRIPTOR: "a second Intelsat satellite" ENT_CATEGORY: ART_AIR <LOCATION-9602140509-1> := LOCALE: "Xichang" LOCALE_TYPE: CITY / PROVINCE COUNTRY: "China" COMMENT: "China" <LOCATION-9602140509-3> :: "later this month" COMMENT: "late Feb."
A Chinese rocket carrying a television satellite exploded seconds after launch Wednesday, dealing a potential blow to Rupert Murdoch's ambitions to offer satellite programming in Latin America. <p> Murdoch's News Corp. is one of four media companies in a partnership that had leased space on the Intelsat satellite to offer the Latin American service. The other partners are Tele-Communications Inc., the nation's largest cable operator; Grupo Televisa SA, the Mexican broadcaster and publisher, and the giant Brazilian media conglomerate Globo. <p> Llennel Evangelista, a spokesman for Intelsat, a global satellite consortium based in Washington, said the accident occurred at 2 p.m. EST Wednesday, or early Thursday morning at the Xichang launch site in Sichuan Province in southwestern China. ``We have no details on what caused the accident,'' he said.

Named Entity Text
<DOC>
<DOCID> nyt960214.0704 </DOCID>
<STORYID cat=f pri=u> A4479 </STORYID>
<SLUG fv=taf-z> BC-MURDOCH-SATELLITE-NYT </SLUG>
<DATE> 02-14 </DATE>
<NWORDS> 0608 </NWORDS>
<PREAMBLE>
BC-MURDOCH-SATELLITE-NYT
MURDOCH SATELLITE FOR LATIN PROGRAMMING EXPLODES ON TAKEOFF
(kd)
By MARK LANDLER
c.1996 N.Y. Times News Service
</PREAMBLE>
<TEXT>
<p>
A
<DOC> <DOCID> nyt960910.0378 </DOCID> <STORYID cat=f pri=r> A2394 </STORYID> <SLUG fv=tia-z> BC-HUGHES-FCC-BLOOM </SLUG> <DATE> &LR; </DATE> <NWORDS> 09-10 </NWORDS> <PREAMBLE> BC-HUGHES-FCC-BLOOM GM, GE PROJECTS LIKELY TO GET NEEDED AIRWAVES FROM FCC THURSDAY (For use by New York Times News Service clients) Sept. 10 (Bloomberg) --Satellite systems to deliver video services to Latin America planned by General Motors Corp.'s Hughes Electronics Corp. and General Electric Co. are likely to get the airwaves they need from federal regulators. <p> Plans for Hughes' Galaxy VIII(I) project and GE's GE Americom project depend on the Federal Communications Commission's allocation of a swath of spectrum that will let their earth stations communicate with satellites in space. <p> Scheduled for a vote at the agency's meeting on Thursday, the expected allocation will let the companies transmit video pictures, phone calls, and other data from earth stations to orbiting satellites, and then to customers in Mexico, the Caribbean, Central America, and South America. <p> Both companies said they expect to use the systems primarily to deliver digital video services to Latin American subscribers' own dishes and to cable company receivers for distribution to cable subscribers. <p> Mexico's Grupo Televisa SA, Multivision SA and Medcom SA all have plans to deliver direct-to-home video satellite service to Mexico within a year. <p> Televisa, Mexico's largest broadcaster, has formed an agreement with Rupert Murdoch's News Corp., Brazil's Globo television network, and Denver-based Tele-Communications Inc. to offer direct-to-home service throughout Latin America. <p> Turner Broadcasting System Inc., for its part, agreed in July to distribute Cable News Network and three other cable channels to Latin American subscribers together with a group called Galaxy Latin America, composed of GM's DirecTV, Venezuela's Cisneros Group of Cos., Brazil's Televisao Abril, and Mexico's MVS Multivision. <p> Hughes' Galaxy VIII(I) plan would use one satellite, which the company estimates will cost $230 million to build and launch.

Coreference Text
By Liza McDonald
c.1996 Bloomberg Business News
</PREAMBLE>
<TEXT>
<p>
Washington,
</DOCID> <STORYID cat=f pri=r> A2394 </STORYID> <SLUG fv=tia-z> BC-<COREF ID="1">HUGHES</COREF>-<COREF ID="3">FCC</COREF>-BLOOM </SLUG> <DATE> &LR;
</DATE> <NWORDS> <COREF ID="6">09-10</COREF> </NWORDS> <PREAMBLE> BC-<COREF ID="0" TYPE="IDENT" REF="1">HUGHES</COREF>-<COREF ID="2" TYPE="IDENT" REF="3">FCC</COREF>-BLOOM
<COREF ID="21" MIN="PROJECTS"><COREF ID="11">GM</COREF>, <COREF ID="13">GE</COREF> PROJECTS</COREF> LIKELY TO GET <COREF ID="15" MIN="AIRWAVES">NEEDED AIRWAVES</COREF> FROM <COREF. ID="4" TYPE="IDENT" REF="2">FCC</COREF> <COREF ID="27">THURSDAY</COREF> (For use by New York Times News Service clients<COREF ID="21" MIN="PROJECTS"><COREF ID="11">GM</COREF>, <COREF ID="13">GE</COREF> PROJECTS</COREF> LIKELY TO GET <COREF ID="15" MIN="AIRWAVES">NEEDED AIRWAVES</COREF> FROM <COREF ID="4" TYPE="IDENT" REF="2">FCC</COREF> <COREF ID="27">THURSDAY</COREF> (For use by New York Times News Service clients)
By Liza McDonald c.1996 <COREF ID="8">Bloomberg Business News</COREF>
12" TYPE="IDENT" REF="13">General Electric Co.</COREF></COREF></COREF> are likely to get the <COREF ID="14" TYPE="IDENT" REF="15" MIN="airwaves">airwaves <COREF ID="16" TYPE="IDENT" REF="17">they</COREF> need</COREF> from <COREF ID="18" TYPE="IDENT" REF="4" MIN="regulators">federal regulators</COREF>. <p> <COREF ID="69" MIN="Plans">Plans for <COREF ID="20" TYPE="IDENT" REF="21"><COREF ID="19" TYPE="IDENT" REF="9">Hughes</COREF>' <COREF ID="55">Galaxy VIII(I)</COREF> project and <COREF ID="22" TYPE="IDENT" REF="12">GE</COREF>'s <COREF ID="64">GE Americom</COREF> project</COREF></COREF> depend on the <COREF ID="29" MIN="allocation"><COREF ID="23" TYPE="IDENT" REF="18">Federal Communications Commission</COREF>'s allocation of a swath of spectrum</COREF> that will let <COREF ID="24" TYPE="IDENT" REF="20">their</COREF> earth stations communicate with satellites in space. <p> Scheduled for a vote at the <COREF ID="109" MIN="meeting"><COREF ID="25" TYPE="IDENT" REF="23">agency</COREF>'s meeting</COREF> on <COREF ID="26" TYPE="IDENT" REF="27">Thursday</COREF>, the <COREF ID="28" TYPE="IDENT" REF="29" MIN="allocation">expected allocation</COREF> will let the <COREF ID="30" TYPE="IDENT" REF="31">companies</COREF> transmit video pictures, phone calls, and other data from earth stations to orbiting satellites, and then to customers in <COREF ID="36">Mexico</COREF>, the Caribbean, Central America, and South America. </Preamble> <text> <p> Washington, <COREF ID="5" TYPE="IDENT" REF="6">Sept. 10</COREF> (<COREF ID="7" TYPE="IDENT" REF="8">Bloomberg</COREF>) --<COREF ID="17" MIN="systems">Satellite systems to deliver video services to <COREF ID="43">Latin America</COREF> planned by <COREF ID="31" MIN="Hughes Electronics Corp. and General Electric Co. ><COREF ID="9" TYPE="IDENT" REF="0" MIN="Hughes Electronics Corp."><COREF ID="10" REF="11" TYPE="IDENT">General Motors Corp.</COREF>'s Hughes Electronics Corp.</COREF> and <COREF ID=. <p> <COREF ID="32" TYPE="IDENT" REF="30" MIN="companies">Both companies</COREF> said <COREF ID="33" TYPE="IDENT" REF="32">they</COREF> expect to use <COREF ID="34" TYPE="IDENT"</PREAMBLE> <TEXT> <p> Washington, <COREF ID="5" TYPE="IDENT" REF="6">Sept. 10</COREF> (<COREF ID="7" TYPE="IDENT" REF="8">Bloomberg</COREF>) --<COREF ID="17" MIN="systems">Satellite systems to deliver video services to <COREF ID="43">Latin America</COREF> planned by <COREF ID="31" MIN="Hughes Electronics Corp. and General Electric Co."><COREF ID="9" TYPE="IDENT" REF="0" MIN="Hughes Electronics Corp."><COREF ID="10" REF="11" TYPE="IDENT">General Motors Corp.</COREF>'s Hughes Electronics Corp.</COREF> and <COREF ID="12" TYPE="IDENT" REF="13">General Electric Co.</COREF></COREF></COREF> are likely to get the <COREF ID="14" TYPE="IDENT" REF="15" MIN="airwaves">airwaves <COREF ID="16" TYPE="IDENT" REF="17">they</COREF> need</COREF> from <COREF ID="18" TYPE="IDENT" REF="4" MIN="regulators">federal regulators</COREF>. <p> <COREF ID="69" MIN="Plans">Plans for <COREF ID="20" TYPE="IDENT" REF="21"><COREF ID="19" TYPE="IDENT" REF="9">Hughes</COREF>' <COREF ID="55">Galaxy VIII(I)</COREF> project and <COREF ID="22" TYPE="IDENT" REF="12">GE</COREF>'s <COREF ID="64">GE Americom</COREF> project</COREF></COREF> depend on the <COREF ID="29" MIN="allocation"><COREF ID="23" TYPE="IDENT" REF="18">Federal Communications Commission</COREF>'s allocation of a swath of spectrum</COREF> that will let <COREF ID="24" TYPE="IDENT" REF="20">their</COREF> earth stations communicate with satellites in space. 
<p> Scheduled for a vote at the <COREF ID="109" MIN="meeting"><COREF ID="25" TYPE="IDENT" REF="23">agency</COREF>'s meeting</COREF> on <COREF ID="26" TYPE="IDENT" REF="27">Thursday</COREF>, the <COREF ID="28" TYPE="IDENT" REF="29" MIN="allocation">expected allocation</COREF> will let the <COREF ID="30" TYPE="IDENT" REF="31">companies</COREF> transmit video pictures, phone calls, and other data from earth stations to orbiting satellites, and then to customers in <COREF ID="36">Mexico</COREF>, the Caribbean, Central America, and South America. <p> <COREF ID="32" TYPE="IDENT" REF="30" MIN="companies">Both companies</COREF> said <COREF ID="33" TYPE="IDENT" REF="32">they</COREF> expect to use <COREF ID="34" TYPE="IDENT"
<COREF ID="39" MIN="Grupo Televisa SA"><COREF ID="35" TYPE="IDENT" REF="36">Mexico</COREF>'s Grupo Televisa SA</COREF>, Multivision SA and Medcom SA all have plans to deliver direct-to-home video satellite service to <COREF ID="37" TYPE="IDENT" REF="35">Mexico</COREF> within a year. <p> <COREF ID="38" TYPE="IDENT" REF="39" MIN="Televisa">Televisa, <COREF ID="41" TYPE="IDENT" REF="38" MIN="broadcaster"><COREF ID="40" TYPE="IDENT" REF="37">Mexico</COREF>'s largest broadcaster</COREF>,</COREF> has formed an agreement with Rupert Murdoch's News Corp., <COREF ID="51">Brazil</COREF>'s Globo television network, and Denver-based Tele-Communications Inc. to offer direct-to-home service throughout <COREF ID="42" TYPE="IDENT" REF="43">Latin America</COREF>. <p> <COREF ID="45">Turner Broadcasting System Inc.</COREF>, for <COREF ID="44" TYPE="IDENT" REF="45">its</COREF> part, agreed in July to distribute Cable News Network and three other cable channels to
Latin American subscribers together with a <COREF ID="46" TYPE="IDENT" REF="47">group</COREF> called <COREF ID="47">Galaxy
Latin America</COREF>, composed of <COREF ID="49" TYPE="IDENT" REF="47" STATUS="OPT"><COREF ID="48" TYPE="IDENT" REF="10">GM</COREF>'s DirecTV, Venezuela's Cisneros Group of Cos., <COREF ID="50" TYPE="IDENT" REF="51">Brazil</COREF>'s Televisao Abril, and <COREF ID="52" TYPE="IDENT" REF="40">Mexico</COREF>'s MVS Multivision</COREF>. <p> <COREF ID="53" TYPE="IDENT" REF="19">Hughes</COREF>' <COREF ID="54" TYPE="IDENT" REF="55">Galaxy VIII(I)</COREF> plan would use one satellite, which the <COREF ID="56" TYPE="IDENT" REF="53">company</COREF> estimates will cost $230 million to build and launch.
TYPE="IDENT" REF="58">its</COREF> first year and $58 million each year for the following 11 years, according to filings at the <COREF ID="62" TYPE="IDENT" REF="25">FCC</COREF>. <p> <COREF ID="63" TYPE="IDENT" REF="64">GE Americom</COREF> filed <COREF ID="65" TYPE="IDENT" REF="63">its</COREF> cost and revenue assumptions confidentially at the <COREF ID="66" TYPE="IDENT" REF="62">agency</COREF>. <COREF ID="67" TYPE="IDENT" REF="65">Its</COREF> plan calls for two satellites and a spare. <p> The <COREF ID="68" TYPE="IDENT" REF="69">plans</COREF> are significant, said <COREF ID="71" MIN="Scott Blake Harris. <COREF ID="57" TYPE="IDENT" REF="56">Hughes</COREF> expects <COREF ID="58" TYPE="IDENT" REF="54">Galaxy VIII(I)</COREF> will bring in <COREF ID="60">$30 million</COREF> in <COREF ID="59" TYPE="IDENT" REF="60">revenue</COREF> in <COREF ID="61. COREF>,</COREF> as ``yet another indication of the health and strength of the <COREF ID="96" MIN="industry"><COREF ID="98">U.S.</COREF> satellite industry</COREF>.'' <p> The <COREF ID=. 72" TYPE="IDENT" REF="14" MIN="airwaves">airwaves to be allocated</COREF> are currently used by the <COREF ID="74">National Aeronautics and Space Administration</COREF> for <COREF ID="76" MIN="system"><COREF ID="73" TYPE="IDENT" REF="74">its</COREF> tracking and data relay system</COREF>. The <COREF ID="75" TYPE="IDENT" REF="76">system</COREF>, among other things, monitors the <COREF ID="78">Space<COREF ID="57" TYPE="IDENT" REF="56">Hughes</COREF> expects <COREF ID="58" TYPE="IDENT" REF="54">Galaxy VIII(I)</COREF> will bring in <COREF ID="60">$30 million</COREF> in <COREF ID="59" TYPE="IDENT" REF="60">revenue</COREF> in <COREF ID="61" TYPE="IDENT" REF="58">its</COREF> first year and $58 million each year for the following 11 years, according to filings at the <COREF ID="62" TYPE="IDENT" REF="25">FCC</COREF>. <p> <COREF ID="63" TYPE="IDENT" REF="64">GE Americom</COREF> filed <COREF ID="65" TYPE="IDENT" REF="63">its</COREF> cost and revenue assumptions confidentially at the <COREF ID="66" TYPE="IDENT" REF="62">agency</COREF>. <COREF ID="67" TYPE="IDENT" REF="65">Its</COREF> plan calls for two satellites and a spare. <p> The <COREF ID="68" TYPE="IDENT" REF="69">plans</COREF> are significant, said <COREF ID="71" MIN="Scott Blake Harris">Scott Blake Harris, <COREF ID="70" TYPE="IDENT" REF="71" MIN="chief">former FCC international bureau chief</COREF>,</COREF> as ``yet another indication of the health and strength of the <COREF ID="96" MIN="industry"><COREF ID="98">U.S.</COREF> satellite industry</COREF>.'' <p> The <COREF ID="72" TYPE="IDENT" REF="14" MIN="airwaves">airwaves to be allocated</COREF> are currently used by the <COREF ID="74">National Aeronautics and Space Administration</COREF> for <COREF ID="76" MIN="system"><COREF ID="73" TYPE="IDENT" REF="74">its</COREF> tracking and data relay system</COREF>. The <COREF ID="75" TYPE="IDENT" REF="76">system</COREF>, among other things, monitors the <COREF ID="78">Space
83" TYPE="IDENT" REF="48">GM</COREF>. Other companies that support the <COREF ID="84" TYPE="IDENT" REF="28">allocation</COREF> and may use <COREF ID="85" TYPE="IDENT" REF="84">it</COREF> include Lockheed Martin Corp.'s Loral Space and Communications, International Private Satellite Partners/Orion Atlantic Capital Corp., and Comsat Corp. <p> No opposing comments on the <COREF ID="86" TYPE="IDENT" REF="85">allocation</COREF> were filed at the <COREF ID="87" TYPE="IDENT" REF="66">agency</COREF>. <p> The spectrum shift comes at <COREF ID="88" TYPE="IDENT" REF="57">Hughes</COREF>' initiative. The <COREF ID="89" TYPE="IDENT" REF="88">company</COREF> asked the <COREF ID="90" TYPE="IDENT" REF="87">FCC</COREF> in March of 1995 to fix an imbalance in the uplink and downlink airwaves available to fixed satellite services so that the spectrum could be more effectively used. <p>`T he downlink bands are not paired with any uplink bands,'' the <COREF ID="91" TYPE="IDENT" REF="89">company</COREF> wrote. Indeed, for 1000 megahertz allocated for <COREF ID="93" MIN="downlinks">satellite downlinks</COREF>, or <COREF ID="92" TYPE="IDENT" REF="93" MIN="transmissions">transmissions from satellites to earth stations</COREF>, the <COREF ID="94" TYPE="IDENT" REF="90">agency</COREF> had only set aside 500 megahertz for uplinks. / Shuttle<, Coref>, helps to retrieve satellites, and relays communications between ground stations and low-orbiting spacecraft including the <COREF ID="77" TYPE="IDENT" REF="78">Shuttle</COREF>. <COREF ID="81" MIN="functions">Those functions</COREF> are likely to be slowly shifted to another slice of spectrum, while the <COREF ID="79" TYPE="IDENT" REF="72" MIN="airwaves">airwaves <COREF ID="80" TYPE="IDENT" REF="81">they</COREF>'ve historically used</COREF> are turned over. >United States</COREF> satellite industry</COREF>, both at home and abroad, that the <COREF ID="99" TYPE="IDENT" REF="94">commission</COREF> allocate'' more airwaves for fixed satellite uplinks, <COREF ID="100" TYPE="IDENT" REF="91">Hughes</COREF> said. <p> A <COREF ID="104" MIN="plan">similar plan</COREF> was set by the International Telecommunications Union at the <COREF ID="102">World Administrative Radio Conference</COREF> in 1992, and adopted at the <COREF ID="101" TYPE="IDENT" REF="102" MIN="meeting">same meeting</COREF> in 1995. <p> The <COREF ID="103" TYPE="IDENT" REF="104">plan</COREF> hadn't yet been implemented in the <COREF ID="105" TYPE="IDENT" REF="97">U.S.</COREF> because interference with <COREF ID="107" TYPE="IDENT" REF="80" MIN="functions"><COREF ID="106" TYPE="IDENT" REF="73">NASA</COREF>'s radar functions</COREF> hadn't been worked out. <p> Also at <COREF ID="108" TYPE="IDENT" REF="109" MIN="meeting"><COREF ID="110" TYPE="IDENT" REF="26">Thursday</COREF>'s meeting</COREF>, the <COREF ID="111" TYPE="IDENT"Shuttle</COREF>, helps to retrieve satellites, and relays communications between ground stations and low-orbiting spacecraft including the <COREF ID="77" TYPE="IDENT" REF="78">Shuttle</COREF>. <COREF ID="81" MIN="functions">Those functions</COREF> are likely to be slowly shifted to another slice of spectrum, while the <COREF ID="79" TYPE="IDENT" REF="72" MIN="airwaves">airwaves <COREF ID="80" TYPE="IDENT" REF="81">they</COREF>'ve historically used</COREF> are turned over, in part, to satellite services such as the ones planned by <COREF ID="82" TYPE="IDENT" REF="22">GE</COREF> and <COREF ID="83" TYPE="IDENT" REF="48">GM</COREF>. 
Other companies that support the <COREF ID="84" TYPE="IDENT" REF="28">allocation</COREF> and may use <COREF ID="85" TYPE="IDENT" REF="84">it</COREF> include Lockheed Martin Corp.'s Loral Space and Communications, International Private Satellite Partners/Orion Atlantic Capital Corp., and Comsat Corp. <p> No opposing comments on the <COREF ID="86" TYPE="IDENT" REF="85">allocation</COREF> were filed at the <COREF ID="87" TYPE="IDENT" REF="66">agency</COREF>. <p> The spectrum shift comes at <COREF ID="88" TYPE="IDENT" REF="57">Hughes</COREF>' initiative. The <COREF ID="89" TYPE="IDENT" REF="88">company</COREF> asked the <COREF ID="90" TYPE="IDENT" REF="87">FCC</COREF> in March of 1995 to fix an imbalance in the uplink and downlink airwaves available to fixed satellite services so that the spectrum could be more effectively used. <p> ``The downlink bands are not paired with any uplink bands,'' the <COREF ID="91" TYPE="IDENT" REF="89">company</COREF> wrote. Indeed, for 1000 megahertz allocated for <COREF ID="93" MIN="downlinks">satellite downlinks</COREF>, or <COREF ID="92" TYPE="IDENT" REF="93" MIN="transmissions">transmissions from satellites to earth stations</COREF>, the <COREF ID="94" TYPE="IDENT" REF="90">agency</COREF> had only set aside 500 megahertz for uplinks. That's meant that half of the downlink capacity has been unusable, because no corresponding uplink airwaves existed. <p> ``It is . . . critical to the competitiveness of the <COREF ID="95" TYPE="IDENT" REF="96" MIN="industry"><COREF ID="97" TYPE="IDENT" REF="98">United States</COREF> satellite industry</COREF>, both at home and abroad, that the <COREF ID="99" TYPE="IDENT" REF="94">commission</COREF> allocate'' more airwaves for fixed satellite uplinks, <COREF ID="100" TYPE="IDENT" REF="91">Hughes</COREF> said. <p> A <COREF ID="104" MIN="plan">similar plan</COREF> was set by the International Telecommunications Union at the <COREF ID="102">World Administrative Radio Conference</COREF> in 1992, and adopted at the <COREF ID="101" TYPE="IDENT" REF="102" MIN="meeting">same meeting</COREF> in 1995. <p> The <COREF ID="103" TYPE="IDENT" REF="104">plan</COREF> hadn't yet been implemented in the <COREF ID="105" TYPE="IDENT" REF="97">U.S.</COREF> because interference with <COREF ID="107" TYPE="IDENT" REF="80" MIN="functions"><COREF ID="106" TYPE="IDENT" REF="73">NASA</COREF>'s radar functions</COREF> hadn't been worked out. <p> Also at <COREF ID="108" TYPE="IDENT" REF="109" MIN="meeting"><COREF ID="110" TYPE="IDENT" REF="26">Thursday</COREF>'s meeting</COREF>, the <COREF ID="111" TYPE="IDENT"
</TEXT> <TRAILER> NYT-<COREF ID="112" TYPE="IDENT" REF="5">09-10-96</COREF> 1604EDT
</TRAILER> </DOC>
|
|
233,474,003 | Lexical strata and phonotactic perplexity minimization | We test the hypothesis that in some languages the lexicon is stratified(Itô and Mester, 1995a)and that multiple phonotactic subgrammars based on gradiently measured phonotactics not only reduce average phoneme uncertainty, but align well with proposed lexical strata that are based on categorical constraint ranking differences.Whereas some recent studies(Smith, 2018;Hsu and Jesney, 2017;Hearn, 2016;Hayes, 2016)address the question of lexical stratification directly through interactions of categorical or gradient phonotactic and/or faithfulness constraints, here we adopt a neural network approach, originating with Elman (1990) and most recently implemented by Mayer and Nelson (2020) (henceforth M&N) which captures phonotactic knowledge through relatively simple recurrent neural language models (RNNLMs) that predict the next phoneme given the previous phonemes in the word.Hayes and Wilson (2008)'s model of phonotactics introduced into mainstream phonological theory the conception of phonotactic knowledge as probabilistic gradience. 1 Here, we ask: if a grammar can account for phonotactic patterns probabilistically, and having multiple subgrammars achieves a greater overall probability of the data of a language, how might such probabilistically optimal subgrammars place words into phonotactically differing lexical strata?We test this idea on the well-known hypothesis of lexical stratification in Japanese(Itô and Mester, 1995a), in which the proposed strata -Yamato (native), Sino-Japanese, mimetic and foreign -exhibit different phonotactic properties. We apply a modification of M&N's code (Nelson and Mayer, 2020), to a corpus of 75,000+ words from NHK (1999), converted to phone-1 e.g., in English [pr] is a more probable onset cluster than [Tw], but both are possible. mic representations. The model learns a RNNLM whose objective function is to minimize the overall phoneme perplexity 2 , averaged across positions in each word and across words in the database. We then bifurcate the model into two separate RNNLMs, with no prior bias given to each, and the model calculates the perplexity of each word as the minimum result between the two models, in effect assigning each word to one of two grammars/models, with no supervision about a word's lexical stratum.The experiment We propose that a learner, faced with sets of words that exhibit divergent phonotactic properties, would allow their phonotactic grammar to diverge into sub-modules that align with each divergent set. phonotactic grammar subgrammar 1 subgrammar 2 Figure 1: Bifurcation of grammar into sub-grammarsWe ask, to what extent would these submodules align with the lexical strata proposed by Mester (1995, 1999) for Japanese, which subdivides the lexicon as shown in figure 2, where each stratum has a different ranking of some constraints in OT?Here we adopt a probabilistic model of phonology (Pierrehumbert, 2015) which can capture finegrained phonotactic properties that go beyond what categorical constraints can capture. For 2 M&N calculate the perplexity as "the exponentiated entropy, or inverse of the mean log likelihood, of all phonemes in the test word." | [
44115640,
204780842
] | Lexical strata and phonotactic perplexity minimization
February 14-19, 2021
Eric Rosen erosen27@jh.edu
Johns Hopkins University
Lexical strata and phonotactic perplexity minimization
Proceedings of the Society for Computation in Linguistics (SCiL) 2021
The Society for Computation in Linguistics (SCiL) 2021, February 14-19, 2021
We test the hypothesis that in some languages the lexicon is stratified (Itô and Mester, 1995a) and that multiple phonotactic subgrammars based on gradiently measured phonotactics not only reduce average phoneme uncertainty, but align well with proposed lexical strata that are based on categorical constraint ranking differences.
Whereas some recent studies (Smith, 2018; Hsu and Jesney, 2017; Hearn, 2016; Hayes, 2016) address the question of lexical stratification directly through interactions of categorical or gradient phonotactic and/or faithfulness constraints, here we adopt a neural network approach, originating with Elman (1990) and most recently implemented by Mayer and Nelson (2020) (henceforth M&N), which captures phonotactic knowledge through relatively simple recurrent neural language models (RNNLMs) that predict the next phoneme given the previous phonemes in the word. Hayes and Wilson (2008)'s model of phonotactics introduced into mainstream phonological theory the conception of phonotactic knowledge as probabilistic gradience. 1 Here, we ask: if a grammar can account for phonotactic patterns probabilistically, and having multiple subgrammars achieves a greater overall probability of the data of a language, how might such probabilistically optimal subgrammars place words into phonotactically differing lexical strata?
We test this idea on the well-known hypothesis of lexical stratification in Japanese (Itô and Mester, 1995a), in which the proposed strata - Yamato (native), Sino-Japanese, mimetic and foreign - exhibit different phonotactic properties. We apply a modification of M&N's code (Nelson and Mayer, 2020) to a corpus of 75,000+ words from NHK (1999), converted to phonemic representations. The model learns an RNNLM whose objective function is to minimize the overall phoneme perplexity, 2 averaged across positions in each word and across words in the database. We then bifurcate the model into two separate RNNLMs, with no prior bias given to each, and the model calculates the perplexity of each word as the minimum result between the two models, in effect assigning each word to one of two grammars/models, with no supervision about a word's lexical stratum.
The experiment

We propose that a learner, faced with sets of words that exhibit divergent phonotactic properties, would allow their phonotactic grammar to diverge into sub-modules that align with each divergent set. We ask, to what extent would these sub-modules align with the lexical strata proposed by Itô and Mester (1995, 1999) for Japanese, which subdivides the lexicon as shown in figure 2, where each stratum has a different ranking of some constraints in OT?
Here we adopt a probabilistic model of phonology (Pierrehumbert, 2015) which can capture fine-grained phonotactic properties that go beyond what categorical constraints can capture. For example, the Sino-Japanese word zyokyo 除去 'removal' violates none of the constraints in Itô and Mester's tableau but has a phonotactic pattern (offglide after onset consonant) seldom seen in Yamato words. Offglides occur robustly in Sino-Japanese words but rarely in Yamato (native) words such as kyuuri 胡瓜 'cucumber' (Martin, 1987, 469). In our experiment, as illustrated in figure 3, we simulate a putative divergence of a phonotactic grammar into sub-modules by feeding a corpus of Japanese words into two diverging RNNs.

Outline of the experiment

We use a corpus of 24,000+ Japanese words from NHK (1999), converted to phonemic representations:
. . . 除去 −→ ジョキョ −→ zyokyo . . .
We feed them into a maximally simple recurrent neural network, modeled after Mayer and Nelson (2020) and Nelson and Mayer (2020), whose one-layer RNN of finite precision has been shown to be unable to learn unattested patterns such as aⁿbⁿ (Weiss et al., 2018; Merrill et al., 2020). Each cell h_i of the RNN is fed (a) a vector encoding of the input segment x_i and (b) the vector output of the previous hidden state h_{i-1}. It applies a separate linear transformation to each, sums them, applies a non-linear function such as tanh, and outputs a vector which is softmaxed to give a probability distribution over candidate phonemes y_i. Its objective is to minimize the overall negative log probability of each phoneme, averaged across positions in words and words in the database. The model is initialized as two subnetworks, each with a different random initialization. Each word is fed into both submodels, each of which tries to predict each segment based on the string that precedes it.

Figure 4, copied from M&N, illustrates the architecture of one timestep of a simple RNN. x_t is a phoneme input at timestep t, h_{t-1} is the output of the network's hidden layer at time t-1, recycled back on the next timestep, W_h and W_x are linear transformations with an added nonlinearity, and W_y is a linear transformation to produce output y_t for each timestep. As phoneme vectors are input to the model over time, an unrolled model that is fed the example word zyokyo 除去 'removal' looks as shown in figure 5.

Each word in the dataset is fed to each of two randomly initialized submodels. The submodel that a given word performs best on is updated with backpropagation to improve that word's predicted probability. But the other submodel is not updated. If the words diverge enough in their phonotactics, the submodels will also diverge, with some words being more predictable with one submodel and other words with the other. The learning is unsupervised, in that the words are not tagged with any strata labels such as 'Yamato' or 'Sino-Japanese'. The model quickly plateaus after running through all the data for only 3 epochs.

The words end up in two groups, with membership of each word determined by the model that gave it the highest probability at the end of learning. In a random sample of 1,000 words from each of the resulting groups, group 1 has a strong presence (73.2%) of Yamato words but few Sino-Japanese words, which dominate group 2 (79.3%), which has few Yamato words, as shown in figure 6. Many of the misclassified words could phonotactically occur in either stratum: the misclassified SJ words yaku-ri 薬理 'pharmacology' and sui-ro 水路 'watercourse' are homophonous with fictitious Yamato compounds ya-kuri 家栗 'house-chestnut' and su-iro 巣色 'nest-colour'.
[Figure 5: unrolled model; input: <s> z y o k y; targets y_i: z y o k y o]
The outputs of each RNN at each timestep reveal differences in predictions that mirror gradient phonotactic differences between Yamato and Sino-Japanese words. Among the ∼4000 nouns and ∼2000 verbs in Martin (1987)'s diachronic study of Yamato Japanese, only 15 lexemes have a word-initial consonant-offglide sequence such as [#ky-]. Such [Cy] sequences are extremely common among Sino-Japanese words (e.g., the city name 京都 kyooto 'Kyoto'). Conversely, the diphthong [ae], which occurs frequently in the Yamato lexicon (e.g., mae 前 'before'), occurs rarely if at all tautomorphemically in Sino-Japanese words. 3 For comparison, we ran a bigram model that predicts only from the previous segment. It misclassifies Sino-Japanese words at a 68% higher rate than the RNN model, suggesting that n-gram segmental patterns with n > 2 contribute to the gradient phonotactics of the language. 4 Table 1 shows the ratio of probabilities assigned by RNN 1 relative to RNN 2 for offglide [y] to occur after selected word-initial consonants (column 2) and for [e] to follow a word-initial [Ca] sequence (column 3). RNN 1 favours the occurrence of offglides much more than RNN 2, and RNN 2 favours the diphthong [ae] much more than RNN 1.
These results suggest that the two-RNN model has encoded gradient phonotactic differences between the two strata.

3 See also Moreton and Amano (1999), whose psycholinguistic experiments use initial Cy sequences to trigger perception of a Sino-Japanese stratum, which in turn affects perception of vowel length later in the word.

4 E.g., bigrams will not detect the fact that few Yamato words have /e/ in the first syllable (Martin, 1987, 48).

Schematic of the RNN model

Sample word zyokyo 除去 'removal' is shown in figures 7 and 8, processed by each of the two submodels. Its overall probability, calculated as the mean log probability of each segment, is 7.78 times higher for submodel 2 than for submodel 1 (2^-2.43 / 2^-5.39).
[Figure 7: Model 1 unrolled over zyokyo; p(y_i|x_0 . . . x_i) = .002 .008 .573 .095 .0005 .420; mean per-phoneme log2 probability = -5.39]
[Figure 8: Model 2 unrolled over zyokyo; mean per-phoneme log2 probability = -2.43]

Corresponding coloured pairs of segments across the models show a greater likelihood for group 2 than group 1 by factors of 31, 24 and 70.
One source of this difference is that word-initial /z/ is uncommon in Yamato words, which clustered with submodel 1, but not in Sino-Japanese words. And the offglides that follow both the z and the k are much more common in Sino-Japanese words than Yamato words. In sum, unsupervised clustering with diverging phonotactic submodels aligns strongly with strata based on categorical constraint rankings.

Hayes (2016) and Jennifer Smith (p.c.) both cite Itô and Mester (1995b, 821) in suggesting that membership in lexical strata may be gradient. Hayes (2016) explores, using a MaxEnt model, gradient membership of English words in Native vs. Latinate vocabularies as scores on a scale based on weighted constraints that favour or disfavour membership in one of the strata. Whereas Hayes' model uses heuristics to pre-classify a word's stratum membership and pre-defines phonotactic constraints, our model allows strata to emerge on their own without preassignment, and constraints to emerge latently through the probabilities the model assigns to a segment in a particular environment.
Gradient membership in strata
To examine how our model might assign words gradiently into strata, 6 we took random samples of 100 words each assigned to groups 1 (mostly Yamato) and 2 (mostly Sino-Japanese), with differences of perplexity_2 - perplexity_1 shown in the first plot, and the most marginal words (|diff| < 0.5) in the second plot. (In the plots, distinct symbols mark Yamato, Sino-Japanese, foreign, and hybrid or ambiguous words.)

The four most marginal, misclassified Sino-Japanese words in group 1 (red dots left of 0) are hi-dai 肥大 'corpulence' (lit. 'fatten-big'), ei-yo 栄誉 'honour' (lit. 'honour-honour'), ku-iki 区域 'district' (lit. 'ward-level') and ki-matu 期末 'end-of-term' (lit. 'term-end'), with margins of -0.004, -0.008, -0.047 and -0.043 respectively, which are homophonous with fictitious Yamato compounds hida-i 襞胃 'pleat-stomach', ei-yo 鱏夜 'ray(fish)-night', kui-ki 杭木 'stake-tree' and ki-matu 木松 'tree-pine'. 7

On one hand, the abundance of morphemes with different Sino-Japanese and Yamato readings of the same kanji (e.g., moku and ki for 木 'tree') discretely determines the stratum membership of a given reading by the pronunciation contrast: Sino-Japanese moku contrasts with Yamato ki. On the other hand, many readings of either type, Sino-Japanese or Yamato, not only satisfy all of Itô and Mester's strata-distinguishing constraints, but show only marginal differences in the phoneme perplexity assigned by each model, making their phoneme sequences ambiguous as to their stratum. In Japanese, one easily finds strata-straddling homophones like Sino-Japanese atu 圧 'pressure' (as in si-atu 'finger-pressure, shiatsu') and Yamato atu-i 熱い 'hot'. The lack of a characteristically Yamato or Sino-Japanese shape makes them good candidates for gradient strata membership, in a way analogous to English words that Hayes judges to be 'intermediate in Latinity.'
If we look at misclassified Yamato words in group 2 (blue dots right of 0) we find fewer marginal words. We do find tooku 遠く 'far' (adv.) (which is also homophonous with the foreign borrowing 'talk'), and atude 厚手 'thick' (lit. 'thick-hand'), with margins 0.023 and 0.228 respectively. tooku has many candidates for homophonous fictitious compounds, including what appears to be a recently coined compound 投句 'posting a haiku poem on the internet' (lit. 'throw-stanza'). In the marginal group are also two hybrid compounds, modosi-zee 戻し税 'tax refund' (lit. 'return(trans.)-tax', Yamato+Sino-Japanese) and zyo-no-kuti 序の口 'beginning' (lit. 'beginning-entrance', Sino-Japanese+Yamato), with margins of 0.084 and 0.130.
Summary

Simple neural networks which can learn gradient phonotactic properties of words, such as the probability of a given phoneme occurring after a given string, are shown to be useful tools for capturing the ways in which gradient phonotactics separate the words of a language into strata in both discrete and continuous ways. Hayes (2016, 3) suggests that speakers of a stratified language internalize stratal divisions for stylistic reasons. Further research might examine whether this applies to Japanese, where there is a choice among a Yamato, Sino-Japanese and foreign word for expressing the same meaning (e.g., kuruma 車, zidoosya 自動車, kaa カア for 'car, automobile').
Figure 1: Bifurcation of grammar into sub-grammars
Figure 2: Itô and Mester's constraint violations in lexical strata
Figure 3: Sample word zyokyo 'removal' fed into two sub-grammars
Figure 4: Mayer and Nelson's diagram of an RNN cell
Figure 5: Unrolled model over time
Figure 6: Membership in strata of 1000 words assigned by each sub-model
Figure 7: Model 1; mean per-phoneme log2 probability = -5.39
Figure 8: Model 2; mean per-phoneme log2 probability = -2.43
1 e.g., in English [pr] is a more probable onset cluster than [θw], but both are possible.

2 M&N calculate the perplexity as "the exponentiated entropy, or inverse of the mean log likelihood, of all phonemes in the test word."
5 Not all languages that experience borrowing will necessarily exhibit strata: arguably, only if the phonotactics of adapted forms of borrowings differ enough from those of native words.

6 There will be some oversimplification in that so far, we have only used two RNN models in spite of evidence of more than two strata in Japanese.

7 The last one is not quite fictitious, having been coined as the actual name of a hotel in Hiroshima.
Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.
Bruce Hayes. 2016. Comparative phonotactics. In Proceedings of the 50th Meeting of the Chicago Linguistic Society, pages 265-285.
Bruce Hayes and Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39(3):379-440.
Ryan Hearn. 2016. Rethinking the Core-Periphery Model: Evidence from Japanese and English. In Proceedings of the 24th Manchester Phonology Meeting.
Brian Hsu and Karen Jesney. 2017. Loanword adaptation in Québec French: Evidence for weighted scalar constraints. In Proceedings of the West Coast Conference on Formal Linguistics, volume 34, pages 249-258.
Junko Itô and Armin Mester. 1995a. The core-periphery structure of the lexicon and constraints on reranking. In University of Massachusetts Occasional Papers in Linguistics, volume 18.
Junko Itô and Armin Mester. 1995b. Japanese phonology. In The Handbook of Phonological Theory. Blackwell.
Samuel E. Martin. 1987. The Japanese Language Through Time. Yale University Press.
Connor Mayer and Max Nelson. 2020. Phonotactic learning with neural language models. In Proceedings of the Society for Computation in Linguistics, volume 3.
William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, and Eran Yahav. 2020. A formal hierarchy of RNN architectures. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 443-459.
Elliott Moreton and Shigeaki Amano. 1999. Phonotactics in the perception of Japanese vowel length: evidence for long-distance dependencies. Proceedings of the IEEE.
Max Nelson and Connor Mayer. 2020. Phonotactic language model.
NHK. 1999. NHK Hatsuon Akusento Jiten (NHK Pronunciation and Accent Dictionary). NHK (Japanese Broadcasting Corporation).
Janet Pierrehumbert. 2015. 70+ years of probabilistic phonology. In Oxford Handbook on the History of Phonology. Oxford University Press.
Jennifer Smith. 2018. Stratified faithfulness in Harmonic Grammar and emergent core-periphery structure. Hana-bana: A festschrift for Junko Itô and Armin Mester, 13.
Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision RNNs for language recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 740-745.
32,922,893 | HANDLING SCOPE AMBIGUITIES IN ENGLISH | This paper describes a program for handling "scope ambiguities" in individual English sentences. The program operates on initial logical translations, generated by a parser/translator, in which "unscoped elements" such as quantifiers, coordinators and negation are left in place to be extracted and positioned by the scoping program. The program produces the set of valid scoped readings, omitting logically redundant readings, and places the readings in an approximate order of preference using a set of domain-independent heuristics. The heuristics are based on information about the lexical type of each operator and on "structural relations" between pairs of operators. The need for such domain-independent heuristics is emphasized; in some cases they can be decisive and in general they will serve as a guide to the use of further heuristics based on domain-specific knowledge and on the context of discourse. The emphasis of this paper is on discussing several of the more problematic aspects of the scoping protocol which were encountered during the design of the scoping program. | [
14692027
] | HANDLING SCOPE AMBIGUITIES IN ENGLISH
Sven Hurum
Department of Computing Science 615 General Services Building
University of Alberta, Edmonton
T6G 2H1, Canada
HANDLING SCOPE AMBIGUITIES IN ENGLISH
This paper describes a program for handling "scope ambiguities" in individual English sentences. The program operates on initial logical translations, generated by a parser/translator, in which "unscoped elements" such as quantifiers, coordinators and negation are left in place to be extracted and positioned by the scoping program. The program produces the set of valid scoped readings, omitting logically redundant readings, and places the readings in an approximate order of preference using a set of domain-independent heuristics. The heuristics are based on information about the lexical type of each operator and on "structural relations" between pairs of operators. The need for such domain-independent heuristics is emphasized; in some cases they can be decisive and in general they will serve as a guide to the use of further heuristics based on domain-specific knowledge and on the context of discourse. The emphasis of this paper is on discussing several of the more problematic aspects of the scoping protocol which were encountered during the design of the scoping program.
INTRODUCTION
Natural languages contain a variety of "logical operators" which interact with each other to give rise to different types of ambiguity. The logical operators recognized by the scoping program include quantifiers, coordinators and negation, which are initially "unscoped" and must therefore be moved into position by the program, and adverbs, predicates and connectives (such as if-then). At the moment, other operators such as tense, aspect and modals are left in place and therefore assume innermost scope. There is some evidence that the handling of the scoping of quantifiers relative to such operators may require special treatment (e.g. Fodor 1970; Enc 1981; Saarinen 1983).
Three simple examples will illustrate some different types of scope ambiguity and their representation in an informal first order predicate logic, using restrictions on quantifiers and an infix notation for sentential formulas. The meanings of the different interpretations should be clear. For example, (4) may mean that John didn't meet either Jane or Mary (5) or that he didn't meet at least one of them (6). Further examples are given in Hurum & Schubert (1986) and Hurum (1987). Some alternative proposals for representing scope ambiguities are also discussed in the latter. Until quite recently, designers of natural language understanding systems have given little attention to the problem of dealing with scope ambiguities. Two of the earliest attempts to incorporate quantifier scoping into natural language understanding systems in an integral way are described in Woods (1978) and Dahl (1979). Some more recent scoping algorithms are presented in McCord (1981), Warren & Pereira (1982), Hobbs (1983), Saint-Dizier (1985) and Hobbs & Shieber (1987).
While each of these algorithms introduces some new features, certain problems, such as the scoping of coordinators and the use of heuristics to select preferred readings, have generally been given little or no treatment. Some of the main features of the algorithm being discussed here are: (a) it handles ambiguities created by quantifiers, coordinators, negation and adverbs, 1 (b) it works bottom-up and left-to-right and generates the set of valid scoped readings in one pass, (c) it removes logically redundant readings as they are encountered during the process of scoping and (d) it uses domain-independent heuristics, during the scoping, to arrange the readings in an approximate order of preference.
LOGICAL REPRESENTATION
The scoping program is designed to be used as an extension to a parser/translator which generates initial translations in a first order modal logic augmented with certain operators (Schubert & Pelletier 1982). The operators being used include a generic kind forming operator, g, and the operators ~ and x which form functions and terms, respectively, from infix and prefix expressions. For example, the operators x 1 and x 2 map infix and prefix expressions, respectively, into terms.

1 Four types of coordinated expression are currently handled: noun phrases, noun complements, verbs and verb phrases. At the moment adverbs are treated as scoped (unmoved) elements.
The syntax of the logical translations has been chosen to simplify the mapping from the syntax (using a modified GPSG parser). A mixed infix/prefix notation has been used in order to keep the logical form as close as possible to the surface form. Two examples of the initial logical translations being used are shown below. Unscoped operators, which are to be extracted and positioned by the scoping program, are placed in angled brackets; the square, curly and round brackets signify infix (sentential), prefix (predicative) and functional expressions, respectively. The suffixes which are attached to each word to mark their surface position are not shown here.
(10) Many people visit Europe every month
(11)

A sample of the output from the program is shown in the Appendix. The two sentences shown are

1. All men want to marry Peggy or Sue
2. Mary (read or told some story to each child)
The output for each sentence consists of an echo of the input formula followed by a list of scoped readings ordered according to their average scoping weight (see below). In the LISP notation, the prefixes i, p, f, q and c are used to mark infix, prefix, functional, quantified and coordinated expressions. The first sentence is taken from Schubert & Pelletier (1982), which gives a description of the three interpretations. The second sentence has been parsed as having a verb phrase ambiguity (indicated by the brackets) and the input formula therefore contains two duplicated operators. The two comparisons made are each~some and each~or. No comparison is made between the commutative operators some and or.
COORDINATED EXPRESSIONS
The scoping of coordinated expressions poses several problems. One problem is how to avoid the "vacuous" quantification or coordination which may result whenever a coordinated expression contains an unscoped operator. For example, if the indefinite some blonde in (14) is applied to the clause before the coordinator, the subsequent application of the latter will result in vacuous quantification (15).

A second problem is how to handle the scoping of multiple copies of the same operator which may occur when the operator is embedded inside a coordinated expression. This problem is unavoidable when it results from the parser; for example, (16) may be parsed and initially translated into (17).

(16) John (hopes and intends to buy a boat)
(17) [John <and (PRES {hope (τ2 (INF {buy <a1 boat>}))}) (PRES {intend (τ2 (INF {buy <a1 boat>}))})>]

The brackets signify that the sentence has been parsed as having a VP coordination. Three constraints on a duplicated operator such as a1 are that (a) it must scope consistently with respect to all other operators, (b) it must only be compared once to each other operator (for the purposes of computing the preferred scope orderings) and (c) if it scopes outside a coordinator which initially embeds it, only one copy of the operator can be carried up. This poses a problem for bottom-up approaches to scoping, since some global knowledge is needed to ensure the consistency of the scoping of duplicated operators inside the different expressions in which they occur. It is therefore necessary to use some overhead to keep track of the scope relations of operators which are present in multiple copies, and to store this information separately for each reading.
Duplication of operators may also occur during the scoping process. For example, the application of one of the coordinators in (18) will result in the duplication of the other.

(18) John and Bill visited Spain or Morocco

At present, the scoping program avoids this problem, as well as the problem of vacuous quantification, by using a "branch-trimming" function which removes incorrectly embedded operators from the different branches of a coordinator at the time of applying the coordinator. This function is simple to use but does involve some extra overhead. The problem of duplication resulting from the parser is handled by labelling readings and by storing, on the property list of each duplicated operator, a list of the operators having been scoped inside and outside the operator.
A third problem is how to treat unscoped operators inside "coordinated predicates". In example (16) it seems evident that the indefinite a boat cannot have both opaque and transparent interpretations in the same reading. That is, assuming that the opaque/transparent distinction is to be represented in terms of scope, then both copies of the indefinite must scope consistently relative to the two coordinated predicates hope and intend. Since the two predicates are distinct, and therefore should be allowed to scope independently with a boat, the current version of the program contains a special constraint which forces coordinated predicates to scope consistently relative to all duplicated operators embedded inside them. This rule could be treated as a heuristic rather than as a constraint, but the rule does seem to be absolute.
In contrast, there is a general, but not absolute, preference for "symmetric" interpretations whenever coordinated expressions contain similar but not identical pairs of operators. For example, in (19) one could imagine a context in which it is made clear that Sue, but not Mary, has a particular hat in mind, and in (20) it is possible, though very improbable, that the two indefinites have different functional dependencies.
(19) John (knows that Sue wants) and (thinks that Mary hopes) to buy a new hat
(20) Mary read a story to each child or told a story to each child

At present, the program does not adequately handle this preference for symmetric readings, which requires some non-local heuristic knowledge.
REDUNDANT READINGS
A test is made for logically redundant readings whenever an unscoped operator is about to be positioned (applied to a clausal expression). A reading is considered to be redundant if two commutative operators are applied consecutively and the suffix of the outer operator is greater than that of the inner one. (Suffixes are attached to words by the parser to mark their position in the original sentence). If one of the operators is a coordinator the criterion used is that the quantifier should scope inside the coordinator. Readings will also be removed if they contain an ordering of a pair of operators which has a scoping weight less than a preset parameter.
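As a minimal sketch of this redundancy test, consider the Python below; the encoding of operators as (name, surface-position) pairs and the table of commutative pairs are our own assumptions, and the original program was written in LISP, so this is purely illustrative:

COMMUTATIVE_PAIRS = {("some", "some"), ("some", "or"),
                     ("or", "some"), ("or", "or")}
COORDINATORS = {"and", "or"}

def is_redundant(outer, inner):
    """True if applying `outer` immediately outside `inner` yields a reading
    logically equivalent to one with the reverse ordering, which is kept
    instead. Each operator is a (name, position) pair, where the position
    is the suffix marking the word's place in the original sentence."""
    (outer_name, outer_pos), (inner_name, inner_pos) = outer, inner
    if (outer_name, inner_name) not in COMMUTATIVE_PAIRS:
        return False
    if (outer_name in COORDINATORS) != (inner_name in COORDINATORS):
        # For a quantifier/coordinator pair, keep only the reading in which
        # the quantifier scopes inside the coordinator.
        return inner_name in COORDINATORS
    # Otherwise keep only the ordering that follows surface order.
    return outer_pos > inner_pos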
SCOPING WEIGHTS
In order to quantify scoping preferences, we associate a "scoping weight", a value between 0 and 1, with each pair of interacting operators. The weight indicates the preference for the reading in which the second operator (in surface order) scopes outside the first one. For example, the value 0.9 indicates a strong preference for the reading in which the second operator takes wide scope, a preference which might, on occasion, be overridden by pragmatics. The weight associated with the reverse ordering will automatically be 0.1. The value 0.5 indicates an equal preference for both scope orderings in a pragmatically neutral context. The following examples illustrate how the scoping weights are used.
(21) Some person on each team was injured .9
(22) Some person playing on each team was injured .5
(23) Some person who plays on each team was injured .02
As the scoping weights indicate, the ability of the embedded quantifier each team to widen scope over some person decreases as the embedding phrase changes from a prepositional phrase (21) to a verb phrase (22) to a full clause (23). This "embedding hierarchy" was pointed out by van Lehn (1978) and also holds for phrases serving as adverbials or as terms.
The scoping weights used by the program have been derived from the examination of a large number of sentences such as these. An attempt was made to keep the sentences as pragmatically neutral as possible and to try to obtain a domain-independent weight for pairs of operators in a given "pattern", where a pattern is a combination of two operators of given types and in a given structural relation to one another. Although the data reflect the intuitive judgements of the author, it is likely that there would be good general agreement in cases in which there is a strong preference for one ordering. In other cases, the need to include pragmatic knowledge would be more important. Some consideration was also given to the empirical data on scoping preferences described previously (e.g. Ioup 1975, van Lehn 1978, Gil 1982).
Given that we can determine scoping weights for pairs of operators, it is still necessary to combine these to arrive at an overall rating of a reading. This involves two separate problems: how to select pairs of operators for comparison and then how to combine the weights obtained. There appears to be no obvious solution to either of these problems. There are at least three different choices which need to be made when picking a strategy for selecting pairs of operators for comparison, none of which is clearcut. For example, if a sentence contains three quantifiers at the same level, such as a subject and two objects, should all three pairs of quantifiers be compared or should the results of each comparison made be used to reduce the number of further comparisons needed?
There is also no obvious way to combine the scoping weights obtained. A probabilistic treatment is not feasible, in part because different readings of a sentence may involve different numbers of comparisons. The simplest method is to order the readings according to their average scoping weight, and this appears to give quite good results. The major drawback to this method is that it tends to smooth out the effect of very low individual weights. However, there are ways to minimize this problem. At present, a parameter is used to specify the minimal acceptable scoping weight so that readings with very low pairwise orderings can be removed. Alternatively, readings could be tagged with their lowest weights and some readings later be set aside, or some more complex function could be used for combining the scoping weights. These problems are discussed in Hurum (1987).
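As a minimal illustration of this strategy, the sketch below ranks readings by average weight and applies the minimal-weight cutoff; the reading representation and the cutoff value are assumptions, not the program's actual data structures:

from statistics import mean

MIN_WEIGHT = 0.05  # hypothetical value for the minimal-acceptable-weight parameter

def rank_readings(readings):
    """Order readings by average pairwise scoping weight (highest first),
    discarding readings that contain a very dispreferred pairwise ordering.

    `readings` maps each reading to the list of scoping weights recorded
    for the comparisons made while constructing it."""
    kept = {r: ws for r, ws in readings.items() if min(ws) >= MIN_WEIGHT}
    return sorted(kept, key=lambda r: mean(kept[r]), reverse=True)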
HEURISTICS BASED ON LEXICAL TYPES
The domain-independent heuristics are based on two types of information: the lexical type of each operator and structural relations between pairs of operators. Some heuristics are defined for individual lexical types, such as each, some and or, and others for classes of individual types, such as universal or existential quantifiers. Most of the heuristics used by the program are stored in a table of scoping weights. To minimize the amount of data, universal and existential quantifiers are sometimes represented by the "standards" each and some, and other members of these classes are then related to the standards by ratios. Most, but not all, of the heuristics described here are currently being used by the program.
The universal quantifiers may be arranged in the hierarchy each > every > all in terms of the tendency to take wide scope. This hierarchy has been mentioned by both Ioup and van Lehn, and a number of people have commented that the function of each in English may partly be to indicate the distributive (i.e. wide scope) reading. Universal quantifiers have a surprisingly marked tendency to scope inside a negation (24-26), given their usual tendency, with the exception of both, to take wide scope:
(24) All people aren't happy .6
(25) John didn't win every race .2
(26) John didn't win both races .1
Non-universal quantifiers in the subject position of a negated sentence seldom scope inside the negation.
Few and no have very little ability to widen scope over a preceding operator but, in contrast, have a strong tendency to trap subsequent operators. Therefore, a distinction needs to be made between the ability to widen scope over a preceding operator and to trap subsequent operators. The following examples show the scoping of few and no relative to quantifiers (27,28), the negation operator (29) and temporal adverbs (30). By comparison, some does not create a strong trap for always (31).

(27) Nobody read every article .02
(28) Someone read no articles .02
(29) Few people weren't surprised .01
(30) Few people always come late .01
(31) Someone always comes late .5

There appear to be some sentences, typically containing two no or few quantifiers, which are used in a sense which does not appear to correspond to any straightforward ordering of the quantifiers. Instead, the total quantity of predications being made seems to be emphasized. An example is given in (35). One possible way of representing such sentences might be to use branching quantification (Hintikka 1974).
(35) Few boys kissed few girls
Sentences containing operators which create negated contexts (e.g. few, no, not, never) are often disambiguated by the presence of any, ever ("at any time") or neither-nor. For example, after few or no the adverb sometimes is usually replaced by ever (32,33), and the wide-scope reading of never in (34) is best obtained by replacing the or with and or by using neither-nor and ever (35).

The singular indefinite a is quite consistently more likely than some to take narrow scope. For example, it would be more natural to use (36) and (38) than (37) and (39) to indicate the narrow scope existential reading. Also, (40) is acceptable but (41) is not. (The scoping weights given for (40) and (41) have not been adjusted to take into account the effect of the modifier different.)
(36) Each person grabbed a chair .3
(37) Each person grabbed some chair .5
(38) John didn't find a chair .3
(39) John didn't find some chair .6
(40) A different person brought each chair (.7)
(41) *Some different person brought each chair (.5)
The scoping of sentences containing the determiner a may be complicated by the presence of generic interpretations. For example, in (42) the non-specific reading could be obtained either by giving never wide scope or by treating a guest as a quasi-universal quantifier (derived from the generic interpretation via meaning postulates). Assuming that the generic reading is present, the standard interpretation in which never has wide scope must be treated as being either absent or logically redundant. (This is an oversimplified view; some attempts are currently being made to give a uniform interpretation to indefinites which would avoid this problem of redundancy). A somewhat similar problem arises when indefinites which may have generic interpretations are present inside the antecedent clause of an if-then sentence (as in certain donkey sentences). Note that there is no comparable reading when a is replaced by some, which does not receive a generic interpretation (43).
(42) An old sailor never gets seasick .5?
(43) Some old sailor never gets seasick .01
Plural indefinites can be placed in an approximate hierarchy in terms of their ability to receive collective interpretations: some > three > several > many. This correlates with their ability to be given "specific" interpretations and therefore with their ability to widen scope from strong clausal scope traps (44,45) and perhaps also, to a lesser extent, relative to the negation operator (46,47). The scoping weights shown are associated with the scoping of the existentially quantified collections.
(44) If three people show up then I will come
(45) If many people show up then I will come
(46) John didn't find three chairs
(47) John didn't find many chairs

Plural indefinites may have implicit universal partitives associated with them (see Hurum & Schubert 1986) and, when present, these must be scoped separately. While the existential quantifiers associated with indefinites are free to scope to any position, in the absence of pragmatic information, there are considerable restrictions on the ability of plural indefinites to distribute over preceding operators. For example, plural indefinites in the object position almost never distribute over quantifiers in the subject position unless preceded by an explicit partitive.
HEURISTICS BASED ON STRUCTURAL RELATIONS
Scoping preferences are strongly influenced by "structural relations", that is, the relations between pairs of operators in the initial logical translations (or, approximately, in the parse tree). Structural relations may be loosely classified as "horizontal", an example being the subject-object relation, or "vertical", an example being the relation between a noun phrase determiner and an operator inside the noun complement. Although this distinction is not always clearcut, the scoping program makes considerable use of it and separate heuristics are used for horizontal and vertical relations.
As a general rule in English, scope order tends to follow surface order, although there are some exceptions, such as in the case of postposed adverbials. The effect of surface order is strengthened considerably by "shifting", where shifting is used here in a general sense to include the preposing of adverbials, topicalization and perhaps the dative shift. For example, it is much more likely that (48) refers to a different set of people each year than (49), and the distributive reading is more likely in (50) than in (51).

(50) Every sailor gave flowers to two girls
(51) To two girls, every sailor gave flowers

It should be pointed out that Ioup (1975) has presented evidence that in a wide range of languages "grammatical function" (e.g. subject, direct object, ...) may be a more important determiner of scope than surface order. (Ioup considers "topic" to be a grammatical category rather than a result of shifting.) It happens that in English there is a close correlation between surface order and scope order. However, it would always be possible, if necessary, to reinterpret some of the heuristics shown here in terms of grammatical relations rather than in terms of surface order.
The effect of surface order and shifting also appears to hold for temporal adverbs, although the interaction of quantifiers with such adverbs can sometimes be quite complex. In the case of negated quantifiers (eg. no, few) and not the effect of surface order is again quite decisive, with the exception of certain postposed adverbs (see below):
(52) Often, nobody is late for lunch .01
(53) Nobody is often late for lunch .0
The effect of shifting can also be seen with existential quantifiers. The following examples show the scoping of the existential quantifier associated with many relative to often in preposed, medial and postposed positions. The effect of adverb placement is clear, although the scoping of postposed adverbs will be radically different depending on such factors as the pronunciation or the presence or absence of a comma (56,57).
(54) Often, many people are late for lunch .02
(55) Many people are often late for lunch .5
(56) Many people are late for lunch often .1
(57) Many people are late for lunch, often .98
The principal ambiguity in these sentences is related to whether or not the same group of people is being referred to in each situation (we may loosely interpret often as quantifying over instances of a type of situation, in this case a lunch setting). This ambiguity can be represented by scoping the existential quantifier associated with many relative to often. It is very unlikely that we would give many wide scope in (54) although this would be more likely with indefinites which can more easily receive specific interpretations, such as some, three and several.
There is also an optional universal partitive associated with plural indefinites such as many, and this must also be scoped. The interaction of universal quantifiers with temporal adverbs involves some quite subtle ambiguities which are related to whether or not all members of some collection are involved in the same situation. However, the effect of surface position is still notable:
(58) Often, everyone is late for lunch .02
(59) Everyone is often late for lunch .5
Different types of embedding construct form quite consistent traps for quantifiers and other unscoped operators. Operators inside prepositional phrases generally widen scope over the head quantifier, those inside full clauses almost never do (with the exception of specific indefinites) and those inside bare verb phrases have an intermediate tendency to do so (see (21)-(23)). Verb phrases serving as noun complements form considerably weaker traps than do those serving as nominalized arguments. Preposed antecedent clauses of connective sentences such as if-then sentences appear to form absolute traps for distributive quantifiers, in contrast to consequent or postposed antecedent clauses, and for connective clauses in general the ordering of the antecedent and consequent clauses needs to be considered.
The effect of structural relations on the scoping of quantifiers generally holds for coordinators as well. Some examples will illustrate the effect of the surface position of NP coordinators relative to negation (60,61) and to quantifiers (62,63). The presence of either, by emphasizing the disjunction, tends to widen the scope of or somewhat.
(60) (Either) Sue or Mary didn't dance with John .2
(61) John didn't dance with Sue or Mary .2
(62) Few people danced with Sue or Mary .2
(63) (Either) Sue or Mary danced with few people .2
Verb coordinators usually scope inside quantifiers in the subject and object positions. For quantifiers in the subject position this is clearly a structural constraint; in both (64) and (65) the subject presumably scopes outside the coordinator, and it is difficult to reverse this ordering by passivization or by replacing the subject with someone different. By contrast, the examples show that the scoping of a direct object relative to a verb coordinator is largely dependent on pragmatics.
(64) Someone wrote and mailed a letter
(65) Someone wrote and received a letter

However, there is probably some bias, which might be considered structural, for scoping an object outside a verb coordinator, and this bias is stronger for prepositional objects and for some (66). It is always possible for or to take wide scope, both relative to subject and object quantifiers, although the latter is more likely (67,68). This is the "speaker's uncertainty" reading. Although always present, it is particularly difficult to get this reading with few or no in the object position.
(66) John drove and flew to some resort

The interaction of plural quantifiers with verb conjunction is more complex, and we make a distinction between primary and secondary scope dependencies: the former involves the scoping of the collection formed from the plural quantifier and the latter the details of the predications of individual members of the collection. For example, in (69) there is presumably only one set of two people, meaning that the collection formed from the subject scopes outside the coordinator. The details of the individual predications can be specified later. In general, some members of the set might be involved in both predications and some in just one. This type of interaction between sets is similar to that between two plural quantifiers. The conjoined subject in (70) could also initially be treated as a collection, or the ambiguity might in this case be handled directly by the parser.
(69) Two different people painted and redecorated the apartment
(70) John and Fred, respectively, fixed and upholstered the chair
The scoping of noun coordinators is somewhat similar to that of verb coordinators, although there is evidence that the wide scope and reading is elliptical for a NP coordination and should therefore be handled by the parser. Therefore, the scoping weights for (71)-(74) have been placed in brackets. The "scoping" of and relative to a singular indefinite is again largely dependent on pragmatics (71,72), although there is probably a statistical bias in favour of the wide scope (elliptical) and reading. Again, this reading is less likely when a is replaced with (singular) some.
(71) A man and woman came to help (.5)
(72) A friend and colleague came to help (.5)

Plural indefinites also display two levels of interaction with coordinated nouns. The initial scope (or syntactic) ambiguities of (73) and (74) again are related to whether there is one collection or two: the latter (meaning a wide scope and) is pragmatically more likely in (73) simply because we wouldn't use this wording to refer to a man and a woman. The details of applying the predicates to members of the collection can again be postponed until later. (A scoping program clearly needs to have some specialized knowledge for handling interactions between sets of objects and predicates.)

(73) Two men and women arrived (.5)
(74) Twenty men and women arrived (.5)

Again, or tends to be trapped (75) and the trap is especially strong with no and few (76).
(75) Every freshman or sophomore finished the course .2
(76) Few freshmen or sophomores finished the course .05
Coordinators are treated as forming complete scope traps except for existential quantifiers and or. This useful rule removes the three unwanted readings of (77) in which either or both of the universal quantifiers take wide scope. It may prevent the anaphoric binding of pronouns, as in (78), but this is part of a more general problem for which there is no satisfactory theory at the moment (see Lepore & Garson 1983; Schubert & Pelletier 1987a, 1987b; Hobbs & Shieber 1987).
(77) Every man or every woman arrived late
(78) Every man or some friend of his arrived late

Like universal quantifiers, and tends to be trapped by clausal embedding, whereas or, though less easily than existential quantifiers, can generally widen scope from strong scope traps such as "scope islands". For example, there is no reading of (79) in which and scopes outside someone, that is, in which there is a different person for Sue and Mary, but in (80) there is a reading, probably the preferred one, in which or must scope outside the nested clause, meaning that each person has either heard that his aunt or that his uncle is arriving. There is also a reading of (80), perhaps not obvious at first, in which or has maximally wide scope, meaning that the speaker is not sure whether it was his aunt or his uncle that each person heard was arriving.
(79) Someone heard the news that Sue and Mary were arriving
(80) Each person heard the news that his aunt or his uncle was arriving
IMPLEMENTATION OF HEURISTICS
Each formula (given an input list of sentential formulas) is traversed in a bottom-up left-to-right order with different types of expression, such as infix, prefix and coordinated expressions, being scoped by separate procedures. As each unscoped operator is encountered, its structural category is stored on its property list and this information is later used to determine the structural relation between a given pair of operators. Vertical relations are passed as parameters to subordinate procedures; in the case of if-then sentences the parameters also contain information about the position and type of clause being scoped (eg. "preposed antecedent clause").
The scoping weight for a pair of operators can then be determined from a table of weights which is indexed according to structural relations and operator types. The table has been kept as small as possible by the use of default values and "standard" operator types (such as some and each for existential and universal quantifiers). Although the use of a table of weights does not in itself have much psychological plausibility, the rules on which the table is based, such as those described above, are generally quite simple, and it is hoped that rules such as these can eventually be incorporated into a more comprehensive model of the grammatical biases which underlie scope preferences.
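A table lookup of this kind might be sketched as follows. Only the 0.9/0.5/0.02 pattern weights come from examples (21)-(23); the relation names, standards and ratios here are assumptions, and the original program was written in LISP rather than Python:

STANDARD = {"a": "some", "several": "some", "every": "each", "all": "each"}
RATIO = {"every": 0.9, "all": 0.8}  # hypothetical ratios relating members to the standard

WEIGHTS = {
    # (head op, embedded op, structural relation) -> preference for the
    # embedded operator to widen scope over the head quantifier
    ("some", "each", "prepositional-phrase"): 0.9,
    ("some", "each", "verb-phrase"): 0.5,
    ("some", "each", "full-clause"): 0.02,
}
DEFAULT_WEIGHT = 0.5

def scoping_weight(op1, op2, relation):
    # Look up using the "standard" representative of each operator class.
    key = (STANDARD.get(op1, op1), STANDARD.get(op2, op2), relation)
    weight = WEIGHTS.get(key, DEFAULT_WEIGHT)
    # Non-standard operators are related to the standard by a ratio.
    return min(1.0, weight * RATIO.get(op1, 1.0) * RATIO.get(op2, 1.0))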
PRAGMATICS
The most obvious place to try to combine pragmatic and domain-independent information is at the level of determining the pairwise scoping weights. The problem of how these weights should then be combined still remains, but this approach does seem to be worth pursuing. Since properly applied pragmatic knowledge will often result in strong, if not absolute, preferences for certain scope orderings, the chances of selecting the best overall reading of a sentence will be improved when pragmatic heuristics are added. The ability of pragmatic knowledge to veto certain scope orderings can quite easily be implemented by setting the appropriate scoping weight below the value of the min-weight parameter, which will automatically disallow any readings containing such orderings.

CONCLUSION

This paper has described some features of a program designed to handle scope ambiguities in English. Some of the more problematic issues which were encountered during the designing of the program were selected for discussion: the choice of logical representation, the scoping of coordinated expressions, the choice of a strategy for selecting preferred scope orderings and the determination of a set of domain-independent heuristics. The program is currently being extended to include a wider range of lexical types and input expressions, and the heuristics are being improved. Following this, it is hoped to incorporate some simple types of domain- and discourse-dependent knowledge into the program, in particular knowledge about expected relations among objects in a given domain and a simple discourse focus structure.
The selection of preferred scope orderings depends on the complex interaction of linguistic and context-dependent knowledge. It would be a considerable advantage to be able to factor out the contributions of the different types of knowledge required and then at some later time to combine them. One conclusion of this work is that there is a body of largely domain-independent knowledge which can play an important, and at times decisive, role in the disambiguation of scope.
Such knowledge is most useful when it indicates a very strong or absolute preference for one reading.
Absolute preferences typically occur with operators such as any or both, and with distributive quantifiers or the coordinator and inside a strong clausal trap or inside a coordinator. Very strong preferences may occur with operators such as few, no or each, with preposed or topicalized operators and with operators inside prepositional phrases. When the domain-independent heuristics do not provide a strong preference for one reading, they may still serve as a useful guide for the later application of pragmatic knowledge. This is commonly the case when indefinites are present, as the "specificity" of indefinites is mainly context-dependent.
A number of problems have not been discussed here because they remain unresolved. These include: the scoping of quantifiers relative to tense and opaque operators, the logical representation and scoping of generics, the treatment of pronouns not embedded within their quantifier antecedents, non-local problems such as the preference for "symmetric" readings, the use of stray words, such as together and both (as an adverb), which provide important clues for preferred scope relations and the difficult problem of combining linguistic and context-dependent heuristic knowledge.
ACKNOWLEDGEMENTS

I would like to thank Dr. Len Schubert for originally suggesting this project and for his advice and many helpful comments throughout the course of this work. I would also like to thank the members of my thesis committee and the members of the Logical Grammar Study Group at the University of Alberta for their comments on parts of this work. This work was supported in part by NSERC Operating Grant A8818.

APPENDIX

1. All men want to marry Peggy or Sue

(i (q all1 man2) (f PRES (p want3 (TAU2 (f INF (p marry4 (c or6 Peggy5 Sue7)))))))

1. The average weight is 0.7 based on 1 comparison
(... (i y10 (f INF (p marry4 Sue7))) ...)

2. The average weight is 0.5 based on 2 comparisons
(q all1 y5 (i y5 man2) (i (i y5 (f PRES (p want3 (TAU2 (f INF (p marry4 Peggy5)))))) or6 (i y5 (f PRES (p want3 (TAU2 (f INF (p marry4 Sue7))))))))

3. The average weight is 0.3 based on 2 comparisons
(i (q all1 y5 (i y5 man2) (i y5 (f PRES (p want3 (TAU2 (f INF (p marry4 Peggy5))))))) or6 (q all1 y5 (i y5 man2) (i y5 (f PRES (p want3 (TAU2 (f INF (p marry4 Sue7))))))))

time used = 308 msecs.

2. Mary (read or told some story to each child)

(i Mary1 (c or4 (f PAST (p read3 (q some6 story7) (q each8 child9))) (f PAST (p tell5 (q some6 story7) (q each8 child9)))))

1. The average weight is 0.82 based on 2 comparisons
(i (q each8 y17 (i y17 child9) (q some6 y15 (i y15 story7) (i Mary1 (f PAST (p read3 y15 y17))))) or4 (q each8 y17 (i y17 child9) (q some6 y15 (i y15 story7) (i Mary1 (f PAST (p tell5 y15 y17))))))

2. The average weight is 0.67 based on 2 comparisons
(i (q some6 y15 (i y15 story7) (q each8 y17 (i y17 child9) (i Mary1 (f PAST (p read3 y15 y17))))) or4 (q some6 y15 (i y15 story7) (q each8 y17 (i y17 child9) (i Mary1 (f PAST (p tell5 y15 y17))))))

time used = 386 msecs.
REFERENCES

Dahl, V. (1979), "Quantification in a Three-Valued Logic for Natural Language Question-Answering Systems", Proceedings of the Sixth International Joint Conference on Artificial Intelligence, 182-187.

Enç, M. (1981), Tense Without Scope: An Analysis of Nouns as Indexicals, unpublished Ph.D. Dissertation (University of Wisconsin, Madison).

Fodor, J.D. (1976), The Linguistic Description of Opaque Contexts, Ph.D. Dissertation, available from Indiana University Linguistics Club.

Gil, D. (1982), "Quantifier Scope, Linguistic Variation, and Natural Language Semantics", Linguistics and Philosophy 5, 421-472.

Hintikka, J. (1974), "Quantification vs. Quantification Theory", Linguistic Inquiry 5, 153-177.

Hobbs, J.R. (1983), "An Improper Treatment of Quantification in Ordinary English", Proceedings of the Twenty-First Annual Meeting of the Association for Computational Linguistics, 57-63.

Hobbs, J.R. & S.M. Shieber (1987), "An Algorithm for Generating Quantifier Scopings", in preparation.

Hurum, S. (1987), Quantifier Scoping in Initial Logical Translations of English Sentences, M.Sc. thesis, University of Alberta, 1-242.

Hurum, S. & L.K. Schubert (1986), "Two Types of Quantifier Scoping", Proceedings of the Sixth Canadian Conference on Artificial Intelligence, 39-43.

Ioup, G. (1975), "Some Universals for Quantifier Scope", in J.P. Kimball (ed.), Syntax and Semantics, Vol. 4, (New York: Academic Press), 37-58.

Lepore, E. & J. Garson (1983), "Pronouns and Quantifier-Scope in English", Journal of Philosophical Logic 12, 327-358.

McCord, M.C. (1981), "Focalizers, the Scoping Problem and Semantic Interpretation Rules in Logic Grammars", Proceedings of the International Workshop on Logic Programming for Expert Systems, Logicon, (Woodland Hills).

Saarinen, E. (1980), "Quantifier Phrases are (at Least) Five Ways Ambiguous in Intensional Contexts", in F. Heny (ed.), Ambiguities in Intensional Contexts, (Dordrecht: Reidel), 1-45.

Saint-Dizier, P. (1985), "Handling Quantifier Scope Ambiguities in a Semantic Representation of Natural Language Sentences", in V. Dahl & P. Saint-Dizier (eds.), Natural Language Understanding and Logic Programming, (North-Holland), 49-63.

Schubert, L.K. & F.J. Pelletier (1982), "From English to Logic: Context-Free Computation of 'Conventional' Logical Translation", American Journal of Computational Linguistics 8, 26-44. Reprinted (with corrections) in B.J. Grosz, K. Sparck-Jones & B.L. Webber (eds.), Readings in Natural Language Processing, (Los Altos: Morgan Kaufmann), 1986.

Schubert, L.K. & F.J. Pelletier (1987a), "Problems in the Representation of the Logical Form of Generics, Plurals and Mass Nouns", in E. Lepore (ed.), New Directions in Semantics, (Academic Press).

Schubert, L.K. & F.J. Pelletier (1987b), "Generically Speaking, With Remarks on the Interpretation of Pronouns and Tenses", to appear in G. Chierchia, B. Partee & R. Turner (eds.), Property Theory, Type Theory, and Semantics, (Dordrecht: Reidel).

van Lehn, K. (1978), Determining the Scope of English Quantifiers, MIT Artificial Intelligence Laboratory.
241,583,483 | Hyperparameter Power Impact in Transformer Language Model Training | Training large language models can consume a large amount of energy. We hypothesize that the language model's configuration impacts its energy consumption, and that there is room for power consumption optimisation in modern large language models. To investigate these claims, we introduce a power consumption factor to the objective function, and explore the range of models and hyperparameter configurations that affect power. We identify multiple configuration factors that can reduce power consumption during language model training while retaining model quality. | [
207912113
] | Hyperparameter Power Impact in Transformer Language Model Training
Hyperparameter Power Impact in Transformer Language Model Training

Lucas Høyberg Puvis de Chavannes, Mads Kongsbak, Timmie Mikkel Rantzau Lagermann, Leon Derczynski

IT University of Copenhagen; M47 Labs

These authors contributed to the paper equally.

Proceedings of the 2nd Workshop on Simple and Efficient Natural Language Processing, November 10, 2021
Training large language models can consume a large amount of energy. We hypothesize that the language model's configuration impacts its energy consumption, and that there is room for power consumption optimisation in modern large language models. To investigate these claims, we introduce a power consumption factor to the objective function, and explore the range of models and hyperparameter configurations that affect power. We identify multiple configuration factors that can reduce power consumption during language model training while retaining model quality.
Introduction
Large language models have pushed the boundaries of accuracy and performance in various NLP tasks, at the cost of energy efficiency. This is due to the increasing amount of compute time and power needed to train these models (Amodei, 2018), thus increasing the amount of energy the computers training the models need to consume.
The Robustly Optimized BERT approach (RoBERTa) (Liu et al., 2019) achieved this by improving the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) in multiple ways, such as increasing the training time through more epochs and a larger amount of data, with BERT already requiring 1507 kWh of electricity and emitting 652 kg of CO2. Other strategies, such as neural architecture search (NAS) for an English-to-German machine translation model, consumed 656,347 kWh of electricity, corresponding to 248,019 kg of CO2 (Strubell et al., 2019). While these models show great potential, it comes at the cost of high CO2 emissions. The core issue is that the electricity consumed is not guaranteed to be environmentally friendly, and often comes from sources such as coal or gas. According to Strubell et al. (2019), we must cut CO2 emissions by half to slow natural disasters. However, much research in the field ignores the perspective of energy efficiency. When looking at papers from three top AI conferences, namely ACL, NeurIPS, and CVPR, work tends to focus on accuracy rather than efficiency, or a mixture (Schwartz et al., 2019).
An added benefit to developing more energy-efficient models is a reduced barrier of entry to NLP research. Researchers with good ideas may not be able to execute those ideas, given that state-of-the-art results are locked behind large-scale compute (Strubell et al., 2019; Bender et al., 2021).
This study investigates how to reduce power consumption in training transformer language models. We seek to address the issue of high-power models by analysing the resulting models' hyperparameters, energy consumption, and perplexity, providing initial parameter guidelines for low-power, high-performance transformers and an opening into the research of low-power transformers.
Our research question is: How can we reduce the energy consumption of models to both lower the barrier of entry and reduce CO2 emissions, while still keeping an effective model?
Following Strubell et al. (2019), a possible approach to this problem could be the use of Bayesian hyperparameter search. Throughout our work, we managed to identify hyperparameter configurations that provide strong entries for both perplexity and energy consumption. These configurations were found through our methodology utilising Bayesian optimisation, by combining the libraries Hyperopt and PyTorch. The optimal configurations collectively spanned the identified Pareto frontier.
Related Work
There is a body of work on making language models and transformers efficient, for varying definitions of efficiency, but we are not aware of any that algorithmically integrate power consumption into the loss function or architecture search.
The closest related work is that on task-specific network reduction and on low-power language processing. The former can be achieved through distillation, pruning, quantisation, or all of the above. For example, Wasserblat et al. (2020) reduce trained BERT models in size by orders of magnitude while retaining task performance. Similarly, Kim et al. (2019) present highly efficient networks that can, as a result, process translations very quickly. However, all these techniques require the training of a large network first, thus only offering power savings at inference time. Furthermore, Kaplan et al. (2020) present scaling laws for neural language models, which can assist in more efficient training when applied.
Method
Our general approach is as follows. We specify a dataset, task, objective function, and hyperparameter space. We then explore hyperparameter space, repeatedly training models over the same data and evaluating them in terms of task performance and power consumption. This exploration optimises for good task performance and low power consumption, but is limited to a certain volume of model configurations. Once complete, we analyse these model configurations further, investigating per-epoch performance, and common factors in high and low power consumption and task performance.
Data
The dataset chosen is the CC-News dataset (Mackenzie et al., 2020). This is a subset of the English-language portion of the entire CC-News dataset. This specific set of data was chosen because it is part of what RoBERTa was trained on, and because it has the longer document length typical of the news genre. Only the first 100 000 examples are used for training, primarily chosen in an attempt to keep the energy consumption of the trained models to a minimum. Each document comes with multiple data fields, with only the text field being used for training. The reduced amount of data introduces an assumption that these results will scale, which we address later in the paper when investigating common factors in efficient and inefficient models.
Task: Language modelling
The task used for this paper is Masked Language Modelling, also referred to as Masked LM or MLM. The procedure is very simple: mask some words in the input sentences with the token ([MASK]), and then attempt to predict what these words are. An example of such a sentence is "The borders of Paris are [MASK]", where [MASK] is the word to be predicted. Perplexity is a widely used metric for the evaluation of language models. Low perplexity means better performance, which makes it a useful metric to evaluate language models in general. We use perplexity as defined by HuggingFace: the exponentiated average negative log-likelihood of a sequence (HuggingFace, 2021).
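Concretely, this definition reduces to exponentiating the mean cross-entropy loss. A minimal sketch, assuming the evaluation loss is already an average negative log-likelihood per token (as it is for the masked LM loss used here):

import math

def perplexity(eval_loss: float) -> float:
    # exponentiated average negative log-likelihood
    return math.exp(eval_loss)

# e.g. with a HuggingFace Trainer: perplexity(trainer.evaluate()["eval_loss"])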
Perplexity-Energy Product
We used a simple multiplication of the total perplexity and energy consumption of a model, m_perplexity and m_energy, to act as the return value, the Perplexity-Energy Product (PEP), to be minimised:
PEP = m_perplexity · m_energy    (1)
where m_perplexity is the perplexity of the trained LM, and m_energy is the energy consumption for training the LM as measured by Carbontracker (Anthony et al., 2020). We chose to call this return value PEP, as shown in Equation 1. The reason for using a composite expression of both perplexity and energy is to make the optimisation focus on parameters that affect both. We chose to multiply the two values, as we hypothesised this would punish high values far more strongly: a multiplicative penalty grows much faster than an additive one. An issue with this is that as soon as either value is below 1, the loss is simply a linearly scaled-down expression of the other number. A worst-case scenario would be that Hyperopt would prioritise optimising one value and neglect the other, but the resulting loss would still be acceptable for the optimiser. A lower PEP value is better, since the aim is to minimise both the energy cost and the perplexity.
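Equation 1 translates directly into code; the sketch below uses our own variable names, with the sanity check drawn from the averages reported later in the Results section (27.23 PPL and 1.70 kWh for the best 15% of models, 1809 PPL and 5.7 kWh for the worst):

def pep(perplexity: float, energy_kwh: float) -> float:
    """Perplexity-Energy Product (Equation 1); lower is better."""
    return perplexity * energy_kwh

# The best 15% of models should dominate the worst 15% on PEP.
assert pep(27.23, 1.70) < pep(1809.0, 5.7)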
There are underlying issues with assessing both the quality and efficiency of models in a single metric. One of the issues is that when efficiency is below one kilowatt-hour, the resulting PEP value will essentially be a scaled-down perplexity. So potentially, a model with a good balance between efficiency and quality would have a higher PEP value than a model that is very efficient, but lacking in quality. Additionally, using a single metric of the
Hardware
The experiment platform was V100 GPUs on i7 CPUs running over a SLURM service. As stated by (Lasse, 2021), Carbontracker also works with SLURM, measuring only the GPU devices available to the given job, so the reported consumption figures are consistent but could be slight underestimates.
Hyperparameters
To investigate hyperparameter impact on Transformer model power use, we chose to specifically look at the parameters related to its size, such as the number of its hidden layers or number of attention heads, alongside a few key parameters such as the type of positional encoding, activation functions and dropout probabilities. The parameters picked were chosen on the assumption that they were the ones most likely to affect both model perplexity, and model energy consumption during training. Table 1 shows our final search space and parameter value intervals. Please note that the hidden_size of the model is given as:
h_size = h_size_mult · #A_heads    (2)
where h_size is the hidden size, h_size_mult the hidden size multiplier, and #A_heads the number of attention heads. This is due to the constraint that the hidden size of the model has to be a multiple of the number of attention heads in the model (Vaswani et al., 2017). This leaves a parameter space of size 1026 between [1, 1800], most of which is centred around the lower end of the interval.
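A Hyperopt search space honouring this constraint might look like the sketch below. Since Table 1 is not reproduced here, the ranges are placeholders rather than the intervals actually searched; the activation and positional-encoding options follow the HuggingFace RoBERTa configuration names, with relative_key_query appearing later in the results:

from hyperopt import hp

space = {
    "num_hidden_layers": hp.uniformint("num_hidden_layers", 1, 12),
    "num_attention_heads": hp.uniformint("num_attention_heads", 1, 16),
    "hidden_size_multiplier": hp.uniformint("hidden_size_multiplier", 4, 112),
    "intermediate_size": hp.uniformint("intermediate_size", 128, 3072),
    "hidden_dropout_prob": hp.uniform("hidden_dropout_prob", 0.0, 0.5),
    "hidden_act": hp.choice("hidden_act", ["gelu", "relu", "silu", "gelu_new"]),
    "position_embedding_type": hp.choice(
        "position_embedding_type",
        ["absolute", "relative_key", "relative_key_query"]),
}

def hidden_size(params):
    # Equation 2: hidden size must be a multiple of the number of heads.
    return params["hidden_size_multiplier"] * params["num_attention_heads"]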
Search algorithm
We use Bayesian optimisation to traverse hyperparameter space in search of low-power transformer training configurations. Bayesian optimisation is suited to cases where one wishes to find optimal parameter configurations, for some definition of optimal, but individual trials can be expensive. This makes it a good match for transformer hyperparameter tuning. Search is executed in parallel over multiple GPUs and GPU hosts. As noted in (Bergstra et al., 2011), "The consequence of parallelization is that each proposal x * is based on less feedback. This makes search less efficient, though faster in terms of wall time". Research indicates Bayesian hyperparameter search techniques are more efficient than brute force techniques (Strubell et al., 2019), such as grid search; this family is often the least efficient in terms of time-to-viable-solutions. Further, our general focus on energy efficiency motivates choosing efficient search algorithms.
Samples follow a uniform distribution. This was a deliberate choice, as we have no prior knowledge of which parameters would perform best; we hypothesised that a uniform distribution over all parameters would yield the best results. The specific algorithm used to minimise the loss is the Tree of Parzen Estimators (TPE) (Bergstra et al., 2013).
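Putting these pieces together, a minimal sketch of the optimisation loop follows; train_and_measure is a hypothetical helper standing in for a full training run that returns (perplexity, energy_kwh), and the one-parameter space is a stand-in for the fuller sketch above:

from hyperopt import fmin, tpe, hp, Trials

space = {"num_hidden_layers": hp.uniformint("num_hidden_layers", 1, 12)}

def objective(params):
    ppl, kwh = train_and_measure(params)  # hypothetical helper
    return ppl * kwh                      # the PEP value from Equation 1

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=154, trials=trials)  # 154 configurations, as in our runs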
Model Selection and Per-epoch Measurements
Hyperopt was left to run over the data and hyperparameters for a fixed number of days. Models were trained with a batch size of 2, over 3 epochs. The batch size was chosen due to stability, as the relatively low number of epochs could make the results unstable. According to Masters and Luschi (2018), the most stable and best generalization results have been obtained with a batch size of 32 or smaller, but the best results were with batch sizes as low as 2. We also logged all of the parameter configurations alongside the energy consumption and perplexity of each model. The result was 154 different hyperparameter configurations. We then retrained the 154 hyperparameter configurations we found over 10 epochs to see a further evaluation of how each model would evolve per epoch in terms of power consumption and performance. For each model, a callback implementing Carbontracker was used to gather data about the energy consumption after each epoch, and the same callback was used to log the perplexity of each model. Another callback was created to save a model for each epoch, resulting in 1540 different models being saved. Each model was trained on a single Tesla V100 32GB graphics card.
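A minimal sketch of the per-epoch measurement loop, assuming hypothetical train_one_epoch and evaluate helpers and a HuggingFace-style model; the epoch API shown is Carbontracker's documented usage:

from carbontracker.tracker import CarbonTracker

EPOCHS = 10
tracker = CarbonTracker(epochs=EPOCHS)  # logs energy and CO2 per epoch
for epoch in range(EPOCHS):
    tracker.epoch_start()
    train_one_epoch(model)                                 # hypothetical helper
    print(f"epoch {epoch}: perplexity {evaluate(model)}")  # hypothetical helper
    model.save_pretrained(f"checkpoints/epoch-{epoch}")    # one saved model per epoch
    tracker.epoch_end()
tracker.stop()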
Results
Correlation Matrices
The majority of the results section uses correlation matrices to analyse the data. We have two categorical data entries, namely the activation function and the position embedding type. For these, we use categorical correlation to showcase both the activation function and the position embedding type alongside the rest of the data. The primary reason for using correlation matrices is their ability to quickly visualise and pinpoint patterns in the data. We have three different ways of evaluating the different hyperparameters, namely with (i) energy consumption, (ii) perplexity, and (iii) PEP value, and correlation matrices therefore make it easy to visualise the hyperparameters. In general, the primary concern is to see correlations between the evaluation methods and hyperparameters.

Table 2 contains overviews of the best 15% of models and worst 15% of models, for PEP value, with regard to average parameter size. A count of which position embedding type and activation function was used can be found in Table 3. The models were sorted by lowest PEP value, and the best 15% and worst 15% were chosen. Table 2 presents an overview of average energy consumption and perplexity of the best 15% of models, to compare with the worst 15% of models in terms of PEP value.
PEP
The data for the best 15% of models is introduced to analyse the tendencies that produce good PEP values. The data for the worst 15% of models is introduced to analyse what not to do when choosing parameters for a new model. Three different correlation matrices, for all models and for the best and worst 15% of models, are given in Appendix A.
Energy
The data here is presented in the same format as the previous section. Table 4 shows findings for the best 15% of models in terms of low energy consumption, and for the worst 15%, alongside several correlation matrices in Appendix B. Furthermore, as these results are extracted from models that have been trained through Hyperopt with a specific loss function, the resulting parameters are not chosen to achieve the lowest energy consumption possible, but rather the lowest PEP value. These results can then possibly indicate which parameters can be tweaked to reduce energy consumption specifically, while retaining some performance.
Identifying optimal models
While PEP is a suitable metric, we also want to identify models which cannot be optimised further with respect to perplexity or energy consumption without a penalty in the other. Therefore we identified the Pareto-optimal models. Figure 1 showcases energy consumption against perplexity for each model as it evolves over the 10 epochs. The colour of each dot represents a specific model, so it is possible to see how each model progresses throughout the graphs. Besides visualising how each model evolves, the graph also highlights the Pareto curve for each epoch, which indicates the best-performing models for that epoch. While some of these best models might not have a particularly good PEP value, they are still part of the Pareto curve, and thus cannot improve either perplexity or energy consumption without an increase in the other.
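Identifying the Pareto-optimal models from the logged (energy, perplexity) pairs is a simple sweep; a minimal sketch with our own representation, where lower is better on both axes:

def pareto_front(points):
    """Return the non-dominated subset of (energy_kwh, perplexity) pairs."""
    front = []
    for energy, ppl in sorted(points):       # sweep in ascending energy order
        if not front or ppl < front[-1][1]:  # keep only strict perplexity improvements
            front.append((energy, ppl))
    return front

# e.g. pareto_front([(1.7, 27.2), (2.0, 26.0), (5.7, 1809.0)])
#      -> [(1.7, 27.2), (2.0, 26.0)]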
Furthermore, as can be seen in Figure 1, there are a few models which permanently reside at 2000 perplexity. These are all models which have a low vocab size, high dropout probabilities, a low hidden size, or a combination of the three. This, combined with a relatively high number of hidden layers and
Analysis
For the analysis, we start with the most trivial optimisation steps and progress towards less trivial ones.
Parameter correlations
When looking at the best 15% of models in terms of low PEP value, analysing the resulting correlation matrix given in Figure 3 in Appendix A can give us insight into whether our approach has resulted in a good balance between energy consumption and perplexity. It can also give clues as to which parameter values can be chosen to reduce a model's energy consumption without affecting perplexity. Additionally, we define a good balance for a hyperparameter between energy consumption and perplexity as the absolute value of the correlation of a parameter with perplexity roughly equalling the absolute value of the correlation between the same parameter and energy consumption. Specifically:
|corr(param_n, PPL)| ≈ |corr(param_n, energy)|
The correlation between perplexity and energy consumption is -0.88, which indicates that the use of Hyperopt and our PEP loss function has given a good balance between quality and energy use during optimisation. The three key parameters appear to be the number of hidden layers, the hidden activation function and the position embedding type. Hidden layer count has correlations of 0.71 and -0.64 with energy consumption and perplexity respectively, making it the hyperparameter with the greatest impact on both of these values. The activation function also has a high impact, again with -0.63 and 0.68 with energy consumption and perplexity respectively (Derczynski, 2020). Lastly, the positional embedding is at 0.49 and -0.65 respectively. As shown in Tables 2 and 3, the optimal number of hidden layers averaged out at 1.91. The most used activation function and positional encoding are GELU and relative_key_query, with 14 and 20 appearances respectively. It is important to note that as the positional encoding defines the way attention is calculated, there may be an underlying link between the number of attention heads and the encoding choice.
It is important to note that correlation is a useful tool for finding linear tendencies. While a correlation close to -1 or +1 indicates a strong linear relationship between two values, a value of zero doesn't guarantee a lack of correlation, since there could still exist a non-linear correlation.
What predicts a good or bad model?
Looking at Table 2, the overall tendency is that the models, on average, are much smaller than the RoBERTa_BASE configuration (Liu et al., 2019). The number of hidden layers in our best models in terms of PEP value is on average 2, down from 12. With hidden layers having a correlation of 0.71 with energy consumption and -0.64 with perplexity, as seen in Figure 3, it is by far one of the most volatile parameters to adjust: increasing the number of hidden layers will increase energy consumption but decrease perplexity. Interestingly, its correlation of 0.056 with the PEP value could suggest that hyperopt has found a compromise in the number of hidden layers that gives a good balance between low energy consumption and low perplexity. The number of attention heads averages 8, which is higher than our initial assumption, but with a standard deviation of 4.13 it varies a lot from model to model. It is important to note that this parameter is dependent on the type of positional embedding used, as the way attention is calculated depends heavily on it.
While these models have good performance, with energy consumption averaging 1.70 kWh and perplexity averaging 27.23, it is important to note that these two metrics have a negative correlation of -0.88 with each other: if one is reduced, the other increases. This could suggest that we have hit a point where we can no longer make our models smaller without severely affecting end performance. RoBERTa reported perplexities between 3.68 and 3.99 in table 3 of (Liu et al., 2019). While our perplexities are on average roughly 7.4 times higher, both our amount of training data and our model size are vastly smaller, by a factor of 10, probably leading to a shorter downstream-task fine-tuning time and, as a result, lower energy consumption.
On the opposite end of the spectrum are the worst 15% of models in terms of PEP value. We assumed that these models would be bigger, in terms of hidden layers, hidden size, and intermediate size, as these would most likely result in longer training times than smaller ones, thus consuming more energy. Comparing Table 2 to the best 15% supports this analysis. The number of hidden layers increases from an average of 1.91 to 6.52, hidden size from 273 to 670, and intermediate size from 716 to 1237. The energy consumption is indeed also higher, going from 1.7 kWh to 5.7 kWh. The perplexity is also very bad, at an average of 1809.
Reducing LLM training power consumption without reducing quality
There is also a slight correlation between hidden dropout probability and perplexity, but almost no correlation between that probability and energy consumption. These correlations suggest that a low hidden dropout probability results in a lower perplexity without impacting the energy consumption of the models. The top 15% of models at η = 10 (Figure 3) show interesting correlations. This matrix suggests a stronger negative correlation between energy consumption and perplexity. These results indicate that it is hard to reduce both perplexity and energy consumption at the same time, since lowering one tends to increase the other - a trade-off. This could be because the models are already performing decently well and have reached the point where subsequent iterations bring diminishing returns in perplexity, while still incurring a linear increase in energy consumption. Looking at the models, 10 of the 23 models in the best 15% appear on the Pareto-optimal curve, as seen in Figure 1. This suggests that the remaining 13 models are very close to optimal, whereas the 10 models on the Pareto curve already are. The same trend appears for the number of hidden layers, where the correlations indicate that increasing the number of hidden layers will increase energy consumption but decrease perplexity. Interestingly, the correlation previously seen with the hidden dropout probability has disappeared. This indicates that the best-performing models already have a hidden dropout probability low enough for this correlation to no longer hold. Looking at the data, the entire dataset features an average hidden dropout probability of 0.385, whereas the best 15% of models average 0.189, which supports this.
Different parameters have different effects in different realms
Most of our well-performing models - whether looking at energy only, perplexity, or PEP value - use the relative key query positional embedding. While this makes the self-attention calculation slightly more complex, the results suggest that it does not have a large impact on energy consumption, while it has a noticeable impact on perplexity. Relative key query introduces another 143K parameters, which is relatively insignificant compared to the 103M BERT already has (Huang et al., 2020). While our dataset contains 107 models with "relative key query" as the positional embedding type, among the worst-performing models the distribution over embedding types looks close to uniform, at least when looking at perplexity and PEP. Furthermore, Table 13 shows the distribution in the first cluster, which features 94 of the 107 relative key query models, further strengthening the claim.
A similar trend appears for the choice of activation function, with GELU being the best-performing activation function for our best models in terms of perplexity and energy loss. If one wants an activation function purely for energy efficiency, SiLU is the most prevalent among the models with the lowest power consumption. Table 13 shows that GELU and SiLU appear most frequently, with ReLU still close behind, in line with previous results (Derczynski, 2020).
Our findings focused on hyperparameter optimisation, with no compression of the LMs at all. Jacobsen et al. (2021) provide methods for measuring model size. Li et al. (2020) show that training larger models for longer and then compressing them leads to more efficient training. Though the previous analysis of layers and attention heads is less relevant for that approach, our findings regarding loss functions and embedding types should hold when training a larger model and then compressing it.
Our energy consumption
This paper investigates alternatives to high-performance, high-energy-consumption top-of-the-line models by showing that other approaches can provide acceptable performance while cutting model size and training time. This topic is relevant due to the environmental impact training can have, such as accelerating rising sea levels (Veng and Andersen, 2020): training NLP models has a real impact on the environment through high energy usage, and reducing that consumption is both important and underexplored as a topic. This also means that we, as the authors, have to stress that this is not a complete guide on how to create low-power, low-perplexity transformer models. As an example, reducing the number of hidden layers might have a positive effect in certain models, but not necessarily in others - one has to look at all the model parameters together. Table 6 shows the total amount of energy we spent training our various models. As the price of 1 kWh in Denmark is roughly €0.28 (Eurostat, 2021), the electricity for training all of our models cost roughly €195 in total. While not a lot in comparison to the models presented in table 3 of Strubell et al. (2019), our models are also trained on a smaller dataset, and some of them have a configuration that slims down their size. In terms of watt-hour usage, the models we trained consumed half the energy that BERT base did. As a comparison, 700 kWh can power a Tesla Model 3 for 4337 kilometres (at a standard 16 kWh per 100 kilometres (Wikipedia, 2021)), which is roughly from Copenhagen to Barcelona and back.
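The cost figures above follow from simple arithmetic; a worked version is sketched below. The constants are the ones quoted in the text, and the exact outputs are ours (700/16 x 100 gives about 4375 km, slightly above the 4337 km quoted).

total_kwh = 700            # approximate total energy used for training (from the text)
eur_per_kwh = 0.28         # Danish electricity price (Eurostat, 2021)
tesla_kwh_per_100km = 16   # Tesla Model 3 consumption (Wikipedia, 2021)

print(f"electricity cost: ~EUR {total_kwh * eur_per_kwh:.0f}")               # ~EUR 196
print(f"driving range: ~{total_kwh / tesla_kwh_per_100km * 100:.0f} km")     # ~4375 km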
Limitations
In this subsection, we discuss possible limitations of our study and indicate where future work could explore further.
Given that this work was done on a small subset of the CC-News dataset, the best-performing models may be smaller models for which the small amount of data is not a downside. If the size of the corpus were increased, it would give the larger models space to learn more, and they might perform better than the smaller models deemed the most efficient. Even if other efficient models were uncovered, this would not change the correlation between the number of hidden layers and efficiency. There is also a small possibility that perplexity scores are inflated, as CC-News was used to pre-train RoBERTa (Liu et al., 2019), and this could occur non-linearly across hyperparameter configurations - but we expect the relations between network architecture components, power consumption, and performance to hold relative to each other overall.
We also note that this work was monolingual and focused on English, to the exclusion of other languages (Bender, 2019). While transformer-based LLMs seem to exhibit some properties in common across natural languages, there are also many language-specific considerations (e.g. for Finnish, Virtanen et al. (2019)). Our results cannot be guaranteed to generalise across other natural languages.
Given that a heterogeneous cluster was used for training all of the models, there is no guarantee that the same precise hardware configuration was used to train all 154 models. All compute nodes have the same type of graphics card but contain different server-grade CPUs. Since CPU and DRAM contribute to power consumption (Anthony et al., 2020), the variations in clock speed and core count could potentially have a small effect on the final energy consumption.
We used a relatively low epoch count for the search of hyperparameter space. This might not reflect high-epoch hyperparameter optima, and so the relations between perplexity scores are at best advisory. However, power consumption tends to be stable per epoch, and so this component of the function performs well.
We evaluate using perplexity, which has its own issues and is unlikely to approximate many NLU metrics. However, it lends itself well to our exploratory compound loss function and retains agnosticity regarding the many possible NLU tasks (such as those in SuperGLUE). The correlation between evaluated language model perplexity and final task performance is not certain; however, these results can guide hyperparameter search and tuning in terms of good balances, and we hope to see much more research in this area of transformer efficiency and the quality/energy-use trade-off.
We fixed the epoch count for these studies. An alternative would be to fix the FLOPS available. Gordon et al. (2018) present a regulariser that optimises FLOPS usage. Because perplexity tends to approach its floor asymptotically for useful models, absolute differences between model scores become smaller as training goes on. This makes the metric a little noisy and differences hard to detect. Fixed FLOPS allowances would lead to high epoch counts for smaller models, increasing the risk of model performances being hard to tease apart. Thus, fixing epochs instead of FLOPS during exploration gives finer granularity for very small or very efficient models, though FLOPS is a closer approximation to power consumption.
As NVIDIA (2021) highlights, some size multipliers are more efficient than others when it comes to matrix-matrix multiplication. Allowing the search space in Table 1 to take non-efficient values can affect reproducibility because of hardware idiosyncrasies. While general trends can be extracted, precise figures might not generalise well beyond the test setup.
Conclusion
We investigated how hyperparameters affect the power consumption and model quality of transformer-based language models. This paper has presented a method for low-power investigation of hyperparameter tuning, integrating power consumption measurement. We identified factors that increase power without giving quality, and factors that increase quality without taking power. There are many possible extensions to this work, and it is our hope that power consumption measurement will be improved and will be integrated with more architecture searches during model training.
Figures 2, 3, and 4 show the correlation matrices of all 154 trained models, the best 15% of models with regard to PEP value, and the worst 15% of models with regard to PEP value, respectively. These correlation matrices are used to find relationships between the different hyperparameters and PEP, in order to figure out which hyperparameters have a large effect on PEP and which do not. Figures 5 and 6 show the correlation matrices for the best 15% of models with regard to energy consumption and the worst 15% of models with regard to energy consumption, respectively. A correlation matrix over all 154 models would produce a result identical to Figure 2, so it is not included here. These matrices are primarily used to find hyperparameters which greatly affect energy consumption, or which barely affect it. Both can be used to find hyperparameters which need to be tuned, and hyperparameters which can be tuned with regard to perplexity alone, given that they do not affect energy consumption.
A PEP
B Energy
C Early optimisation results
In this section, we present the preliminary results after the hyperparameter tuning done with hyperopt. This includes some of the first observable tendencies and was used to start the analytical process. The loss saved from each model produced by hyperopt is the average over all three epochs, instead of the actual loss at the end of the third epoch. This makes it difficult to calculate perplexity, as the loss is skewed heavily towards the mean, and in turn also means the energy loss is not comparable to other parts of the project. Energy consumption was logged at the end of the third epoch, and can therefore still be compared to the rest of the project. Furthermore, it is not possible to recover the actual perplexity, given that there is no fixed relation between perplexity and the number of epochs. The data is still presented, but given that the data collection differed from the rest of the project, it is mostly not compared or analysed further.
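The perplexity problem described above comes from perplexity being the exponential of a cross-entropy loss measured at one point in training; exponentiating a cross-epoch mean gives a different, mean-skewed number. A minimal sketch with illustrative loss values:

import math

epoch_losses = [5.2, 4.1, 3.4]  # mean cross-entropy per epoch (illustrative values)

ppl_final = math.exp(epoch_losses[-1])                           # ~30.0, the usual definition
ppl_from_mean = math.exp(sum(epoch_losses) / len(epoch_losses))  # ~69.0, skewed towards the mean

print(f"{ppl_final:.1f} vs {ppl_from_mean:.1f}")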
After hyperopt had trained all 154 models for 3 epochs, an analysis of the different model configurations was performed in order to spot early trends in the data. A correlation matrix was constructed for all variable parameters and compared to energy consumption, perplexity, and energy loss, as can be seen in Figure 7. For energy consumption, hyperparameters such as the number of hidden layers and the size of those hidden layers play a big role: as either the number of hidden layers or their size increases, so does the energy consumption needed to train the model. For perplexity, there are no correlations as strong as those for energy consumption. The most notable is a positive correlation between hidden layer dropout probability and perplexity. The same goes for hidden layers and hidden size, although these correlations are not as strong as with energy consumption.
The last interesting thing to note here is the positive correlation between energy consumption and perplexity. Looking at the correlation between energy consumption and energy loss, or perplexity and energy loss, one sees closer correlations, which indicates that the relation between energy consumption and perplexity, albeit positive, is weaker than the others. There are multiple explanations for this. It could be that this correlation shows that models can be improved in either department without sacrificing the other metric, when looking at all models as a whole. It could also be that the aforementioned outliers are having an effect on the data. Following this, the correlation matrix for the best 15% of our models was constructed, as can be seen in Figure 8. When filtering for the best-performing models, the strongest correlation is between energy consumption and perplexity, at −0.86. This indicates that as energy consumption goes higher, perplexity goes lower, and the other way around.
D Perplexity
As with the previous section, where we went through the models sorted by the best and worst 15% in terms of energy consumption, this section does the same in terms of perplexity. By analysing the parameters, we might be able to find relations that can cause both low energy consumption and low perplexity, thus reducing model size without affecting performance. The general point about the best-performing models in terms of perplexity is that they are slightly larger than both the best-performing energy loss and energy-only models, as can be seen in Table 7: more hidden layers, more attention heads, bigger feed-forward neural networks. Many of these parameters have a high standard deviation compared to their averages, such as the intermediate size and the number of hidden layers, meaning that there is room for reduction of energy consumption through lower training times. What is interesting to point out is that while the models are slightly bigger, both their average perplexity and energy consumption are vastly better than the worst models with regard to energy consumption. It is possible that our best-performing perplexity models can, due to a slimmer size, reach good performance more easily than the bigger models among the worst energy consumers. This energy consumption is still on average 2.4 kWh higher than our best-performing models in terms of energy consumption, which sit at 1 kWh, but with a vastly better perplexity. It suggests that energy loss, as a value for hyperopt to minimise, has been effective in finding compromises between performance and energy consumption. As can be seen in the corresponding correlation matrix for the best perplexity models, Figure 9, there is a negative correlation between energy consumption and perplexity of -0.42. This means that reducing one increases the other, which also suggests that hyperopt has found a compromise between the two. This matrix also shows a very high correlation between the number of hidden layers and both energy consumption and perplexity: 0.76 for energy consumption, and -0.49 for perplexity. Increasing the hidden layers will drastically increase energy consumption, but it will also lower perplexity a lot. As mentioned earlier, since this adds extra training time to the model, it automatically increases energy consumption.
This could possibly suggest that the number of hidden layers has a direct effect on how well a model performs.
Appearances (Table 8): relative_key_query 21, relative_key 2, absolute 0; GELU 11, GELU_new 5, ReLU 3, SiLU 4.
Interestingly, as can be observed in Table 8, the distribution over positional embeddings and activation functions resembles the distribution of the models sorted by energy loss much more than those sorted by energy consumption only, with some slight variations. They almost exclusively use the relative key query positional embedding and rely more on the GELU activation function. The primary assumption about the models that perform terribly in terms of perplexity is that they are big, and thus have not had enough training time to fully develop. When comparing Figure 1 to Table 9, there are a couple of models that follow this trend, with one taking a significant leap down in perplexity between epochs 8 and 9. As that model follows the trend of badly performing models that sit right on top of each other in terms of perplexity, it could be assumed that more epochs are what is needed for perplexity to drop. Looking at the general trend of the parameters, the hidden size, number of hidden layers, attention heads and intermediate size are all higher compared to the models that perform well in terms of perplexity.
Figure 10: A correlation matrix of the worst 15% of models with regard to perplexity.
When looking at Table 9, the parameters for the models in the worst 15% are definitely larger than those in the best 15% for perplexity in most aspects. Hidden size and layers are much increased, the same with the intermediate size. All dropout probabilities are also higher, which could be a reason why some of these models keep performing terribly - whatever they learn, they end up forgetting, thus making it harder to train a bigger model. Looking at Figure 10, the activation function has a negligible effect on both perplexity and power consumption in terms of its correlation, but the positional embedding type has a significant impact on perplexity, with a correlation of -0.43. Looking at the difference in these choices between the best and worst models in terms of perplexity, from Table 10 and Table 8, the better models all tend to use relative key query, whereas the distribution for the worse-performing models is more uniform over the three choices. As most of the best models in terms of energy consumption and energy loss primarily use relative key query, the results suggest that there is little to no reason to use another type.
E Clustering
In this section, the data is clustered in order to find commonalities among the different models and group them into the three clusters found. Those clusters are then described in detail, pointing out interesting characteristics of each one. The data were clustered using the DBSCAN algorithm, a density-based clustering algorithm designed for spatial clustering with handling of noise (M. Ester and Xu, 1996). Density-based clustering with DBSCAN does not handle varying axis ranges well, and given that energy consumption ranges over [0.529478; 12.986432] and perplexity over [15.14847702; 2021.91427], normalisation has to be done. The intent of this normalisation is to give both axes a mean of 0 and a standard deviation of 1. One consequence is that the distance used to specify clusters loses some of its intuitive meaning now that both axes have been normalised. In return, the distance can now properly be used to identify clusters over both metrics, rather than constraining perplexity to the range of energy consumption. This enables us to find more than just vertical clusters.
For the actual clustering, a distance of 0.4 was chosen with the requirement of 5 samples to form a cluster. The clustering shows the aforementioned outliers which ended at around 2000 perplexity, but also finds another cluster close to our primary cluster, as well as 20 outliers which could not be clustered, marked as black dots in Figure 11.
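A minimal sketch of this clustering step, assuming scikit-learn and one (energy, perplexity) row per model; the four example rows are illustrative, and the real run would use all 154 models:

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# one row per model: [energy consumption in kWh, perplexity]
X = np.array([[0.53, 15.1], [1.7, 27.2], [5.7, 1809.3], [13.0, 2021.9]])

X_scaled = StandardScaler().fit_transform(X)          # mean 0, std 1 per axis
labels = DBSCAN(eps=0.4, min_samples=5).fit_predict(X_scaled)
print(labels)  # -1 marks noise points, the black-dot outliers of Figure 11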
The exact distribution of the clusters can be seen in Table 11. It is interesting to note that had the minimum sample count been a little higher, cluster 3 would not have existed.
Number of Models
Cluster 1: 118
Cluster 2: 10
Cluster 3: 6
Outliers: 20
Given that these three clusters were found by our algorithm, it is natural to analyse them further by constructing correlation matrices for each cluster. The first cluster, which can be seen in Figure 12, is the cluster featuring all the best-performing models, and subsequently also the biggest cluster of the three. The trends for this cluster are similar to what has been seen previously - the correlation matrix indicates that the number of hidden layers and energy consumption have a strong correlation, and the same goes for hidden size to a much lesser degree. Furthermore, the hidden size has a negative correlation with perplexity, and the hidden dropout probability has a positive correlation with perplexity. This indicates that the higher the hidden size, the lower the perplexity, and the lower the hidden dropout probability, the lower the perplexity. Interestingly, energy consumption and perplexity have a negative correlation, but it is incredibly weak, and therefore not indicative of anything. Most of our Pareto entries are in this cluster, which contributes to the negative correlation, but given that there are 118 entries here, as seen in Table 11, it indicates that a predominant number of the models can still be optimised on both parameters.
When looking at the average parameters for the first cluster, they are fairly similar to those of the best 15% of models, as seen in Table 2 in section 4.2. Almost all parameters follow an increasing trend compared to that table, except for the actual hidden size, which remains slightly lower than among the best 15% of models. As can be seen visually in Figure 11, this cluster features predominantly low perplexity, at an average of 56.28, whereas the energy consumption is slightly higher, sitting at an average of 2.34 kWh. Furthermore, there is a heavy bias towards the position embedding type relative_key_query, which was also the case among our best 15% of models. The activation function remains slightly more spread out, with GELU and SiLU being the predominant activation functions, followed closely by ReLU. Among the best 15% of models, GELU was the predominant function, with SiLU at half the occurrences, as can be seen in Table 3.
Figure 11: The data clustered using DBSCAN with axes scaled and translated for a mean of 0 and a standard deviation of 1.
When looking at the second cluster in Figure 11, it can already be seen from the figure that there might be some linear tendencies in the models of this cluster. And as can be seen in Figure 13, energy consumption, perplexity, and energy loss all have a correlation close to 1, which indicates this to be the case. The correlation matrix also indicates linearity in the parameters, given that all parameters have very similar correlation ratios with energy consumption, perplexity, and energy loss.
Because of the previously explained linear tendency, a linear regression was done on the models in cluster 2, as can be seen in Figure 14. Extending beyond the scope of the cluster, it is possible that optimal models which decrease in both energy consumption and perplexity also lie on this line. Given that our hyperopt optimisation was done on a limited scope, it is not possible to verify whether this is the case, although the regression strongly suggests that there are more models along this line that have not been explored.
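The regression itself is an ordinary least-squares line fit; a sketch with illustrative cluster-2 data follows (the real fit used the 10 models of the cluster):

import numpy as np

cluster2 = np.array([[3.1, 410.0], [4.0, 520.0], [5.2, 655.0]])  # [energy, perplexity], illustrative
slope, intercept = np.polyfit(cluster2[:, 0], cluster2[:, 1], deg=1)

# Extrapolating the line below the cluster's energy range hints at where
# cheaper, better models might lie, outside the explored search space.
print(f"perplexity ~= {slope:.1f} * energy + {intercept:.1f}")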
The third cluster, the cluster with all of the detected high-perplexity models, can be seen in Figure 15. It features two strong correlations: between the number of attention heads and energy consumption, and between the hidden layer size and energy consumption. This indicates that the high number of attention heads and large hidden layer size strongly influence the increase in energy consumption. Looking at the specific models of this cluster, they have on average 9.5 attention heads and a hidden size of 620.5. Compared to the results in Table 2, the average number of attention heads increases by approximately 1.5 and the average hidden size by 350. There is also a strong correlation of 0.98 between position embedding type and perplexity, which strongly indicates that the position embedding type is tied to an increase in perplexity. Looking at the models in the cluster, there is a uniform distribution over the three position embedding types among the 6 models in the cluster. Referring back to Figure 10, when looking at the 15% worst models with regard to perplexity, the correlation is still there, but not nearly as strong. Furthermore, the same uniform distribution of position embedding types can be seen in Table 10, which indicates that there are too few models to draw conclusions about this correlation.
In Table 14 the 19 models situated on the Pareto curves through all 10 epochs are summarised, along with their frequencies of occurrence and the epochs at which they occurred on the Pareto curve. Furthermore, the energy consumption and perplexity results are shown for the models after the 10th epoch, regardless of whether the model was part of the Pareto curve at that point. It is also important to note that towards the end of the Pareto curve, the models most likely will not be the most effective models. This is for example the case with model number 103, which is part of the Pareto curve for the 10th epoch, but with a perplexity of 1274.87. Model number 103 will probably not be a model one would want to focus on (especially since it was trained over 10 epochs), but it can be used as a boundary to attempt to narrow down the search space, and thus still provides valuable information. Similarly, a model such as number 25 has a really strong perplexity score, but there are a number of models with similar perplexity scores and much lower energy consumption. An important note here is the possibility of training some of the low-cost models for even more epochs, to fairly compare their energy consumption versus perplexity with those of similar models. The reasoning is that if a low-cost model, such as model 103, were comparable in perplexity to some of the other models after, say, 20 or 30 epochs, its energy consumption might still be in a relatively comfortable spot compared to another model with an energy consumption similar to that of model 25. This argument rests on the linear cost of energy consumption per epoch: even after 30 epochs of training, assuming a continuously linear tendency for the following epochs, model number 103 would have an approximate energy consumption of 0.58 kWh (its consumption after 10 epochs) times 3 (to go from 10 to 30 epochs), i.e. 0.58 · 3 = 1.74 kWh.
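The extrapolation used above is a one-line helper; the per-epoch linearity of energy consumption is the text's assumption, not a guarantee:

def extrapolate_energy(kwh_at_10_epochs: float, target_epochs: int) -> float:
    """Linearly scale a 10-epoch energy figure to a different epoch count."""
    return kwh_at_10_epochs * (target_epochs / 10)

print(extrapolate_energy(0.58, 30))  # 1.74 kWh for model 103 at 30 epochs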
Figure 1: The Pareto curve for epoch 10. An animated visualisation of all 10 epochs can be found at this link.
Figure 2: Correlation matrix over hyperparameters for 10 epochs.
Figure 3: Correlation matrix over hyperparameters for the best 15% of models for 10 epochs.
Figure 4: Correlation matrix over hyperparameters for the worst 15% of models for 10 epochs.
Figure 5: Correlation matrix of the best 15% of models wrt. power consumption.
Figure 6: Correlation matrix of the most power-hungry 15% of models.
Figure 7: Correlation matrix over hyperparameters for 3 epochs of training.
Figure 8: Correlation matrix over hyperparameters for the best 15% of models for 3 epochs of training.
Figure 9: A correlation matrix of the best 15% of models with regard to perplexity.
Figure 12: The correlation matrix for the first cluster.
Figure 13: The correlation matrix for the second cluster.
Figure 14: Regression done on the models of cluster 2.
Figure 15: The correlation matrix for the third cluster.
Table 1: Hyperopt parameter search space.
Taking the product of cost and quality makes an assumption: that changes in either component have a linear impact. This could be regulated by adding a weight to both metrics in order to regulate the difference of change within the respective metric.
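A sketch of how such a weighted objective could sit on top of the hyperopt search is shown below. The weights, the truncated search space, and the train_and_measure helper are illustrative assumptions, not the project's actual code; PEP is taken here to be the perplexity-energy product, as the surrounding text implies.

from hyperopt import fmin, tpe, hp

space = {
    "num_hidden_layers": hp.choice("num_hidden_layers", list(range(1, 13))),
    "hidden_dropout_prob": hp.uniform("hidden_dropout_prob", 0.0, 0.7),
}

def objective(params, w_ppl=1.0, w_energy=1.0):
    # train_and_measure is a hypothetical helper returning (perplexity, kWh)
    ppl, energy = train_and_measure(params)
    return (ppl ** w_ppl) * (energy ** w_energy)   # weighted PEP-style loss

best = fmin(objective, space, algo=tpe.suggest, max_evals=154)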
                              Best 15% Mean   Best 15% Std. Dev   Worst 15% Mean   Worst 15% Std. Dev
vocab_size                    19458.52        5741.79             16072.78         9451.63
actual_hidden_size            273.17          101.14              670.52           468.80
num_hidden_layers             1.91            0.92                6.52             3.24
num_attention_heads           8.17            4.13                10.47            4.70
intermediate_size             716.17          531.30              1237.08          581.32
hidden_dropout_prob           0.18            0.06                0.52             0.24
attention_probs_dropout_prob  0.25            0.10                0.46             0.26
energy consumption            1.70            0.51                5.70             3.11
perplexity                    27.23           7.80                1809.32          501.77

Table 2: Mean and standard deviation of hyperparameters for the best 15% and worst 15% of models wrt. PEP value.
                     Best 15% Count   Worst 15% Count
relative_key_query   20               7
relative_key         0                8
absolute             3                8
GELU                 14               8
GELU_new             2                7
ReLU                 1                5
SiLU                 6                3

Table 3: Count of activation functions and position embedding types wrt. PEP value.
                              Best 15% Mean   Best 15% Std. Dev   Worst 15% Mean   Worst 15% Std. Dev
vocab_size                    21187.73        6426.71             18545.04         9141.88
actual_hidden_size            116.43          84.20               727.78           27.33
num_hidden_layers             1.52            0.77                7.82             3.26
num_attention_heads           8.08            5.27                12.78            4.48
intermediate_size             890.47          554.19              1149.65          739.19
hidden_dropout_prob           0.37            0.23                0.40             0.15
attention_probs_dropout_prob  0.28            0.14                0.47             0.28
energy consumption            0.99            0.17                6.18             1.93
perplexity                    338.51          576.74              1236.04          962.74

Table 4: Mean and standard deviation of hyperparameters for the best 15% and worst 15% of models wrt. energy consumption.

                     Best 15% Count   Worst 15% Count
relative_key_query   15               14
relative_key         4                6
absolute             4                3
GELU                 3                11
GELU_new             3                5
ReLU                 6                3
SiLU                 11               4

Table 5: Count of activation functions and position embedding types wrt. energy consumption.
Table 6: Energy consumption for this work.
Tadea Veng and Ole Andersen. 2020. Consolidating sea level acceleration estimates from satellite altimetry. Advances in Space Research.
Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: BERT for Finnish. arXiv preprint arXiv:1912.07076.
Moshe Wasserblat, Oren Pereg, and Peter Izsak. 2020. Exploring the boundaries of low-resource BERT distillation. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 35-40.
Wikipedia. 2021. Tesla Model 3. Page Version ID: 1024502085.
Table 7: Average hyperparameters of the best 15% of models wrt. perplexity.
Table 8: Count of activation functions and position embedding types in the best 15% of models with regard to perplexity.
Table 9: Average hyperparameters of the worst 15% of models wrt. perplexity.
Table 10: Count of activation functions and position embedding types in the worst 15% of models wrt. perplexity.
Table 11: The three cluster distributions and outliers for the DBSCAN clustering.
Table 12: Average hyperparameters of the top 15% of models in cluster 1.
Table 13: Count of activation functions and position embedding types for the first cluster.
Table 14: Pareto entries, featuring energy consumption and perplexity at 10 epochs, the number of occurrences in the Pareto curves of each epoch, and the specific epochs at which each model occurs on the Pareto curve.
Dario Amodei. 2018. AI and Compute. https://openai.com/blog/ai-and-compute/.
Lasse F. Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. 2020. Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models. arXiv:2007.03051 [cs, eess, stat].
Emily M. Bender. 2019. The #BenderRule: On naming the languages we study and why it matters. The Gradient, 14.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pages 610-623, New York, NY, USA. Association for Computing Machinery.
James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11, pages 2546-2554, Red Hook, NY, USA. Curran Associates Inc.
James Bergstra, Daniel Yamins, and David Cox. 2013. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In International Conference on Machine Learning, pages 115-123. PMLR.
Leon Derczynski. 2020. Power consumption variation over activation functions. arXiv preprint arXiv:2006.07237.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs].
Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, and Edward Choi. 2018. MorphNet: Fast & simple resource-constrained structure learning of deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1586-1595.
Zhiheng Huang, Davis Liang, Peng Xu, and Bing Xiang. 2020. Improve Transformer Models with Better Relative Position Embeddings. arXiv:2009.13658 [cs].
HuggingFace. 2021. Perplexity of fixed-length models.
Magnus Jacobsen, Mikkel H. Sørensen, and Leon Derczynski. 2021. Optimal size-performance tradeoffs: Weighing PoS tagger models. arXiv preprint arXiv:2104.07951.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. arXiv:2001.08361 [cs, stat].
Young Jin Kim, Marcin Junczys-Dowmunt, Hany Hassan, Alham Fikri Aji, Kenneth Heafield, Roman Grundkiewicz, and Nikolay Bogoychev. 2019. From research to production and back: Ludicrously fast neural machine translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 280-288.
Lasse. 2021. lfwa/carbontracker. Original-date: 2020-04-21T12:01:38Z.
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E. Gonzalez. 2020. Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers. arXiv:2002.11794 [cs].
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692 [cs].
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise.
Joel Mackenzie, Rodger Benham, Matthias Petri, Johanne R. Trippas, J. Shane Culpepper, and Alistair Moffat. 2020. CC-News-En: A large English news corpus. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 3077-3084.
Dominic Masters and Carlo Luschi. 2018. Revisiting Small Batch Training for Deep Neural Networks. arXiv:1804.07612 [cs, stat].
NVIDIA. 2021. DL Performance: Matrix Multiplication.
Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. arXiv:1907.10597 [cs, stat].
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and Policy Considerations for Deep Learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. arXiv:1706.03762 [cs].
11,064,772 | Room 820, MIT Artificial Intelligence Laboratory, Cambridge, MA 02139. ABSTRACT: Natural languages are often assumed to be constrained so that they are either easily learnable or parsable, but few studies have investigated the connection between these two "functional" demands. Without a formal model of parsability or learnability, it is difficult to determine which is more "dominant" in fixing the properties of natural languages. In this paper we show that if we adopt one precise model of "easy" parsability, namely, that of bounded context parsability, and a precise model of "easy" learnability, namely, that of degree 2 learnability, then we can show that certain families of grammars that meet the bounded context parsability condition will also be degree 2 learnable. Some implications of this result for learning in other subsystems of linguistic knowledge are suggested. | [] | Robert C. Berwick
BOUNDED CONTEXT PARSING AND EASY LEARNABILITY
Room 820, MIT Artificial Intelligence Laboratory, Cambridge, MA 02139. ABSTRACT: Natural languages are often assumed to be constrained so that they are either easily learnable or parsable, but few studies have investigated the connection between these two "functional" demands. Without a formal model of parsability or learnability, it is difficult to determine which is more "dominant" in fixing the properties of natural languages. In this paper we show that if we adopt one precise model of "easy" parsability, namely, that of bounded context parsability, and a precise model of "easy" learnability, namely, that of degree 2 learnability, then we can show that certain families of grammars that meet the bounded context parsability condition will also be degree 2 learnable. Some implications of this result for learning in other subsystems of linguistic knowledge are suggested.
I INTRODUCTION
Natural languages are usually assumed to be constrained so that they are both learnable and parsable. But how are these two functional demands related computationally?
With some exceptions,2 there has been little or no work connecting these two key constraints on natural languages, even though linguistic researchers conventionally assume that learnability somehow plays a dominant role in "shaping" language, while computationalists usually assume that efficient processability is dominant. Can these two functional demands be reconciled? There is in fact no a priori reason to believe that the demands of learnability and parsability are necessarily compatible. After all, learnability has to do with the scattering of possible grammars with respect to evidence input to a learning procedure. This is a property of a family of grammars. Efficient parsability, on the other hand, is a property of a single grammar. A family of grammars could be easily learnable but not easily parsable, or vice-versa. It is easy to provide examples of both sorts. For example, there are finite collections of grammars generating non-recursive languages that are easily learnable (just use a disjoint vocabulary as triggering evidence to distinguish among them). Yet by definition these languages cannot be easily parsable. On the other hand, as is well known, even the class of all finite languages plus the universal infinite language covering them all is not learnable from just positive evidence (Gold 1967). Yet each of these languages is finite state and hence efficiently analyzable.
1. This work has been carried out at the MIT Artificial Intelligence Laboratory. Support for the Laboratory's artificial intelligence research is provided in part by the Defense Advanced Research Projects Agency.
2. See Berwick 1980 for a sketch of the connections between learnability and parsability.
This paper establishes the first known results formally linking efficient parsability to efficient learnability. It connects a particular model of efficient parsing, namely, bounded context parsing with lookahead as developed by Marcus 1980, to a particular model of language acquisition, the Bounded Degree of Error (BDE) model of Wexler and Culicover 1980. The key result: bounded context parsability implies "easy" learnability. Here, "easily learnable" means "learnable from simple, positive (grammatical) sentences of bounded degree of embedding." In this case, then, the constraints required to guarantee easy parsability, as enforced by the bounded context constraint, are at least as strong as those required for easy learnability. This means that if we have a language and associated grammar that is known to be parsable by a Marcus-type machine, then we already know that it meets the constraints of bounded degree learning, as defined by Wexler and Culicover.
A number of extensions to the learnability-parsability connection are also suggested. One is to apply the result to other linguistic subsystems, notably, morphological and phonological rule systems. Although these subsystems are finite state, this does not automatically imply easy learnability, as Gold (1967) shows. In fact, identification is still computationally intractable - it is NP-hard (Gold 1978), taking an amount of evidence exponentially proportional to the number of states in the target finite state system. Since a given natural language could have a morphological system of a few hundred or even a few thousand states (Kimmo 1983, for Finnish), this is a serious problem. Thus we must find additional constraints to make natural morphological systems tractably learnable.
An analog of the bounded context model for morphological systems may suffice. If we require that such systems be k-reversible, as defined by Angluin (in press), then an efficient polynomial time induction algorithm exists.
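For reference, Angluin's k-reversibility condition can be stated as follows; this formulation is our paraphrase of her definition, not quoted from the present paper:

\[
\forall u_1, u_2, w \;\; \forall v \text{ with } |v| = k:\quad
u_1 v w \in L \;\wedge\; u_2 v w \in L
\;\Longrightarrow\;
\{\, z : u_1 v z \in L \,\} = \{\, z : u_2 v z \in L \,\}
\]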
To summarize, what is the importance of this result for computational linguistics?
o It shows for the first time that parsability is a stronger constraint than learnability, at least given this particular way of defining the comparison. Thus computationalists may have been right in focusing on efficient parsability as a metric for comparing theories.
o It provides an explicit criterion for learnability. This criterion can be tied to known grammar and language class results. For example, we can say that the language a^n b^n c^n will be easily learnable, since it is bounded context parsable (in an extended sense).
o It formally connects the Marcus model for parsing to a model of acquisition. It pinpoints the relationship of the Marcus parser to the LR(k) and bounded context parsing models.
o It suggests criteria for the learnability of phonological and morphological systems. In particular, the notion of k-reversibility, the analog of bounded context parsability for finite state systems, may play a key role here. The reversibility constraint thus lends learnability support to computational frameworks that propose "reversible" rules (such as that of Koskenniemi 1983) versus those that do not (such as standard generative approaches).
This paper is organized as follows. Section 1 reviews the basic definitions of the bounded context model for parsing and the bounded degree of error model for learning. Section 2 sketches the main result, leaving aside the details of certain lemmas. Section 3 extends the bounded context-bounded degree of error model to morphological and phonological systems, and advances the notion of k-reversibility as the analog of bounded context parsability for such finite state systems.
II BOUNDED CONTEXT PARSABILITY AND BOUNDED DEGREE OF ERROR LEARNING
To begin, we define the models of parsing and learning that will be used in the sequel. The parsing model is a variant of the Marcus parser. The learning theory is the Degree 2 theory of Wexler and Culicover (1980). The Marcus parser defines a class of languages (and associated grammars) that are easily parsable; Degree 2 theory, a class of languages (and associated grammars) that is easily learnable.
To begin our comparison, we must say what class of "easily learnable" languages Degree 2 theory defines. The aim of the theory is to define constraints such that a family of transformational grammars will be learnable from "simple" data; the learning procedure can get positive (grammatical) example sentences of depth of embedding of two or less (sentences with up to two embedded sentences, but no more). The key property of the transformational family that establishes learnability is dubbed Bounded Degree of Error. Roughly and intuitively, BDE is a property related to the "separability" of languages and grammars given simple data: if there is a way for the learner to tell that a currently hypothesized language (and grammar) is incorrect, then there must be some simple sentence that reveals this - all languages in the family must be separable by simple sentences.
The way the learner can tell that a currently hypothesized grammar is wrong, given some sample sentence, is by trying to see whether the current grammar can map from a deep structure for the sentence to the observed sample sentence. That is, we imagine the learner being fed a series of base (deep structure)-surface sentence (denoted "b, s") pairs. (See Wexler and Culicover 1980 for details and justification of this approach, as well as a weakening of the requirement that base structures be available; see Berwick 1980, Berwick 1982 for an independently developed computational version.) If the learner's current transformational component, T_L, can map from b to s, then all is well. If not, and T_L(b) = s' does not equal s, then a detectable error has been uncovered.
With this background we can provide a precise definition of the BDE property:
A family of transformationally-generated languages L possesses the BDE property iff for any base grammar B (for languages in L) there exists a finite integer U, such that for any possible adult transformational component A and learner component C, if A and C disagree on any phrase-marker b generated by B, then they disagree on some phrase-marker b' generated by B, with b' of degree at most U. (Wexler and Culicover 1980, page 108.)
If we substitute 2 for U in the theorem, we get the Degree 2 constraint.
Once BDE is established for some family of languages, convergence of a learning procedure is easy to prove. Wexler and Culicover 1980 have the details, but the key insight is that the number of possible errors is now bounded from above.
The BDE property can be defined in any grammatical framework, and this is what we shall do here. We retain the idea of mapping from some underlying "base" structure to the surface sentence. (If we are parsing, we must map from the surface sentence to this underlying structure.) The mapping is not necessarily transformational, however; for example, a set of context-free rules could carry it out. In this paper we assume that the mapping from surface sentences to underlying structures is carried out by a Marcus-type parser. The mapping from structure to sentence is then defined by the inverse of the operation of this machine. This fixes one possible target language. (The full version of this paper defines this mapping in full.)
Note further that the BDE property is defined not just with respect to possible adult target languages, but also with respect to the distribution of the learner's possible guesses. So for example, even if there were just ten target languages (defining 10 underlying grammars), the BDE property must hold with respect to those languages and any intervening learner languages (grammars). So we must also define a family of languages to be acquired. This is done in the next section.
BDE, then, is our criterial property for easy learnability. Just those families of grammars that possess the BDE property (with respect to a learner's guesses) are easily learnable. Now let us turn to bounded context parsability (BCP). The definition of BCP used here is an extension of the standard definition as in Aho and Ullman 1972, p. 427. Intuitively, a grammar is BCP if it is "backwards deterministic" given a radius of k tokens around every parsing decision. That is, it is possible to find deterministically the production that applied at a given step in a derivation by examining just a bounded number of tokens (fixed in advance) to the left and right at that point in the derivation.
Following Aho and Ullman, we have this definition for bounded right-context grammars:
G is bounded right-context if the following four conditions:
(1) $S \Rightarrow^{*} \alpha A w \Rightarrow \alpha \beta w$ and (2) $S \Rightarrow^{*} \gamma B x \Rightarrow \gamma \delta x = \alpha' \beta y$ are rightmost derivations in the grammar;
(3) the length of $x$ is less than or equal to the length of $y$; and (4) the last $m$ symbols of $\alpha$ and $\alpha'$ coincide, and the first $n$ symbols of $w$ and $y$ coincide; together these imply that $A = B$, $\alpha' = \gamma$, and $y = x$.
We will use the term "bounded context" instead of "bounded right-context." To extend the definition, we drop the requirement that the derivation is rightmost and instead use non-canonical derivation sequences as defined by Szymanski and Williams (1976). For example, in a sentence such as Have the students take the exam, the Marcus parser must delay analyzing have until the full NP the students is processed. Thus a canonical (rightmost) parse is not produced, and the lookahead for the parser includes the sequence NP-take, successfully distinguishing this parse from the NP-taken sequence for a yes-no question. This extension was first proposed by Knuth (1965) and developed by Szymanski and Williams (1976). In this model we can postpone a canonical rightmost derivation some fixed number of times t. This corresponds to building t complete subtrees and making these part of the lookahead before we return to the postponed analysis. The Marcus machine (and the model we adopt here) is not as general as an LR(k) type parser in one key respect. An LR(k) parser can use the entire left context in making its parsing decisions.
(It also uses a bounded right context, its lookahead.) The LR(k) machine can do this because the entire left context can be stored as a regular set in the finite control of the parsing machine (see Knuth 1965). That is, LR(k) parsers make use of an encoding of the left context in order to keep track of what to do. The Marcus machine is much more limited than this. Local parsing decisions are made by examining strictly literal contexts around the current locus of parsing. A finite state encoding of left context is not permitted.
The BCP class also makes sense as a proxy for "efficiently parsable" because all its members are analyzable in time linear in the length of their input sentences, at least if the associated grammars are context-free. If the grammars are not context-free, then BCP members are parsable in at worst quadratic (n squared) time. (See Szymanski and Williams 1976 for proofs of these results.)
III CONNECTING PARSABILITY AND LEARNABILITY
We can now at least formalize our problem of comparing learnability and parsability. The question now becomes: What is the relationship between the BDE property and the BCP property? Intuitively, a grammar is BCP if we can always tell which of two rules applied in a given bounded context. Also intuitively, a family of grammars is BDE if, given any two grammars in the family G and G′ with different rules R and R′ say, we can tell which rule is the correct one by looking at two derivations of bounded degree, with R applying in one and yielding surface string s, and R′ applying in the other yielding surface string s′, with s not equal to s′. This property must hold with respect to all possible adult and learner grammars. So a space of possible target grammars must be considered. The way we do this is by considering some fixed grammar G and possible variants of G formed by substituting the production rules in G with hypothesized alternatives.
The theorem we now want to prove is:
If the grammars formed by augmenting G with possible hypothesized grammar rules are BCP, then that family is also BDE.
The theorem is established by using the BCP property to directly construct a small-degree phrase marker that meets the BDE condition. We select two grammars G, G′ from the family of grammars. Both are BCP, by definition. By assumption, there is a detectable error that distinguishes G with rule R from G′ with rule R′. Let us say that rule R is of the form A → α; R′ is B → α′.
Since R′ determines a detectable error, there must be a derivation with a common sentential form Φ such that R applies to Φ and eventually derives sentence s, while R′ applies to Φ and eventually derives s′ different from s. The number of steps in the derivation of the two sentences may be arbitrary, however. What we must show is that there are two derivations bounded in advance by some constant that yield two different sentences.
The BCP conditions state that identical (m,n) contexts imply that A and B are equal. Taking the contrapositive, if A and B are unequal, then the (m,n) context must be nonidentical. This establishes that BCP implies (m,n) context error detectability.³ We are not yet done though. An (m,n) context detectable error could consist of terminal and nonterminal elements, not just terminals (words) as required by the detectable error condition. We must show that we can extend such a detectable error to a surface sentence detectable error with an underlying structure of bounded degree. An easy lemma establishes this.
If R′ is an (m,n) context detectable error, then R′ is bounded degree of error detectable.
The proof (by induction) is omitted; only a sketch will be given here. Intuitively, the reason is that we can extend any nonterminals in the error-detectable (m,n) context to some valid surface sentence and bound this derivation by some constant fixed in advance and depending only on the grammar. This is because unbounded derivations are possible only by the repetition of nonterminals via recursion: since there are only a finite number of distinct nonterminals, it is only via recursion that we can obtain a derivation chain that is arbitrarily deep. But, as is well known (compare the proof of the pumping lemma for context-free grammars), any such arbitrarily deep derivation producing a valid surface sentence also has an associated truncated derivation, bounded by a constant dependent on the grammar, that yields a valid sentence of the language. Thus we can convert any (m,n) context detectable error to a bounded degree of error sentence. This proves the basic result.
As an application, consider the strictly context-sensitive language aⁿbⁿcⁿ. This language has a grammar that is BCP in the extended sense (Szymanski and Williams 1976). The family of grammars obtained by replacing the rules of this BCP grammar by alternative rules that are also BCP (including the original grammar) meets the BDE condition.
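For concreteness, a standard context-sensitive grammar generating this language is the familiar textbook one sketched below; it is given only to make the example language concrete, and is not claimed to be the extended-BCP grammar constructed by Szymanski and Williams (1976):

% A standard context-sensitive grammar for {a^n b^n c^n : n >= 1}.
% Shown only to make the example language concrete; not the
% (extended) BCP grammar of Szymanski and Williams (1976).
\begin{align*}
S  &\rightarrow aSBC \mid aBC \\
CB &\rightarrow BC \\
aB &\rightarrow ab \\
bB &\rightarrow bb \\
bC &\rightarrow bc \\
cC &\rightarrow cc
\end{align*}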
This result was established independently by Wexler 1982.
IV EXTENSIONS OF THE BASIC RESULT
In the domain of syntax, we have seen that constraints ensuring efficient parsability also guarantee easy learnability. This result suggests an extension to other domains of linguistic knowledge. Consider morphological rule systems. Several recent models suggest finite state transducers as a way to pair lexical (surface) and underlying forms of words (Koskenniemi 1983; Kaplan and Kay 1983). While such systems may well be efficiently analyzable, it is not so well known that easy learnability does not follow directly from this adopted formalism. To learn even a finite state system one must examine all possible state-transition combinations. This is combinatorially explosive, as Gold 1978 proves. Without additional constraints, finite transducer induction is intractable.
What is needed is some way to localize errors: this is what the bounded degree of error condition does.
Is there an analog of the BCP condition for finite state systems that also implies easy learnability? The answer is yes. The essence of BCP is that derivations are backwards and forwards deterministic within local (m,n) contexts. But this is precisely the notion of k-reversibility, as defined by Angluin (in press). Angluin shows that k-reversible automata have polynomial time induction algorithms, in contrast to the result for general finite state automata. It then becomes important to see if k-reversibility holds for current theories of morphological rule systems. The full paper analyzes both "classical" generative theories (that do not seem to meet the test of reversibility) and recent transducer theories.
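For reference, one standard characterization of k-reversibility, paraphrased from Angluin's work (the wording below is an informal restatement, not a quotation), is:

% An informal restatement of one standard characterization of
% k-reversible regular languages, paraphrased from Angluin.
A regular language $L$ is $k$-reversible iff, whenever
$u_1 v w \in L$ and $u_2 v w \in L$ with $|v| = k$, then for every
string $w'$,
\[
  u_1 v w' \in L \;\Longleftrightarrow\; u_2 v w' \in L .
\]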
Since k-reversibility is a sufficient, but evidently not a necessary, constraint for learnability, there could be other conditions guaranteeing the learnability of finite state systems. One of these, the strict cycle condition in phonology, is also examined in the full paper. We show that the strict cycle also suffices to meet the BDE condition.
In short, it appears that, at least in terms of one framework in which a formal comparison can be made, the same constraints that force efficient parsability also ensure easy learnability.
² This model corresponds to Marcus's (1980) use of attention shifts to postpone parsing decisions until more right context is examined. The effect is to have a lookahead that can include nonterminal names like NP or VP. For example, in order to successfully parse "Have the students take the exam", the Marcus parser must delay analyzing "have" until the full NP "the students" is processed. Thus a canonical (rightmost) parse is not produced, and the lookahead for the parser includes the sequence NP-take, successfully distinguishing this parse from the NP-taken sequence for a yes-no question. This extension was first proposed by Knuth (1965) and developed by Szymanski and Williams (1976).
³ One of the other three BCP conditions could also be violated, but these are assumed to be satisfied. We assume the existence of derivations meeting conditions (1) and (2) in the extended sense, as well as condition (3).
Aho, A. and Ullman, J. 1972. The Theory of Parsing, Translation, and Compiling, vol. 1. Englewood Cliffs, NJ: Prentice-Hall.
Angluin, D. 1982. Induction of k-reversible languages. In press, JACM.
Berwick, R. 1980. Computational analogs of constraints on grammars. Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics.
Berwick, R. 1982. Locality Principles and the Acquisition of Syntactic Knowledge. PhD dissertation, MIT Department of Electrical Engineering and Computer Science.
Gold, E. 1967. Language identification in the limit. Information and Control, 10.
Gold, E. 1978. On the complexity of minimum inference of regular sets. Information and Control, 39, 337-350.
Kaplan, R. and Kay, M. 1983. Word recognition. Xerox Palo Alto Research Center.
Koskenniemi, K. 1983. Two-Level Morphology: A General Computational Model for Word Form Recognition and Production. PhD dissertation, University of Helsinki.
Knuth, D. 1965. On the translation of languages from left to right. Information and Control, 8.
Marcus, M. 1980. A Model of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.
Szymanski, T. and Williams, J. 1976. Noncanonical extensions of bottom-up parsing techniques. SIAM J. Computing, 5.
Wexler, K. 1982. Some issues in the formal theory of learnability. In C. Baker and J. McCarthy (eds.), The Logical Problem of Language Acquisition.
Wexler, K. and Culicover, P. 1980. Formal Principles of Language Acquisition. Cambridge, MA: MIT Press.
|
250,390,827 | MS@IW at SemEval-2022 Task 4: Patronising and Condescending Language Detection with Synthetically Generated Data | In this description paper we outline the system architecture submitted to Task 4, Subtask 1 at SemEval-2022. We leverage the generative power of state-of-the-art generative pretrained transformer models to increase training set size and remedy class imbalance issues. Our best submitted system is trained on a synthetically enhanced dataset with 10.3 times as many positive samples as the original dataset and reaches an F1 score of 50.62%, which is 10 percentage points higher than our initial system trained on an undersampled version of the original dataset. We explore possible reasons for the comparably low score in the overall task ranking and report on experiments conducted during the post-evaluation phase. | [
218974529,
211258652,
226976077,
250390607
] | MS@IW at SemEval-2022 Task 4: Patronising and Condescending Language Detection with Synthetically Generated Data
July 14-15, 2022
Selina Meyer
Chair for Information Science
University of Regensburg
Germany
Maximilian Schmidhuber maximilian.schmidhuber@stud.uni-regensburg.de
Chair for Information Science
University of Regensburg
Germany
Udo Kruschwitz udo.kruschwitz@ur.de
Chair for Information Science
University of Regensburg
Germany
MS@IW at SemEval-2022 Task 4: Patronising and Condescending Language Detection with Synthetically Generated Data
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
the 16th International Workshop on Semantic Evaluation (SemEval-2022), July 14-15, 2022
In this description paper we outline the system architecture submitted to Task 4, Subtask 1 at SemEval-2022. We leverage the generative power of state-of-the-art generative pretrained transformer models to increase training set size and remedy class imbalance issues. Our best submitted system is trained on a synthetically enhanced dataset with 10.3 times as many positive samples as the original dataset and reaches an F1 score of 50.62%, which is 10 percentage points higher than our initial system trained on an undersampled version of the original dataset. We explore possible reasons for the comparably low score in the overall task ranking and report on experiments conducted during the post-evaluation phase.
Introduction
Task 4 of SemEval-2022 focuses on the detection of patronising and condescending language (PCL) in news (Pérez-Almendros et al., 2022). PCL in popular media and news sources is detrimental to an emancipated and equal society, as it is usually targeted towards minorities and socially disadvantaged communities, often in an unsuccessful attempt to show solidarity (Perez Almendros et al., 2020). PCL has the potential to strengthen existing stereotypes by representing minorities either as passive entities to be pitied and supported, thus taking away their agency and focusing on their vulnerabilities or praising members of vulnerable groups for everyday achievements simply because of their background (Nolan and Mikami, 2013). In contrast to hate speech, PCL is usually subtle, well intentioned, and free of discriminatory phrases or racial slurs, which makes it an interesting Natural Language Processing (NLP) problem.
In other domains with more discriminatory classes such as hate speech detection, generative models have recently become increasingly popular and successful as a tool to increase classification performance (Wullach et al., 2021;Anaby-Tavor et al., 2020). In our contribution to the shared task, we explored to what extent this approach is feasible for the presented use case, where classification of a text sample is less distinct and often relies on world knowledge (Perez Almendros et al., 2020). The dataset provided for the task was fairly small, with less than 10% of the data belonging to the positive class. We thus enhanced the original dataset in two ways for our system runs:
• balancing the dataset by generating only PCL samples
• increasing overall dataset size, by generating an equal amount of PCL and non-patronizing (nPCL) samples
We generally followed the approach used by Wullach et al. (2021) and initially fine-tuned a BERT classifier on the original dataset. We then fine-tuned GPT-3 (Brown et al., 2020) and generated samples of PCL and nPCL which were classified using our fine-tuned system. Samples for which the BERT classification did not correspond to the intended output were discarded. We then fine-tuned a new BERT instance with the modified dataset PCLenhanced including the synthetic data. Although our system only ranked mid-field in the competition, both classifiers trained on the modified datasets improve on our initial classifier trained on the original dataset by multiple percentage points. We conclude that this approach does add value to classification, even in cases where the distinction between the positive and the negative class relies on subtleties. The code described in the following as well as the synthetic data used for the modification of the original dataset is available on GitHub 1 .
Text | Class
"Meanwhile throughout this island, the high level of suicide is terrible and terrifying. As Christians we can give hope, where a person feels only darkness and hopelessness," he said | PCL
As the house prices go up, so do rents, and the pohara poor families can't afford to live. Those who own houses, and are only just making it through, will be rated out of their homes | nPCL

Table 1: Examples of PCL and nPCL in the DPM.
Background
We participated in Subtask 1 of the competition, which entailed the binary classification of news paragraphs as either patronizing or not patronizing. Basis for the task was the Don't Patronize Me! dataset (DPM) (Perez Almendros et al., 2020), which contains 10,469 paragraphs of annotated data from 20 English news sources. While all paragraphs include references to potentially vulnerable groups, only 993 are examples of patronising speech. The dataset included meta-information about the country each paragraph was published in, an article id, a keyword indicating which vulnerable group is addressed, and a label ranging from 0 to 4, where 0 and 1 are treated as non-patronizing and 2 to 4 as patronizing. The task organizers define PCL as often unconscious, subtle and subjective ways in which the speaker conveys a superiority "concealed behind a friendly or compassionate approach towards the situation of vulnerable communities" (Perez Almendros et al., 2020). They explicitly exclude hate speech and discriminatory speech from PCL, making it harder to identify not only for NLP systems, but also for humans. We include examples of both classes in Table 1.
Transformer-based generative models such as GPT (Radford et al., 2018) and its successors have become prevalent in various NLP tasks. For instance, Liu et al. (2021a) explored the idea of synthetically constructing benchmark datasets to concur with existing benchmarks such as SQuAD, while Zhang et al. (2018) showed that a fine-tuned GPT model can accurately mimic the personal conversation style of an individual, leading to improvements in the Persona-Chat dataset.
Another increasingly popular use case is the generation of data for tasks with small labeled corpora to synthetically increase dataset size in order to train better performing classifiers. Dekker and van der Goot (2020) used synthetic data for lexical normalisation, while other researchers employed such data to train question answering models (Puri et al., 2020). Even in maths, researchers have proposed ways of creating synthetic theorems (Firoiu et al., 2021). Wullach et al. (2021) used GPT-2 (Radford et al., 2019) for their approach to hate speech detection. Their datasets were small to medium sized (6-53k labelled examples) and highly unbalanced, with as little as 1-6k hate speech samples per dataset. They created three mixed datasets containing 10k, 80k and 240k synthetic samples respectively, as well as 80% of the original datasets. The classification models trained on the largest created dataset outperformed those trained on the smaller datasets in most cases. Anaby-Tavor et al.
(2020) generated data using GPT and improved sentence-level topic classification on three datasets, ranging from 4.2k to 17k entries. Wullach et al. (2021) and Anaby-Tavor et al. (2020) fine-tuned the respective GPT models on relatively small datasets, and found statistically significant improvements in classifier performance from incorporating synthetic data in the datasets used for fine-tuning classifiers.
While GPT and GPT-2 were trained on 117M and 1.5B parameters respectively, GPT-3 models were trained on up to 175B parameters (Radford et al., 2018, 2019; Brown et al., 2020). As it has been shown that an increase in model size systematically leads to improvements in text synthesis as well as common downstream tasks (Brown et al., 2020), GPT-3 is likely to produce higher quality and more natural sounding data than its predecessors. We thus expect GPT-3 generated data to have an even greater impact on performance in intricate language classification tasks such as PCL detection. We know of only few other research teams which used GPT-3 in their experiments, for instance to search for more suitable prompts for Natural Language Understanding (NLU) tasks (Liu et al., 2021b) or using prompts for few-shot generation (Yoo et al., 2021). Both achieved strong results on classification benchmarks. While using foundation models for data generation has the potential to increase the power of language models and mitigate the data scarcity problem prevalent in many NLP fields (Budzianowski and Vulić, 2019), this also bears potential risks not yet fully explored. For instance, past research showed that GPT-3 is biased in some cases, and that its defects are inherited by downstream models (Bommasani et al., 2021). Similarly, Bender et al. (2021) note that the widespread application of foundation models carries a cost, both monetary and ethical. Thus, this approach's ethical implications should be investigated more thoroughly in future work.
System Overview
To generate the synthetic data, we used GPT-3's Curie model. Curie has about 13B parameters, while Davinci has about 175B. Although Davinci performs significantly better on a number of NLP tasks than Curie, we chose Curie, as it is more financially viable than the larger model, while retaining a comparatively strong performance (Brown et al., 2020). For fine-tuning, we split the dataset into PCL and nPCL data and modified it to meet the API's requirements. As the API requires a prompt-completion pairing, the prompt was set to be empty ("") and the completion contained the data sample. Afterwards, two GPT-3 Curie instances were fine-tuned on the PCL and nPCL data, respectively. We thus created two models, one to generate PCL and one for nPCL phrases. Following Wullach et al. (2021), we called the models with an empty ("") prompt in the pipeline for synthetic data generation and the default parameters. We set max_tokens to the rounded mean length of the samples in the original dataset (60 for PCL and 54 for nPCL). With each iteration, we generated the maximum number of samples (128), resulting in a total of 24,321 synthetic phrases from the nPCL model and 24,197 from the PCL model.
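For illustration, the generation pipeline can be sketched roughly as follows with the legacy (pre-1.0) openai Python client; the file name, the variable pcl_samples, and the exact call sequence are our assumptions, not the authors' published code:

import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# 1) Write the PCL samples as prompt-completion pairs with an empty prompt.
with open("pcl_train.jsonl", "w") as f:
    for text in pcl_samples:  # assumed list of PCL paragraphs from the DPM
        f.write(json.dumps({"prompt": "", "completion": " " + text}) + "\n")

# 2) Fine-tune one Curie instance per class on the corresponding data.
train_file = openai.File.create(file=open("pcl_train.jsonl", "rb"),
                                purpose="fine-tune")
job = openai.FineTune.create(training_file=train_file.id, model="curie")

# 3) After the job finishes, sample with an empty prompt; max_tokens is the
#    rounded mean sample length (60 for PCL) and n=128 samples per call.
response = openai.Completion.create(
    model=job.fine_tuned_model,  # populated once fine-tuning completes
    prompt="",
    max_tokens=60,
    n=128,
)
generated = [choice.text.strip() for choice in response.choices]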
Like Wullach et al. (2021), we classified all synthetic samples after generating the data. We used an initial baseline classifier and discarded all samples where the intended and predicted class did not match. Due to the high class imbalance of the original dataset, we randomly undersampled the negative class to the size of the positive samples for training of the baseline classifier. We fine-tuned BERT-base-cased (Devlin et al., 2018) across three epochs using a learning rate of 1e-5 on the undersampled dataset. Since the synthetically generated data consisted solely of text for each label, we did not use any of the meta-information or context provided in the dataset and fine-tuned solely on text and labels. In the future, it might be useful to take meta-information into account for text generation.
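A minimal sketch of this baseline fine-tuning setup (BERT-base-cased, three epochs, learning rate 1e-5) is shown below; the paper does not name a training framework, so the use of Hugging Face transformers here, along with train_texts and train_labels from the undersampled DPM, is an assumption:

import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2)

class PCLDataset(torch.utils.data.Dataset):
    # Wraps (text, label) pairs; texts/labels are assumed to come from
    # the undersampled DPM, prepared elsewhere.
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="baseline",
                         num_train_epochs=3,
                         learning_rate=1e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=PCLDataset(train_texts, train_labels))
trainer.train()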
39% of the generated PCL samples were classified as such by the baseline classifier, whereas 85.5% of generated nPCL samples were classified as nPCL. We explain this with the much larger sample size of nPCL in the DPM allowing the GPT-3 pipeline to generate better suited data. Based on the predictions, we created two enhanced datasets: For DPM_enhanced, we added a similar amount of synthetic PCL (9448) and nPCL (9357) samples to the DPM. For DPM_enhancedPos, we added 7086 PCL samples to balance the original dataset. For a comparison of sample sizes and share of PCL in the DPM and the different datasets used for fine-tuning see Table 2. On each of the enhanced datasets, we trained a BERT-base-cased instance the same way as our initial classifier. We submitted the classifier trained on DPM_enhancedPos for our first and the classifier trained on DPM_enhanced for the second run.
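The filtering and dataset construction steps can be sketched as follows; predict_label is a hypothetical helper wrapping the baseline classifier above, and the sample counts are taken from the paper:

def filter_matching(samples, intended_label):
    # Keep only generated samples whose predicted class matches the
    # intended class (predict_label is a hypothetical wrapper around
    # the baseline classifier above).
    return [s for s in samples if predict_label(s) == intended_label]

pcl_kept = filter_matching(generated_pcl, 1)    # ~39% survive
npcl_kept = filter_matching(generated_npcl, 0)  # ~85.5% survive

# dpm_original is assumed to be a list of (text, label) pairs.
dpm_enhanced = (dpm_original
                + [(t, 1) for t in pcl_kept[:9448]]
                + [(t, 0) for t in npcl_kept[:9357]])
dpm_enhanced_pos = dpm_original + [(t, 1) for t in pcl_kept[:7086]]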
Intended | Text | Pred
coherent samples
PCL | so gao becomes emotional as he reflects on the thousands of homeless children he has come across during his decades long football career - most of them growing up without a father figure in their lives. "the kids today may be our future But there is no future for the kids today if we don't have | PCL
PCL | English and humanities teacher blowsy dilworth decided some kids in her georgian village needed more than a pack of cards to play baseball with - they needed an ancestral field. so she quested across the border to find some land for her students, and this week she opened a playfield on that | nPCL

Table 3: Examples of patronizing and non-patronizing generated data and its classification with the baseline classifier. Samples where intention and prediction matched were used for DPM_enhanced and DPM_enhancedPos, regardless of whether they are coherent or not. All synthetically generated data is available on GitHub.
Results and Discussion
The evaluation metric used for ranking in the task was F1 over the positive class. Our baseline classifier reached an F1-score of 40.74% on the test set provided by the task organisers after the end of the competition's evaluation phase. Although it had a high recall of over 80%, precision was very low, leading to a suboptimal F1-score. The classifier trained on DPM_enhanced scored almost 10% higher than the initial classifier, but had neither the highest recall, nor the highest precision of the three classifiers trained before the post-evaluation phase. This was surprising, as we initially expected the classifier trained on DPM_enhancedPos, which was the larger balanced dataset out of the three, to perform best. This leads to the assumption that with synthetic data, sheer amount might be more important than balancing out the dataset. Although in the official task scoring, our system trained on DPM_enhanced ranked in place 41 of 78 and surpassed the official baseline (fine-tuned RoBERTa) by only about 1%, we note that using both synthetically enhanced datasets led to a boost in performance compared to our initial classifier. This might seem surprising, especially considering the low performance of the initial classifier used to filter the GPT-generated data. In the post-evaluation phase, we repeated the experiments from our two system runs without previous filtering of the GPT output, to explore the role of the initial classifier in our system's performance. Neither DPM_enhancedUnfiltered nor DPM_enhancedPosUnfiltered led to better performance than DPM_enhanced. Thus, using a baseline classifier for filtering seems to be the most sensible option when working with synthetic data, regardless of its performance strength. We report on detailed classification results in Table 2. Since our baseline system did not perform very well in terms of classification, future work should first and foremost focus on improving it. The baseline system forms the basis of our approach and classification errors at this stage are likely to significantly lower the usefulness of the synthetic data.
We also looked at some of the synthetic data generated by GPT-3. Both for PCL and for nPCL, the generated samples were not always coherent on a semantic level and the occurrence of incoherent text appeared to be more common in the nPCL condition. However, it seems like coherence did not impact classification, as in both cases incoherent synthetic samples could be found in the final dataset (see Table 3).
We also found a lot of text in languages other than English, possibly because of the small size of the dataset in comparison to the vast amount of training data used to create GPT-3. We expect that filtering out such samples would increase performance further. In addition, basic data cleaning of the synthetic data before classification might be in order. Both of these could potentially be achieved by only using data samples for which a confidence score above a certain percentage (e.g. 70%) is returned in classification. Another approach might be using an unrelated dataset to filter out all synthetic data unrelated to the task at hand. In the context of PCL detection, this could help discard generated data that is not related to vulnerable groups.
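The proposed confidence-based filtering could look roughly like the sketch below, reusing the tokenizer and model from the baseline sketch; the 0.7 threshold mirrors the example in the text:

import torch
import torch.nn.functional as F

def keep_sample(text, intended_label, threshold=0.7):
    # Keep a synthetic sample only if the classifier assigns its intended
    # class a probability of at least `threshold`.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = F.softmax(model(**inputs).logits, dim=-1)[0]
    return probs[intended_label].item() >= threshold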
The approach of using an empty prompt ("") while fine-tuning the models is debatable, because the prompt is such a powerful tool (Yoo et al., 2021; Liu et al., 2021b) and should probably be utilized. A possible approach would be to train a single model on both PCL and nPCL data, and put PCL/nPCL information in each sample's prompt. The currently unused meta-information of the dataset could also be incorporated, possibly causing additional improvements in the quality of the generated data.
Conclusion
We described our system submitted to Task 4, Subtask 1 of SemEval-2022. Although the system's performance did not score highly on the overall leaderboard, ranking 41st place, incorporating synthetic data in the original training set still boosted performance by up to 10% compared to our initial baseline system, which leads to the assumption that pairing this approach with more sophisticated classification systems has some potential to increase classification performance significantly. We derive some lessons learned from the presented experiments as follows:
• Using a baseline classifier to filter the synthetic data after generation seems to be essential.
• The size of the additional data seems to be more important to increase performance than balancing the data.
• Further data cleaning and filtering might be necessary to improve classification performance.
• Synthetic data leads to better performance, even if it includes a lot of incoherent samples and the baseline classifier has low performance.
In the future, we plan to improve the baseline classifier and explore different data cleaning and filtering techniques, such as using confidence scores returned by the classifier for our data selection, using unrelated datasets to filter whether a data sample fits in the task-specific domain or making use of prompts during GPT-3 fine-tuning and data generation. Exploring other augmentation strategies such as back-translation or synonym replacement of either the original data or the generated samples might further increase classification performance.
Table 3 (continued):
Intended | Text | Pred
nPCL | understandably , many sri lankans look at india with wariness, if not hostility. foster father pair of us destroyed an Eldorado of a country. thousands of families were made homeless and live on the streets today. on november medium | PCL
nPCL | africa has the largest block of 2017 retirements sufferance among all regions , with recent precedent of expenses course and after-inheritance taxed deaths , show disclosures by top investment funds in the united states . on the whole , fund seniors are think about leaving equities | nPCL
incoherent samples
PCL | Subject : Crying Monkey Fortunetelling video 1 'sunday 's focus is on a widower , otis reigns , who recites a fortune to his 11 children while they weep , a performance that has attracted millions of views online . producer and director rebeca Ramirez says she | PCL
PCL | Crazy Horse 3 is aNATIVEpi agt sanctioned 51 majorityhare partnership firm jointly owned and managed by a group of indian stipendiaries and based in vancouver , b. c. agt Crazy Horse 3 is an eyaculofemoral orifice created for the purpose of | nPCL
nPCL | policy to homeseekers , students and the vulnerable......... transparency and public control of thebiologist!!!!!!!!!! | nPCL
nPCL | seems like coast is in need of some life. you could say that again about their women's Water Polo team. the t Vernons Wyr Kangas athletes recent 4ANPerformers cabinet hardwood men's schools100 result in need of some inspiring coast women | PCL
Table 2: Overview of the datasets used for fine-tuning as compared to the original dataset and test classification metrics.
https://github.com/khaliso/MS-IW-at-SemEval-2022-Task-4
Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2020. Do not have enough data? Deep learning to the rescue! In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7383-7390.
Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610-623.
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.
Paweł Budzianowski and Ivan Vulić. 2019. Hello, it's GPT-2 - how can I help you? Towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15-22.
Kelly Dekker and Rob van der Goot. 2020. Synthetic data for English lexical normalization: How close can we get to manually annotated data? In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6300-6309.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Vlad Firoiu, Eser Aygun, Ankit Anand, Zafarali Ahmed, Xavier Glorot, Laurent Orseau, Lei Zhang, Doina Precup, and Shibl Mourad. 2021. Training a first-order theorem prover from synthetic data. arXiv preprint arXiv:2103.03798.
Nelson F Liu, Tony Lee, Robin Jia, and Percy Liang. 2021a. Can small and synthetic benchmarks drive modeling innovation? A retrospective study of question answering modeling approaches. arXiv preprint arXiv:2102.01065.
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021b. GPT understands, too. arXiv preprint arXiv:2103.10385.
David Nolan and Akina Mikami. 2013. 'The things that we have to do': Ethics and instrumentality in humanitarian communication. Global Media and Communication, 9(1):53-70.
Carla Perez Almendros, Luis Espinosa Anke, and Steven Schockaert. 2020. Don't patronize me! An annotated dataset with patronizing and condescending language towards vulnerable communities. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5891-5902, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Carla Pérez-Almendros, Luis Espinosa-Anke, and Steven Schockaert. 2022. SemEval-2022 Task 4: Patronizing and Condescending Language Detection. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022). Association for Computational Linguistics.
Raul Puri, Ryan Spring, Mohammad Shoeybi, Mostofa Patwary, and Bryan Catanzaro. 2020. Training question answering models from synthetic data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5811-5826.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI blog.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Tomer Wullach, Amir Adler, and Einat Minkov. 2021. Fight fire with fire: Fine-tuning hate detectors using large samples of generated hate speech. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4699-4705.
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, and Woomyoung Park. 2021. GPT3Mix: Leveraging large-scale language models for text augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2225-2239.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213.
977,897 | Dialogue Act Modeling for Non-Visual Web Access | Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers -the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented. | [
1671874,
9576357,
29816847,
215825908,
18609534
] | Dialogue Act Modeling for Non-Visual Web Access
Association for Computational Linguistics. Copyright Association for Computational Linguistics. 18-20 June 2014.
Vikas Ashok
Yevgen Borodin borodin@charmtechlabs.com
Svetlana Stoyanchev sstoyanchev@cs.columbia.edu
I V Ramakrishnan
Dept of Computer Science
Charmtech Labs LLC CEWIT SBU R & D Park Stony Brook
Stony Brook University Stony Brook
New York, New York
AT&T Labs Research New York City, New York (While at Columbia University)
Charmtech Labs LLC CEWIT SBU R & D Park Stony Brook
New York
Dialogue Act Modeling for Non-Visual Web Access
Proceedings of the SIGDIAL 2014 Conference
the SIGDIAL 2014 Conference, Philadelphia, U.S.A., Association for Computational Linguistics, 18-20 June 2014.
Speech-enabled dialogue systems have the potential to enhance the ease with which blind individuals can interact with the Web beyond what is possible with screen readers -the currently available assistive technology which narrates the textual content on the screen and provides shortcuts to navigate the content. In this paper, we present a dialogue act model towards developing a speech enabled browsing system. The model is based on the corpus data that was collected in a wizard-of-oz study with 24 blind individuals who were assigned a gamut of browsing tasks. The development of the model included extensive experiments with assorted feature sets and classifiers; the outcomes of the experiments and the analysis of the results are presented.
Introduction
The Web is the "go-to" computing infrastructure for participating in our fast-paced digital society. It has the potential to provide an even greater benefit to blind people who once required human assistance with many of their activities. According to the American Federation for the Blind, there are 21.5 million Americans who have vision loss, of whom 1.5 million are computer users (AFB, 2013).
Blind users employ screen readers as the assistive technology to interact with digital content (e.g., JAWS (Freedom-Scientific, 2014) and VoiceOver (Apple-Inc., 2013)). Screen readers serially narrate the content of the screen using text-to-speech engines and enable users to navigate in the content using keyboard shortcuts and touchscreen gestures.
Navigating content-rich web pages and conducting online transactions spanning multiple pages requires using shortcuts, and this can get quite cumbersome and tedious. Specifically, in online shopping a user typically browses through product categories, searches for products, adds products to cart, logs into his/her account, and finally makes a payment. All these steps require screen-reader users to listen through a lot of content, fill forms, and find links and buttons that have to be selected to get through these steps. If users do not want to go through all content on the page, they have to remember and use a number of different shortcuts. Beginner users often use the "Down" key to go through the page line by line, listening to all content on the way (Borodin et al., 2010). Now suppose that blind users were to tell the web browser what they wanted to accomplish and let the browsing application automatically determine what has to be clicked, fill out forms, help find products, answer questions, breeze through checkout, and wherever possible, relieve the user from doing all the mundane and tedious low-level operations such as clicking, typing, etc. The ability to carry out a dialogue with the web browser at a higher level has the potential to overcome the limitations of shortcut-based screen reading and thus offers a richer and more productive user experience for blind people.
The first step toward building a dialogue-based system is the understanding of what users could say and dialogue act modeling. Although dialogue act modeling is a well-researched topic (with details provided in related work, Section 2), it has remained unexplored in the context of web accessibility for blind people. Commercial speech-based applications have been around for a while and new ones continue to emerge at a rapid pace; however, these are mainly stand-alone (e.g., Apple's Siri) domain-specific systems that are not connected to web browsers, which precludes dialogue-based interaction with the Web. Current spoken input modules integrated with web browsers are limited to certain specific functionalities such as search (e.g., Google's voice search) or are used as a measure of last resort (e.g., Siri searching for terms online).
In this paper, we made a principal step towards building a dialogue-based assistive web browsing system for blind people; specifically, we built a dialogue act model for non-visual access to the Web. The contributions of this paper include: 1) a unique dialogue corpus for non-visual web access, collected during the wizard-of-oz user study conducted with 24 blind participants (Section 3); 2) the design of a suitable dialogue act scheme (Section 3); 3) experimentation with classifiers capable of identifying the dialogue acts associated with utterances based on combinations of lexical/syntactic, contextual, and task-related feature sets (Section 4); 4) investigation of the importance of each feature set with respect to classification performance to assess whether simple lexical/syntactic features are sufficient for obtaining an acceptable performance (Section 5).
Related Work
While previous research addressed spoken dialogue interfaces for domain-specific websites, such as news or movie search (Ferreras and Cardeñoso-Payo, 2005; Wang et al., 2014), a dialogue interface to generic websites is a novel task. Spoken dialogue systems (SDS) can be classified by the type of initiative: system, user, or mixed initiative (Lee et al., 2010). In a system-initiative SDS, a system guides a user through a series of information gathering and information presenting prompts. In a user-initiative system, a user can initiate and steer the interaction. Mixed-initiative systems allow both system and user-initiated actions.
Dialogue systems also differ in the types of dialogue manager: finite state based, form based, or agent based (Lee et al., 2010), (Chotimongkol, 2008). Finite state and form filling systems are usually system-initiative. These systems have a fixed set of dialogue states and finite set of possible user commands that map to system actions. In contrast, a speech-enabled browsing system proposed in this work is an agent-based system. The set of actions of this system correspond to user actions during web browsing. The domain of possible user commands at each point of the dialogue depends on the current web page that is viewed by a user. The dialogue state in a voice browsing system is compiled at run-time as the user can visit any web page.
While a user's dialogue acts in a form-based or finite state system depend primarily on the dialogue state, in an agent-based system with user initiative, the space of the user's dialogue acts at each dialogue state is open. To determine the dialogue manager's action, it is essential for the system to identify the user's intent or dialogue act. In this work, we address dialogue act modelling for open-domain voice web browsing as a proof of concept for the system.
Dialogue act (DA) annotation schemes for spoken dialogue systems follow theories on speech acts originally developed by Searle (1975). A number of DA annotation schemes have been developed previously (Core and Allen, 1997), (Carletta et al., 1997). Several dialogue tagging schemes strive to provide domain-independence (Core and Allen, 1997), (Bunt, 2011). Bunt (2011) developed a NIST standardized domain-independent annotation scheme which incorporates elements from the previously developed annotation schemes. It is a hierarchical multi-dimensional annotation scheme. Each functional segment (a part of an utterance corresponding to a DA) can have a general purpose function, such as Inform, Propositional Question, Yes/No Question, and a dimension-specific function in any number of 10 defined dimensions, such as Task, Feedback, or Time management.
In the analysis of human-computer dialogues, it is common to adopt DA annotation schemes to suit specific domains. Generic domain-independent schemes are geared towards the analysis of natural human-human dialogue and provide a rich annotation structure that can cover the complexity of natural dialogue. Domain-specific dialogues use a subset of the generic dialogue structure. For example, Ohtake et al. (2009) developed a DA scheme for the tourist-guide domain motivated by a generic annotation scheme (Ohtake et al., 2010), and Bangalore and Stent (2009) created a dialogue scheme for a catalogue product ordering dialogue system. In our work we design a DA scheme for the web-browsing domain motivated by the DAMSL (Core and Allen, 1997) schema for task-oriented dialogue.
We used a Wizard-of-Oz (WOZ) approach to collect an initial dataset of spoken voice commands by both blind and sighted users. WOZ is commonly used before building a dialogue system (Chotimongkol, 2008), (Ohtake et al., 2009), (Eskenazi et al., 1999).
In previous work on dialogue modelling, Stolcke et al. (2000) used an HMM approach to predict dialogue acts in the Switchboard human-human dialogue corpus, achieving 65% accuracy. Rangarajan Sridhar et al. (2009) applied a maximum entropy classifier on the Switchboard corpus. Using a combination of lexical, syntactic, and prosodic features, the authors achieve accuracy of 72% on that corpus. Following the work of Rangarajan Sridhar et al. (2009), we use a supervised classification approach to determine dialogue acts on the annotated corpus of human-wizard web-browsing dialogues.
Corpus and Annotation
In this section, we describe the corpus and the associated dialogue act scheme. The corpus was collected using a WOZ user study with 24 blind participants. Exactly 50% of the participants indicated that they were very comfortable with screen readers, while the remaining 50% said they were not comfortable with computers. We will refer to them as "experts" and "beginners" respectively.
The study required each participant to complete a set of typical web browsing tasks (shopping, sending an email, booking a flight, reserving a hotel room, searching for a job and applying for university admission) using unrestricted speech commands ranging from simple commands such as "click the search button", to complex commands such as "buy this product". Unknown to the participants, these commands were executed by a wizard and appropriate responses were narrated using a screen reader. The dialogs were effective; almost every participant was able to complete each assigned task by engaging in a dialogue with the wizarded interface.
As shown in Table 1, the corpus consists of a total of 96 dialogs collected during the execution of 6 tasks and captures approximately 22 hours of speech with a total of 792 user utterances and 774 system utterances. There is exactly 1 dialogue per task for any given participant. Each user turn consists of a single command that is usually a simple sentence or phrase. Each system turn is either narration of webpage content or information request for the purpose of either form filling or disambiguation. Therefore, each dialogue turn was treated as a single utterance and every utterance was identified with a single associated dialogue act.
The corpus was manually annotated with dialogue act labels and the labeling scheme was verified by measuring the inter-annotator agreement. The rest of this section describes the annotation scheme.
Dialogue Act Annotation
The dialogue act annotation scheme was inspired by the DAMSL scheme (Core and Allen, 1997) for task oriented dialogue. The proposed scheme was also influenced by the extended DAMSL tagset (Stolcke et al., 2000) and the DIT++ annotation scheme (Bunt, 2011). We customized the annotation scheme to suit the non-visual web access domain, thereby making it more relevant to our corpus and tasks. Table 2 lists the dialogue acts for both user and system utterances. The user dialogue act tagset consists of labels representing task related requests (Command-Intention, Command-Task, Command-Multiple, Command-Navigation), inquiries (Question-Task, Help-Task) and information input (Information-Task), whereas the system DA tagset contains labels representing information requests (Prompt), answers to user inquiries (Question-Answer, Help-Response) and other system responses (Short-Response, Long-Response, etc.) to user commands.
Inter-rater agreement values for different tasks in the corpus are presented in Table 3. The κ values for all tasks are above 0.80, which, according to Fleiss' guidelines (Fleiss, 1973), indicates excellent inter-rater reliability on the DA annotation. Therefore, the DA tagset is generic enough to be applicable to a wide variety of tasks that can be performed on the web. Note that the dialogue act scheme was specially designed for non-visual web access.

Table 3: Inter-rater agreement measured in terms of Cohen's κ for all tasks in the corpus.
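As an illustration of the agreement computation behind Table 3 (a toy example with made-up labels, not the study data):

from sklearn.metrics import cohen_kappa_score

# Two annotators' DA labels for the same utterances (toy data).
annotator_a = ["Prompt", "Command-Task", "Information-Task", "Prompt"]
annotator_b = ["Prompt", "Command-Task", "Command-Task", "Prompt"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.2f}")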
Features
This section describes the different feature sets that we experimented with for our classification tasks. The vector representation for training the DA classifiers integrates several types of features (Table 4): unigrams (U) and syntactic features (S), context related features (C), task related features (T), presence of words anywhere in an utterance (P), and presence of words at the beginning of an utterance (B). The last two feature sets are similar to the ones used in Boyer et al. (2010).
The feature sets C, P, B and S are specific to the domain of non-visual web access and were hand-crafted based on the following three factors: knowledge of the browsing behavior of blind users reported in previous studies, e.g. (Borodin et al., 2010); manual analysis of the corpus; and mitigation of the effect of noise that is usually present in standard lexical/syntactic feature sets such as n-grams and parse tree rules. Each of the features in C, P, B and S was crafted to have a close correspondence to some dialogue act. For example, p_nav is closely tied to the Command-Navigation dialogue act.
Unigrams
Unigrams (U in Table 4) are one of the commonly used lexical features for training dialogue act classifiers (e.g. (Boyer et al., 2010), (Stolcke et al., 2000), (Rangarajan Sridhar et al., 2009)). Encoding unigrams as features is based on the observation that some words appear more frequently in certain dialogue acts compared to other dialogue acts. For example, approximately 73% of occurrences of "want" appear in the Command-Intention DA, 100% of "skip" in the Command-Navigation DA, and approximately 92% of "select" in the Command-Task DA. Word-DA correlations can also be automatically identified using SVM classifiers trained on unigram features.
CONTEXT RELATED FEATURES (C)
c_first | The utterance is the first command to be issued when a new website is loaded in the browser | binary: Y
c_previous | Dialogue act of the immediately preceding system utterance | binary: N
POSITION OF WORDS IN COMMANDS (B)
b_nav | The utterance begins with word(s) related to cursor movement, e.g. "go to", "continue" | binary: Y
b_question | The utterance begins with a word usually associated with a question, e.g. what, when, where, why | binary: Y
b_i | The utterance begins with the personal pronoun "I" | binary: Y
b_helpq | The utterance begins with word(s) usually associated with help requests, e.g. how, "am I" | binary: Y
TASK RELATED FEATURES (T)
t_name | Name of the task associated with the utterance | binary: N

Table 4: Feature set for user dialogue act classification. The complete list of words associated with each feature in P and B is provided in Appendix A.
Table 5 presents a few such correlations. Note that some of the words in Table 5 are task-specific (noise), a consequence of using a small dataset.
Presence of Words in Commands
In contrast to unigram features, which take into account all possible word-DA correlations, the presence-of-word features (P in Table 4) are limited to certain specific words that have strong correlations with the DA types. For each feature p ∈ P, if any of the specific words associated with p occurs in an utterance, then p is set to true. The set of words for every p that corresponds to some dialogue act d was constructed by determining the discriminatory words for d using simple statistical analysis of the corpus (e.g. relative frequencies of words), as well as by an examination of the weights of different words learned by the SVM classifier trained on a development dataset using unigram features alone. For example, the words continue and skip occur much more frequently in Command-Navigation than in other dialogue acts (see Table 5) and hence are included in p nav. Note, however, that not all discriminatory words in Table 5 were used: only generic words, independent of any specific task, were selected (see Appendix A for details).
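In implementation terms, each presence-of-word feature reduces to a membership test against its word list from Appendix A (Table 10); a minimal sketch for p nav and p question follows, with multi-word entries such as "go to" matched as padded substrings.

```python
# Sketch: presence-of-word (P) features. Word lists follow Table 10;
# multi-word entries like "go to" are matched as substrings.
P_NAV = ["skip", "go to", "next", "first", "last", "back", "continue",
         "previous", "stop", "go back", "finish", "home page"]
P_QUESTION = ["what", "where", "why", "when", "how"]

def contains_any(utterance, words):
    padded = f" {utterance.lower()} "
    return any(f" {w} " in padded for w in words)

utt = "please go to the next page"
features = {"p_nav": contains_any(utt, P_NAV),
            "p_question": contains_any(utt, P_QUESTION)}
print(features)  # {'p_nav': True, 'p_question': False}
```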
Syntactic Structure of Commands
The binary syntactic features (S in Table 4) were automatically extracted using the Stanford parser (Klein and Manning, 2003). As with word-DA correlations, some of the syntactic structure-DA correlations were also identified by a manual investigation of the corpus. For example, 82.1% of single noun-only utterances (s noun) have the DA Information-Task, 76.2% of "basic" verb-only utterances (s basic) have the DA Command-Task, and 83.3% of "non-basic" verb-only utterances (s nbasic) have the DA Command-Multiple.
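A rough approximation of these checks can be made without the Stanford parser; the sketch below uses NLTK's POS tagger as a lightweight stand-in (an assumption, not the paper's setup) to derive s noun, s basic and s nbasic, with the basic-verb list taken from Table 10. Deriving s np faithfully would require a constituency parse.

```python
# Sketch: syntactic (S) features, approximated with NLTK POS tags as a
# lightweight stand-in for the Stanford parser used in the paper.
# Requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

BASIC_VERBS = {"clear", "select", "fill", "delete", "click", "edit", "erase",
               "submit", "repeat", "choose", "enter", "check"}  # from Table 10

def syntactic_features(utterance):
    tokens = nltk.word_tokenize(utterance.lower())
    tags = nltk.pos_tag(tokens)
    single = len(tokens) == 1
    is_verb = single and tags[0][1].startswith("VB")
    return {"s_noun": single and tags[0][1].startswith("NN"),
            "s_basic": is_verb and tokens[0] in BASIC_VERBS,
            "s_nbasic": is_verb and tokens[0] not in BASIC_VERBS}

print(syntactic_features("click"))
```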
Context Related Features
The local context (C in Table 4) provides valuable cues for identifying the dialogue act associated with a user utterance. It was observed during the study that a user utterance is influenced to a large extent by the immediately preceding system utterance. For example, 89.95% of all user utterances immediately following a system Prompt were observed to be Information-Task. In addition, most of the time (87.5% of cases), the first utterance issued for a task was Command-Intention.
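These regularities are strong enough to support a context-only baseline; the sketch below encodes them as simple rules (a hypothetical baseline, not something the paper evaluates).

```python
# Sketch: a context-only baseline exploiting the regularities above:
# predict Command-Intention for the first utterance of a task,
# Information-Task after a system Prompt, and the corpus-majority
# user DA (Table 2) otherwise.
def context_baseline(prev_system_da, is_first_utterance):
    if is_first_utterance:
        return "Command-Intention"   # ~87.5% of first utterances
    if prev_system_da == "Prompt":
        return "Information-Task"    # ~89.95% of post-Prompt utterances
    return "Information-Task"        # most frequent user DA overall

print(context_baseline("Prompt", is_first_utterance=False))  # Information-Task
```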
Position-of-Word in Commands
The design of feature set B in Table 4 was inspired by an analysis of the corpus, which revealed that certain dialogue acts are characterized by the presence of certain words at the beginning of the corresponding utterances. For example, 93.4% of all Command-Navigation utterances begin with a cursor-movement related word (e.g. next, previous; see Appendix A for the complete list).
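Position-of-word features differ from the presence features only in anchoring the match at the start of the utterance; a sketch for b nav:

```python
# Sketch: position-of-word (B) features, matching word lists at the start
# of the utterance (word list from Table 10).
B_NAV = ["skip", "go to", "next", "first", "last", "back", "continue",
         "previous", "stop", "go back", "finish", "home page"]

def begins_with_any(utterance, words):
    text = utterance.lower().strip()
    return any(text == w or text.startswith(w + " ") for w in words)

print(begins_with_any("next please", B_NAV))     # True
print(begins_with_any("please go next", B_NAV))  # False
```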
Task Related Features
Since it is possible for different tasks to exhibit different feature vector patterns for the same dialogue act, incorporating the task name (T in Table 4) as an additional feature may improve classification performance by exploiting these variations (if any) between tasks.
Group | Composition
G1 | U
G2 | P ∪ B ∪ S
G3 | C ∪ B ∪ S
G4 | C ∪ P ∪ S
G5 | C ∪ P ∪ B
G6 | C ∪ P ∪ B ∪ S
G7 | C ∪ P ∪ B ∪ S ∪ T
G8 | C ∪ P ∪ B ∪ S ∪ U

Table 6: Feature groups.
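Given per-set extractors, the groups in Table 6 amount to unions of feature dictionaries; the sketch below is a hypothetical composition, with the extractors assumed to be callables returning per-set feature dicts.

```python
# Sketch: composing feature groups (Table 6) as unions of per-set feature
# dicts. `extractors` maps a set name ("U", "P", ...) to a hypothetical
# callable that returns that set's features for an utterance.
GROUPS = {
    "G1": ["U"],
    "G2": ["P", "B", "S"],
    "G3": ["C", "B", "S"],
    "G4": ["C", "P", "S"],
    "G5": ["C", "P", "B"],
    "G6": ["C", "P", "B", "S"],
    "G7": ["C", "P", "B", "S", "T"],
    "G8": ["C", "P", "B", "S", "U"],
}

def group_features(extractors, group, utterance):
    feats = {}
    for set_name in GROUPS[group]:
        feats.update(extractors[set_name](utterance))  # merge one set's features
    return feats
```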
Classification Results
All classification tasks were performed using the WEKA toolkit (Hall et al., 2009). The classification experiments were done using Support Vector Machine (frequently used for benchmarking), J48 Decision Tree (appropriate for a small, mostly binary feature set) and Random Forest classifiers. The model parameters for all classifiers were optimized for maximum performance.
In addition, experiments were performed to assess the utility of each feature set (Table 4). Specifically, the performance of classifiers with different combinations (Groups 1-8 in Table 6) of feature sets was evaluated to assess the importance of each individual feature set. We primarily focused on the domain-specific feature sets (P, B, C and S). Observe that group G6 differs from each of G2-G5 by exactly one feature set; this lets us assess the individual utility of P, B, C and S. In addition, we also extended G6 by including T (G7) and U (G8) to determine whether there was any noticeable improvement in performance. G1, with only unigram features, serves as a baseline. All reported results (Table 7) are based on 5-fold cross-validation: 632 instances for training and 158 instances for testing. Table 7 presents the classification results for the different feature groups. The DA Self-Talk was excluded from classification due to an insufficient number (2) of data points.
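The experiments were run in WEKA; the sketch below reproduces the same setup in scikit-learn terms (an assumed substitution), with DecisionTreeClassifier standing in for J48 and X, y denoting the vectorized features and DA labels.

```python
# Sketch: 5-fold cross-validation over the three classifier families used in
# the paper. WEKA was used there; scikit-learn equivalents are shown here.
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier      # stand-in for WEKA's J48
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

classifiers = {
    "SVM": SVC(kernel="linear"),
    "J48-like tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

def evaluate(X, y):
    # X: feature matrix (e.g. from DictVectorizer); y: dialogue act labels
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=5, scoring="precision_weighted")
        print(f"{name}: weighted precision = {scores.mean():.2f}")
```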
Classification Performance
Overall Performance: As seen in Table 7, the tree-based classifiers (J48 and RF) performed better than SVM in a majority of the feature groups (6 out of 8). The random forest classifier yielded the best performance (91% precision, 90% recall) for feature group G6, whereas the G3-SVM combination had the lowest performance (69% precision, 67% recall). However, all groups, including G3, did better than G1 with the tree-based classifiers; G1 was consistently outperformed by the other groups.
Performance on dialogue acts: In 6 of the 8 feature groups, the performance of SVM on the IT dialogue act was significantly worse than that of the tree-based classifiers. However, SVM produced consistently good results (> 80% in most cases) for the CI and CT dialogue acts. All classifiers performed very well on the CN dialogue act (> 80% for 7 of the 8 groups), but none of the classifiers performed well on QT.
Importance of Feature Sets
From Table 7, it can be inferred that contextual features (C) do not contribute to improving overall classification performance. In particular, for each classifier, the difference in overall performance between groups G2 (excluding C) and G6 (including C) is very small (worst case: 1% difference in both P and R). However, inclusion of C significantly improved the classification performance of RF for the QT and CI dialogue acts (18% improvement in P and 8% in R for QT; 3% improvement in both P and R for CI). Even in the case of J48, where group G6 yields the best performance, contextual features were found to be a component of some of the high-confidence, high-support J48 rules (Table 8) for CI and QT. Similar claims can also be made for syntactic features (S): although there is not much difference in overall performance between groups G5 and G6 (worst case: 2% drop in P, 1% drop in R), improvements were observed in the case of RF for the QT and CI dialogue acts (29% improvement in P and 4% in R for QT; 4% improvement in P and 6% in R for CI).

Excluding either word-existence features (P) or word-position features (B), however, caused a significant drop in overall performance (worst case: 15% drop in P and 16% drop in R without P; 11% drop in both P and R without B). Table 8 further highlights the importance of feature set P, since over 50% of the high-performing J48 rules (Table 8) have at least one feature of type P with true as its truth value.
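An ablation of this kind boils down to comparing each group that drops one feature set (G2-G5) against the full group G6; the sketch below assumes a hypothetical evaluate_group function returning weighted precision and recall.

```python
# Sketch: feature-set ablation, comparing each group that drops one set
# (per Table 6: G2 drops C, G3 drops P, G4 drops B, G5 drops S) against G6.
# `evaluate_group` is a hypothetical function returning
# (weighted_precision, weighted_recall) for a feature group.
def ablation_report(evaluate_group):
    base_p, base_r = evaluate_group("G6")
    for group, dropped in [("G2", "C"), ("G3", "P"), ("G4", "B"), ("G5", "S")]:
        p, r = evaluate_group(group)
        print(f"without {dropped}: dP={p - base_p:+.2f}, dR={r - base_r:+.2f}")
```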
Utterance | Actual DA | Predicted DA | Cause of error
 | Command-Intention | Command-Navigation | Issued while performing the book a hotel room task; essentially the same as "book it". The presence of the navigation-related verb continue at the beginning caused the classifiers to classify it incorrectly as Command-Navigation.
"I am looking to check in on July 23rd" | Information-Task | Command-Intention | Uttered in response to a system prompt for the check-in date during the book a hotel room task. The presence of the first person nominative pronoun "I" caused the classifiers to categorize it as Command-Intention.
"What does that mean?" | Help-Task | Question-Task | Directed towards the experimenter and therefore annotated as Help-Task. However, the absence of the keyword help and the presence of the Wh-word what at the beginning caused the classifiers to classify it as Question-Task.
"Best available price?" "Ok, return time?" "Price?" "Layover?" | Question-Task | Command-Multiple / Information | The absence of question-related words (Wh-words, is, etc.) at the beginning, coupled with the fact that these commands are noun phrases, caused the classifiers to classify them as either Command-Multiple or Information.

Table 9: A few incorrectly classified utterances.
It can be seen in Table 7 that adding either unigrams or the task name to the existing feature set of G6 does not affect the overall performance. However, the use of unigram features improved the results of all the classifiers for the HT DA. No such DA-specific improvements were seen with the task name as an added feature to G6, which suggests that the feature values of G6 for all DAs are task-independent.
Prediction Errors
It is clear from Table 7 that the prediction accuracies for CM, QT and HT are not nearly as good as those for the other dialogue acts. Table 9 provides some insights into this issue via illustrative examples from the corpus.
Notice that the errors for CI, CM and HT are mostly related to the choice of words used in the utterances, whereas mistakes in the prediction of QT are mainly due to inadequate information or the incompleteness of the utterances. Therefore, it is recommended that speech-enabled web dialogue systems enforce a constraint requiring users to express complete thoughts in each of their utterances.
Conclusion
Experiments with the dialogue act model described in this paper indicate that, with a small set of simple lexical/syntactic features, it is possible to achieve high overall dialogue act recognition accuracy (over 90% precision and recall) using simple and well-known tree-based classifiers such as decision trees and random forests. It is hence possible to build speech-enabled, dialogue-based assistive web browsing systems with low computational overhead that, in turn, can deliver low-latency response times, a critical requirement from a usability perspective for blind users. Finally, a dialogue model for non-visual web access, such as the one described in this paper, can be the key driver of goal-oriented web browsing, a next-generation assistive technology that will empower blind users to stay focused on high-level browsing tasks while the system performs all of the low-level operations, such as clicking on links and filling forms, necessary to accomplish those tasks.
A List of Words Predictive of Dialogue Acts

Table 10 lists all the words associated with the presence-of-word (P) and position-of-word (B) features (Table 4) used in this work. Notice that all words specified in Table 10 are task-independent. This ensures that the proposed feature set is generic enough to be applicable to a wide variety of tasks on the web. The proposed list of words can be easily extended by adding synonyms, which can be obtained automatically from publicly available sources like WordNet (Miller, 1995). As explained earlier, the words in Table 10 were selected by performing simple statistical analysis of the corpus and also by examining the word weights produced by the SVM classifier trained on unigram features alone. In other words, some of the words in Table 10 were borrowed from Table 5, which lists discriminatory unigrams for different dialogue acts. Note that the task-dependent words (e.g. "Stanford", "airplane", etc.) in Table 5 were ignored while constructing Table 10.
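The WordNet extension mentioned above can be done with NLTK's WordNet interface; a minimal sketch (assuming NLTK and its wordnet data are installed):

```python
# Sketch: extending a predictive word list with WordNet synonyms via NLTK.
# Requires: nltk.download('wordnet')
from nltk.corpus import wordnet

def expand_with_synonyms(words):
    expanded = set(words)
    for word in words:
        for synset in wordnet.synsets(word):
            for lemma in synset.lemmas():
                expanded.add(lemma.name().replace("_", " ").lower())
    return sorted(expanded)

print(expand_with_synonyms(["want", "need"]))  # adds e.g. "desire", "require"
```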
B Sample Dialogue in the Corpus

Table 11 presents an example of a dialogue that was collected during the execution of the Shopping task by a participant in the Wizard-of-Oz study. For deeper understanding, the wizard actions for every user utterance are also listed.
Table 7: Classification results. The overall performance is the weighted average over all dialogue acts. Notation: J48 - Decision Tree, RF - Random Forest, SVM - Support Vector Machine, P - Precision, R - Recall, CI - Command-Intention, CT - Command-Task, CM - Command-Multiple, CN - Command-Navigation, IT - Information-Task, QT - Question-Task, HT - Help-Task. The best performances for each DA are highlighted in bold.
Table 1: Corpus details. τu - number of utterances, τd - number of dialogs.
User Dialogue Acts
Dialogue Act | Description | Frequency
Command-Intention | Indication of the user's intention or end goal, e.g. I wish to buy a Bluetooth speaker | 0.117
Command-Task | Basic action commands like click, select, enter, etc. | 0.072
Command-Multiple | Complex commands requiring an execution plan comprising a sequence of basic commands, e.g. buy this product, book this room, etc. | 0.162
Command-Navigation | Commands directing the movement of the cursor, like go to, stop, next, etc. | 0.136
Information-Task | Information required for completing a task, e.g. departure date/return date information for the flight booking task, first name, phone number, etc. | 0.442
Question-Task | Task-specific questions like What is the cheapest flight?, What is the basic salary?, etc. | 0.041
Self-Talk | Utterances not directed towards the system, e.g. hmmm, what should I do next? | 0.002
Help-Task | Request for help when the user wishes to speak with the experimenter, e.g. Help, what does that mean? | 0.024

System Dialogue Acts
Dialogue Act | Description | Frequency
Prompt | Request for information from the user to complete a task, e.g. First Name, text box blank | 0.460
Short-Response | A short response to a user command, e.g. description of a product, brief details of a flight, acknowledgements, etc. | 0.198
Long-Response | A lengthy response to a user command, e.g. narration of an entire page, list of search results, etc. | 0.120
Keyboard-Response | Response to user keyboard actions | 0.072
Article-Response | Narration of an article | 0.034
Question-Answer | Response to a user question regarding a task (non-help) | 0.044
No-Response | No response for some navigation commands like Stop | 0.041
Help-Response | Response to a help request from the user | 0.026

Table 2: Dialogue acts for non-visual web access.
Table 5: Top discriminative unigrams based on weights from the SVM classifier.
Table 8: A select sample of J48 rules (conf ≥ 0.75, in descending order of support) for group G6, listing the discriminatory rules for each dialogue act. Notation: ¬c first stands for c first = false, and c first stands for c first = true.
Features | Predictive Words
piyou | I, you
phelp | help
phelpq, bhelpq | how, can, do, am I
pprev | dynamically determined at runtime
pintent | want, like, would, need, prefer
pbrowser | dynamically determined at runtime
phtml | body, page, form, box, field, search, link, button, list, dropdown
pbasic | clear, select, fill, delete, click, edit, erase, submit, repeat, choose, enter, check
pnbasic | any verb not in the pbasic list above
pnav, bnav | skip, go to, next, first, last, back, continue, previous, stop, go back, finish, home page
pquestion, bquestion | what, where, why, when, how

Table 10: Complete list of predictive words for features in P and B of Table 4.
Acknowledgements

Research reported in this publication was supported by the National Eye Institute of the National Institutes of Health under award number 1R43EY21962-1A1. We would like to thank Lighthouse Guild International, and Dr. William Seiple in particular, for helping conduct user studies.
References

AFB. 2013. Facts and figures on American adults with vision loss. http://www.afb.org/info/blindness-statistics/adults/facts-and-figures/235, January.

Apple Inc. 2013. VoiceOver for OS X. http://www.apple.com/accessibility/osx/voiceover/.

Srinivas Bangalore and Amanda J. Stent. 2009. Incremental parsing models for dialog task structure. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 94-102. Association for Computational Linguistics.

Yevgen Borodin, Jeffrey P. Bigham, Glenn Dausch, and I. V. Ramakrishnan. 2010. More than meets the eye: a survey of screen-reader browsing strategies. In Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A), page 13. ACM.

Kristy Elizabeth Boyer, Eun Young Ha, Robert Phillips, Michael D. Wallis, Mladen A. Vouk, and James C. Lester. 2010. Dialogue act modeling in a complex task-oriented domain. In Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 297-305. Association for Computational Linguistics.

Harry Bunt. 2011. Multifunctionality in dialogue. Computer Speech & Language, 25(2):222-245.

Jean Carletta, Stephen Isard, Gwyneth Doherty-Sneddon, Amy Isard, Jacqueline C. Kowtko, and Anne H. Anderson. 1997. The reliability of a dialogue structure coding scheme. Computational Linguistics, 23(1):13-31.

Ananlada Chotimongkol. 2008. Learning the structure of task-oriented conversations from the corpus of in-domain dialogs. Ph.D. thesis, SRI International.

Mark G. Core and James Allen. 1997. Coding dialogs with the DAMSL annotation scheme. In AAAI Fall Symposium on Communicative Action in Humans and Machines, pages 28-35, Boston, MA.

Maxine Eskenazi, Alexander I. Rudnicky, Karin Gregory, Paul C. Constantinides, Robert Brennan, Christina L. Bennett, and Jwan Allen. 1999. Data collection and processing in the Carnegie Mellon Communicator. In EUROSPEECH.

César González Ferreras and Valentín Cardeñoso-Payo. 2005. Development and evaluation of a spoken dialog system to access a newspaper web site. In INTERSPEECH, pages 857-860.

J. L. Fleiss. 1973. Statistical Methods for Rates and Proportions. Wiley.

Freedom Scientific. 2014. Screen reading software from Freedom Scientific.

Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10-18.

Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 423-430. Association for Computational Linguistics.

Cheongjae Lee, Sangkeun Jung, Kyungduk Kim, Donghyeon Lee, and Gary Geunbae Lee. 2010. Recent approaches to dialog management for spoken dialog systems. JCSE, 4(1):1-22.

George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.

Kiyonori Ohtake, Teruhisa Misu, Chiori Hori, Hideki Kashioka, and Satoshi Nakamura. 2009. Annotating dialogue acts to construct dialogue systems for consulting. In Proceedings of the 7th Workshop on Asian Language Resources, pages 32-39. Association for Computational Linguistics.

Kiyonori Ohtake, Teruhisa Misu, Chiori Hori, Hideki Kashioka, and Satoshi Nakamura. 2010. Dialogue acts annotation for NICT Kyoto tour dialogue corpus to construct statistical dialogue systems. In LREC.

Yury Puzis, Yevgen Borodin, Rami Puzis, and I. V. Ramakrishnan. 2013. Predictive web automation assistant for people with vision impairments. In Proceedings of the 22nd International Conference on World Wide Web, pages 1031-1040. International World Wide Web Conferences Steering Committee.

Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Shrikanth Narayanan. 2009. Combining lexical, syntactic and prosodic cues for improved online dialog act tagging. Computer Speech & Language, 23(4):407-422.

John R. Searle. 1975. Indirect speech acts. Syntax and Semantics, 3:59-82.

Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.
Lu Wang, Larry Heck, and Dilek Hakkani-Tur. 2014. Leveraging semantic web search and browse sessions for multi-turn spoken dialog systems.

Subject: "I want to buy CD"
Wizard: [enters 'CDs' in the search box, clicks 'Go', and narrates search results one by one]
Screen reader: Verbatim 97458 700 MB 80 Minute ... Disc CD-R by Verbatim link ...
Subject: "Search for new age items"
Wizard: [searches for 'new age items', clicks on results]
Screen reader: Age of Wushu - Free Amazon ... Game Connect link by Snail Games USA ...
Subject: "stop"
Wizard: [presses pause shortcut]
Subject: "new age music CDs"
Wizard: [searches for 'new music CDs']
Screen reader: The ultimate Most Relaxing New Age ...
Subject: "new age music CDs by Kitaro"
Wizard: [searches for 'new music cds by Kitaro']
Screen reader: Most Relaxing New Age Music link by Kitaro, List 8 items, $10.87 link, order in the next 3 hours ... Tenku link by Kitaro ...
Subject: "sample recordings"
Wizard: [follows the current link]
Screen reader: Page loading
Wizard: [plays the first sample track]
Screen reader: [audio sample]
Subject: "next"
Wizard: [plays the next sample track]
Screen reader: [audio sample]
Subject: "next"
Wizard: [plays the next sample track]
Screen reader: [audio sample]
Subject: "go back"
Wizard: [presses back button]
Screen reader: page loading, [repeats the visited link]
Subject: "next CD"
Wizard: [clicks the title of the next item in search result]
Screen reader: Ancient link by Kitaro ... $14.98 link ...
Subject: "listen to audio"
Wizard: [follows link]
Screen reader: Page loading
Wizard: [plays the next sample track]
Screen reader: [audio sample]
Subject: "next"
Wizard: [plays the next sample track]
Screen reader: [audio sample]
Subject: "buy this cd"
Wizard: [clicks 'Add to cart' button, then clicks 'Proceed to Checkout' button]
Screen reader: [reads out all captions]

Table 11: A sample dialogue collected during the execution of the Shopping task in the Wizard-of-Oz study, with the wizard's actions shown for every user utterance.
50 : 2 D I C K H
FREDERICKSEN I B M THOMAS J
WATSON RESEARCH CENTER YORKTWON WIGHTS
10598NEW YORK
This paper describes a loxical organization in which "scnscs" arc rcprcscnted in their own right, along with "words" and "phrases", by distinct data itcms. The objcctivc of the scheme is to facilitate recognition and employment of synonyms and stock phrases by programs which process natural la,nguage. Besides presenting the proposed organization, the paper characterizes thc lexical "senses" which result.1. Introduction. -This paper describes an internal lexical organization which is particularly designed to capture the facts about synonymy. Besides recording the inclusion of each word in one or more synonym sets (identified with its various "senses"), the scheme attempts to distribute attributes per~picuously between "senses", "wordings", and the intersections of the two. I n addition, there is provision to record multi-word idioms, stock phrases, and the like. and to include these as elements in synonym sets when appropriate.Briefly, "senses" are represented in their own right, along with "words" and "phrases", by distinct data items. Each word or phrase is associated with a list of the "senses" which it can express; conversely, each "sense" is associated with a list of "alternative wo iings" Additionally. each word is associated with a list of phrases in which it occurs.Grammatical category, features, selection restrictions, and the like are applicable at three different levels: to words or phrases as such, to "senscs" as such, or to particular usages of words or phrases (equivalently, to particular wordings of "senses").An Organization for a Dictionary of Word SensesThis lexical organization has been inlplemerlted at 1BM Research, Yorktown Heights, N.Y., by a program -not t o be described here -which builds such dictionaries in a very compact form, giving interactive assistance to the person making the entries. (For cxaniple, the plrogram p d n ts out the possibility of merging "senses" whenever their wordings overlap and their attributes arc compatible, and merges them if so directed.) There are suitable I'acilities for saving the results, retrieving them in various ways, and for altering such things as sclienies of classification w i t l i o~~~ scrapping previously prepared work.The ultimate intent is that the "dictionary of senses" should serve as the lexical coniponent in a natural language fact-retrieval system. Pending its incorporation in that rolc, it will be used to amass and organize information on the semantic relations among words and phrases.The balance of this paper conies in two sections:Section 2 presents the proposed lexical data structures, and suggests haw they arc to be used.Included is a sketch of how various types of grammatical anri semantic "attributes" fit into thc scheme.Section 3 discusses the character of the "senses" encoded in the resulting dictictlary. Reasons arc advanced for regarding lexical "senses" as something far short of semantic primitives. At the same time, synonym sets are defended against the view that "true paraphrases are rare or rtonexistcnt". 1 "Sense Chain" Link Each KDI (Key Data Item) or PDI (Phrase Data Item) coiltains an "alternative senses" link -al pointer to the first SLE (Sense Link Element) in a chain of SLE's which represent the various senses of the word or phrase. The SLE's are chained via their own "alternative senses" links, and the final member points back to the KDI or PDI. 
Thus, we shall speak of such a chain as a ring Local Attributes "Alternative Wordings" Link An Organization for a Dictionary of Word Senses 6specifically, an "alternative senses ring". If no senses are on record for a particular word ox phra , the "alternative senses" link in the KDI or PDI is ssl: referent.Reciprocally, each SDI (Sense Data Item) contahs an "alt-native wordings" link. This leads to a chain of S L E~S which represent more-or-less synonymous wordings mat express the sense. These SLE's are chained through their own "alternative wordink links, and again the chain is closed into a ring -this time beginniog and ending with the SDI.The structure that is shaping up may now be seen inFig. 2, The crtrcicrl poirir is that encll S L E
An Organization far a Dictionary of Word Senses 2. The Internal Remesentation.
It will be our purpose in this section to say just enough about internal representation to lay bare the organizing principles of the lexicon. The focus is on architecture afid motivations; details of field layouts, internal codes, etc. are not at issue here.
To make the discllssion concrete, suppose we are interested in the senses of the word "changev Assuming that none of the words are unfamiliar, the following should put us in mind of two senses: change: 1. v alter;
2. n small coin.
This, of course, is just a dictionary entry in the traditional format (though with synonyms offered in lieu of definitions), On the other hand, we might approach the same information from a different direction: starting with the two concepts, we might seek words to express them. It is difficult to picture this latter situation without assigning artificial labels to the concepts. Call them concepts 1 and 2, and suppose for a moment that there were a practical way to look the concepts up (witlzozlt having thought of either word for either concept). Then the information to be retrieved might be envisioned this way: I. v change, alter 2. n change, small coin It is this duality of viewpoint -that words have senses, while senses have wordings -that our lexical representation must reflect.
The starting point, then, is that words, phrases, and "senses" are separately represented. There are three principal types of data item, plus a standard connector:
1. A "Key Data Item " ( K D I ) represent:; a single word.
A "Phrase Data
Item" (PDI) represents a string of two or more words which are to serve as a unit in some context.. (SDI) represents one distinct sense common to a set of wordq and/or phrases. In general, a word or phrase may be usable in more than one sense, while a given sense may have alternative (synonymous) wordings. Both these types of variability are recorded making use of the next data item: 4. A "Sertse Link Element" (SLE) is a connective item, to be explained shortly.
An Organization for a Dictionary of Word Senses
A "Sense Data Item"
Three principal fields will engage our attention in each type of data item. Schematic of Data Items, with Principal Contents represents the intersection betwen nn "alternntive sense^ ring nrrd an "cllterncirive wordi~rgs" ring. From the standpoint of the word or phrase, it; represents a particular scnse; from the standpoint of the sense, it represents a particular wordink Starting from a KDI or PDI, one gets to the SDI for a particular sense by advancing along the "alternative senses" ring to the relevant SLE, then &touring along the ring which connects t h~ latter to the SDI (as one of the SDI's "alternative wordings"). Starting from an SDI, one gets to a particular wording by the reverse process. Since each "alternative senses" ring contains exactly one KDI or PDI, while each "alternative wording#' ring contains exactly one SDI, each SLE is tied to exactly one sense of one word or phrase. (Eani~alently, it is tied to one wording of one sense.)
The next point of interest is that "attribute" f'i~lds are present in all four types of data item -even in the connectors (SLE's). The attributes which may be recorded in each, however, come from different bags.
T o begin with, the attributes found in an SDI characterize a11 the wordings of a given sense whenever the wording-, are used in that sense. In Fig. 2, for example, sense "I" should be marked as a "verb" sense, whil? sense "2" is a "noun". One would not wish to record the attribute "verb" in the KDI for the word "change", for the KDI represents facts about the word itself, irrespective of sense, and "verb" does no! 'hold for all uses of the word "change". On the other hand, "verb" does characterize all wordings of sense "I", whenever they're being employed to express that sense. It would furthermore apply to any additional wordings which we might think of, such as "modify", provided they are really used in a synonymous way.
As a matter of fact, it turns out that the traditional parts of speech -noun, verb, adjective, preposition, etc. -fit best in this scheme as global attributes of senses, recorded ill the SDI's. An Organization for a Dictionary of Word Senses senses of "alter" Fig. 2 "Alternative Senses" and "Alternative Wordings" Rings (The first sense has two wordings: "alter" and "change". The second sense has wordings "change" and "small coin". Two senses are recorded for "change", and one sense each for "alter'.' and "small coin".)
A different sort of attribute may be recorded in a KDI, as a global feature of the word itself. For example, we may note of the word "change" that it is "regularly conjugated". That is, when used An Organization for a Dictionary of Word Senses 8 as a verb, it forms the third person singular by adding "s", and both past ~n d past participle by adding "ed" To be sure, this "global" attribute applies only to the "verb" senses of "change"; but a moment's reflection will confirm that "change" has more than one "verb" sense, and the regularity of its conjugation is common to all of them. Thus, it is useful to note this regularity as an attribute of the word itself. (Contrast this with the behavior of the word "can", which is regular w h n it means "to pack in cans", but irregular when it means "is able to".)
Various other attributes suggest themselves as global characterizers of the words thcn~selvcs, to bc recarded in the KDI's. For example, one might wish to note of "change" that it drops its final "e" whon adding "ing" (this is the normal rule) but of "singe" that it doesn't.
Still other attributes are appropriate when characterizing multi-word units (in PDl's). A string o f words whose meaning is not evident from the nlere juxtaposition of its constittlents (such as "givc up") may be classified as an "idiom", A string of words whose meaning could be figured out from the meanings of its constituents, but which occurs with enough frequency to warrant inclusion in the dictionary, might be classed as a "stock phrase". (Example: "drop dead".) A string like "perform in a subordinate role", which one would not normally expect to encounter in its own right, might be classed as a "definition" (for a certain sense of the word "accompany", difficult to reword except with a definition).
Perhaps the most unexpected site for recording attributes is in the connective elements (SLE's).
These are the logical place, though, to note features that apply to a specific sense of a word, without being global to either the sense or the word. Consider the following four senrenccs:
On the way to the office, he stopped daydreaming.
On the way to the office, he ceased daydreaming.
On the way to the office, he ceased to daydream.
versus:
On thebway to the office, he stopped to daydream.
Suppose we choose to view this as a restriction upon the (surface) object of the verb: "stop", when applied to an action, must take a gerund as its object; "cease" can take either a gerund or an infinitive. (It wouldn't affect the point being made if we said that "stop" inhibits a certain grammatical transformation en route to surface structure, while "cease" pennits it.)
An Organization tur a Dictionary of Word Senses
9
Now, we wouldn't want to mark "gerund object only" as a global. attribute of the sense, for we have just shown that "cease" and "stop", two wordings of the sense, differ with respect to this restriction. On the other hand, it doesn't belong among the global attributes of the word "stop" as such, for "stop" has other verb senses, even transitive ones, tu which the restriction is completely inapplicable. (Consider "stop a hole in the dike", "stop a catastrophe", etc.) That leaves the alternative we are suggesting: treat the restriction as an attribute of one particular usage of thc word (equivalently, one particular wording of the sense). Besides having senses, individu~l words are involved in phrases, and this fact is also represented in our data structure. Fig. 3 shows the plan of attack. I'n the KDI for each word, there is a link connecting it to the PDI for the first phrase in which the word is known to occur, together with a number designating the position of the word (Ist, Znd, 3rd. etc.) in that phrase. In the PDI itself, there is a coiltinuation link for each word of the phrase, together with its niiri~ber in the next phrase. In the final PDI involving a given word, the link for that word points back to the KDI.
Thus, independent of its "altcrnative senses" ring, each KDI tnay hmc r~ "phrase invol\~cit~ents"
ring.
This structure makes it possible to retrieve all the idioms, stock phrases, definitions, etc., in which a given word has made its appearance, anywhere in the dictionary. As the same structure is used to encode every multi-word unit, no occurrence of a word is ever lost sight of, and r~ phrase can be looked up via any of its constituent words.
Of the fields to which Fig. 1 calls attention, we have discussed all but one. In the SDI for each "sense", tbpre is a "sense chain" link field. This links the SDI to its successor in a global chain of "senses". Using this chain, it is possible to make an exhaustive, non-duplicative list of all the "senses" recorded in the dictionary. The listing program has only to proceed down the chain, retrieve frdm each SDI its attributes, decode them, then chase around the "alternative wordings' ring of the SDI and list the wordings alongside the attributes.
One more feature of the internal representation deserves mention: the data items for words occur as "leaves" in a lexical tree (Fig. 4). That is, the KDI for a word can be looked u p letter-by-letter, following a chain of pointers that correspond to successive letters. The chain ends at a KDI after following a substring sufficient to distinguish the word fro111 the nearest thing like it in thc dictionary. The lexical tree has the advantage that words can be looked up either at random or in sequence.
Recapitulating, these are the essential features of the representation: *1) "Senses" are represented separately from "wordings", and the mutual connections between them are made explicit in both directions.
*2) "Wordings" may be either single words or multi-word pnrases. I hese are representea by distinct types of data item, and may be subject to distinct schemes of classification. but they are on the same footing with regard to "sense" connections. With each word is associated an exhaustive list of the phrases in which it occurs. (For a dictionary containing only the words a , "above", "abate", and "monkey", this would be the full tree. The path to each word is only as long as needed to distinguish it from the neighbor with which it shares the longest leading substring.) *3) Classifiers and features, drawn from appropriate sets, may be attributed separately to words, to phrases, to senses, or to particular senses of words or phrases (i.e., to particular wordings of senses). *4) The data items which represent senses are globally chained, and may be exhaustively listed.
An Organization for a Dictionary of Word Senses
An Organhation for a Dictionary of Word Senses *5) The data items which represent words are accessible as "leaves" of a lexical tree; hence they may either be retrieved by lookup (in response to presentation of the words) or volunteered in alphabetical order, Given a con~mitn~ent to repreiknt a lexicon as suggested by points * 1 through 9 tabovc, various implementations would be possible. Alternative implementations of individtlal points (though not of the scheme as a whole) have in fact been described by other writers. Tlte lexical tree ( * S ) , for example, is no great novelty: Sydney M Lamb and William H. Jacobsen describe implementation details of one such tree [SJ. [lo] also concerns a dictionary which uscs this general style of organization for lookup. For '.hat matter, the lexical tree is reminiscent of Fcigcrrbr~um's "discrin~ination tree." [ 11 More interestingly, the Separate representation of senses n11d wordings has been incorporated in othei. systems by R. F. Simnlons ([Ill,[12]) and by Larry R. Harris (31. This way of looking at matters led Harris to remark some of the same points that we have been stressing: that senses have alternative wordings just as words have alternative senses; that multi-word phrases might occur on the same footing as individual words in the expression of a sense; and (interestingly enough) that part-of-speech information really adheres to the "sense", not to the "word" Similarly, Simmons associates his "deep case" information with lexical nodes representing "wordsenscs",, while words the~nselves are treated as "print image" attributes of the wordsenses.
Harris's dictionary was only a minor component in a small-scale model of concept acquisition. No great number of either words or concepts was required to illustrate the principles at stake, so Harris programmed the dictionary as an array, with words represented by rows and "concepts" by columns. Elements of the array were merely frequencies, indicating the strength of association between each word and each concept.
Needless to say, for a full-scale vocabulary of words and concepts, such an array is mostly empty; nobody would dream of expanding it in that form. From a programming standpoint, the only thinkable choice is some form of list structure. Having decided in principle to use "some form of list structure", though, one might well ask: Why chains? Why rings? Why not just include in each Key Data Item a full list of pointers to the corresponding Sense Data Items. and vice-versa?
The answer is simply one of convenience. It's easier to handle insertions and deletions when they don't require the movement of expanded items to new quarters, or the provision of "overflow" pointers. It's easier to reclaim freed storage when deleted items come in a handful of standard An Organization for a Dictionary of Word Senses 13 sizes. As for "rings", they eliminate the need for two-way pointers, since one can break into a ring at any point and follow it to its source.
It should be noted that to make rings an attractive representation, the details of the material being represented must cooperate, In particular, the rings must not become too long, or the processing requlred to follow them becomes excessive. It happens that "alternative senses" rings and "alternative wordings" rings are typically short rarely more than a dozen links per ring. "Phrase involvement" rings, on the other hand, can become spectacularly long, especially for words like "a*' and "to". In practice, it's necessary to provide these rings with short-cut links.
Any of these, programming details could be altered, however, without abando~ting tltc ossencc of the scheme, which is given in points * 1 through * 5 above.
An OqginiZatioh for a Dictionary of Word Senses 3. The Character of Lexical Senses.
---
Pe~haps the first tljing to get straight about the "senses" represented in this dictionary is what they are not. They are aot "concepts"; they are not a set of "primitives" into which human espurlcncc an be decomposed, No conjecture is put forivard here [hot any such collcrtion of discrete, i~tr~riiic concepts even exists, let ulonc that it might be finite.
Rather, the "senses" of the dictionary are in thc nr.urc of fuzzy equivalcncc sets among worcis.
(This is only a metaphor; we shall do more and more violence to thc ttchnicul notion of an "equivalence set" as we proceed.) Each "sense" groups n set of words which, in n set of appr:~priate cu.ntexts; might be tiscd ~nore or less intcrchangenbly. That the cq\livalenct. sots arc fu7'r.y. nrie can convir~cc otleself with but the briefest im~~~crsion in the r~lntcrials of thc. l a n g~i~p~ -trying to decide whether particular words belong in particular groups or justiPi\~ the crcr~tion of new groups.
Consider, for example, the following set of words and phrases:
(abandon, give up, surrender, relinquish, let go. desert, leave, forsake, abdicate)
Clearly, there is a common theme that can run through all of these, given the right circumstances.
It might be expressed as "reluctant parting from somebody or something". This can be seen by coupling the verbs with var.ious possible objects: words carry nuances, which it may or may not be easy to ignore in a particular context, "Forsake", for example, can suggest that there is something reprehensible about the action. It can also connote formal renunciation, and the above example from a marriage vow shows that the formality can be present without the reprehensibility. Nuances get in the way of interchangeability; it would sound strange to substitute "desert" into the marriage vow.
Besides nuances, the individual words have conventional areas of application. One does not normally say that the doctors "deserted" all hope, or that an errant husband "surrendered" his wife and children. The minister officiating at a wedding would be considered daft if be adjured the bride and groom to "abdicate" all others, and a merchant would not advertize that he was "relinquishing" his entire stock at a loss. (Somehow, the larkr situaaon calls for more pedestrian language.) At the opposite extreme, overawed by this lack of interchangeability, we might decide to respect the unique personality of each word, abolishing equivalenioe classes altogether. The inconvenience of such a cop-out is obvious: we then have to introduce some other mechanism for recognizing the equivalence of utterances that are intended synonymously, though they employ different words.
But beyond being inconvenient, the exclusion of equidence sets is a denial of linguistic facts -just as bad, in its own way, as the naive attribution of unconditional synonymy.
For it is a commonplace of everyone's experience that the speaker and the listener agree to ignore the nuances of words, whenever nuances get in the way of communication. A writer who has used the word "give up" eight times in five lines w i l l surely cast about for some alternative ways of saying the same thing. If "relinquish" and "abandon" would normally be too flowery, or if "mender" would in other circumstances call to mind an armistice ceremony in a railway wagon.
that will not deter the writer from tossing in a few occurrences of those words -once a context has been established that discourages the overtones. Nor will the reader understand matters any differently. It is as if writer and reader conspired: "We're fed up with that word, let's hear another." Or, perhaps, the writer simply connives at jolting the reader awake with frequent changes of idiom, maybe even an occasional incongruity. En any case, synonymy is imposed upan Not only can words be stripped of nuances normally present; they can toke on colorations suggested by the context. The suggestion of "reluctance" conveyed by all thc verbs of our example can be inferred, in at least one case, from the setting alone; and in this case, a variety of rnorc neutral verbs could be used synonymously:
An Organization for a Dictionary of Word Senses
(part with, take leave of) our entire stock at a loss
One could even substitute the word "sell", and it wouldn't change the meaning that was already bad into the utterance. But to adnlit contest-dependent synonyniy of this clcgrcc is to strctch the equivalence sets" to thc point of uselessness.
It comes to this: neither the grouping nor the separation of words can be fully justified. Grouping is nearly always conditional, and separation is often so. If one could anticipate all possible contexts in which a group of words could occur, one could perhaps enumerate all possible equivalence sets -one for each combination of word group with a set of contexts making the words interchangeable. Anyone, however, can see the futility of that aspiration.
In the end, one settles for messy compromises. Words are grouped if a largish set of contexts in which they are interchangeable springs readily to mind. They are separated (into perhaps overlapping groups) if the imagination readily suggests contexts in which their meanings differ "significantly" -whatever "significantly" may mean. In doubtful cases, when words are grouped somewhat questionably, one promises oneself to add markings some clay that will prevent misuse of the equivalence. When words are separated somewhat questionably, one promises oneself to add a mechanism some day that will recognize their relatedness.
In the end, too, one assigns internal structure to the equivalence sets. That's the effect of assigning local attributes to the alternative wordings ("animate subject", "object a vehicle", etc.): constraints are imposed upon the interchangeability of the wordings. More radical structuring can be accomplished if, for example, one notes "government" as an alternative wording of the sense "govern, rule, control", with the attribute "nominalization".
A trenchant discussion of such difficulties may be found in Kelly and Stone [4]. There the emphasis is upon disambiguation: given a word in a passage of text, they seek to identify (by selection from a fixed list of possibilities) the sense in which it is used. Building a computerized An Organization for a Dictionary of Word Senses dictionary for the purpose, they soon became concerned with the arbitariness and the proliferation of target "senses", as taken from standard desk dictionaries. They argue, with persuasive examples, that what lexicographers conventionally distinguish as separate senses of a word are often just applications of the word's underlying concept to different contexts. To cover the various contexts, the underlying concept has to be stretched a little, by a process of metaphoric extension. This metaphoric process is beyond our present power to computerize, but for 'the long run looks indispensable for successful language processing. Meanwhile, the authors advocate a dictionary which records for each word as few discrete senses as practicable, combining into one scnsc all the usages which can reasonably be united by a common underlying thougl~t.
It is interesting to re-examine Kelly and Stone's argument with a diffcrent task in n~inii: not tile disambiguation of one word, but the recognition of synonymy between two words. A n~etaphorical capability would be as useful for the one task as for the other, but in the case of synonynl rccognition, some of the considerations which have guided traditional lexicography remain pertinent. In particular, it is necessary to ask not merely whether the concepts overlap, but whether the one word may in fact be used in place of the other. As noted before, usage is restricted by conventional domaitls of application; for example, an "alteration" is conceptually both a "change" and a "modification", but one wouldn't call it a change or a modification when painting a sign for a tailor's shap.
The arbitrariness of the equivalence sets is not all that disqualifies them as "conceptual primitives". There is a much deeper difficulty in the fact that practically all "senses" can be paraphrased in terms of other "senses". Take, for example, the intransitive sense of "change" (as in "My, but you've changed!"). Surely, one would suppose, the concept of "change" must be primitive? Change of state is what well-nigh a third of all verbs are about.
But if "change" is a "primitive", it's a peculiar sort of "primitive", for it can be paraphrased in a variety of ways:
(change, become different, cease to be the same, assume new characteristics, make a transition into a new state)
Note that the multi-word paraphrasals are not idioms; the individual words contribute their usual meanings to concatenated meanings which express the concept "change".
But perhaps we were merely unlucky? Perhaps we chanced upon a concept which looked elemental but actually turned out to be complex. Maybe the real primitives are "become", "be", "cease", "different", "same", etc. Let's dig into that possibility.
What does it mean to "become X", where X is an adjective? The meaning can be variously expressed:
(become X, come to be X, get to be X, get X, turn X, grow X, assume the characteristic X)
That's a discouraging number of ways for a "primitive" to be re-expressible -- though if we choose to regard "come to be" and "get to be" as idiomatic concatenations of words, only one of the alternatives makes use of other concepts to explain the one at hand.
As for "different", it implies a whole underlying anecdote about sortlebody making a contparisoi~, after first making a judgment about relevant things to compare. In the combination of the two concepts --"become different" --, we furthermore drop mention of the objects being compared.
It's simply understood that they are certain attributes of the subject at two points in time.
It is tempting to invent ad-hoc "transformational" explanations for these phenomena. One might conjecture, for example, that "The man changed." is a surface realization of four underlying sentences:
(Man be X at time m. Man be Y at time n. X not equal Y. Time n greater-than time m.)
The trouble with explanations of this sort -- apart from the fact that they introduce growing complexity into the understanding of straightforward utterances -- is that they assign arbitrary primacy to some concepts at the expense of others. Why should "time n greater-than time m" be an assumed primitive? May we not equally well conjecture that "time n greater-than time m" is a surface realization of these?:
(Time be m. Time change. Then time be n.)
For that matter, why not view "Time elapsed." as a surface form of this?:
"At least one thing in the universe changed."
After all, what is "time" but a nominalized way of talking about the presence and partitioning of change?
The difficulty, it would seem, lies in the very notion of context-independent "conceptual primitives". The metaphor itself is at fault: it calls to mind a fixed set of elements, like those of which matter is composed, out of which all ideas must be compounded. But where concepts are concerned, primitivity is a matter of focus. Shift the perspective a little, and new elements swim into view as fundamentals, while former simples become complex.
A more promising metaphor is the analogy to a vector space. A set of basis vectors is, in a way, a set of "primitives" out of which all the entities in the space can be composed. These primitives have the appealing property that they are only primitive relative to one frame of reference. Rotate your point of view, and what used to come natural as basis vectors are now at an angle; they become easier to express as sums of vectors that lie along new axes. That bears a resemblance to what we have seen in the case of lexical "primitives".
Thus far and no further may the analogy be pushed, however. The elements which span "conceptual space" can be no such uniform set of objects as those in a vector space, while the rules of composition are coextensive with grammar -- at a minimum. Composition of concepts itself contributes to the meaning. (For that matter, it is arguable whether concepts are sufficiently separable to model them as discrete objects at all -- whether simple or composite.) Moreover, as "conceptual space" must encompass all things thinkable, the rules of composition must themselves be part of the space. That is, the operators as much as the things operated upon lie within the space to be spanned.
A seeming counterexample to these remarks may be found in the "primitive ACT's" of conceptual dependency theory, as propounded by Schank, Goldman, Rieger, and Riesbeck ([2], [7], [8], [9]).
On a close reading, however, the "primitive ACT's" turn out to be verb paradigms -- powerful, semantically motivated generalizations about large classes of verbs. The names of these paradigms replace specific verbs as building blocks in the "conceptual" representation of an utterance. The effect is to provide strong guidelines for the inference of unstated information, for the comparison of related utterances, for paraphrasal, etc.
To represent a particular verb in terms of these ACT's, however, it is necessary to augment each ACT with various substructures which detail the manner, the means, the type of actor or object, etc. No reduced set of representatives is as yet offered for the adverbs, nouns, adjectives, etc. in terms of which the "primitive ACT's" are qualified. If such additional condensation were attempted, the elaboration of a given utterance in terms of the full set of "primitives" might well ramify without practical end. In other words, reduction of the set of names for nodes (and labels for arcs) must be purchased at the expense of extending the number of them required to represent each utterance.
In conceptual dependency representation, just as in the "semantic networks" of Quillian [6], Simmons ([11], [12]), Slocum, and others, reality ultimately appears as a shimmering web, every part of which trembles when any part of it is touched upon. Taken in its totality, the system -- as yet -- is entirely compatible with skepticism about a comprehensive set of "conceptual primitives".
In any case, the verbal "senses" proposed here lie at a far lower level of generality than the "primitive ACT's" used in conceptual dependency theory. In terms of that theory, they come closest to the so-called "CONCEXICON entries" used by Goldman in realizing surface expressions of a concept from its conceptual representation [2]. Given a primitive ACT, Goldman narrows it down to a particular "CONCEXICON" entry by applying the tests in a discrimination tree to the rest of the structure in which the ACT appears.
Our lexical "senses", therefore, are lcft with a humbled role. If they span anything, it might best be thought of as "communication space", not "conceptual space". Even in this light, they arc a hugely redundant basis, and a not at all unique one. They form no inventory of the experiences being communicated about; "meaning" is still a step removed, still evoked rather than embodied by the elements of this basis.
If we persist in calling these things "senses", it is because that is the traditional term for what is brought to mind as the synonym sets of a given word are enumerated. The tie-in with meaning is tenuous, but the human user is able to supply it. There is at least this much justification for the term: synonym sets, more forcefully than words, direct attention to the points at which a tie-in must be made between the tokens of communication and the underlying representation of "world knowledge".
In a full-fledged system for processing natural language, then, we must envision the "dictionary of senses" as a component stretching vertically across the "upper" layers. Its "sense data items" must link, in some way, to the deeper-lying data structures which encode "knowledge of the world" (the "pragmatic component"). The "key data items" and "phrase data items" register tokens to be expected or employed in "surface" utterances. Global and local attributes recorded in the various data items guide parsing and interpretation. Where one takes it from there depends upon the linguistic approach to be used.
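For concreteness, the three kinds of data items and their links might be rendered, very loosely and in modern notation the original does not use, as follows; all class and field names are purely illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class KeyDataItem:
    """Registers a single surface token expected in utterances."""
    spelling: str
    global_attributes: List[str] = field(default_factory=list)   # e.g. "verb"

@dataclass
class PhraseDataItem:
    """A multi-word wording, linked to the key data items of its words."""
    words: List[KeyDataItem]
    local_attributes: List[str] = field(default_factory=list)    # e.g. "animate subject"

@dataclass
class SenseDataItem:
    """A synonym set (an 'equivalence set') of alternative wordings."""
    wordings: List[Union[KeyDataItem, PhraseDataItem]]
    pragmatic_link: str = ""    # pointer into deeper "world knowledge" structures

# e.g. grouping "change" with the paraphrase "become different"
change = KeyDataItem("change", ["verb"])
become_different = PhraseDataItem([KeyDataItem("become"), KeyDataItem("different")])
sense_change = SenseDataItem([change, become_different], pragmatic_link="state-transition")
```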
Fig. 1: Schematic of Data Items, with Principal Contents
Fig. 3: "Phrase Involvement" Rings (where numbers are shown on connecting links, they indicate the position of the word in the phrase which is linked to)
Fig. 4: Lexical Tree
the words, and this literary behavior merely exaggerates what people do habitually in common speech.
[1] Feigenbaum, Edward A. (1963), "Simulation of Verbal Learning Behavior", in Computers and Thought, eds. E. A. Feigenbaum and J. Feldman, McGraw-Hill.
[2] Goldman, Neil (1975), "Sentence Paraphrasing from a Conceptual Base", Communications of the ACM, February 1975, Vol. 18 No. 2.
[3] Harris, Larry R. (1972), "A Model for Adaptive Problem Solving Applied to Natural Language Acquisition", Cornell University, Ithaca, N.Y. PB-211 378.
[4] Kelly, Edward, and Stone, Philip (1975), "Computer Recognition of English Word Senses", Chapter IV, North-Holland Publishing Co., Amsterdam.
[5] Lamb, Sydney M., and Jacobsen, William H., Jr. (1966), "A High-speed Large-Capacity Dictionary System", in Readings in Automatic Language Processing, ed. David G. Hays, American Elsevier Publishing Company, New York.
[6] Quillian, M. Ross (1968), "Semantic Memory", in Semantic Information Processing, ed. Marvin Minsky, The MIT Press, Cambridge, Massachusetts.
[7] Schank, R., Goldman, N., Rieger, C., and Riesbeck, C. (1973), "Margie: Memory, Analysis, Response Generation, and Inference on English", Proceedings, Third International Joint Conference on Artificial Intelligence, Stanford Research Institute, Stanford, California.
[8] Schank, Roger C. (1973), "Identification of Conceptualizations Underlying Natural Language", in Computer Models of Thought and Language, eds. R. Schank and K. Colby, W. H. Freeman & Co., San Francisco.
[9] Schank, Roger C. (1973), "The Conceptual Analysis of Natural Language", in Natural Language Processing, ed. Randall Rustin, Algorithmics Press, Inc., New York.
[10] Schmidt, Charles T. (1970), "A Dictionary Structure for Use with an English Language Preprocessor to a Computerized Information Retrieval System", Naval Postgraduate School, Monterey, California. AD 710 363.
[11] Simmons, R. F., and Slocum, J. (1972), "Generating English Discourse from Semantic Networks", Communications of the ACM, October 1972, Vol. 15 No. 10.
[12] Simmons, R. F. (1973), "Semantic Networks: Their Computation and Use for Understanding English Sentences", in Computer Models of Thought and Language, eds. R. Schank and K. Colby, W. H. Freeman & Co., San Francisco.
||
229,365,646 | [] | LIMSI @ WMT 2020
Sadaf Abdul Rauf, José Carlos Rosales Núñez, Minh Quang Pham, François Yvon
Univ. Paris-Saclay & CNRS, LIMSI; Inria; Systran, Paris

LIMSI @ WMT 2020
This paper describes LIMSI's submissions to the translation shared tasks at WMT'20. This year we have focused our efforts on the biomedical translation task, developing a resource-heavy system for the translation of medical abstracts from English into French, using back-translated texts, terminological resources as well as multiple pre-processing pipelines, including pre-trained representations. Systems were also prepared for the robustness task for translating from English into German; for this large-scale task we developed multi-domain, noise-robust translation systems aimed at handling the two test conditions: zero-shot and few-shot domain adaptation.
Introduction
This paper describes LIMSI's submissions to the translation shared tasks at WMT'20. This year we have focused our efforts on the biomedical translation task, developing a resource-heavy system for the translation of medical abstracts from English into French, using back-translated texts, terminological resources as well as multiple pre-processing pipelines, including pre-trained representations. Systems were also prepared for the robustness task for translating from English into German; for this large-scale task we developed multi-domain, noise-robust translation systems aimed at handling the two test conditions: zero-shot and few-shot domain adaptation.
Machine translation for the biomedical domain is gaining interest owing to the unequivocal significance of medical scientific texts. The vast majority of these texts are published in English, and biomedical MT aims to also make them available in multiple languages. This is a rather challenging task, due to the scope of this domain and the corresponding large and open vocabulary, including terms and non-lexical forms (for dates, biomedical entities, measures, etc.). The quality of the resulting MT output thus varies depending on the amount of biomedical (in-domain) resources available for each target language.
We participated in this year's WMT'20 biomedical translation evaluation for the English-to-French direction. English-French is a reasonably resourced language pair with respect to biomedical parallel corpora, allowing us to train our Neural Machine Translation (NMT) systems (Sutskever et al., 2014) with only in-domain corpora and to dispense with the processing of the large out-of-domain data that exist for this language pair. Our main focus for this year's participation was to develop strong baselines by making the best of auxiliary resources: back-translation of monolingual data; partial pre-translation of terms; pre-trained multilingual contextual embeddings; and IR-retrieved in-domain corpora. Two pre-processing pipelines, one using the standard Moses tools 1 and subword-nmt (Sennrich et al., 2016b) and the other using the HuggingFace BERT API, were developed and compared. All systems are based on the transformer architecture (Vaswani et al., 2017) or on the related BERT-fused transformer model of Zhu et al. (2020). While our baselines were indeed strong, we only managed to obtain relatively small gains from our auxiliary resources, for reasons that by and large remain to be analyzed in depth. Our biomedical systems are presented in Section 2.
We also participated in the Robustness translation task, developing a multi-domain, noise-robust translation system amenable to fast adaptation for the English-German translation direction. Our main focus was to study in more depth the adapter architecture initially introduced in (Bapna and Firat, 2019) in a large-scale setting, where multiple heterogeneous corpora of unbalanced size are available for training, and to explore ways to make the system robust to spelling noise in the test data. The zero-shot system is a generic system which does not use any adaptation layer; for our few-shot adaptation submission, we did not use the supplementary data provided by the organizers, which turned out to be only mildly relevant for the test condition, but resorted to a data selection strategy. In any case, our submissions are constrained and only use the parallel WMT data for this language pair; they are further described in Section 3.
2 Bio-medical translation from English into French
Data sources
We trained our baseline systems on a collection of biomedical corpora, excluding on principle any out-of-domain parallel corpus, so as to keep the size of our systems moderate and the training time reduced. We gathered the parallel and monolingual corpora available for English-French in the biomedical domain. These first included the biomedical texts provided by the WMT'20 organizers: Edp, Medline abstracts and titles (Jimeno Yepes et al., 2017), and Scielo (Neves et al., 2016) (Tiedemann, 2012). The IR-retrieved sentences were selected using the data selection scheme described in (Abdul-Rauf and Schwenk, 2009): Medline titles were used as queries to find related sentences, and the 3-best sentences returned by the IR pipeline were used as an additional corpus to build the models (shown as X7 in Table 2). For development purposes, we used the Khresmoi, Edp and Scielo test corpora. The Medline test sets of WMT'18 and '19 5 were used as internal test data.
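For illustration, this selection step can be sketched as follows; the sketch uses a plain TF-IDF retriever as a stand-in for the full IR pipeline of (Abdul-Rauf and Schwenk, 2009), and `medline_titles` and `pool` are hypothetical inputs (the queries and the candidate sentence pool).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_kbest(medline_titles, pool, k=3):
    """Return the pool sentences retrieved among the k-best for any title."""
    vec = TfidfVectorizer(lowercase=True)
    pool_matrix = vec.fit_transform(pool)        # index the candidate sentences
    selected = set()
    for title in medline_titles:                 # each title acts as a query
        scores = cosine_similarity(vec.transform([title]), pool_matrix)[0]
        for idx in scores.argsort()[::-1][:k]:   # keep the k-best sentences
            selected.add(int(idx))
    return [pool[i] for i in sorted(selected)]
```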
Monolingual sources
Supplementary French data from two monolingual sources were collected from public archives: abstracts of medical papers published by Elsevier from the Lissa portal 6 and a collection of research articles collected from various sources, 7 henceforth referred to as Med Fr (Maniez, 2009). The former corpus contains 41K abstracts and totals approximately 7.7M running words; the latter contains 65K sentences, for a little more than 1.5M running words.
These texts were back-translated (Sennrich et al., 2016a; Burlot and Yvon, 2018) into English using a relatively basic neural French-English engine trained with the official WMT data sources for the biomedical task, using the HuggingFace pipeline (see details below). This system had a BLEU score of 31.2 on the Medline 18 test set.
Note that back-translation has also been effectively used to cater for parallel corpus shortage in the biomedical domain (Stojanovski et al., 2019; Peng et al., 2019; Soares and Krallinger, 2019).

Figure 1: An example sentence containing pre-translated terms in French. Original: "Symptoms of bacterial pneumonia frequently overlap those present with viral infections or reactive airway disease." Pre-translated: "Symptoms of pneumonie bactérienne frequently overlap those present with infections virales or reactive airway maladie."
Pre and post-processing
The document-level corpora were first extracted from XML, split into sentences 8 and sentence-aligned using the Microsoft bilingual aligner (Moore, 2002); these include Cochrane, Scielo and some unaligned documents from Edp. All train, development and test corpora were cleaned by removing instances of empty lines, URLs and lines containing more than 60% non-alphabetic forms. For tokenization into words and subword units, two pipelines were considered. The first one is set up as follows: (a) tokenize the French and English texts using Moses scripts; 9 (b) compute a joint Byte-Pair Encoding (BPE) inventory of 32K units with subword-nmt; 10 (c) generate the translation; (d) detokenize and truecase the output, again with Moses scripts. Systems based on this pipeline are prefixed M*. The second one is slightly more complex, as it heavily relies on the HuggingFace API 11 for accessing pre-trained BERT models. The corresponding systems are prefixed with H* and comprise the following steps: (a) a simple tokenization script; (b) a multilingual segmenter mapping BPE units to pre-trained encodings generated according to (Devlin et al., 2019), fed as input to the translation system (step (c)). In that case, the MT output is also a sequence of multilingual BPE units that further needs (d) to be re-accentuated and recased, before a final (e) detokenization.
Step (d) is non-trivial and is performed by a monolingual translation system trained to convert HuggingFace BPE units into Moses BPE units, 12 which can then be properly reassembled and detokenized as for the Moses pipeline.
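As an illustration, steps (a), (b) and (d) of the Moses-style pipeline can be sketched as follows, substituting the sacremoses Python port for the Moses perl scripts; all file names are hypothetical.

```python
from sacremoses import MosesTokenizer, MosesDetokenizer
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

tok_en, tok_fr = MosesTokenizer(lang="en"), MosesTokenizer(lang="fr")

# (a) tokenize both sides into a joint file for BPE learning
with open("train.tok.joint", "w") as out:
    for line in open("train.en"):
        out.write(tok_en.tokenize(line, return_str=True) + "\n")
    for line in open("train.fr"):
        out.write(tok_fr.tokenize(line, return_str=True) + "\n")

# (b) learn a joint 32K BPE inventory, then load it for encoding
with open("train.tok.joint") as inp, open("bpe.codes", "w") as out:
    learn_bpe(inp, out, num_symbols=32000)
bpe = BPE(open("bpe.codes"))
encode = lambda s: bpe.process_line(s)

# (c) translation itself is handled by the NMT system (Section 2.3)

# (d) undo BPE, then detokenize the MT output
detok = MosesDetokenizer(lang="fr")
restore = lambda s: detok.detokenize(s.replace("@@ ", "").split())
```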
Fine-tuning
The fine-tuning process starts from the corresponding models trained to convergence (based on the BLEU score on the dev sets). These are then further fine-tuned using a selected part of the training corpus, containing only the Medline abstracts and the three Cochrane corpora, again until convergence. The corresponding systems are post-fixed with *-ft.
Pre-translating terms
Medical terms, made of monolexical or polylexical units, abound in medical texts, and getting their translation right is a very difficult task. Approaches to biomedical MT have tried to deal with this in various ways, including explicitly using terminology lists (Carrino et al., 2019), domain adaptation (Hira et al., 2019; Stojanovski et al., 2019) and transfer learning (Khan et al., 2018; Peng et al., 2019; Saunders et al., 2019). We developed systems aimed at improving the translation of terms, mainly following the recent proposals of (Dinu et al., 2019; Song et al., 2019). These mostly imply pre-translating English terms into French, merely replacing the English version with a desired translation in a preprocessing step. The translation system thus inputs mixed-language sentences comprising both English and French words. In our implementation, we followed (Song et al., 2019) and did not mark the pre-translated segments in the input. The target side (French) remained unchanged. Figure 1 displays a sentence extracted from Medline 18 before and after pre-translation (in the latter, the pre-translated French segments replace the original English terms).
Terms are extracted from the French-English version of the Medical Subject Headings thesaurus (MeSH), available in XML format. 13 We extracted a list of about 30K English terms and their preferred translation. This list was extended by searching our training corpus for instances where (a) a term is found in the English sentence; (b) a possible translation is found in the French sentence.
Step (b) relies on a much larger list of about 800K possible associations, also extracted from the MeSH. The final term list contains about 40K entries. Training was performed in two steps: starting with our best system (M3), we resumed training with partially pre-translated sentences, using only the following corpora: Cochrane, Medline, Taus and a large portion of Scielo (for a grand total of 2M sentence pairs). This process is performed until convergence. The same fine-tuning process as described above is optionally performed.
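A minimal sketch of this corpus-search extension is given below; `mesh_candidates` (a mapping from English term to its set of possible French translations) is a hypothetical stand-in for the 800K-association list described above.

```python
from collections import Counter

def mine_term_pairs(bitext, mesh_candidates):
    """Count (term, translation) pairs attested in the parallel corpus."""
    counts = Counter()
    for en, fr in bitext:                        # (English, French) sentence pairs
        en_l, fr_l = en.lower(), fr.lower()
        for term, candidates in mesh_candidates.items():
            if term in en_l:                     # (a) term found in the English sentence
                for cand in candidates:
                    if cand in fr_l:             # (b) translation found in the French sentence
                        counts[(term, cand)] += 1
    return counts    # frequencies later used to pick a preferred translation
```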
At test time, we replace any matching English term with its translation, subject to length constraints meant to avoid irrelevant, ambiguous or accidental matches: we only substitute terms of (source+target) length greater than or equal to 7 characters, yielding the pre-translation of 462 and 795 terms respectively in the Medline 18 and Medline 19 test sets. Cases where one term has several translations are disambiguated based on frequency of occurrence in training. These systems appear in the last two rows of Table 2 with the postfix *-pt.
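A minimal sketch of the test-time substitution follows, with the frequency-based disambiguation folded into a hypothetical `term_table` that already maps each English term to its preferred French translation.

```python
import re

def pretranslate(sentence, term_table, min_len=7):
    """Replace matching English terms by their French translation."""
    # try longer terms first so polylexical units win over their sub-parts
    for term in sorted(term_table, key=len, reverse=True):
        translation = term_table[term]
        if len(term) + len(translation) < min_len:
            continue  # skip short, accident-prone matches
        # whole-word, case-insensitive match of the English term
        pattern = r"\b" + re.escape(term) + r"\b"
        sentence = re.sub(pattern, translation, sentence, flags=re.IGNORECASE)
    return sentence

term_table = {"bacterial pneumonia": "pneumonie bactérienne",
              "viral infections": "infections virales"}
print(pretranslate("Symptoms of bacterial pneumonia frequently overlap "
                   "those present with viral infections.", term_table))
```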
Translation framework
We mostly used two architectures to build our systems: basic Transformer models (Vaswani et al., 2017) as well as BERT-fused transformer models (Zhu et al., 2020). All systems use Facebook's seq-2-seq library fairseq (Ott et al., 2019), with parameter settings borrowed from transformer_iwslt_de_en. 14 We used the memory-efficient FP16 optimizer. The ReLU activation function was used in all 6 encoder and 6 decoder layers, with a hidden layer size of 1024 and a batch size of 4K. Training was optimized using Adam, and a learning rate of 0.0005 was fixed for all experiments.
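For reference, a fairseq-train invocation consistent with these settings might look as follows; exact flags can differ across fairseq versions, the data directory is hypothetical, and we assume the 4K batch size is counted in tokens.

```python
import subprocess

# Sketch of a training call matching the stated configuration:
# transformer_iwslt_de_en preset (6+6 layers, ReLU), Adam with lr 0.0005,
# 4K-token batches, memory-efficient FP16.
subprocess.run([
    "fairseq-train", "data-bin",          # "data-bin": preprocessed data dir
    "--arch", "transformer_iwslt_de_en",
    "--activation-fn", "relu",
    "--optimizer", "adam",
    "--lr", "0.0005",
    "--max-tokens", "4096",
    "--memory-efficient-fp16",
], check=True)
```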
For the BERT-based models, we relied on the implementation released by Zhu et al. (2020). 15 This allowed us to build the BERT-fused models using the same architecture and parameters as the baseline transformer models, and to establish fair comparisons. In the BERT-fused NMT model, contextual representations are first computed by the BERT model for each token (in the source and target); these are then combined at each encoder and decoder layer using the attention mechanism. Full details are in Zhu et al. (2020).
Given the size of our training data, the "lazy" dataset implementation was used to keep data loading in RAM manageable. Systems were trained until convergence based on the BLEU score on the development sets. Evaluation is performed using sacrebleu (Post, 2018). Checkpoints are chosen based on the best score on the development set (Khres+Edp+Scielo), and the corresponding scores for that checkpoint are reported on the Medline 18 and Medline 19 test sets. For systems using terminology pre-translation, Khresmoi and Edp were used as development sets.
Results
Results are in Table 2, where we report BLEU scores for the three tracks explored in this work. M* denotes the Moses tokenization pipeline, H* represents the HuggingFace pipeline, and B* denotes the BERT models with HuggingFace tokenization. We computed the scores on the Medline 18, Medline 19 and Medline 20 test sets, 16 based on the best checkpoint on our development corpus. Base systems are given on the left; (⇒) identifies the derived (fine-tuned) systems. We first built baseline systems for the three tracks. X0 denotes the systems built using only the data provided by the organizers. X1 are our baseline systems built using all our parallel corpora. We see a unanimous improvement in all tracks, ranging from 0.6 to 5.3 BLEU points, obtained by adding around 1M sentences of additional Cochrane and Taus corpora to the already available 2.9M sentences from WMT20. This hints at the relevance of the additional in-domain parallel corpora used.
These baselines X1 are then further fine-tuned with Cochrane and Medline abstracts as discussed in Section 2.2.1; the resulting systems are post-fixed with *-ft. All the systems show an improvement in the Moses track. Similarly, we see gains for all tracks on Medline 18, with the highest improvement on the BERT-fused systems. For Medline 19 and 20, fine-tuning resulted in a small drop in performance across the board (except for the Moses track), for reasons that remain to be analyzed.
Comparing M1-M2 with H1-H2, we see that the Moses pre-processing, which is simpler than HuggingFace's and relies on domain-adapted BPE units, is slightly better than the alternative. As using HuggingFace's tools was a way to also experiment with BERT and other extensions, it was nonetheless used for the other systems.
Having established the adequacy of the supplementary parallel corpora, we built systems with back-translated monolingual corpora (Section 2.1.1). These appear as X3 and X4 in Table 2. These back-translations were somewhat helpful, though not to the extent we expected. Comparing with our baseline X1 systems, we see a small gain of (0.2, 0.6, 0.8) for our transformer models using HuggingFace tokenization (H1 vs. H3) but no gain for the BERT track (B1 vs. B3). We can speculate about various reasons for this behaviour: (a) genre mismatch with the test set: even though the monolingual corpora also contain scientific texts in the biomedical domain, the use of full documents might yield subtle differences in style and terms used compared with what is observed in abstracts, which are more rigidly structured; (b) the use of a comparatively small amount of back-translations compared to the baseline corpora; (c) the quality of the back-translations.
Our experiments with pre-translated terms resulted in a small drop in the BLEU scores for the corresponding systems (X5, X6). Our initial analysis of term use 17 in the references and in the system outputs helps understand why this is the case. As it turns out, reference translations contain a smaller proportion of licensed terms than our baseline translations (55.6% for the reference, 61.1% and 61.6% for X3 and X4 respectively), which in turn contain fewer terms than our term-sensitive systems (H5 and H6, for which these numbers are respectively 68.9% and 64.2%). Another way to look at this is to realize that only 58.6% of our pre-translations were actually in the reference. All in all, using more translations from the MeSH makes our output less similar to the reference than the baselines, and contributes to degrading the BLEU score. It is however reassuring to see that pre-translating terms actually increases the number of terms in the output -- in fact, for H5 and H6 we find that respectively 84.2% and 81.9% of these pre-translations are actually copied in the target, even though there was no indication of these French inserts in the mixed-language input. We can also note that the majority of the pre-translated terms were frequent biomedical terms (such as "patients", "health", etc.) that were also correctly translated by the baseline systems. Evaluating these outputs with more useful metrics than BLEU remains to be done.
Adding the IR-retrieved sentences finally brought us nearly one extra BLEU point on all test sets for the HuggingFace systems, but not much improvement for the BERT-fused system.
Conclusion
In conclusion, our participation in this year's WMT biomedical task has enabled us to develop basic tools and pipelines for a variety of architectures, and to start exploring domain-adapted extensions of a baseline Transformer architecture using complementary resources, such as supplementary corpora, pre-trained embeddings and terminological resources. While not all these extensions were equally useful, we were still able to develop strong systems for this task, providing us with a solid starting point for further development of domain-adapted NMT systems.
3 Robustness: translating English challenge test sets into German
Data sources
Our sole data sources are the parallel corpora distributed by the organizers for the News task, which we significantly down-sampled in order to reduce the overall computational training cost. Monolingual data sources were not considered. These parallel corpora were then grouped into 8 broad domains. Statistics for each corpus / domain are in Table 3. Our development set is composed of a varied set of common benchmarks, aimed at representing a wide diversity of genres and domains.
Pre-processing
The first step of pre-processing consists of cleaning the parallel corpora using the following rules: (a) discard sentences based on length (with a maximum length of 99 words) and on the source/target length ratio (keeping pairs with a ratio in the interval [2/3; 3/2]); (b) discard non-English and non-German sentences, using the langid toolkit; 18 (c) remove duplicate sentence pairs. After cleaning, the parallel corpus used in training contains 50,875,449 sentence pairs. The next step is to lowercase and to tokenize the text into words and subword units. We use the Tokenizer library from OpenNMT. 19 We first lowercased every word, adding a special marker at the beginning of capitalized words, and likewise for uppercased words and segments. For instance, this procedure replaces "It" with "U it", and "NOVEMBER RAIN" with "BU november EU BU rain EU". These markers are preserved during the BPE tokenization. We learned a joint BPE vocabulary for both languages using 32K merge operations.
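A minimal sketch of this case-markup step, reproducing the markers from the examples above, could look as follows.

```python
def mark_case(tokens):
    """Lowercase tokens, inserting U / BU ... EU case markers."""
    out = []
    for tok in tokens:
        if tok.isupper() and len(tok) > 1:   # fully uppercased word
            out += ["BU", tok.lower(), "EU"]
        elif tok[:1].isupper():              # capitalized word
            out += ["U", tok.lower()]
        else:
            out.append(tok)
    return out

print(mark_case("It".split()))              # ['U', 'it']
print(mark_case("NOVEMBER RAIN".split()))   # ['BU', 'november', 'EU', 'BU', 'rain', 'EU']
```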
Training a robust multi-domain system
Our approach to robustness aims at building a system that (a) would fare well on test sets similar to the training domains; (b) could also accommodate data from new, unseen domains; (c) would be easy to adapt to a new domain (for the few-shot condition); and (d) would be robust to spelling noise in the test data. Requirements (a)-(c) led us to implement an extension of the baseline Transformer architecture with residual adapters (more on this in Section 3.3.2); to meet requirement (d), we implemented the data augmentation technique described in Section 3.3.3.
Baseline
The baseline system relies on the Transformer Large architecture from (Vaswani et al., 2017). We set the embedding size and the hidden layer size to 1024. Transformers use multi-head attention with 16 heads in each of the 6+6 layers; the inner feed-forward layer contains 4096 cells. Training uses a batch size of 12288 tokens; optimization uses Adam with parameters β1 = 0.9, β2 = 0.98 and Noam decay (warmup steps = 4000), and a dropout rate of 0.1 for all layers.
Residual adapters
Our main source of inspiration is the work of Bapna and Firat (2019), who initially introduced the use of residual adapter modules for domain adaptation. In a nutshell, this proposal adds an additional, domain-specific layer on top of every layer of the encoder and the decoder. It thus provides us with a lightweight, computationally efficient alternative to domain adaptation with full fine-tuning, which requires updating all the system parameters. We generalize this approach by training (or rather fine-tuning) a distinct residual adapter for each of the 8 training domains, while freezing the parameters of the baseline (generic) system. These adapter modules are made of 2-layer perceptrons, with an inner ReLU activation function operating on normalized entries of dimension 2048.
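A minimal sketch of one such adapter module, under the stated dimensions, is given below; the domain names other than "web" and "noise" are placeholders.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """2-layer perceptron with inner ReLU over normalized entries,
    added residually on top of a frozen Transformer layer
    (after Bapna and Firat, 2019)."""
    def __init__(self, d_model=1024, d_inner=2048):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.up = nn.Linear(d_model, d_inner)
        self.down = nn.Linear(d_inner, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        return x + self.down(torch.relu(self.up(self.norm(x))))

# One adapter per training domain (times one per layer in the full model);
# only adapter parameters are fine-tuned, the base model stays frozen.
domains = ["web", "noise", "domain3", "domain4",
           "domain5", "domain6", "domain7", "domain8"]
adapters = nn.ModuleDict({d: ResidualAdapter() for d in domains})
```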
Any test sentence from a known domain then uses the corresponding adapter; for test sentences from new domains, two options are possible: use only the generic system (without adapter), or use the adapter of the most similar domain. This methodology was chosen with the few-shot task in view, where a new adapter could easily be learned for a new domain, even with a very small amount of data.
We evaluate the effectiveness of the residual adapter architecture using a varied set of internal test sets. Table 4 reports the BLEU scores of the baseline, generic model, prior to adaptation, as well as of the adapted systems. As expected, performance is overall better when selecting the appropriate domain for each test set.
We applied this idea to improve the ability of our generic model to handle noisy data. Recall that most of the training data (with the exception of the web domain) comes from "clean" sources. To this end, we generated artificial training data for an additional "noise" domain by automatically altering the source side of randomly selected training data. The noise generation procedure is described below. In this way, we expect the model to take advantage of the residual layer when presented with noisy data similar to our artificial noise domain, while keeping (a) its good performance on the other known domains, and (b) a reasonable behaviour on any other clean data (using the generic baseline model without adapter).
Artificial noise generation
In order to account for possible user-generated content (UGC) at test time, we explored the possibility of learning typical UGC noise at the character level. To this end, we used an automatically scraped Wikipedia correction corpus (Grundkiewicz and Junczys-Dowmunt, 2014), which has been filtered to keep only word replacements with, at most, a character edit distance of 30% of the word length. In the end, we kept a total of roughly 17.8M pairs of errors and editions. We then trained a character-level Transformer with the same architecture as our base translation model, which had a perfect-match error rate of 22% on the test data partition. Finally, we augmented the original training data by sampling random original words according to a uniform probability distribution and replacing them with the prediction of our character-based UGC noise generator, resulting in the same number of sentences as in the original corpora. We set a 7% probability of replacement, estimated from the percentage of out-of-vocabulary words in a real-world UGC corpus. This heuristic later seemed, as discussed in Section 3.4, to overestimate the quantity of noise to be added; in retrospect, we should have used other metrics to estimate the noise level, such as the n-gram Kullback-Leibler divergence, as discussed in (Alonso et al., 2016; Rosales Núñez et al., 2019). Table 5 displays some examples of noisy entries produced by our character-based generator. Although typographical errors prevail, due to the automatic filtering of the Wikipedia editions some learned replacement operations can change the semantics and syntax of the sentence, e.g. (using → use), (for → in) or (may → can), thus introducing unexpected confusion into the training data.
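A minimal sketch of the augmentation step follows, with a toy dictionary standing in for the character-level noise generator.

```python
import random

def add_noise(sentence, noise_model, p=0.07, rng=None):
    """Replace each word, with probability p, by the noise generator's output."""
    rng = rng or random.Random(0)
    noisy = []
    for word in sentence.split():
        if rng.random() < p:
            noisy.append(noise_model(word))  # e.g. "using" -> "use"
        else:
            noisy.append(word)
    return " ".join(noisy)

# toy stand-in for the trained character-level Transformer
toy_noise = lambda w: {"using": "use", "for": "in", "may": "can"}.get(w, w)
print(add_noise("using this for now", toy_noise, p=0.5))
```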
Results
We report the BLEU scores of our various systems in Table 6. Our submission to the zero-shot evaluation was FT-Adapt-Noise, which we later found to be sub-optimal. Interestingly, however, the residual adapter mechanism proved to substantially outperform classical fine-tuning of the whole model (i.e. FT-Full-Noise). Finally, the residual adapter fine-tuned using the ParaCrawl corpus (FT-Adapt-Web) had the best performance on the test set, probably due to the higher similarity of this corpus to the target test. In addition, we noted that the baseline and FT-Adapt-Noise output a considerable number of English phrases, leaving most of the source sentence unchanged, whereas FT-Adapt-Web reduced the number of sentences presenting this issue.
In order to assess how much the 172 sentences that were left completely untranslated impact the performance of the FT-Adapt-Noise model, we replaced them with the output of the baseline and observed a performance increase to 31.3 BLEU. This suggests that our data augmentation technique introduced confusion into the base model after fine-tuning, and that the resulting translation system was less adapted to the zero-shot test set.
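A minimal sketch of this post-hoc fallback, assuming "left untranslated" means the output equals the source:

```python
def fallback(sources, adapted_out, baseline_out):
    """Replace untranslated adapted outputs by the baseline output."""
    return [b if a.strip() == s.strip() else a
            for s, a, b in zip(sources, adapted_out, baseline_out)]
```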
The design and organization of the few-shot part of the evaluation was not fully satisfactory: while we did train an adapter module using the new data seemingly corresponding to a novel domain, it seems that the corresponding test set was never released, and we could not fully evaluate our approach. Working on this task was nonetheless very instructive, and helped us better understand the strengths and pitfalls of the residual adapter architecture when applied to a very large-scale task and in the face of unbalanced, heterogeneous training data.
Conclusions
In this paper, we have described the development undertaken for this year's participation in the WMT shared tasks. Taking part in the biomedical track has allowed us to collect and prepare useful resources (monolingual and bilingual corpora, term lists) for this domain, and to explore several pipelines and translation architectures. The general results are overall satisfactory, even though a deeper analysis of the MT output is still needed to strengthen our conclusions. They will also help us prepare for next year's tasks, where we expect to work on more language pairs. Our experiments for the Robustness track were less successful: we were not really prepared for the general tone and style observed in the zero-shot test set; we also did not understand the general orientation taken for the few-shot adaptation, as it seemed to us that the adaptation data was not really relevant for the only test set that was ever released.
Table 1 details the corpora used in training.

Table 1: Data sources for the English-French biomedical task (before tokenization); word counts in millions unless marked K.

Parallel
| Corpus             | Words (EN) | Words (FR) | Sents. |
| Ufal               | 89.5       | 100.3      | 2.72 M |
| Edp                | 0.04       | 0.04       | 2.44 K |
| Medline titles     | 5.97       | 6.43       | 0.63 M |
| Medline abstracts  | 1.23       | 1.44       | 0.06 M |
| Scielo             | 0.17       | 0.21       | 7.84 K |
| Cochrane-Reference | 2.23       | 2.74       | 0.12 M |
| Cochrane-PE        | 0.43       | 0.53       | 20.5 K |
| Cochrane-GooglePE  | 0.63       | 0.77       | 30.3 K |
| Taus               | 20.1       | 23.2       | 8.86 M |
| IR Retrieved       | 13.2       | 14.7       | 3.6 M  |

Development
| Scielo   | 0.09 | 0.13 | 4333 |
| Edp      | 6.2K | 7.1K | 328  |
| Khresmoi | 28K  | 33K  | 1500 |

Test
| Medline 18 | 5.7K  | 6.9K  | 265 |
| Medline 19 | 9.8K  | 12.4K | 537 |
| Medline 20 | 12.7K | 16.2K | 699 |

Monolingual
| Corpus | English (Synthetic) | French (Human) | Sents. |
| Lissa  | 8.79                | 7.70           | 0.33 M |
| Med Fr | 16.3                | 16.2           | 0.06 M |
Table 2: BLEU scores for the various biomedical systems on the Medline 18, 19 and 20 test sets. Superscripts *n denote the runs submitted: H4, M2, B8.
Table 3: Data used in the Robustness task: number of parallel lines (×10³), number of tokens (×10⁶).
Table 4: BLEU scores on various test sets using our baseline and adapted NMT systems for each domain. NT stands for NewsTest.

Table 5: Examples of clean and artificially noisy word inputs

| original | the  | combination | may | concerning | using | no  | common | developing  | for | status | also |
| noisy    | this | combonation | can | concering  | use   | not | comon  | developping | in  | staus  | aslo |
Baseline
31.6
120
FT-Adapt-Noise
30.2
172
FT-Full-Noise
24.6
256
FT-Adapt-Web
34.2
34
FT-Full-Web
33.8
49
1 http://www.statmt.org/moses/
2 https://ufal.mff.cuni.cz/ufal_medical_corpus
3 https://github.com/fyvo/CochraneTranslations/
4 https://md.taus.net/corona
5 With our own sentence alignment.
6 https://www.lissa.fr/dc/#env=lissa
7 https://crtt.univ-lyon2.fr/les-corpus-medicaux-du-crtt-613310.kjsp
8 https://github.com/berkmancenter/mediacloud-sentence-splitter
9 http://www.statmt.org/moses/
10 https://github.com/rsennrich/subword-nmt
11 https://Huggingface.co/transformers/model_doc/bert.html
12 This process is not completely error prone, and yields a BLEU score of 98.2 on the Medline 18 test set.
13 http://mesh.inserm.fr/FrenchMesh/
14 https://fairseq.readthedocs.io/en/latest/models.html
15 https://github.com/bert-nmt/bert-nmt
16 Again with our own sentence alignment.
17 Based on the proportion of source words in our term list that are actually translated with a translation that exists in the MeSH. These proportions are computed on an aggregate of the Medline test sets for 2018, 2019 and 2020, only counting terms with source+target length greater than 7.
18 https://github.com/saffsd/langid.py
19 https://github.com/OpenNMT/Tokenizer
Acknowledgments

This work is (partly) based on computations performed on the Saclay-IA and on the Jean Zay computing platforms. The authors wish to thank Pierre Zweigenbaum for his help finding French corpora in the biomedical domain, and Hicham El Boukkouri for providing guidance setting up BERT-based systems. The second author wishes to acknowledge the help and guidance of Djamé Seddah and Guillaume Wisniewski; his work is funded by the French Research Agency via the ANR project ParSiTi (ANR-16-CE33-0021).
Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the use of comparable corpora to improve SMT performance. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 16-23, Athens, Greece. Association for Computational Linguistics.
Héctor Martínez Alonso, Djamé Seddah, and Benoît Sagot. 2016. From noisy questions to minecraft texts: Annotation challenges in extreme syntax scenario. In Proceedings of the 2nd Workshop on Noisy User-generated Text, NUT@COLING 2016, Osaka, Japan, December 11, 2016, pages 13-23. The COLING 2016 Organizing Committee.
Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 1538-1548, Hong Kong, China. Association for Computational Linguistics.
Franck Burlot and François Yvon. 2018. Using monolingual data in neural machine translation: a systematic study. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 144-155, Brussels, Belgium. Association for Computational Linguistics.
Casimiro Pio Carrino, Bardia Rafieian, Marta R. Costa-jussà, and José A. R. Fonollosa. 2019. Terminology-aware segmentation and domain feature for the WMT19 biomedical translation task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 151-155, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063-3068, Florence, Italy. Association for Computational Linguistics.
Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2014. The WikEd error corpus: A corpus of corrective Wikipedia edits and its application to grammatical error correction. In Advances in Natural Language Processing - Lecture Notes in Computer Science, volume 8686, pages 478-490. Springer.
Noor-e Hira, Sadaf Abdul Rauf, Kiran Kiani, Ammara Zafar, and Raheel Nawaz. 2019. Exploring transfer learning and domain data selection for the biomedical translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 156-163, Florence, Italy. Association for Computational Linguistics.
Julia Ive, Aurélien Max, François Yvon, and Philippe Ravaud. 2016. Diagnosing high-quality statistical machine translation using traces of post-edition operations. In International Conference on Language Resources and Evaluation - Workshop on Translation Evaluation: From Fragmented Tools and Data Sets to an Integrated Ecosystem (MT Eval 2016), page 8, Portorož, Slovenia.
Antonio Jimeno Yepes, Aurélie Névéol, Mariana Neves, Karin Verspoor, Ondřej Bojar, Arthur Boyer, Cristian Grozea, Barry Haddow, Madeleine Kittner, Yvonne Lichtblau, Pavel Pecina, Roland Roller, Rudolf Rosa, Amy Siu, Philippe Thomas, and Saskia Trescher. 2017. Findings of the WMT 2017 biomedical translation shared task. In Proceedings of the Second Conference on Machine Translation, pages 234-247, Copenhagen, Denmark. Association for Computational Linguistics.
Abdul Khan, Subhadarshi Panda, Jia Xu, and Lampros Flokas. 2018. Hunter NMT system for WMT18 biomedical translation task: Transfer learning in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 655-661, Belgium, Brussels. Association for Computational Linguistics.
François Maniez. 2009. L'adjectif dénominal en langue de spécialité: étude du domaine de la médecine. Revue française de linguistique appliquée, 14(2):117-130.
Robert C. Moore. 2002. Fast and accurate sentence alignment of bilingual corpora. In Proc. AMTA'02, Lecture Notes in Computer Science 2499, pages 135-144, Tiburon, CA, USA. Springer Verlag.
Mariana Neves, Antonio Jimeno Yepes, and Aurélie Névéol. 2016. The Scielo Corpus: a parallel corpus of scientific publications for biomedicine. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2942-2948, Portorož, Slovenia. European Language Resources Association (ELRA).
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
Wei Peng, Jianfeng Liu, Liangyou Li, and Qun Liu. 2019. Huawei's NMT systems for the WMT 2019 biomedical translation task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), Florence, Italy. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels. Association for Computational Linguistics.
José Carlos Rosales Núñez, Djamé Seddah, and Guillaume Wisniewski. 2019. Comparison between NMT and PBSMT performance for translating noisy user-generated content. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 2-14, Turku, Finland. Linköping University Electronic Press.
Danielle Saunders, Felix Stahlberg, and Bill Byrne. 2019. UCAM biomedical translation at WMT19: Transfer learning multi-domain ensembles. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 169-174, Florence, Italy. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany.
Felipe Soares and Martin Krallinger. 2019. BSC participation in the WMT translation of biomedical abstracts. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 175-178, Florence, Italy. Association for Computational Linguistics.
Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, and Min Zhang. 2019. Code-switching for enhancing NMT with pre-specified translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 449-459, Minneapolis, Minnesota. Association for Computational Linguistics.
Dario Stojanovski, Viktor Hangya, Matthias Huck, and Alexander Fraser. 2019. The LMU Munich unsupervised machine translation system for WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 393-399, Florence, Italy. Association for Computational Linguistics.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.
Parallel data, tools and interfaces in OPUS. Jörg Tiedemann, Proceedings of the Eight International Conference on Language Resources and Evaluation, LREC'12. the Eight International Conference on Language Resources and Evaluation, LREC'12Istanbul, TurkeyEuropean Language Resources Association (ELRAJörg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation, LREC'12, Istanbul, Turkey. European Lan- guage Resources Association (ELRA).
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. GarnettCurran Associates, Inc30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.
Incorporating BERT into Neural Machine Translation. Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, Tieyan Liu, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsICLRJinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2020. Incorporating BERT into Neural Machine Transla- tion. In Proceedings of the International Conference on Learning Representations, ICLR. |
||
227,230,557 | [] | Neural Networks approaches focused on French Spoken Language Understanding: application to the MEDIA Evaluation Task
Online, December 8-13, 2020
Sahar Ghannay
Christophe Servan
Sophie Rosset
Paris-Saclay, CNRS, LIMSI, 91405 Orsay, France
QWANT, 61 rue de Villier, 92200 Neuilly-sur-Seine, France
Neural Networks approaches focused on French Spoken Language Understanding: application to the MEDIA Evaluation Task
Proceedings of the 28th International Conference on Computational Linguistics
the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020, page 2722
In this paper, we present a study on a French Spoken Language Understanding (SLU) task: the MEDIA task. Many works and studies have been proposed for many tasks, but most of them are focused on the English language and English tasks. Exploring a richer language like French within the framework of an SLU task requires adapting recent approaches to handle this difficulty. Since the MEDIA task seems to be one of the most difficult, according to several previous studies, we propose to explore neural network approaches focusing on three aspects: first, the neural network inputs, and more specifically the word embeddings; second, we compare French versions of BERT against the best setup in different ways; finally, the comparison against state-of-the-art approaches. Results show that word embeddings trained on a small corpus need to be updated during SLU model training. Furthermore, the fine-tuned French BERT approaches outperform the classical neural network architectures and achieve state-of-the-art results. However, the contextual embeddings extracted from one of the French BERT models achieve results comparable to word embeddings when integrated into the proposed neural architecture.
Introduction
The spoken language understanding (SLU) module is a key component of a spoken language dialogue system. It consists of semantically analyzing user queries and identifying text spans that mention semantic concepts. The SLU task falls into three sub-tasks: domain classification, intent classification, and slot-filling (Tur and Mori, 2011). The latter is the task that interests us in this study.
Over the past five years, the studies developed for the SLU task have been based on neural network architectures (Yao et al., 2014; Mesnil et al., 2015; Guo et al., 2014; Zhang and Wang, 2016; Dinarelli et al., 2017; Simonnet et al., 2017; Korpusik et al., 2019; Ghannay et al., 2020). Recent approaches benefit from contextual or language model embeddings such as BERT (Devlin et al., 2019). Korpusik et al. (2019) investigated the transfer ability of a pre-trained BERT representation for English SLU tasks, but, as far as we know, there are no such studies on a French SLU task.
Following the study of Ghannay et al. (2020), many avenues can be explored. In their study, the word embeddings were frozen during training (i.e., not updated), since Lebret et al. (2013) show that fine-tuned word embeddings yield very similar performance and provide comparable results. However, whether updating the embeddings during SLU model training improves the results has been less studied. In addition, their SLU model is fed only with word embeddings, without any additional features, so there is some room for improvement. Finally, Béchet and Raymond (2019) benchmarked several SLU tasks and proposed a difficulty hierarchy in which the MEDIA evaluation (Bonneau-Maynard et al., 2006) seems to be the most difficult SLU task.
Contributions: This study focuses on a French SLU task, the MEDIA evaluation. First, we evaluate whether updating the word embeddings during training can improve the results, according to several scenarios. Second, we propose to use a BiLSTM-CNN architecture (Ma, 2016) that integrates character embeddings as additional features, using a convolution layer. Finally, we evaluate the performance of BERT approaches against the BiLSTM-CNN architecture and the state of the art on the MEDIA task (Simonnet, 2019) in two ways: i) we fine-tune BERT on the SLU task using two French models, CamemBERT (Martin et al., 2020) and FlauBERT (Le et al., 2020); ii) based on the results of i), we integrate the extracted BERT contextual embeddings into the BiLSTM-CNN architecture and compare them to word embeddings.

This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/
SLU Model descriptions
This section describes the SLU models used in this study. The first two models are based on the BiLSTM and its extension, the BiLSTM-CNN. The NeuroNLP2 implementation 1 was used for both BiLSTM implementations. The third model is based on the BERT models.
The BiLSTM (bidirectional long short-term memory) architecture has been proven relevant for modeling output dependencies in SLU tasks (Yao et al., 2014; Mesnil et al., 2015).
To further improve the performance of our SLU model, we propose to use a BiLSTM-CNN (convolutional neural network) architecture (Ma, 2016) that integrates character embeddings using a convolution layer, in addition to the word embeddings.
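As an illustration, the following is a minimal PyTorch sketch of a BiLSTM-CNN slot-filling tagger in the spirit of Ma (2016): a character-level CNN produces per-word character features that are concatenated with word embeddings before the BiLSTM. All dimensions, layer counts, and class names here are illustrative placeholders, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class BiLSTMCNNTagger(nn.Module):
    def __init__(self, word_vocab, char_vocab, n_tags,
                 word_dim=300, char_dim=100, hidden=256, layers=3, window=3):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_vocab, char_dim, padding_idx=0)
        # 1-D convolution over the characters of each word (window size 3).
        self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=window, padding=1)
        self.bilstm = nn.LSTM(word_dim + char_dim, hidden, num_layers=layers,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # one BIO concept tag per token

    def forward(self, words, chars):
        # words: (batch, seq)   chars: (batch, seq, max_word_len)
        b, s, w = chars.shape
        ce = self.char_emb(chars).view(b * s, w, -1).transpose(1, 2)
        # Max-pool the convolution output over character positions.
        cf = torch.relu(self.char_cnn(ce)).max(dim=2).values.view(b, s, -1)
        x = torch.cat([self.word_emb(words), cf], dim=-1)
        h, _ = self.bilstm(x)
        return self.out(h)  # (batch, seq, n_tags) tag scores
```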
Finally, we propose to fine-tune BERT (Devlin et al., 2019) on the SLU task using two French models: CamemBERT (Martin et al., 2020) and FlauBERT (Le et al., 2020). The CamemBERT model is trained on the French part of the OSCAR corpus (Suárez et al., 2019), composed of 138GB of raw text, and FlauBERT (Le et al., 2020) on 71GB of heterogeneous French corpora.
Experiments
In this section, we present the experiments we performed using our approaches and their setup 2 . For both the BiLSTM and the BiLSTM-CNN, we performed hyper-parameter tuning by varying the number of layers l ∈ {1, 2, 3}, the size of the BiLSTM hidden layers n ∈ {128, 256, 512}, and the batch size b ∈ {16, 32, 64}. For the BiLSTM-CNN, in addition to the other parameters, the window size is set to 3 and the number of filters (the dimension of the character embeddings) is set to s ∈ {30, 50, 100}.
Data
Experiments are conducted on the French MEDIA 3 corpus, composed of 1258 transcribed dialogues, which is about hotel reservation and information (Bonneau-Maynard et al., 2006). The corpus was manually annotated, following a BIO model, with semantic concepts characterized by a label and its value. The corpus is split into three parts: a training corpus composed of 13k sentences, a development corpus composed of 1.3k sentences, and a test corpus composed of 3.5k sentences.
Word embeddings training
One of the aims of our experiments is to see whether updating the word embeddings during SLU model training (update) improves the results, mainly by varying the data used to train the word embeddings and the hyper-parameters of the SLU model.
Following the results of Ghannay et al. (2020), we propose to use the CBOW word embedding approach from word2vec (Mikolov et al., 2013), trained with the default parameters on three different corpus setups. The first is a small, task-dependent corpus: the training set of the MEDIA corpus, keeping all words due to the small data size. A huge, out-of-domain corpus is used as the second setup: the French Wikipedia dump (WIKI), composed of 573 million words with a vocabulary size of 923k words. The third setup uses both corpora (noted WIKI+MEDIA).
The common parameters used to train our word embeddings are: window size=5, negative sampling=5, dimension=300. They have been selected based on previous studies (Pennington et al., 2014;Bojanowski et al., 2017). Note that the out of vocabulary (OOV) words are represented by null vectors.
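A minimal sketch of this CBOW training setup using gensim is shown below; the corpus path and the min_count choice are illustrative assumptions (the paper keeps all MEDIA words, hence min_count=1 here), not the authors' exact training script.

```python
# Requires gensim >= 4.0 (earlier versions use `size` instead of `vector_size`).
from gensim.models import Word2Vec

# Each training corpus is assumed to be pre-tokenized, one sentence per line.
def read_sentences(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.split()

sentences = list(read_sentences("media_train.tok"))  # hypothetical file name
model = Word2Vec(
    sentences,
    sg=0,             # CBOW (sg=1 would be skip-gram)
    vector_size=300,  # dimension = 300
    window=5,         # window size = 5
    negative=5,       # negative sampling = 5
    min_count=1,      # keep all words for the small MEDIA corpus
)
model.wv.save_word2vec_format("cbow_media.vec")
```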
Results
Embeddings update
These experiments aim to observe the impact of updating the CBOW word embeddings (noted update) or freezing them (noted freeze) while training the SLU module. We proposed different training setups for the word embeddings (MEDIA, WIKI and WIKI+MEDIA), presented in Section 3.2. Also, the number of BiLSTM layers is varied from 1 to 3.
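The two conditions amount to a single flag in most frameworks. The sketch below shows both in PyTorch; the pretrained matrix and vocabulary size are placeholders.

```python
import torch
import torch.nn as nn

pretrained = torch.randn(20000, 300)  # stand-in for the trained CBOW matrix

emb_update = nn.Embedding.from_pretrained(pretrained, freeze=False)  # "update"
emb_freeze = nn.Embedding.from_pretrained(pretrained, freeze=True)   # "freeze"

# Equivalently, freezing can be toggled on an existing layer:
emb_update.weight.requires_grad = False  # this layer is now frozen as well
```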
Results summarized in Table 1 show that when the word embeddings are trained on MEDIA data, updating the word embeddings is helpful and improves the results in terms of F1 score, whatever the size of the architecture. However, when the embeddings are trained on WIKI or WIKI+MEDIA data, updating the embeddings during training is not helpful and degrades the results. Thus, the best results are obtained using the BiLSTM architecture composed of 3 hidden layers, using one of the CBOW embeddings trained on the WIKI or WIKI+MEDIA corpora, which obtain comparable results in terms of F1 score (86.40 vs. 86.69).
Character embeddings evaluation
In this section, we propose to use a BiLSTM-CNN architecture (Ma, 2016) that integrates character embeddings as additional features, using a convolution layer. We experiment with different character embedding dimensions and different numbers of BiLSTM layers. Based on the results in Section 3.3.1, for these experiments we used the embeddings trained on the WIKI and WIKI+MEDIA corpora, which are frozen during the BiLSTM-CNN training. Results summarized in Table 2 show that using character embeddings as additional features is helpful and improves the performance in comparison to the results in Table 1. We observe that the embeddings trained on WIKI and on WIKI+MEDIA achieve comparable results. This shows that we do not need to use both a task-dependent corpus and an out-of-domain corpus to train the word embeddings. Note that we observed the same behavior when the embeddings trained on WIKI data are fine-tuned on MEDIA data. The best result (F1=87.40) is achieved using the embeddings trained on WIKI data with the appropriate parameters: 3 BiLSTM layers and a character embedding dimension of 100. Note that beyond these values the performance of the system drops slightly.
Comparison to French BERT
In this section, we propose to evaluate the performance of BERT approaches on the MEDIA task in two different ways. The experimental results are summarized in Table 3 using the F1 score and the Concept Error Rate (CER), which is estimated like the Word Error Rate (WER). The CER is used to compare our approach with the state of the art proposed by Simonnet (2019), noted biRNN-EDA. We also report the results of the best system presented in Table 2.
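For concreteness, a CER estimated "like the WER" can be computed as the Levenshtein distance between reference and hypothesis concept sequences, normalized by the reference length. The sketch below is a generic illustration of this idea, not the official MEDIA scoring tool; the example concept labels are made up.

```python
def cer(reference, hypothesis):
    """Edit distance (substitutions, insertions, deletions) over concept
    sequences, divided by the reference length, analogous to WER."""
    r, h = list(reference), list(hypothesis)
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

# e.g. cer(["commande", "nb-chambre"], ["commande"]) -> 0.5 (one deletion)
```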
i) We propose to fine-tune BERT on the SLU task using the two French base models, CamemBERT (Martin et al., 2020) and FlauBERT (Le et al., 2020). Results in Table 3 show the performance of the BERT models on the SLU task. The best results are achieved using the CamemBERT base model trained on CCNet data. It yields a 29.35% relative improvement in terms of CER reduction in comparison to the baseline (7.56 vs. 10.7). In addition, it outperforms the proposed BiLSTM-CNN system and improves the prediction of some tags: "nom", "chambre-fumeur", "objet", etc. We observe that the CamemBERT base model (trained on OSCAR data) and FlauBERT obtain competitive results in terms of F1 and CER scores. Note that the CamemBERT and FlauBERT base models achieve better results than the large models.

Table 3: Performance on Test MEDIA in terms of F1 and CER scores of the proposed systems († is the best system presented in Table 2).
ii) Last, we propose to integrate the extracted BERT contextual embeddings into the BiLSTM and BiLSTM-CNN architectures, instead of CBOW word embeddings. Based on the results of i), we used the embeddings extracted from the CamemBERT base model trained on CCNet data. After tokenizing the MEDIA corpus, the CamemBERT model was applied to the resulting data to extract 768-dimensional embeddings for each sub-word from the last transformer layer. A token's embedding corresponds to the sum of its sub-word embeddings. New CBOW embeddings were trained on WIKI data with dimension 768 to allow a fair comparison. Results (last 2 lines) show that the use of CamemBERT contextual embeddings achieves competitive results in comparison to CBOW embeddings whatever the architecture used (BiLSTM or BiLSTM-CNN). Those results corroborate the results reported by Ghannay et al. (2020), in which CBOW and ELMo (Peters et al., 2018) embeddings obtained comparable results in terms of F1 score (86.06 vs. 86.42). Last, the results with the BiLSTM and BiLSTM-CNN architectures reveal the importance of character embeddings, even when they are combined with contextual embeddings.
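A minimal sketch of this extraction step with HuggingFace Transformers is given below: sub-word vectors from the last layer are summed per token via the fast tokenizer's word_ids() mapping. The "camembert-base" checkpoint is used purely for illustration; the paper's best model is the CamemBERT base model trained on CCNet data.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("camembert-base").eval()

def token_embeddings(words):
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (n_subwords, 768)
    vecs = [torch.zeros(hidden.size(-1)) for _ in words]
    # word_ids() maps each sub-word position back to its source token.
    for pos, wid in enumerate(enc.word_ids()):
        if wid is not None:  # skip <s>, </s> special tokens
            vecs[wid] += hidden[pos]
    return torch.stack(vecs)  # one 768-dim vector per input token

emb = token_embeddings(["je", "voudrais", "une", "chambre", "fumeur"])
```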
Conclusions and future work
This paper presented a study focused on a French Spoken Language Understanding (SLU) task using the MEDIA corpus. First, we evaluated whether updating the word embeddings during training improves the results, according to several scenarios. Second, we proposed to use a BiLSTM-CNN architecture that integrates character embeddings as additional features. Last, we evaluated the performance of BERT approaches on the MEDIA task in different ways.
Experimental results show that the word embeddings that need to be updated during SLU model training are the ones trained on a small corpus like MEDIA. However, it is better for word embeddings trained on a huge, out-of-domain corpus to be frozen, since those word embeddings have already captured enough general semantic and syntactic characteristics relevant to the SLU task. Moreover, the word embeddings trained on WIKI and WIKI+MEDIA achieve comparable results, which shows that we do not need to use both a task-dependent corpus and an out-of-domain corpus to train the word embeddings. In addition, we observed the usefulness of character embeddings when added as additional features. Regarding the evaluation of the BERT approaches, the fine-tuning of the CamemBERT and FlauBERT base models shows that the best results are achieved using the CamemBERT base model trained on CCNet data. It yields a 29.35% relative improvement in terms of CER reduction in comparison to the baseline (7.56 vs. 10.7). Finally, integrating the extracted CamemBERT contextual embeddings into the BiLSTM and BiLSTM-CNN architectures reveals that contextual embeddings achieve results competitive with CBOW word embeddings regardless of the architecture, and confirms the importance of character embeddings.
For future work, we propose to evaluate the performance of BERT contextual embeddings extracted from different encoder layers, and to conduct an in-depth error analysis of the different systems.
Table 1: Performance on Test MEDIA in terms of F1 score of the CBOW word embeddings approach trained on three corpora (MEDIA, WIKI and WIKI+MEDIA), using the BiLSTM architecture.

Train Emb.  | Update (#nb. BiLSTM layers) | Freeze (#nb. BiLSTM layers)
            |   1      2      3           |   1      2      3
MEDIA       | 84.18  84.18  85.35         | 72.36  79.57  80.69
WIKI        | 84.73  85.82  86.47         | 84.11  86.06  86.40
WIKI+MEDIA  | 84.84  85.35  86.00         | 84.08  85.74  86.69

Table 2: Results on Test MEDIA in terms of F1 score using embeddings trained on both the WIKI and WIKI+MEDIA corpora, using the BiLSTM-CNN architecture. (The word embeddings are frozen.)

        | WIKI (char. emb. dim.)  | WIKI+MEDIA (char. emb. dim.)
#Layer  |   30     50     100     |   30     50     100
1       | 84.38  84.59  84.85     | 84.13  84.47  84.73
2       | 85.88  86.43  86.18     | 86.20  86.75  86.34
3       | 87.02  87.05  87.40     | 87.29  87.01  87.30
1 https://github.com/XuezheMax/NeuroNLP2
2 The code and data needed to run the experiments are available here: https://github.com/saharghannay/MEDIA Eval
3 MEDIA is publicly available for academic use: https://catalogue.elra.info/en-us/repository/browse/ELRA-S0272/
Acknowledgements
This work has been partially funded by the LIHLITH project (ANR-17-CHR2-0001-03), and supported by ERA-Net CHIST-ERA, and the "Agence Nationale pour la Recherche" (ANR, France).
Frédéric Béchet and Christian Raymond. 2019. Benchmarking benchmarks: introducing new automatic indicators for benchmarking spoken language understanding corpora. In Interspeech, Graz, Austria.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5.
Hélène Bonneau-Maynard, Christelle Ayache, Frédéric Bechet, Alexandre Denis, Anne Kuhn, Fabrice Lefevre, Djamel Mostefa, Matthieu Quignard, Sophie Rosset, Christophe Servan, and Jeanne Villaneau. 2006. Results of the French Evalda-Media evaluation campaign for literal understanding. In LREC, Genoa.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, Minneapolis, Minnesota. Association for Computational Linguistics.
Marco Dinarelli, Vedran Vukotic, and Christian Raymond. 2017. Label-dependency coding in Simple Recurrent Networks for Spoken Language Understanding. In Interspeech, Stockholm, Sweden.
S. Ghannay, A. Neuraz, and S. Rosset. 2020. What is best for spoken language understanding: small but task-dependant embeddings or huge but out-of-domain embeddings? In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8114-8118.
Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 554-559. IEEE.
Mandy Korpusik, Zoe Liu, and James Glass. 2019. A comparison of deep learning methods for language understanding. In Interspeech, September 15-19, 2019, Graz, Austria.
Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2479-2490, Marseille, France. European Language Resources Association.
Rémi Lebret, Joël Legrand, and Ronan Collobert. 2013. Is deep learning really necessary for word embeddings? Technical report, Idiap.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In ACL. Association for Computational Linguistics.
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Grégoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, and Geoffrey Zweig. 2015. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 23(3):530-539.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, volume 14.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Edwin Simonnet. 2019. Deep learning applied to spoken language understanding. Theses, Université du Maine.
Edwin Simonnet, Sahar Ghannay, Nathalie Camelin, Yannick Estève, and Renato De Mori. 2017. ASR error management for improving spoken language understanding. In Interspeech 2017, Stockholm, Sweden.
Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. Challenges in the Management of Large Corpora (CMLC-7) 2019, page 9.
Gokhan Tur and Renato De Mori. 2011. Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. John Wiley & Sons.
Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In Spoken Language Technology Workshop (SLT), 2014 IEEE, pages 189-194. IEEE.
Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In IJCAI, pages 2993-2999.
||
128,296,356 | SOCIAL IQA: Commonsense Reasoning about Social Interactions | We introduce SOCIAL IQA, the first largescale benchmark for commonsense reasoning about social situations. SOCIAL IQA contains 38,000 multiple choice questions for probing emotional and social intelligence in a variety of everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could hear"). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interactions, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question. Empirical results show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). Notably, we further establish SOCIAL IQA as a resource for transfer learning of commonsense knowledge, achieving state-of-the-art performance on multiple commonsense reasoning tasks (Winograd Schemas, COPA). | [
44090948, 9890246, 52967399, 7363686, 155091369, 2924682, 53296520, 11816014, 52019251, 52115700, 51878517, 1994584, 4537113, 1461182 ] | SOCIAL IQA: Commonsense Reasoning about Social Interactions
Association for Computational Linguistics, November 3-7, 2019
Maarten Sap msap@cs.washington.edu
Hannah Rashkin hrashkin@cs.washington.edu
Derek Chen
Ronan Le Bras ronanlb@allenai.org
Yejin Choi
Paul G. Allen School of Computer Science & Engineering, Seattle, WA, USA
Allen Institute for Artificial Intelligence, Seattle, WA, USA
SOCIAL IQA: Commonsense Reasoning about Social Interactions
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China. Association for Computational Linguistics, November 3-7, 2019
We introduce SOCIAL IQA, the first largescale benchmark for commonsense reasoning about social situations. SOCIAL IQA contains 38,000 multiple choice questions for probing emotional and social intelligence in a variety of everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could hear"). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interactions, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question. Empirical results show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). Notably, we further establish SOCIAL IQA as a resource for transfer learning of commonsense knowledge, achieving state-of-the-art performance on multiple commonsense reasoning tasks (Winograd Schemas, COPA).
Introduction
Social and emotional intelligence enables humans to reason about the mental states of others and their likely actions (Ganaie and Mudasir, 2015). For example, when someone spills food all over the floor, we can infer that they will likely want to clean up the mess, rather than taste the food off the floor or run around in the mess (Figure 1, middle). This example illustrates how Theory of Mind, i.e., the ability to reason about the implied emotions and behavior of others, enables humans to navigate social situations ranging from simple conversations with friends to complex negotiations in courtrooms (Apperly, 2010).
Both authors contributed equally.

Figure 1: Three context-question-answers triples from SOCIAL IQA, along with the type of reasoning required to answer them. In the top example, humans can trivially infer that Tracy pressed upon Austin because there was no room in the elevator. Similarly, in the bottom example, commonsense tells us that people typically root for the hero, not the villain.
While humans trivially acquire and develop such social reasoning skills (Moore, 2013), this is still a challenge for machine learning models, in part due to the lack of large-scale resources to train and evaluate modern AI systems' social and emotional intelligence. Although recent advances in pretraining large language models have yielded promising improvements on several commonsense inference tasks, these models still struggle to reason about social situations, as shown in this and previous work (Davis and Marcus, 2015;Nematzadeh et al., 2018;Talmor et al., 2019). This is partly due to language models being trained on written text corpora, where reporting bias of knowledge limits the scope of commonsense knowledge that can be learned (Gordon and Van Durme, 2013;Lucy and Gauthier, 2017).
In this work, we introduce Social Intelligence QA (SOCIAL IQA), the first large-scale resource to learn and measure social and emotional intelligence in computational models. 1 SOCIAL IQA contains 38k multiple choice questions regarding the pragmatic implications of everyday, social events (see Figure 1). To collect this data, we design a crowdsourcing framework to gather contexts and questions that explicitly address social commonsense reasoning. Additionally, by combining handwritten negative answers with adversarial question-switched answers (Section 3.3), we minimize annotation artifacts that can arise from crowdsourcing incorrect answers (Schwartz et al., 2017;Gururangan et al., 2018).
This dataset remains challenging for AI systems, with our best performing baseline reaching 64.5% (BERT-large), significantly lower than human performance. We further establish SOCIAL IQA as a resource that enables transfer learning for other commonsense challenges, through sequential finetuning of a pretrained language model on SOCIAL IQA before other tasks. Specifically, we use SOCIAL IQA to set a new state-of-the-art on three commonsense challenge datasets: COPA (Roemmele et al., 2011) (83.4%), the original Winograd (Levesque, 2011) (72.5%), and the extended Winograd dataset from Rahman and Ng (2012) (84.0%).
Our contributions are as follows: (1) We create SOCIAL IQA, the first large-scale QA dataset aimed at testing social and emotional intelligence, containing over 38k QA pairs. (2) We introduce question-switching, a technique to collect incorrect answers that minimizes stylistic artifacts due to annotator cognitive biases. (3) We establish baseline performance on our dataset, with BERT-large performing at 64.5%, well below human performance. (4) We achieve new state-of-the-art accuracies on COPA and Winograd through sequential finetuning on SOCIAL IQA, which implicitly endows models with social commonsense knowledge.

Task description

SOCIAL IQA aims to measure the social and emotional intelligence of computational models through multiple choice question answering (QA). In our setup, models are confronted with a question explicitly pertaining to an observed context, where the correct answer can be found among three competing options. By design, the questions require inferential reasoning about the social causes and effects of situations, in line with the type of intelligence required for an AI assistant to interact with human users (e.g., knowing to call for help when an elderly person falls; Pollack, 2005). As seen in Figure 1, correctly answering questions requires reasoning about motivations, emotional reactions, or likely preceding and following actions. Performing these inferences is what makes us experts at navigating social situations, and is closely related to Theory of Mind, i.e., the ability to reason about the beliefs, motivations, and needs of others (Baron-Cohen et al., 1985). Endowing machines with this type of intelligence has been a longstanding but elusive goal of AI (Gunning, 2018).
ATOMIC
As a starting point for our task creation, we draw upon social commonsense knowledge from ATOMIC (Sap et al., 2019) to seed our contexts and question types. ATOMIC is a large knowledge graph that contains inferential knowledge about the causes and effects of 24k short events. Each triple in ATOMIC consists of an event phrase with person-centric variables, one of nine inference dimensions, and an inference object (e.g., "PersonX pays for PersonY's ", "xAttrib", "generous"). The nine inference dimensions in ATOMIC cover causes of an event (e.g., "X needs money"), its effects on the agent (e.g., "X will get thanked") and its effect on other participants (e.g., "Y will want to see X again"); see Sap et al. (2019) for details.
Given this base, we generate natural language contexts that represent specific instantiations of the event phrases found in the knowledge graph. Furthermore, the questions created probe the commonsense reasoning required to navigate such contexts. Critically, since these contexts are based off of ATOMIC, they explore a diverse range of motivations and reactions, as well as likely preceding or following actions.
Dataset creation
SOCIAL IQA contains 37,588 multiple choice questions with three answer choices per question. Questions and answers are gathered through three phases of crowdsourcing aimed to collect the context, the question, and a set of positive and negative answers. We run crowdsourcing tasks on Amazon Mechanical Turk (MTurk) to create each of the three components, as described below.
Event Rewriting
In order to cover a variety of social situations, we use the base events from ATOMIC as prompts for context creation. As a pre-processing step, we run an MTurk task that asks workers to turn an ATOMIC event (e.g., "PersonX spills all over the floor") into a sentence by adding names, fixing potential grammar errors, and filling in placeholders (e.g., "Alex spilled food all over the floor."). 3
Context, Question, & Answer Creation
Next, we run a task where annotators create full context-question-answers triples. We automatically generate question templates covering the nine commonsense inference dimensions in ATOMIC. Crowdsourcers are prompted with an event sentence and an inference question to turn into a more detailed context (e.g., "Alex spilled food all over the floor and it made a huge mess.") and an edited version of the question if needed for improved specificity (e.g., "What will Alex want to do next?"). Workers are also asked to contribute two potential correct answers.

3 This task paid $0.35 per event.

Figure 2: Question-Switching Answers (QSA) are collected as the correct answers to the wrong question that targets a different type of inference (here, reasoning about what happens before instead of after an event).
Negative Answers
In addition to correct answers, we collect four incorrect answer options, of which we filter out two. To create incorrect options that are adversarial for models but easy for humans, we use two different approaches to the collection process. These two methods are specifically designed to avoid different types of annotation artifacts, thus making it more difficult for models to rely on data biases. We integrate and filter answer options and validate final QA tuples with human rating tasks.
Handwritten Incorrect Answers (HIA) The first method involves eliciting handwritten incorrect answers that require reasoning about the context. These answers are handwritten to be similar to the correct answers in terms of topic, length, and style but are subtly incorrect. Two of these answers are collected during the same MTurk task as the original context, questions, and correct answers. We will refer to these negative responses as handwritten incorrect answers (HIA).
Question-Switching Answers (QSA)
Figure 3: SOCIAL IQA contains several question types which cover different types of inferential reasoning (e.g., "What will Kai want to do next?", "How would Robin feel afterwards?", "How would you describe Alex?", "Why did Sydney do this?", "What does Remy need to do before this?", "What will happen to Sasha?"). Question types are derived from ATOMIC inference dimensions.

We collect a second set of negative (incorrect) answer
candidates by switching the questions asked about the context, as shown in Figure 2. We do this to avoid cognitive biases and annotation artifacts in the answer candidates, such as those caused by writing incorrect answers or negations (Schwartz et al., 2017; Gururangan et al., 2018). In this crowdsourcing task, we provide the same context as the original question, as well as a question automatically generated from a different but similar ATOMIC dimension, 6 and ask workers to write two correct answers. We refer to these negative responses as question-switching answers (QSA). By including answers to a different question about the same context, we ensure that these adversarial responses have the stylistic qualities of correct answers and strongly relate to the context topic, while still being incorrect, making it difficult for models to simply perform pattern-matching. To verify this, we compare valence, arousal, and dominance (VAD) levels across answer types, computed using the VAD lexicon by Mohammad (2018). Figure 4 shows effect sizes (Cohen's d) of the differences in VAD means, where the magnitude of the effect size indicates how different the answer types are stylistically. Indeed, QSA and correct answers differ substantially less than HIA answers (|d|≤.1). 7
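A small sketch of this effect-size comparison is given below: Cohen's d with a pooled standard deviation, applied to per-answer lexicon scores (e.g., the mean valence of an answer's words under the VAD lexicon). The score arrays are random placeholders, not the paper's data.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

correct_valence = np.random.rand(1000)  # stand-ins for per-answer lexicon scores
qsa_valence = np.random.rand(2000)
hia_valence = np.random.rand(2000)

print(cohens_d(correct_valence, qsa_valence))  # expected to be small (|d| <= .1)
print(cohens_d(correct_valence, hia_valence))  # expected to be larger
```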
QA Tuple Creation
As the final step of the pipeline, we aggregate the data into three-way multiple choice questions. For each created context-question pair contributed by crowdsourced workers, we select a random correct answer and the incorrect answers that are least entailed by the correct one, following inspiration from Zellers et al. (2019a).
For the training data, we validate our QA tuples through a multiple-choice crowdsourcing task where three workers are asked to select the right answer to the question provided. In order to ensure even higher quality, we validate the dev and test data a second time with five workers. Our final dataset contains questions for which the correct answer was determined by human majority voting, discarding cases without a majority vote. We also apply a lightweight form of adversarial filtering to make the task more challenging by using a deep stylistic classifier to remove easier examples on the dev and test sets (Sakaguchi et al., 2019). To obtain human performance, we run a separate task asking three new workers to select the correct answer on a random subset of 900 dev and 900 test examples. Human performance on these subsets is 87% and 84%, respectively.

6 Using the following three groupings of ATOMIC dimensions: {xWant, oWant, xNeed, xIntent}, {xReact, oReact, xAttr}, and {xEffect, oEffect}.
7 Cohen's |d|<.20 is considered small (Sawilowsky, 2009). We find similarly small effect sizes using other sentiment/emotion lexicons.
Data Statistics
To keep contexts separate across train/dev/test sets, we assign SOCIAL IQA contexts to the same partition as the ATOMIC event the context was based on. Shown in Table 1 (top), this yields a total set of around 33k training, 2k dev, and 2k test tuples. We additionally include statistics on word counts and vocabulary of the training data. We report the averages of correct and incorrect answers in terms of: token length, number of unique tokens, and number of times a unique answer appears in the dataset. Note that due to our three-way multiple choice setup, there are twice as many incorrect answers which influences these statistics.
We also include a breakdown (Figure 3) across question types, which we derive from ATOMIC inference dimensions. 10 In general, questions relating to what someone will feel afterwards or what they will likely do next are more common in SOCIAL IQA. Conversely, questions pertaining to (potentially involuntary) effects of situations on people are less frequent.
Methods
We establish baseline performance on SOCIAL IQA, using large pretrained language models based on the Transformer architecture (Vaswani et al., 2017). Namely, we finetune OpenAI-GPT (Radford et al., 2018) and BERT (Devlin et al., 2019), which have both shown remarkable improvements on a variety of tasks. OpenAI-GPT is a uni-directional language model trained on the BookCorpus (Zhu et al., 2015), whereas BERT is a bidirectional language model trained on both the BookCorpus and English Wikipedia. As per previous work, we finetune the language model representations but fully learn the classifier specific parameters described below.
Multiple choice classification To classify sequences using these language models, we follow the multiple-choice setup implementation by the respective authors, as described below. First, we concatenate the context, question, and answer, using the model-specific separator tokens. For OpenAI-GPT, the format becomes start <context> <question> delimiter <answer> classify, where start, delimiter, and classify are special function tokens. For BERT, the format is similar, but the classifier token comes before the context. For each triple, we then compute a score $l$ by passing the hidden representation from the classifier token $h_{CLS} \in \mathbb{R}^{H}$ through an MLP:

$$l = W_2 \tanh(W_1 h_{CLS} + b_1)$$

where $W_1 \in \mathbb{R}^{H \times H}$, $b_1 \in \mathbb{R}^{H}$, and $W_2 \in \mathbb{R}^{1 \times H}$.

10 We group agent and theme ATOMIC dimensions together (e.g., "xReact" and "oReact" become the "reactions" question type).
Finally, we normalize scores across all triples for a given context-question pair using a softmax layer. The model's predicted answer corresponds to the triple with the highest probability.
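The sketch below illustrates this scoring scheme in PyTorch: each concatenated (context, question, answer) sequence is encoded, the classifier token's hidden state is scored by the MLP above, and a softmax normalizes over the candidate answers. The encoder here is a toy stand-in for BERT/OpenAI-GPT, and all sizes are placeholders.

```python
import torch
import torch.nn as nn

class ChoiceScorer(nn.Module):
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                           # yields h_CLS per sequence
        self.w1 = nn.Linear(hidden_size, hidden_size)    # W1 x + b1
        self.w2 = nn.Linear(hidden_size, 1, bias=False)  # W2

    def forward(self, token_ids):
        # token_ids: (batch, n_choices, seq_len), each row one concatenated
        # classify/context/question/answer sequence.
        b, c, s = token_ids.shape
        h_cls = self.encoder(token_ids.view(b * c, s))   # (b*c, H)
        l = self.w2(torch.tanh(self.w1(h_cls)))          # l = W2 tanh(W1 h_CLS + b1)
        return l.view(b, c).softmax(dim=-1)              # normalize over the choices

# Toy stand-in for the pretrained LM: mean-pooled embeddings act as "h_CLS".
H = 16
toy_emb = nn.Embedding(100, H)
encode = lambda ids: toy_emb(ids).mean(dim=1)
scorer = ChoiceScorer(encode, H)
probs = scorer(torch.randint(0, 100, (2, 3, 12)))        # (2, 3) answer probabilities
```

Training would minimize the cross-entropy between these normalized scores and the index of the correct answer.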
Experiments
Experimental Set-up
We train our models on the 33k SOCIAL IQA training instances, selecting hyperparameters based on the best performing model on our dev set, for which we then report test results. Specifically, we perform finetuning through a grid search over the hyper-parameter settings (with a learning rate in {1e−5, 2e−5, 3e−5}, a batch size in {3, 4, 8}, and a number of epochs in {3, 4, 10}) and report the maximum performance. The models used in our experiments vary in size: OpenAI-GPT (117M parameters) has a hidden size of H=768, while BERT-base (110M params) and BERT-large (340M params) have hidden sizes of H=768 and H=1024, respectively. We train using the HuggingFace PyTorch (Paszke et al., 2017) implementation.
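A minimal sketch of this kind of grid search is shown below; `train_and_eval` is a hypothetical placeholder for fine-tuning one configuration and returning its dev accuracy, not the authors' actual training script.

```python
from itertools import product

def train_and_eval(cfg):
    # Placeholder: fine-tune one configuration and return dev accuracy.
    # A real implementation would train BERT-large with these settings.
    return 0.0

grid = {"lr": [1e-5, 2e-5, 3e-5], "batch_size": [3, 4, 8], "epochs": [3, 4, 10]}
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
best = max(configs, key=train_and_eval)  # keep the maximum dev performance
```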
Context | Question | Answer
(1) Jesse was pet sitting for Addison, so Jesse came to Addison's house and walked their dog. | What does Jesse need to do before this? | (a) feed the dog (b) get a key from Addison (c) walk the dog
(2) Kai handed back the computer to Will after using it to buy a product off Amazon. | What will Kai want to do next? | (a) wanted to save money on shipping (b) Wait for the package (c) Wait for the computer
(3) Remy gave Skylar, the concierge, her account so that she could check into the hotel. | What will Remy want to do next? | (a) lose her credit card (b) arrive at a hotel (c) get the key from Skylar
(4) Sydney woke up and was ready to start the day. They put on their clothes. | What will Sydney want to do next? | (a) go to bed (b) go to the pool (c) go to work
(5) Kai grabbed Carson's tools for him because Carson could not get them. | How would Carson feel as a result? | (a) inconvenienced (b) grateful (c) angry
(6) Although Aubrey was older and stronger, they lost to Alex in arm wrestling. | How would Alex feel as a result? | (a) they need to practice more (b) ashamed (c) boastful

Table 3: The model answers correctly in examples (1) and (2) and incorrectly in the other four examples shown here. Examples (3) and (4) illustrate the model choosing answers that might have happened before, or that might happen much later after the context, as opposed to right after the context situation. In Examples (5) and (6), the model chooses answers that may apply to people other than the ones being asked about.
Results
Our results (Table 2) show that SOCIAL IQA is still a challenging benchmark for existing computational models, compared to human performance. Our best performing model, BERT-large, outperforms other models by several points on the dev and test set. We additionally ablate our best model's representation by removing the context and question from the input, confirming that reasoning over both is necessary for this task.
Learning Curve To better understand the effect of dataset scale on model performance on our task, we simulate training situations with limited knowledge. We present the learning curve of BERT-large's performance on the dev set as it is trained on more training set examples (Figure 5). Although the model does significantly improve over a random baseline of 33% with only a few hundred examples, the performance only starts to converge after around 20k examples, providing evidence that large-scale benchmarks are required for this type of reasoning.
Error Analysis
We include a breakdown of our best model's performance on various question types in Figure 6 and specific examples of errors in the last four rows of Table 3. Overall, questions related to pre-conditions of the context (people's motivations, actions needed before the context) are less challenging for the model. Conversely, the model seems to struggle more with questions relating to (potentially involuntary) effects, stative descriptions, and what people will want to do next. Examples of errors in Table 3 further indicate that, instead of doing advanced reasoning about situations, models may only be learning lexical associations between the context, question, and answers, as hinted at by Marcus (2018) and Zellers et al. (2019b). This leads the model to select answers that are incorrectly timed with respect to the context and question (e.g., "arrive at a hotel" is something Remy likely did before checking in with the concierge, not afterwards). Additionally, the model often chooses answers related to a person other than the one asked about. In (6), after the arm wrestling, though it is likely that Aubrey will feel ashamed, the question relates to what Alex might feel, not Aubrey. Overall, our results illustrate how reasoning about social situations still remains a challenge for these models, compared to humans who can trivially reason about the causes and effects for multiple participants. We expect that this task would benefit from models capable of more complex reasoning about entity state, or models that are more explicitly endowed with commonsense (e.g., from knowledge graphs like ATOMIC).
SOCIAL IQA for Transfer Learning
In addition to being the first large-scale benchmark for social commonsense, we also show that SOCIAL IQA can improve performance on downstream tasks that require commonsense, namely the Winograd Schema Challenge and the Choice of Plausible Alternatives task. We achieve state of the art performance on both tasks by sequentially finetuning on SOCIAL IQA before the task itself.
COPA The Choice of Plausible Alternatives task (COPA; Roemmele et al., 2011) is a twoway multiple choice task which aims to measure commonsense reasoning abilities of models. The dataset contains 1,000 questions (500 dev, 500 test) that ask about the causes and effects of a premise. This has been a challenging task for computational systems, partially due to the limited amount of training data available. As done previously (Goodwin et al., 2012;Luo et al., 2016), we finetune our models on the dev set, and report performance only on the test set.
Winograd Schema The Winograd Schema Challenge (WSC; Levesque, 2011) is a wellknown commonsense knowledge challenge framed as a coreference resolution task. It contains a collection of 273 short sentences in which a pronoun must be resolved to one of two antecedents (e.g., in "The city councilmen refused the demonstrators a permit because they feared violence", they refers to the councilmen). Because of data scarcity in WSC, Rahman and Ng (2012) created 943 Winograd-style sentence pairs (1886 sentences in total), henceforth referred to as DPR, which has been shown to be slightly less challenging than WSC for computational models.
We evaluate on these two benchmarks. While the DPR dataset is split into train and test sets (Rahman and Ng, 2012), the WSC dataset contains a single (test) set of only 273 instances for evaluation purposes only. Therefore, we use the DPR dataset as training set when evaluating on the WSC dataset.
Sequential Finetuning
We first finetune BERT-large on SOCIAL IQA, which reaches 66% on our dev set (Table 2). We then finetune that model further on the task-specific datasets, considering the same set of hyperparameters as in §5.1. On each of the test sets, we report best, mean, and standard deviation of all models, and compare sequential finetuning results to a BERT-large baseline.

Table 4: Sequential finetuning of BERT-large on SOCIAL IQA before the task yields state of the art results (bolded) on COPA (Roemmele et al., 2011), the Winograd Schema Challenge (Levesque, 2011), and DPR (Rahman and Ng, 2012). For comparison, we include previously published state of the art performance.
Results Shown in Table 4, sequential finetuning on SOCIAL IQA yields substantial improvements over the BERT-only baseline (between 2.6% and 5.5% maximum performance increases), as well as a general increase in performance stability (i.e., lower standard deviations). As hinted at by Phang et al. (2019), this suggests that BERT-large can benefit from both the large scale and the QA format of the commonsense knowledge in SOCIAL IQA, which it struggles to learn from small benchmarks only. Notably, we find that the sequentially finetuned BERT-SOCIAL IQA achieves state-of-the-art results on all three tasks, improving over the previous best performing models. 13
Effect of scale and knowledge type To better understand these improvements in downstream task performance, we investigate the impact on COPA performance of sequential finetuning on less SOCIAL IQA training data (Figure 7), as well as the impact of the type of commonsense knowledge used in sequential finetuning. As expected, the downstream performance on COPA improves when using a model pretrained on more of SOCIAL IQA, indicating that the scale of the dataset is one factor that helps in the fine-tuning. However, when using SWAG (a similarly sized dataset) instead of SOCIAL IQA for sequential finetuning, the downstream performance on COPA is lower (76.2%). This indicates that, in addition to its large scale, the social and emotional nature of the knowledge in SOCIAL IQA enables improvements on these downstream tasks.

Footnote 13: Note that OpenAI-GPT was reported to achieve 78.6% on COPA, but that result was not published, nor discussed in the OpenAI-GPT white paper (Radford et al., 2018).
Related Work
Commonsense Knowledge Bases: In addition to large-scale benchmarks, there is a wealth of work aimed at creating commonsense knowledge repositories (Speer and Havasi, 2012;Sap et al., 2019;Zhang et al., 2017;Lenat, 1995;Espinosa and Lieberman, 2005;Gordon and Hobbs, 2017) that can be used as resources in downstream reasoning tasks. While SOCIAL IQA is formatted as a natural language QA benchmark, rather than a taxonomic knowledge base, it also can be used as a resource for external tasks, as we have demonstrated experimentally.
Constrained or Adversarial Data Collection: Various work has investigated ways to circumvent annotation artifacts that result from crowdsourcing. Sharma et al. (2018) extend the Story Cloze data by severely restricting the incorrect story ending generation task, reducing the sentiment and negation artifacts. Rajpurkar et al. (2018) create an adversarial version of the extractive question-answering challenge, SQuAD (Rajpurkar et al., 2016), by creating 50k unanswerable questions. Instead of using human-generated incorrect answers, Zellers et al. (2018, 2019b) use adversarial filtering of machine-generated incorrect answers to minimize surface patterns. Our dataset also aims to reduce annotation artifacts by using a multi-stage annotation pipeline in which we collect negative responses from multiple methods, including a unique adversarial question-switching technique.
Conclusion
We present SOCIAL IQA, the first large-scale benchmark for social commonsense. Consisting of 38k multiple-choice questions, SOCIAL IQA covers various types of inference about people's actions being described in situational contexts. We design a crowdsourcing framework for collecting QA pairs that reduces stylistic artifacts of negative answers through an adversarial question-switching method. Despite human performance of close to 90%, computational approaches based on large pretrained language models only achieve accuracies up to 65%, suggesting that these social inferences are still a challenge for AI systems. In addition to providing a new benchmark, we demonstrate how transfer learning from SOCIAL IQA to other commonsense challenges can yield significant improvements, achieving new state-of-the-art performance on both the COPA and Winograd Schema Challenge datasets.
[Figure example, question type "Reasoning about emotional reactions"] Context (truncated): "... the food she just prepared all over the floor and it made a huge mess." Q: How would others feel afterwards? (a) sorry for the villain; (b) hopeful that Robin will succeed ✔; (c) like Robin should lose.
Figure 4: Magnitude of effect sizes (Cohen's d) when comparing average dominance, arousal, and valence values of different answer types, where larger |d| indicates more stylistic difference. For valence (sentiment polarity) and dominance, the effect sizes comparing QSA and correct answers are much smaller, indicating that these are more similar tonally. Notably, all three answer types have comparable levels of arousal (intensity).
Figure 5: Dev accuracy when training BERT-large with various numbers of training examples (multiple runs per training size), with human performance (86.9%) shown in orange. In order to reach >80%, the model would require nearly 1 million training examples.
Figure 6: Average dev accuracy of BERT-large on different question types. While questions about effects and motivations are easier, the model still finds wants and descriptions more challenging.
Figure 7: Effect of finetuning BERT-large on varying sizes of the SOCIAL IQA training set on the dev accuracy of COPA. As expected, the more SOCIAL IQA instances the model is finetuned on, the better the accuracy on COPA.
Footnote 1: Available at https://tinyurl.com/socialiqa

Table 1: Data statistics for SOCIAL IQA.

# QA tuples           train                  33,410
                      dev                     1,954
                      test                    2,224
                      total                  37,588

Train statistics
Average # tokens      context                 14.04
                      question                 6.12
                      answers (all)            3.60
                      answers (correct)        3.65
                      answers (incorrect)      3.58
Unique # tokens       context                15,764
                      question                1,165
                      answers (all)          12,285
                      answers (correct)       7,386
                      answers (incorrect)    10,514
Average freq.
of answers            answers (correct)        1.37
                      answers (incorrect)      1.47
Table 3: Example CQA triples from the SOCIAL IQA dev set with BERT-large's predictions (✗: BERT's prediction, ✓: true correct answer). The model predicts correctly in the first two examples but fails on the last four.
Commonsense Benchmarks: Commonsense benchmark creation has been well studied by previous work. Notably, the Winograd Schema Challenge (WSC; Levesque, 2011) and the Choice Of Plausible Alternatives dataset (COPA; Roemmele et al., 2011) are expert-curated collections of commonsense QA pairs that are trivial for humans to solve. Whereas WSC requires physical and social commonsense knowledge to solve, COPA targets the knowledge of causes and effects surrounding social situations. While both benchmarks are of high quality and created by experts, their small scale (150 and 1,000 examples, respectively) poses a challenge for modern modelling techniques, which require many training instances. More recently, Talmor et al. (2019) introduce CommonsenseQA, containing 12k multiple-choice questions. Crowdsourced using ConceptNet (Speer and Havasi, 2012), these questions mostly probe knowledge related to factual and physical commonsense (e.g., "Where would I not want a fox?"). In contrast, SOCIAL IQA explicitly separates contexts from questions, and focuses on the types of commonsense inferences humans perform when navigating social situations.
Footnote: Theory of Mind is well developed in most neurotypical adults (Ganaie and Mudasir, 2015), but can be influenced by age, culture, or developmental disorders (Korkmaz, 2011).
Footnote: We do not generate templates if the ATOMIC dimension is annotated as "none."
Footnote 5: Workers were asked to contribute a context 7-25 words longer than the event sentence.
Footnote: Agreement on this task was high (Cohen's κ = .70).
Footnote 9: We also tried filtering to remove examples from the training set but found it did not significantly change performance. We will release tags for the easier training examples with the full data.
Footnote: https://github.com/huggingface/pytorch-pretrained-BERT
Acknowledgments
We thank Chandra Bhagavatula, Hannaneh Hajishirzi, and other members of the UW NLP and AI2 community for helpful discussions and feedback throughout this project. We also thank the anonymous reviewers for their insightful comments and suggestions. This research was supported in part by NSF (IIS-1524371, IIS-1714566) and DARPA under the CwC program through the ARO (W911NF-15-1-0543).
References
Ian Apperly. 2010. Mindreaders: The Cognitive Basis of "Theory of Mind". Psychology Press.
Simon Baron-Cohen, Alan M. Leslie, and Uta Frith. 1985. Does the autistic child have a "theory of mind"? Cognition, 21(1):37-46.
Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58:92-103.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
José H. Espinosa and Henry Lieberman. 2005. EventNet: Inferring temporal relations between commonsense events. In MICAI.
M. Y. Ganaie and Hafiz Mudasir. 2015. A study of social intelligence & academic achievement of college students of District Srinagar, J&K, India. Journal of American Science, 11(3):23-27.
Travis Goodwin, Bryan Rink, Kirk Roberts, and Sanda M. Harabagiu. 2012. UTDHLT: Copacetic system for choosing plausible alternatives. In NAACL Workshop on SemEval, pages 461-466. Association for Computational Linguistics.
Andrew S. Gordon and Jerry R. Hobbs. 2017. A Formal Theory of Commonsense Psychology: How People Think People Think. Cambridge University Press.
Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC '13, pages 25-30, New York, NY, USA. ACM.
David Gunning. 2018. Machine common sense concept paper.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In NAACL-HLT.
Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. 2019. A surprisingly robust trick for the Winograd Schema Challenge. In ACL.
Baris Korkmaz. 2011. Theory of mind and neurodevelopmental disorders of childhood. Pediatric Research, 69(5 Pt 2):101R-8R.
Douglas B. Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33-38.
Hector J. Levesque. 2011. The Winograd Schema Challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
Li Lucy and Jon Gauthier. 2017. Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning. In RoboNLP@ACL.
Zhiyi Luo, Yuchen Sha, Kenny Q. Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Commonsense causal reasoning between short texts. In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning.
Gary Marcus. 2018. Deep learning: A critical appraisal. CoRR, abs/1801.00631.
Saif Mohammad. 2018. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000 English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174-184.
Chris Moore. 2013. The Development of Commonsense Psychology. Psychology Press.
Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, and Thomas L. Griffiths. 2018. Evaluating theory of mind in question answering. In EMNLP.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.
Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. Solving hard coreference problems. In HLT-NAACL.
Jason Phang, Thibault Févry, and Samuel R. Bowman. 2019. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. CoRR, abs/1811.01088.
Martha E. Pollack. 2005. Intelligent technology for an aging population: The use of AI to assist elders with cognitive impairment. AI Magazine, 26:9-24.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd Schema Challenge. In EMNLP-CoNLL '12, pages 777-789, Stroudsburg, PA, USA. Association for Computational Linguistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. WinoGrande: An adversarial Winograd Schema Challenge at scale. ArXiv, abs/1907.10641.
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In AAAI.
Shota Sasaki, Sho Takase, Naoya Inoue, Naoaki Okazaki, and Kentaro Inui. 2017. Handling multiword expressions in causality estimation. In IWCS.
Shlomo S. Sawilowsky. 2009. New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8(2):597-599.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In CoNLL.
Rishi Kant Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story ending biases in the story cloze test. In ACL.
Robyn Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In LREC.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In NAACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019a. From recognition to cognition: Visual commonsense reasoning. In CVPR.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019b. HellaSwag: Can a machine really finish your sentence? In ACL.
Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Transactions of the Association for Computational Linguistics, 5(1):379-395.
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan R. Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27. |
11,395,323 | THE OLISSIPO AND LECTIO PROJECTS | OLISSIPO (Omnis Latinitatis Instrumentum Secundum Scholarum Instructionis Propositum Ordinatum) is a prototype developed by the Centro de Estudos Clássicos of the University of Lisbon and the Istituto di Linguistica Computazionale of CNR in Pisa, thanks to a common research project in the framework of the scientific agreement between the Consiglio Nazionale delle Ricerche (CNR -Italy) and the Gabinete de Relações Internacionais da Ciência e Ensino Superior (GRICES -Portugal). OLISSIPO extracts lists of basic vocabulary from any Latin text and displays them together with linguistic and extra-linguistic information stored in a database. It contains other functionalities, such as statistical analysis, search of words in the text, displaying of the context, search of words in the database (which can be used as a dictionary). The LECTIO project has also been carried out by the Centro de Estudos Clássicos of the University of Lisbon and the Istituto di Linguistica Computazionale of CNR in Pisa. Designed to be a follow-up of OLISSIPO, it aims at producing a prototype for analysis of Latin texts with more functionalities. The final result will be an open tool to be used both in teaching/learning and in scientific research. | [] | THE OLISSIPO AND LECTIO PROJECTS
Giuseppe Cappelli
ILC -CNR Pisa
Italy
Paulo Alberto paulo.alberto@fl.ul.pt
CEC -University of Lisbon
THE OLISSIPO AND LECTIO PROJECTS
OLISSIPO (Omnis Latinitatis Instrumentum Secundum Scholarum Instructionis Propositum Ordinatum) is a prototype developed by the Centro de Estudos Clássicos of the University of Lisbon and the Istituto di Linguistica Computazionale of CNR in Pisa, thanks to a common research project in the framework of the scientific agreement between the Consiglio Nazionale delle Ricerche (CNR -Italy) and the Gabinete de Relações Internacionais da Ciência e Ensino Superior (GRICES -Portugal). OLISSIPO extracts lists of basic vocabulary from any Latin text and displays them together with linguistic and extra-linguistic information stored in a database. It contains other functionalities, such as statistical analysis, search of words in the text, displaying of the context, search of words in the database (which can be used as a dictionary). The LECTIO project has also been carried out by the Centro de Estudos Clássicos of the University of Lisbon and the Istituto di Linguistica Computazionale of CNR in Pisa. Designed to be a follow-up of OLISSIPO, it aims at producing a prototype for analysis of Latin texts with more functionalities. The final result will be an open tool to be used both in teaching/learning and in scientific research.
Introduction
One of the problems to be dealt with in teaching and learning Latin is the student's low level of knowledge of vocabulary. It is fundamental that students progressively acquire and consolidate a command of Latin vocabulary that is sufficient and appropriate for understanding the text they are studying, starting with the basic structural rules and the interdependence of the words in the sentences.
In fact, the Latin teacher's experience shows that students very often present considerable gaps in vocabulary which prevent them from acquiring a reasonable understanding of the Latin text. As well as having insufficient lexical knowledge, students are frequently incapable of establishing relationships within the vocabulary, even between simple and compound words, and of comparing the Latin lexicon with that of their mother tongue.
The learning experience of some students has highlighted the need for a data processing tool which, when applied to a Latin text chosen by the teacher or the student, automatically produces results concerning the presence of basic vocabulary, morphological categories and basic elements of the sentence. This would make it easier for the teacher to choose appropriate texts for his course.
Furthermore, a tool capable of providing additional data concerning the lexicon (relationship with other words, reference to the phenomenon of composition and origin, dependence on Portuguese vocabulary) would also be extremely useful for the students.
The opportunity to develop an application focusing on these issues came about in 1996, upon the invitation of Professor Antonio Zampolli, director of the Institute of Computational Linguistics in Pisa (CNR). He was a researcher with vast experience and humanistic talent who, even in the early days of computer-assisted language processing, worked on Seneca's concordances (1975, with R. Busa) and, years later, on the concordances of Symmachus (1983, with V. Lomato and N. Marinone).
Contacts were established between his prestigious Italian institute and the Centro de Estudos Clássicos da Universidade de Lisboa. The aim was to work together on a common project within the field of the Portugal-Italy Scientific Working Convention. This is how the OLISSIPO project came about.
Description of OLISSIPO
OLISSIPO can be described as a working environment in which the user is provided with different functions enabling him to work on a selected text, consult the analysis results and useful statistics, look for a specific word in the text, and modify the classification assigned to each word.
Furthermore, one specific function allows the user to access the database designed for classifying the words. Having processed the text, the user has at his disposal a function for extracting/choosing the lemmas from the processed text and constructing/updating the basic vocabulary. OLISSIPO's objective is to create basic vocabulary lists, and this can be achieved with individual strategies. Depending on the students' preparation and the selection of texts to be submitted to OLISSIPO, the teacher defines the most appropriate strategy (content and timing) for constructing the basic vocabulary.
The interface and other functions of OLISSIPO have been written in Visual Basic.
For the tagging/lemmatisation programme (MORPH) and the disambiguation programme (DISAMB), the programming language C++ has been used. This means that more in-depth processing can be carried out in less time. MORPH reads the file containing the text to be processed, determines the occurrences and then consults the dictionary indicated by the user. If the word under examination is found in the dictionary, the classification (or classifications, in the case of homography) is taken from the dictionary and associated to the word.
If the search provides a negative outcome, the word is segmented to the right to identify a possible enclitic. If an enclitic is present, the dictionary is consulted again using the word without the enclitic as a search key, and if the search is positive the word is classified with the morphological information and associated to the individualised enclitic.
One specific module of MORPH automatically identifies and classifies proper nouns. A word is recognised as a proper noun if it starts with a capital letter and is not preceded by strong punctuation.
The morphological label is taken from an open list in which a classification corresponds to each ending.
Words that are not analysed by MORPH are processed by LEMLAT, a computerised lemmatisation programme for the Latin language developed at the Institute of Computational Linguistics in Pisa.
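To make the lookup-and-segmentation logic concrete, the following Python sketch reconstructs the behaviour just described; MORPH itself is written in C++, and the dictionary and enclitic list here are toy stand-ins rather than the tool's actual resources.

    # Illustrative reconstruction of MORPH's lookup logic (the real tool is in C++);
    # the dictionary and enclitic list are toy stand-ins.
    DICTIONARY = {"populus": "N2/nom/sg", "senatus": "N4/nom/sg"}
    ENCLITICS = ("que", "ne", "ve")
    STRONG_PUNCT = {None, ".", "!", "?"}

    def classify(word, prev_token=None):
        lower = word.lower()
        # 1) Direct dictionary lookup (may yield several tags for homographs).
        if lower in DICTIONARY:
            return DICTIONARY[lower], None
        # 2) Segment to the right for a possible enclitic, then retry the lookup.
        for enc in ENCLITICS:
            if lower.endswith(enc) and lower[:-len(enc)] in DICTIONARY:
                return DICTIONARY[lower[:-len(enc)]], enc
        # 3) Proper-noun heuristic: capitalized and not preceded by strong punctuation.
        if word[:1].isupper() and prev_token not in STRONG_PUNCT:
            return "PROPER_NOUN", None
        return "UNKNOWN", None  # handed on to LEMLAT in the real pipeline

    print(classify("senatusque"))            # -> ('N4/nom/sg', 'que')
    print(classify("Roma", prev_token=","))  # -> ('PROPER_NOUN', None)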
The DISAMB programme reads the file containing the labelled text, associates to each word the part of speech and the morphological information (case, gender and number), and extracts each sentence as delimited by strong punctuation. It then skims over the sentences and, when it finds a homographic word (with more than one lemma, or only one lemma with several morphological values), it applies several morphosyntactic disambiguation rules to select the correct classification.
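A schematic version of this pass might look as follows in Python; the disambiguation rule shown is an invented placeholder, included only to illustrate the general shape of such morphosyntactic rules, and is not one of DISAMB's actual rules.

    # Schematic DISAMB pass; the rule below is an invented placeholder, shown only
    # to illustrate the shape of a morphosyntactic disambiguation rule.
    STRONG_PUNCT = {".", "!", "?"}

    def sentences(tokens):
        # tokens: list of (form, [candidate tags]); split on strong punctuation.
        sent = []
        for tok in tokens:
            sent.append(tok)
            if tok[0] in STRONG_PUNCT:
                yield sent
                sent = []
        if sent:
            yield sent

    def disambiguate(sentence):
        out = []
        for form, tags in sentence:
            if len(tags) > 1:
                prev = out[-1][1][0] if out else None
                # Placeholder rule: after a preposition, prefer an accusative reading.
                if prev == "PREP" and "N/acc" in tags:
                    tags = ["N/acc"]
                else:
                    tags = tags[:1]  # fall back to the first candidate
            out.append((form, tags))
        return out

    toks = [("ad", ["PREP"]), ("urbem", ["N/acc", "N/nom"]), (".", ["PUNCT"])]
    for s in sentences(toks):
        print(disambiguate(s))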
The interface written in Visual Basic has been designed to help users with little experience in using computer tools for linguistic analysis.
When the user runs the OLISSIPO programme, the window shown in figure 1 appears. In the bottom right, three flags indicate the language in which the user wishes to interact with the programme. To select the desired language, the user simply clicks on the flag.

Figure 1
Apart from Italian and Portuguese, the languages of the project partners, English is also available. Having clicked on the appropriate flag, the window shown in figure 2 opens, which contains buttons for activating the OLISSIPO functions.

Figure 2
The icons on the buttons represent the eight OLISSIPO functions listed below.
1. seleziona testo: to select the text to be analysed;
2. analisi: to carry out processing of the selected text;
3. risultati: to display the processing results;
4. ricerca per forma: to carry out searches within the text using the form as the keyword;
5. ricerca per lemma: to carry out searches within the text using the lemma as the keyword;
6. statistiche: to display the simple statistics provided by the programme;
7. lessico: to modify the information contained in the database;
8. vocabolario basico: to update the basic vocabulary after processing of a new text.
Only one window is shown below, which relates to the analyses. For the others, please refer to the article «OLISSIPO - entre filologia e informática: recursos para gerir o estudo do texto latino» (Euphrosyne, 2004).

Figure 3
The processing of the text is guided by several parameters that the user can configure according to his needs:
1) The LEMLAT 1 morphological analysis programme can be used interactively. This is helpful for checking the classification of a specific word or for carrying out didactic demonstrations;
2) The results can be displayed on the screen for initial checking;
3) It is possible to ask the programme to save the result of the processing onto a file. This is obligatory if the user wants to use the analysis functions offered by the programme;
4) It is possible to tell the programme to substitute the reference dictionary, used for automatic labelling, with one of the user's own which is more suitable for the analysis to be carried out.
Occasionally, when the prototype has been presented, as at the Colloquium Didacticum Classicum Olisiponense XVII (Lisbon, 30 September - 3 October 1998), at the meeting of the Bureau International of the Didactique des Langues Anciennes (Ghent, 11-12 May 2000) and at the Jornada Científica - Estudos Clássicos e Nova Filologia (Lisbon, 17-18 May 2001), new functions have been suggested: for example, being able to automatically recognise the correct value covered by individual words considered in their own syntactic context. Thanks to the LECTIO project, partially supported by the national R&D agency (FCT), it has been possible to study new algorithms, in order to better meet didactic needs, and to develop new capabilities also useful for research purposes.
One function has been implemented which enables the basic elements of the sentence to be highlighted with different colours. An example of this function is shown in figure 5.
Figure 5
The results of the analysis implemented so far are encouraging, but further work is needed. New algorithms will be implemented very soon, which will improve this module, especially regarding the delimitation of sentences.
Complementary Remarks
Basically, OLISSIPO is the prototype of a didactic application for acquisition of the Latin lexicon. Essentially, it produces lists of basic Latin vocabulary enhanced with linguistic and extra-linguistic information, in an open environment where the user has tools for adapting the prototype according to his specific needs.
If the student is provided with lists of vocabulary taken from texts by particular authors (chosen by the teacher according to concrete criteria), his effort to improve his knowledge of new vocabulary will be supported by objective data, and the teacher can rationalise and stimulate the effort made. At the same time, the knowledge of vocabulary previously acquired is consolidated.
The statistical results provided by OLISSIPO are of great help. If we want to facilitate, for instance, the study of demonstrative pronouns in the analysis of a text such as Cicero, Verr. 4, 48, 106, the diagram which shows the statistical results will support the validity of the chosen text for the lexical and morphosyntactic work.
In order to avoid OLISSIPO being a closed tool, it has been devised so that it can be reused and associated with other tools in the field of computerised processing of the Latin language (for example, text sources and information sources in general). A considerable part of the methodology developed for OLISSIPO will be reused in the LECTIO project currently being developed at the Centro de Estudos Clássicos da Universidade de Lisboa and the Istituto di Linguistica Computazionale di Pisa (CNR). The new tool, which will be developed not only for didactic purposes, will also be useful for researchers interested in corpus processing.
In brief, OLISSIPO is of great help to students and teachers in their daily work of teaching and learning Latin and also to researchers in the work of linguistic analysis or in the construction of new applications. For the moment, it will be available on CD but the possibility of putting it onto the Internet is also being studied.
LEMLAT, Analizzatore Morfologico Latino, is a CNR patent; the authors are Dr. Andrea Bozzi and Prof. Nino Marinone (for the linguistic aspects) and Dr. Giuseppe Cappelli (for the computational aspects).
References
• Alberto P., «O projecto Olissipo: uma aplicação no âmbito do ensino do latim», Euphrosyne, 30, 2003, 335-338.
• Bozzi A., Cappelli G., «A project for Latin lexicography: 2. A Latin morphological analyzer», Computers and the Humanities, 24(5-6), 1990, 421-426.
• Cappelli G., Passarotti M., «LEMLAT: uno strumento computazionale per l'analisi linguistica del latino - sviluppo e prospettive», Euphrosyne, 31, 2003, 519-531.
• Delatte L., Evrard E., Dictionnaire fréquentiel et index inverse de la langue latine, Herent, Liège, 1981.
• Nascimento A., Alberto P., Cappelli G., Pena A., «Identificação automática de elementos básicos da frase latina: o projecto Olissipo», Euphrosyne, 31, 2003, 515-518.
• Nascimento A., Alberto P., Cappelli G., «OLISSIPO - entre filologia e informática: recursos para gerir o estudo do texto latino», Euphrosyne, 32, 2004. |
219,308,595 | [] | Building the Emirati Arabic FrameNet
May 2020
A Gargett andrew.gargett@stfc.ac.uk
Hartree Centre (Science & Technology Facilities Council)
United Kingdom
T Leung leung@uaeu.ac.ae
United Arab Emirates University
United Arab Emirates
Building the Emirati Arabic FrameNet
Proceedings of the International FrameNet Workshop 2020: Towards a Global, Multilingual FrameNet
Marseille, May 2020. Keywords: Emirati Arabic, FrameNet, corpus linguistics
The Emirati Arabic FrameNet (EAFN) project aims to initiate a FrameNet for Emirati Arabic, utilizing the Emirati Arabic Corpus. The goal is to create a resource comparable to the initial stages of the Berkeley FrameNet. The project is divided into manual and automatic tracks, based on the predominant techniques being used to collect frames in each track. Work on the EAFN is progressing, and we here report on initial results for annotations and evaluation. The EAFN project aims to provide a general semantic resource for the Arabic language, sure to be of interest to researchers from general linguistics to natural language processing. As we report here, the EAFN is well on target for the first release of data in the coming year.
Introduction
The Emirati Arabic FrameNet (EAFN) project aims to initiate a FrameNet for Emirati Arabic, utilizing the Emirati Arabic Corpus (EAC, Halefom et al. 2013). The goal is to create a resource comparable to the initial stages of the Berkeley FrameNet (Baker et al. 1998). A FrameNet (FN) is a corpus-based resource, documenting the semantics of a natural language by linking the "lexical units" (or form-meaning pairings) of the language, such as words, to "frames". Frames represent the background knowledge against which lexical units are understood. This background knowledge typically surfaces in how a lexical unit is used in some situation, together with syntactically related units, termed "frame elements". For example, lexical units such as accuse, blame and esteem all have in common a JUDGEMENT frame, since they typically involve "a Cognizer making a judgment about an Evaluee" (such frame elements are usually presented capitalized).
This notion of a "Frame Semantics" has been pursued by Charles Fillmore and colleagues for over 4 decades, with a vast body of research to support the approach (e.g. Fillmore 1982. Fillmore et al. 2003, much of which can be accessed from the Berkeley FrameNet website. 1 Fillmore's key insight is that an individual's use of specific items in their language is structured by the background knowledge referred to above. Thus, expressing notions of judging draws upon a "'domain' of vocabulary whose elements somehow presuppose a schematization of human judgment and behavior involving notions of worth, responsibility, judgment, etc.'' (Fillmore 1982). This enables generalizations to be made about natural language patterns in terms of frames, which the FN seeks to capture.
A FN for a natural language thereby provides a rich and highly nuanced model of the syntactic and semantic patterns of the language. A FN project has the potential to add a number of valuable component resources to any existing corpus:
a) Fine-grained information about grammatical roles and relations.
b) A searchable database of semantically oriented annotations.
c) Easily accessible and semantically organized example sentences, especially useful for language learning and teaching.
d) Detailed annotations in a gloss language, such as English in the case of the EAFN project, also a significant resource for language learning and teaching.
The EAFN will be an invaluable resource for primary theoretical research on Emirati Arabic, as well as for additional forms of research crossing a number of disciplines, including natural language processing, information retrieval, corpus linguistics, second language acquisition teaching and research, machine translation, psycholinguistics, and artificial intelligence. FNs are currently available for such major languages as English, German (Rehbein et al. 2012) and Japanese (Ohara 2012). FNs typically accompany a corpus resource of some description in the target language, and the EAFN will employ data from the EAC for this purpose.

Footnote 1: https://framenet.icsi.berkeley.edu/fndrupal/
The Emirati Arabic Corpus
The Corpus of Emirati Arabic (EAC) was established and licensed by the Department of Linguistics at the United Arab Emirates University (Halefom et al. 2013). The EAC is a three-million-word corpus of Emirati Arabic. The data of the EAC was drawn from various naturalistic sources such as radio and TV interviews, and daily conversations. It also consists of some scripted conversations such as TV dramas and documentaries.
While the current size of the EAC is not comparable with other full-fledged corpora (e.g. the British National Corpus), the EAC is the first annotated corpus of spoken Arabic (cf. other annotated corpora, which are based on Modern Standard Arabic). It also serves as a useful tool for other potential research.
The EAC is fully annotated using the International Phonetic Alphabet (IPA). Narrow transcriptions are used, in which detailed phonetic information rather than the citation form is described. In addition to the phonetic details, the EAC also provides further annotation including morphological boundaries (\mb), glossing (\ge), part of speech (\ps), and translation (\ft). For instance, Tables 1 and 2 contain two annotated examples from the EAC.
The Emirati Arabic FrameNet Project
The EAFN project aims to describe the range of semantic and syntactic combinations of each word in our collection in each of its senses. As mentioned earlier, this collection is a sub-corpus extracted from the EAC. To add FN information, annotators perform computer-assisted annotation of example sentences, these annotations then being collected in the EAFN database.
Currently, the Berkeley English FrameNet (BEFN), which began in 1997, consists of in excess of 13,000 entries for senses of lexical units and over 190,000 manually annotated sentences, representing more than 1200 frames. Started in 2015, the EAFN project originally planned for the initial 2-year phase to collect 1000 senses of lexical units, up to 10,000 annotated sentences, and an expected 100+ frames for Emirati Arabic. However, while a substantial amount of these initial objectives was met, the project was delayed until earlier this year, due to a change in circumstances for the first author. We are currently planning to complete the project by the end of this current year. Delivery of this database will represent a huge advance in knowledge about the language, and lay the groundwork for development of a rich array of corpus-based and other resources, including descriptive, computational and teaching and learning resources, for Emirati Arabic.
Our project aims to make a significant contribution to the level of resources for Arabic, and especially Emirati Arabic. The only comparable work to date is from outside the region; for example, the Corpus of Quranic Arabic has been developed within the Computer Science Department at Leeds University. However, our project differs from such previous work in that it aims to deliver large-scale, deep-level syntactic (grammatical roles) as well as semantic (argument roles) information for this dialect of Arabic. This will involve developing novel collection materials, much of which involves using the BEFN.
Regarding research outcomes, the project aims to deliver a store of primary linguistic information about syntactic and semantic patterns of Emirati Arabic, in a detailed and searchable database of such patterns in this language. The information stored in this database will include:
1) Raw sound files (from the current Emirati Arabic Corpus).
2) Arabic and English transcriptions of the data (a variety of texts in Emirati Arabic).
3) Annotations in the International Phonetic Alphabet of the files listed in (1) above (from the current Emirati Arabic Corpus).
4) FrameNet annotations, including Frame Element (FE) components for each lexical unit:
a) Frame Element (FE) name for the lexical unit
b) Grammatical function (e.g. subject, object, etc.)
c) Phrase type (e.g. noun phrase)
Method
The annotation in this project combines manual and automatic annotation techniques, and integrates these at several points, as explained below.
FrameNet Annotation
Formally, FN annotations are sets of triples that represent the FE realizations for each annotated sentence, each consisting of the frame element's name (for example, Food), a grammatical function (say, Object) and a phrase type (say, noun phrase). Working these out for a newly encountered language requires a range of decisions to be made. The first stage of our project involved developing a manual annotation protocol, as well as preparing the subcorpus of EAC texts for annotation (e.g. extracting citation forms for lexical units).
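To make the shape of these records concrete, the following Python sketch defines a minimal container for one annotated sentence and its FE triples; the class and field names are our own illustrative choices, not a published EAFN schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FERealization:
        # One (frame element name, grammatical function, phrase type) triple.
        fe_name: str        # e.g. "Cognizer"
        gram_function: str  # e.g. "Subject"
        phrase_type: str    # e.g. "NP"

    @dataclass
    class AnnotatedSentence:
        text: str
        lexical_unit: str   # the frame-evoking word
        frame: str          # e.g. "JUDGEMENT"
        fes: List[FERealization] = field(default_factory=list)

    sent = AnnotatedSentence(
        text="The critic blamed the chef.",
        lexical_unit="blame",
        frame="JUDGEMENT",
        fes=[FERealization("Cognizer", "Subject", "NP"),
             FERealization("Evaluee", "Object", "NP")],
    )
    print(sent.frame, [(fe.fe_name, fe.gram_function, fe.phrase_type) for fe in sent.fes])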
Developing a FN typically proceeds as follows (Fillmore and Atkins 1998, Fillmore et al. 2003, Boas 2009):
1) Select the words to be analyzed.
2) Starting from the primary corpus (for the proposed project, this is the Emirati Arabic Corpus), define frame descriptions for these words by:
a) first, providing in simplified terms a description of the kind of entity or situation represented by the frame,
b) next, choosing labels for the frame elements (entities or components of the frame),
c) finally, collecting words that apparently belong to the frame.
3) Next, focus on finding corpus sentences in the primary corpus that illustrate typical uses of the target words in specific frames.
4) Then, the sentences from (3) are annotated by tagging them for frame elements.
5) Finally, lexical entries are automatically prepared and stored in the database.

Table 2: Example from the EAC
\tx wεsˁalt
\mb wεsˁal-t
\ge arrived-2sp
\ps v-pro
\ft Did you arrive?

Building a FN for a language from scratch involves a range of decisions, both linguistic and non-linguistic, raising questions about having sufficient data, about the kind of information to include (dependent on the size and scale of the project aims), and also about the tools required to carry out the work. Relatedly, there are questions about the overall approach to building the FN, such as whether to employ largely manual or automatic techniques, there being advantages and disadvantages on both sides. As can be seen from the above outline of a procedure for annotating frames, the complexity of annotating semantic information means manual annotation would be expected to yield higher quality data, although relatively much more expensively, whereas automatic annotation would potentially yield much more, lower quality data, albeit far more cheaply.
In our project, we have combined manual and automatic annotation procedures, to maximize quality and yield, over the longer term of the project itself. Having a foundation of manually annotated frames provides for the EAFN a solid core on which to build our database. On the other hand, we faced a lengthy lead-in time for developing suitable software tools for the automatic annotation, and so having the manual annotation track enabled an immediate start on frame collection. Further, and perhaps more importantly, the manually collected gold-standard can be used to evaluate the output of automatic annotation, and in turn, manual annotators are able to evaluate the results of automatic annotation.
It might at first seem counter-intuitive that such a resource can indeed be constructed automatically, given the semantic complexity of natural language. Ambiguity abounds in daily communication, making the proposal that a computer system could somehow automatically perform accurate and reliable annotation a somewhat dubious one. However, it turns out that a key factor in being able to achieve this is the generality of the notion of frame, in particular its definition in usage-based terms: this definition leads us to expect that there is a significant overlap between the set of frames in one language and a completely unrelated language, since a frame consists of knowledge about the situations in which a specific language is used, and a significant number of such situations are common across languages. For example, while currencies and even protocols for proper financial arrangements may differ from country to country, the Transaction frame, wherein goods are exchanged for tokens or other goods of equal worth, is ubiquitous across language settings, covering a range of activities, such as buying, selling, bartering, trading, and the like. The automatic side of the project aims to build resources able to leverage this generality of frames, and thereby interface the English FN with an Arabic language resource, in order to capture frames common across each language. Of course, this generality is known to be limited (e.g. Boas 2009), although we have anticipated this with the manual annotation side of our project, which provides a capacity within our project for discovering frames unique to (Emirati) Arabic. We also acknowledge the difficulty of the challenge involved in being able to build such a resource for generating frames across distinct languages (on this, see e.g. recent work by Czulo et al. 2019).2 However, we are heartened by a range of results, particularly using more recent, scaled-up data-driven approaches to Machine Translation, where Deep Neural Networks are making significant gains in automating the task of relating the semantics of one language to another,3 and such work is already yielding impressive results (e.g. ElJundi et al. 2019).

Footnote 2: We would like to thank an anonymous reviewer for pointing this out (complete with reference).
2.1.1 Manual annotation
One standard approach to building a large-scale resource like a FN is to construct a representative sample of the language, on which to carry out any required corpus analysis. Manual annotation on the EAFN follows this route, and starts from a sub-corpus specially selected from the EAC for this task.
In spring 2014, a research collaboration was established between the UAE University and the University of Birmingham with the aim of enriching the EAC by providing frame annotations. In particular, the research purpose is to annotate the EAC by adopting the framework laid out by the Berkeley FN (Baker et al. 1998). Researchers at the UAEU manually annotated the EAC with frames. Manual annotation was initiated with native Arabic speaker annotators being trained by the main EAFN researchers in frame annotation, in line with the protocols established by the Berkeley English FN (see section 2.1 above). Annotators then carried out annotation of sentences sampled from the EAC.
Below are two examples of the same lexical unit ʔəmʃii, which stems from the tri-consonantal root mʃʔ. All conceptual frames are arrived at through corpus-driven techniques, rather than through native speaker introspection. Note that for these initial stages of the EAFN, labels for frames and FEs have been largely drawn from the Berkeley English FN, although we fully anticipate this will need to be revised as the project further develops.
The initial annotation process was carried out iteratively in two phases. During the initial development phase, annotators built the database using backslash entries, as demonstrated in Tables 1 to 4. In the second phase, this backslash database was converted into an XML database using custom-built parsers; for this phase, the initial annotation protocol can be refined, involving reconsideration of the range of categories required for annotating frames in Emirati Arabic, as well as the procedures for this annotation. For this first round of annotations, these phases gave rise to the foundation of the EAFN database; subsequent rounds of annotations continue to employ both phases, enabling a relatively flexible arrangement.
Furthermore, this approach to building a database requires a minimal setup of a laptop on which to run a text editor, making the task highly mobile and relatively technology independent, with annotators employing relatively lightweight tools. Note that the flexibility of such a set-up potentially facilitates collecting such data in a more typical fieldwork-type setting. Finally, by extending the custom parsers for the backslash database, we can extract the required information as XML, thereby making our database (re)usable in a range of ways.
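The backslash-to-XML conversion can be pictured with a short script. The Python sketch below assumes each record is a block of "\marker value" lines (as in Table 2), with records separated by blank lines; it is an illustration under those assumptions, not the project's actual parser.

    import sys
    import xml.etree.ElementTree as ET

    # Assumed mapping from backslash markers to XML element names.
    FIELDS = {"tx": "text", "mb": "morphemes", "ge": "gloss", "ps": "pos", "ft": "translation"}

    def parse_backslash(lines):
        # Split the input into blank-line-separated records of "\marker value" fields.
        record, records = {}, []
        for line in lines:
            line = line.strip()
            if not line:
                if record:
                    records.append(record)
                    record = {}
            elif line.startswith("\\"):
                marker, _, value = line[1:].partition(" ")
                record[marker] = value.strip()
        if record:
            records.append(record)
        return records

    def to_xml(records):
        root = ET.Element("entries")
        for rec in records:
            entry = ET.SubElement(root, "entry")
            for marker, tag in FIELDS.items():
                if marker in rec:
                    ET.SubElement(entry, tag).text = rec[marker]
        return ET.ElementTree(root)

    sample = "\\tx wεsˁalt\n\\mb wεsˁal-t\n\\ge arrived-2sp\n\\ps v-pro\n\\ft Did you arrive?\n"
    to_xml(parse_backslash(sample.splitlines())).write(sys.stdout, encoding="unicode")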
Automatic annotation
This side of the project brings together a variety of Natural Language Processing tools, aiming to construct a state-of-the-art system for automatically generating frames for Emirati Arabic. There have been a variety of attempts to use the Berkeley English FN to help build FNs in other languages (De Cao et al. 2008, Tonelli et al. 2009), often by linking existing electronic resources, such as a dictionary, in a target language to the English FN in some way, in order to label items from this language with frames from the English FN.
Along these lines, our approach makes use of the English FN (i.e. the Berkeley English FN), and the English and Arabic Wiktionaries. In order to link these resources, we have customized available NLP tools, and also built such tools from scratch, in order to use these resources to derive candidate frames for the EAFN, based on those from the English FN. A major part of this work has involved using the tools made available by the Ubiquitous Knowledge Processing (UKP) Lab at the University of Darmstadt in Germany. 4 In particular, we employed tools for parsing the English and Arabic Wiktionaries, the Java-based Wiktionary Library (JWKTL), 5 and the UBY database 6 (Gurevych et al. 2012).
Considering the UBY database first, we built tools for extracting information from it, in order to bridge the English Wiktionary and the English FN. This database stores a wealth of Wiktionary-related information across a range of languages, such as English and Arabic, as well as links to other resources, in particular the English FN. We extracted the following information from this:
1) For each English Wiktionary lexeme:
a) its written form
b) its sense
2) For each English FrameNet lexical unit matched to an English Wiktionary lexeme:
a) its index in the English FN
b) its UBY definition [essentially a gloss]
As well as supplying a ready-made parser for the English Wiktionary, the JWKTL library provides the means for customizing a parser for the Arabic Wiktionary; while wiktionaries largely overlap in their format, there can be significant differences from one language to another.
Actual entries in individual language wiktionaries contain information about a specific lexeme in that language, but also, importantly for our purposes, links to translations of this lexeme in wiktionaries of other languages; e.g. the English Wiktionary entry for book links to the Arabic Wiktionary entry for كتاب (this Arabic word being a direct translation of the English).
Using the newly customized parser for the Arabic Wiktionary, and the one already available for the English Wiktionary, we were able to collect information from both wiktionaries, as follows: for each lexeme in the English Wiktionary, we collected:
1) Word form
2) Part-of-speech
3) All possible definitions for this lexeme
4) The lexeme in the Arabic Wiktionary to which the English lexeme has been linked. For each of these Arabic lexemes, we also collected:
a) Word form
b) Part-of-speech
c) Definition [supplied in English]
Now, these links between the English and Arabic Wiktionaries are one-to-many, in that there are many possible Arabic word forms for each English lexeme. This means we need to carry out a disambiguation of some kind, if we are to properly align the FN and Wiktionary resources. Taking this need for disambiguation into account, we proceed with the alignment in two stages:
1) First, for each English Wiktionary lexeme from the UBY database, we split the list of English Wiktionary definitions, and calculate a measure of the similarity between this lexeme's UBY definition and its Wiktionary definition. For this work, we used the Gensim word2vec tools,7 and trained models for this based on the so-called "1 Billion Word Language Model Benchmark".8 We use this similarity measure as part of an automatically derived overall confidence score, which we later use when comparing competing frame entries in the database.
2) Second, we align the English Wiktionary definitions with the Arabic Wiktionary definitions, again calculating a similarity measure between these definitions (with the same set-up for Gensim word2vec referred to above), as another automatically derived component of the above-mentioned confidence score. A sketch of this definition-similarity computation is given below.
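For the similarity measure itself, a common recipe is to take the cosine between the averaged word2vec vectors of the two definitions, which gensim exposes directly as n_similarity. The Python sketch below illustrates this; the toy training corpus and the simple lowercase tokenization are illustrative assumptions, whereas the project's actual models were trained on the 1 Billion Word benchmark.

    from gensim.models import Word2Vec

    # Train a small word2vec model on toy data; the project's models were instead
    # trained on the "1 Billion Word Language Model Benchmark".
    toy_corpus = [
        "a written or printed work consisting of pages".split(),
        "a bound set of printed sheets of paper".split(),
        "an act of moving on foot".split(),
    ]
    model = Word2Vec(toy_corpus, vector_size=50, min_count=1, epochs=50)

    def definition_similarity(def_a, def_b, wv):
        # Cosine similarity between the mean vectors of the two token lists,
        # restricted to in-vocabulary tokens.
        ws1 = [w for w in def_a.lower().split() if w in wv]
        ws2 = [w for w in def_b.lower().split() if w in wv]
        if not ws1 or not ws2:
            return 0.0
        return float(wv.n_similarity(ws1, ws2))

    print(definition_similarity(
        "a written or printed work consisting of pages",
        "a bound set of printed sheets of paper",
        model.wv,
    ))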
The automatically collected frame annotations of items from the Arabic Wiktionary currently consist of lexical units (i.e. pairings of lemma and frame), including a confidence measure derived from the strength of the match between the English FN and Wiktionary definitions, on the one hand, and between the English and Arabic glosses/definitions, on the other. Future work will involve extending this to include annotations of Frame Elements.
Corpus progress
While the initial release of the EAFN is still under development, immediately below we provide a snapshot of the current data collection, for the initial stages of each collection track. In the next section, we present more detailed evaluations of both the automatic and manual collection efforts.
Currently the EAFN covers verbs only. For manually gathered entries, we have collected 29 frames and 360 LUs. As we show later in this section, in initial evaluation studies we have found reasonably high inter-annotator agreement for the manual annotation. We have also implemented a fully automatic procedure for collecting entries, through which we have gathered 630 frames and 2,100 LUs. Of course, such results need to be treated with a great deal of caution: initial evaluation suggests that only a fraction of this data will be of sufficient quality to justify its being retained for the initial release of the EAFN database.
While we are listing manually and automatically collected entries separately at this stage, these will be combined for the initial release of the database.
Finally, we should also emphasize that the two sources of language differ in dialectal terms: the manual track works directly from the EAC, so its yield is dialect-based, whereas the automatic track works from Wiktionary, which is closer to Modern Standard Arabic. This combination of dialects within the same resource raises many issues, and we intend to begin addressing these during the latter part of the current project, which constitutes the initial development stage of the EAFN. However, more comprehensive solutions to the issues raised are likely to come in later stages of the EAFN, once we have completed the initial release of the database.
Evaluation
Semantic annotation is fraught with issues regarding lack of reliability and accuracy, making quality control of data a key component of any project in this area. While our project is still at an early stage of development, we are working toward an initial release of our data, for which we are developing a comprehensive evaluation regime incorporating both the manual and automatic annotation tracks. A description of this, as well as some early results, is included in the rest of this section.
Manual track
3.1.1 Procedure
We are currently piloting several evaluation tasks, targeting the accuracy of judgements about frames and the core elements of those frames. For these tasks, we first extract a random sample from the EAC, and annotators then annotate this data according to the annotation protocols we have developed (see Section 2 above). We then apply various measures of agreement between the annotators. We have several measures of the quality of this data, centering on degrees of overlap in the annotations of two of the annotators currently involved in the collection efforts at the UAEU. The statistic we are using here is Cohen's kappa coefficient:
κ = (Pr(a) − Pr(e)) / (1 − Pr(e))

where Pr(a) is the probability of observed agreement among raters, and Pr(e) captures chance agreement; the higher the value of κ, the better the agreement between annotators. There are various interpretations of such scores; for example, 0.60 is often considered a threshold, with scores above this being taken to indicate "substantial agreement" (Landis & Koch 1977). κ enables quantifying inter-annotator agreement (IAA), particularly for qualitative data, which suits our evaluation task, involving as it does detailed semantic knowledge. 9
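As a minimal sketch of this computation (using scikit-learn rather than the R irr package mentioned in the footnote; the frame labels are invented for illustration):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical frame labels assigned to the same sampled items by two annotators.
annotator_1 = ["Self_Motion", "Ingestion", "Self_Motion", "Giving"]
annotator_2 = ["Self_Motion", "Ingestion", "Motion", "Giving"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.3f}")  # values above 0.60 are often read as substantial
```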
Results
For comparison of frame annotations on our sample, we achieve the following: κ = 0.790 (p-value ≪ .001, N = 31). For annotation of core FEs, we achieve κ = 0.899 (p-value ≪ .001, N = 31). This shows that, using the protocol we have devised, annotators are achieving very good levels of agreement for judgements about FEs, and acceptable agreement for judgements about frames.
Automatic track
3.2.1 Procedure
Evaluating the automatic annotation provides a key point of convergence between the two tracks. For this, the manual annotators evaluated the output of the automatic system; their responses to the automatically generated frames required them to draw on their intuitions, which are grounded in their direct experience of building the manual collection of frames. Feedback from the annotators is crucial for pinpointing where further development of the automatic system is required. In this way, our aim is for the automatic track to more closely approximate the results of the manual track.
The procedure we followed here involved manual annotators going through individual, automatically generated LUs, complete with brief information about the target LU as well as the frame assigned to it. Each annotator was given a total of 198 randomly sampled lexical units to evaluate, rating each on the following 5-point scale: 1 = Completely correct, 2 = Mostly correct, 3 = Acceptable, 4 = Mostly incorrect, 5 = Completely incorrect. The sample was further split according to two conditions: either (1) the rendering of the lexical unit in Arabic script included vowel information, or (2) it did not. For Arabic script, information about vowels can help disambiguate LUs, and can potentially influence the rating assigned to any specific LU. We are interested in investigating such aspects of the automatic collection process more closely.
The key statistic we are using here is Cohen's kappa coefficient, the same statistic used for measuring agreement during evaluation of the manual annotation task. The difference for the task of evaluating the automatic annotation is that it results in ordered data (a Likert scale), and so we need to use weighted kappa coefficients; specifically, we use squared weights, whereby disagreements are weighted according to their squared distance from perfect agreement. Table 5 presents the results of this evaluation, with the evaluation categories used by both annotators across the top and down the leftmost column, and the cells showing how scores matched for each item. From this, we can see that by far the largest number of matches is where annotators agree that an item is "completely correct", the next highest being where one annotator judged an item "mostly correct" and the other judged the same item "completely correct".
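A sketch of the weighted variant, again using scikit-learn with invented ratings; weights="quadratic" implements the squared-distance weighting described above:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point ratings (1 = completely correct ... 5 = completely incorrect)
# given by two annotators to the same automatically generated LUs.
rater_1 = [1, 1, 2, 3, 5, 1, 4]
rater_2 = [1, 2, 2, 3, 4, 1, 5]

# Quadratic weights penalize disagreements by squared distance on the scale.
weighted_kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
print(f"Quadratic-weighted kappa: {weighted_kappa:.3f}")
```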
Results
When ignoring the vowel vs. no-vowel condition, we achieve the following: κ = 0.443 (p-value ≪ .001, N = 198). However, when taking into consideration the vowel vs. no-vowel condition, this score improves somewhat: κ = 0.602 (p-value ≪ .001, N = 83).
Overall, we can see that general agreement between annotators is quite low, despite the largest single category of matches being "completely correct". This suggests possible problems, and indeed errors, in many of the automatically collected frames. On the other hand, when we partition the data set and extract those items with vowel information, the IAA for this subset improves considerably, suggesting that such information is an important component to incorporate in future automatically acquired collections for the EAFN.
Conclusion
We have presented early results for the first iteration of the Emirati Arabic FrameNet (EAFN). The EAFN is a general semantic resource for the Arabic language, which is sure to be of interest to a range of researchers, from those in linguistics to others working within natural language processing. The project is divided into manual and automatic tracks, based on the predominant techniques being used to collect frames in each track. Despite a hiatus, work on the EAFN has recommenced; we have reported initial results for the annotations carried out in both tracks, and for the evaluation of these annotations. The EAFN is well on target for the first release of data in the coming year.
Acknowledgements
We would like to thank S. Al Helou and W. Ghaban for help with the evaluation. We would also like to thank M. Elkaref for some early technical help with the JWKTL library, as well as valuable discussions about carrying out computational linguistic modelling of Arabic. Finally, we would also like to thank J. Ruppenhofer for discussions about the English FrameNet.
Table 3: Example from the EAFN

\entryid        EAC0016
\root           mʃʔ
\lexeme         ʔamʃii
\gloss          Walk
\pos            verb
\frame          Self_Motion
\corefe1_label  Path
\corefe1_item   fi ha Siiħra
\corefe1_gloss  in the desert
\example        ʔamʃi fi ha Siiħra
\free_trans     I will walk in this desert.

Table 4: Example from the EAFN
Table 5: Evaluation of the automatic track (1 = Completely correct, 2 = Mostly correct, 3 = Acceptable, 4 = Mostly incorrect, 5 = Completely incorrect)
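The backslash-tagged record format shown in Table 3 is straightforward to process; below is a minimal, hypothetical sketch of reading one such entry into a dictionary (the parsing helper is our own illustration, not part of the EAFN tooling):

```python
# One EAFN entry in the backslash-tagged format of Table 3.
RECORD = """\\entryid EAC0016
\\root mʃʔ
\\lexeme ʔamʃii
\\gloss Walk
\\pos verb
\\frame Self_Motion
\\corefe1_label Path
\\corefe1_item fi ha Siiħra
\\corefe1_gloss in the desert
\\example ʔamʃi fi ha Siiħra
\\free_trans I will walk in this desert."""

def parse_entry(record: str) -> dict:
    """Map each backslash tag to its value, e.g. {'frame': 'Self_Motion'}."""
    fields = {}
    for line in record.splitlines():
        tag, _, value = line.lstrip("\\").partition(" ")
        fields[tag] = value
    return fields

entry = parse_entry(RECORD)
assert entry["frame"] == "Self_Motion" and entry["corefe1_label"] == "Path"
```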
For a very recent example of this, see work by the Tsinghua University Natural Language Processing Group (https://github.com/THUNLP-MT/THUMT/)
4 https://www.informatik.tu-darmstadt.de/ukp/
5 https://dkpro.github.io/dkpro-jwktl/
6 https://dkpro.github.io/dkpro-uby/
7 https://radimrehurek.com/gensim/
8 http://www.statmt.org/lm-benchmark/
9 For all of this, we have used the irr package in R, which has been specifically designed for modelling "interrater reliability and agreement."
Bibliographical References
Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998). The Berkeley FrameNet project. In Proceedings of the 17th International Conference on Computational Linguistics, Volume 1 (pp. 86-90). Association for Computational Linguistics.
Boas, H. C. (Ed.). (2009). Multilingual FrameNets in Computational Lexicography: Methods and Applications (Vol. 200). Walter de Gruyter.
Czulo, O., Torrent, T. T., da Silva Matos, E. E., da Costa, A. D., & Kar, D. (2019). Designing a frame-semantic machine translation evaluation metric. In 2nd Workshop on Human-Informed Translation and Interpreting Technology (HiT-IT 2019) (pp. 28-35).
De Cao, D., Croce, D., Pennacchiotti, M., & Basili, R. (2008). Combining word sense and usage for modeling frame semantics. In Proceedings of the 2008 Conference on Semantics in Text Processing (pp. 85-101). Association for Computational Linguistics.
ElJundi, O., Antoun, W., El Droubi, N., Hajj, H., El-Hajj, W., & Shaban, K. (2019). hULMonA: The Universal Language Model in Arabic. In Proceedings of the Fourth Arabic Natural Language Processing Workshop (pp. 68-77).
Fillmore, C. (1982). Frame semantics. In Linguistics in the Morning Calm (pp. 111-137). Seoul, South Korea: Hanshin Publishing Co.
Fillmore, C. J., & Atkins, B. S. (1998). FrameNet and lexicographic relevance. In Proceedings of the First International Conference on Language Resources and Evaluation, Granada, Spain (pp. 28-30).
Fillmore, C. J., Johnson, C. R., & Petruck, M. R. L. (2003). Background to FrameNet. International Journal of Lexicography, 16(3), 235-250.
Gurevych, I., Eckle-Kohler, J., Hartmann, S., Matuschek, M., Meyer, C. M., & Wirth, C. (2012). UBY: A large-scale unified lexical-semantic resource based on LMF. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (pp. 580-590). Association for Computational Linguistics.
Halefom, G., Leung, T., & Ntelitheos, D. (2013). A Corpus of Emirati Arabic. NRF Grant 31 H001, United Arab Emirates University.
Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
Ohara, K. (2012). Semantic annotations in Japanese FrameNet: Comparing frames in Japanese and English. In LREC 2012 (pp. 1559-1562).
Rehbein, I., Ruppenhofer, J., Sporleder, C., & Pinkal, M. (2012). Adding nominal spice to SALSA: Frame-semantic annotation of German nouns and verbs. In Proceedings of the 11th Conference on Natural Language Processing (KONVENS'12) (pp. 89-97).
Tonelli, S., & Giuliano, C. (2009). Wikipedia as frame information repository. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Volume 1 (pp. 276-285). Association for Computational Linguistics. |
||
252,624,723 | Crowdsourced Participants' Accuracy at Identifying the Social Class of Speakers from South East England | Five participants, each located in distinct locations (USA, Canada, South Africa, Scotland and (South East) England), identified the selfdetermined social class of a corpus of 227 speakers (born 1986-2001; from South East England) based on 10-second passage readings. This pilot study demonstrates the potential for using crowdsourcing to collect sociolinguistic data, specifically using LanguageARC, especially when geographic spread of participants is desirable but not easily possible using traditional fieldwork methods. Results show that, firstly, accuracy at identifying social class is relatively low when compared to other factors, including when the same speech stimuli were used (e.g., ethnicity: Cole 2020). Secondly, participants identified speakers' social class significantly better than chance for a threeclass distinction (working, middle, upper) but not for a six-class distinction. Thirdly, despite some differences in performance, the participant located in South East England did not perform significantly better than other participants, suggesting that the participant's presumed greater familiarity with sociolinguistic variation in the region may not have been advantageous. Finally, there is a distinction to be made between participants' ability to pinpoint a speaker's exact social class membership and their ability to identify the speaker's relative class position. This paper discusses the role of social identification tasks in illuminating how speech is categorised and interpreted. | [
218974341
] | Crowdsourced Participants' Accuracy at Identifying the Social Class of Speakers from South East England
Amanda Cole amanda.cole@essex.ac.uk
Department of Language and Linguistics
University of Essex
Wivenhoe Park, Colchester, Essex, UK
Crowdsourced Participants' Accuracy at Identifying the Social Class of Speakers from South East England
Proceedings of the 2nd Workshop on Novel Incentives in Data Collection from People @LREC2022, pages 38-45, Marseille, 25 June 2022. © European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0
Keywords: social class, social identification tasks, language variation and change, sociolinguistics, citizen linguistics, crowdsourcing, South East England
Five participants, each located in distinct locations (USA, Canada, South Africa, Scotland and (South East) England), identified the selfdetermined social class of a corpus of 227 speakers (born 1986-2001; from South East England) based on 10-second passage readings. This pilot study demonstrates the potential for using crowdsourcing to collect sociolinguistic data, specifically using LanguageARC, especially when geographic spread of participants is desirable but not easily possible using traditional fieldwork methods. Results show that, firstly, accuracy at identifying social class is relatively low when compared to other factors, including when the same speech stimuli were used (e.g., ethnicity: Cole 2020). Secondly, participants identified speakers' social class significantly better than chance for a threeclass distinction (working, middle, upper) but not for a six-class distinction. Thirdly, despite some differences in performance, the participant located in South East England did not perform significantly better than other participants, suggesting that the participant's presumed greater familiarity with sociolinguistic variation in the region may not have been advantageous. Finally, there is a distinction to be made between participants' ability to pinpoint a speaker's exact social class membership and their ability to identify the speaker's relative class position. This paper discusses the role of social identification tasks in illuminating how speech is categorised and interpreted.
Introduction
The extent to which people can identify another person's class from their speech is an important consideration in sociolinguistics for two principal reasons. Firstly, social identification tasks, in which participants attempt to identify social information about a person such as class, ethnicity, gender, age or sexuality from speech stimuli, inform us of how different social categories are referenced in participants' minds from speech. Patterns of accuracy in social identification tasks reveal to what extent different social labels and groupings are meaningful categories for participants and to what extent participants have accurate linguistic representations of these social groupings (see Campbell-Kibler 2010 for an overview). Secondly, social identification tasks aid our understanding of how discrimination and stereotyping are linked to linguistic variation. If social information about a person can be identified from speech, then this contributes to our understanding of linguistic profiling and of the ways evaluations or judgements are made about people based on their speech. This paper presents the results of a pilot study exploring participants' accuracy at identifying the social class of speakers from South East England.
Social Identification Tasks
Accuracy at social identification tasks is in part related to the link between a social group and linguistic features. In sociolinguistics, the term "indexicality" refers to the ideological relationship between linguistic features and the social group, persona, characteristic or place that they signal (see Silverstein 2003; Eckert 2008). Linguistic features can index so-called macro-social groups such as class, gender and ethnicity, or micro-categories which reflect local identities (e.g., "jocks" vs. "burnouts" in Detroit: Eckert, 1989).
There are different orders of indexicalities (see Silverstein 2003). There could simply be correlations between social factors and linguistic features which do not attract overt commentary. At the opposite extreme, features may be socially salient such that people may perform, discuss, interpret and evaluate them. These linguistic features may become enregistered such that, following Johnstone's definition of enregisterment (2009: 159), linguistic features are linked with specific labels. In the same way that people may associate certain speech patterns with labels such as "Pittsburghese" (Johnstone, 2009), "Geordie" (Beal, 2018) or "chav" (Cole & Tieken, 2021), people may hold concepts of the way that different social class groupings such as "lower-working class" speak which may or may not be an accurate representation. In this way, social identification tasks shed some important insights into the links that participants make between speech and social groupings.
In addition, social identification tasks are important as they aid our understanding of how discrimination and stereotyping may be facilitated through linguistic perception and profiling. Purnell et al. (1999) demonstrated that in the US, a person's ethnicity could be determined from as little as the word hello. If social information about a person such as their ethnicity can be determined from their speech, then so too, speech can act as a vehicle for profiling and stereotyping. The authors also showed that when the same person inquired about a flat to let in a Standard American accent, they were more likely to receive a positive outcome such as an invitation to view the apartment than if they spoke in an African American or Chicano American accent (Purnell et al. 1999). If identifications about a person's social or demographic background can be made from speech alone, then the evaluations or judgements made about a person based on their speech can be a window into broader societal prejudice.
Previous work has shown that the lower a person's class in South East England, the more harshly they are judged, for instance on measures such as intelligence and friendliness (Cole 2021). In addition, it has been shown that when participants are instructed to assess potential candidates' interview performance and perceived hirability for a trainee solicitor position at a corporate law firm, there is a particular bias against "candidates" who spoke workingclass varieties from the South of England (Levon et al. 2021). Though studies have shown that working-class speakers are disadvantaged by their accent (which in itself is a marker that they are working class), there has not been substantive research into how accurately people's social class can be identified from their speech. This knowledge is an important component to understanding a fuller picture of how speech is perceived, categorised but also judged and evaluated in relation to social class.
Linguistic Variation and Class in Britain
Social class (or "class"), along with age, gender and ethnicity, is one of the most frequently studied social factors in sociolinguistics. The recurrent finding in a plethora of sociolinguistic production work in Britain, as well as many other locations, is that the lower a person's class, the more likely they are to use vernacular features. In contrast, the higher a person's class, the more likely they are to use standard features (see Cole, forthcoming for an overview). Trudgill (2001) envisages linguistic variation in Britain as a triangle shape with social class on the y-axis and regional variation on the x-axis at the base of the triangle. In essence, the lower a person's social class, represented at the base of the triangle, the greater linguistic variation. This means that working-class people tend to speak in ways that are regionally marked and vary, often substantially, to the dialects of other working-class people from different places to them. In contrast, as social class increases, the less regional variation is found. At the extreme, at the tip of the triangle, the highest classes in Britain are presumed to speak almost identically to each other, converging on Received Pronunciation (RP) (often called "Queen's English"). RP is an accent exemplified by the higher classes that is spoken across the country and is often defined as not being regionally marked, i.e., is not linked to where a person is from (Trudgill, 2001). It is well established then that the lower a person's class the more regional productions in their speech. It seems, then, like a sound, though to my knowledge an untested, hypothesis that the reverse is also true: the more regional productions in a person's speech, the lower their class. Following this, if participants are attuned to the structure of sociolinguistic variation in Britain, they may be able to infer a person's class by the degree of regional pronunciations in their speech.
It is worth emphasising that sociolinguistic variation is a matter of probabilities. A working-class person is more likely to produce a regional pronunciation, and at a higher rate, than a middle-class person. It is very rarely the case that middle-class people never produce a feature while it is produced without exception in the speech of working-class people from the same speech community. It is much more probable that the feature will be produced by both working-class and middle-class speakers, but at different rates. Therefore, sociolinguistic variation is, at least in terms of social class, group-preferential and not group-exclusive. Following this, in a social class identification task, it is not simply the case that a participant who hears a regional linguistic feature can be assured that the speaker is working-class. These features will also most likely be used by some middle-class speakers in the same community, but presumably to a lesser extent. Social class identification tasks test to what extent participants are attuned to sociolinguistic variation and can make probabilistic assumptions about a person's class from speech stimuli.
Accuracy at Social Class Identification Tasks
Previous research on social class identifications from speech has been very limited. There have been previous studies on how linguistic variation is perceived in relation to social class. For instance, in New Zealand, Hay et al. (2006) asked participants to listen to audio stimuli which could be variably interpreted as two different words due to a vowel merger in the speech community. If participants were led to believe that they were hearing a working-class speaker, they were more likely to believe they had heard productions that are more common among working-class speakers. Similarly, Buchstaller (2006) found that British participants' overt associations between quotative go and the working class did not straightforwardly surface in their judgements of matched-guise stimuli. Social identification tasks have also tested listeners' accuracy at identifying speakers' ethnicity (Purnell et al., 1999; Holliday & Jaggers, 2015; Cole 2020), age (Dailey-O'Cain, 2000), sexuality and perceived masculinity/femininity (Munson 2007; Levon, 2014) and location (McKenzie, 2015). These studies have shown that, firstly, not all speaker groups are identified with equal accuracy, which is often related to the saliency of the different categories and their associated linguistic features. Secondly, not all participant groups perform the task with equal accuracy, which is often conditioned by participants' familiarity or exposure to relevant linguistic variation (see Clopper & Pisoni, 2004).
As a result, though no predictions are made about the direction of the effect in this present study, it may be that some social classes are identified more accurately than others and/or that it is easier to identify the social class of either men or women. In addition, the primary hypothesis of this paper is that the participant located in South East England will perform the task with highest accuracy. There are five participants in the study, each located in a different place: USA, Canada, South Africa, Scotland and (South East) England. In much the same way that a geographic proximity effect is found in participants' ability to identify speakers' geographic provenance (Montgomery, 2012), this paper predicts that the participant located in South East England will perform with highest accuracy. It is probable that they are most familiar with patterns of sociolinguistic variation and the class structure in South East England.
Methods
This study uses crowdsourcing through LanguageARC to collect data on levels of accuracy in the identification of speakers' social class from speech stimuli. This paper is based on data collected through a LanguageARC project (see Cieri et al., 2018, 2019), From Cockney to the Queen, which examines how language in South East England is produced, categorised and evaluated in relation to place, class and ethnicity (see Cole 2020 for further findings from this project). LanguageARC is an online resource which allows researchers to create language resource development projects in which members of the public can participate (Cieri et al., 2018, 2019). LanguageARC encourages members of the public, or "Citizen Linguists", to spare as little or as much time as they would like to contribute to linguistic research. The From Cockney to the Queen project was open for a limited period of time, and participants for this study were not overtly recruited; instead, they participated in the task as part of their contribution more generally to LanguageARC.
Research Questions
Can participants accurately identify the class of speakers significantly better than chance, and is their accuracy affected by:
a) speakers' gender?
b) speakers' social class?
c) participants' location (South East England; Scotland; USA; South Africa; Canada)?
Participants
In this study, the results of five participants are presented, each located in a different English-speaking area: (South East) England, Scotland, USA, South Africa and Canada. LanguageARC indicates the location of the participant at the point they took part in the experiment. It is not known how long participants have spent in that location or their linguistic background or levels of exposure to south-eastern varieties of English. More information such as age, gender and social class is not known about the participants.
It is also acknowledged that there is a very small number of participants in this present study due in part to the limited period of time that the project was open for contributions. The results presented are a pilot study and are tentative. This paper presents a case study, demonstrating how sociolinguistic data can be collected for sociolinguistic studies through crowdsourcing, specifically using LanguageARC. An advantage of this approach is that participants were not recruited to the task and instead, they completed it for their own enjoyment or desire to contribute to research. It is therefore likely that, though there was a very limited number of participants, they have engaged closely with the task.
In addition, through LanguageARC, participants from all over the world can easily contribute to research, as long as they have an internet connection and the willingness to take part. This overcomes some confounding factors that sociolinguists may face when recruiting participants, for instance, people from different locations or with different linguistic backgrounds who are recruited through their shared experience of living or studying in a single location. Although crowdsourcing is often considered for large-scale collection, it can also benefit collections where geographic spread is desirable but not possible using traditional fieldwork methods. The comparison of the participant located in South East England with those in other locations around the world would have been difficult without the crowdsourcing platform.
Stimuli and Procedure
Participants heard speech stimuli taken from a corpus of 227 speakers from South East England. The order of the speech stimuli was randomised for each individual participant. For each speaker, participants heard an approximately 10-second audio clip extracted from a passage reading. Participants then selected the class of the speaker from six options: "lower working", "upper working", "lower middle", "upper middle", "lower upper" and "upper upper", or they could choose to skip that speaker. A two-tier system was used within each class (e.g., working class was split into lower- and upper-working). This decision was made in order to align findings with production studies where this same division of classes is made. For instance, it has previously been acknowledged that the lower-middle and upper-working classes are key in leading language change (i.e., they have the highest rates of incoming variants for a variable in a process of change) (e.g., Labov 2001; see Cole, forthcoming, for discussion of class divisions in sociolinguistics).
"Lower upper" and "upper upper" were included as possible selections even though it may seem improbable that participants come into regular contact with upper class speakers in day-to-day life. However, this study did not want to make any prior assumptions about participants' backgrounds or their conceptions of the class structure or what constitutes each class. The "lower upper" and "upper upper" values were included to give participants the full range of options without making prior assumptions. In addition, "upper class" was also split into "lower" and "upper" so as to mirror the values added for both workingand middle-class. It is possible that including such a broad range may have affected the judgements of participants as they may have felt they needed to use the full range of responses. Nonetheless, if participants do indeed hold associations for the specific class labels then the full range of responses would not greatly skew participants' accuracy. In addition, participants' accuracy was tested not only as a binary outcome (correct classification vs. incorrect classification) but also as a correlation between speakers' class and participants' responses.
The audio clips were lexically identical and were taken from passage readings which were recorded as part of a larger study on language production and perception in South East England (see Cole, 2021). Although spontaneous speech would likely lead to a higher rate of vernacular features, a reading passage was chosen to control for contextual information and lexical choice. Each clip lasted approximately 10 seconds and was taken from a reading of the same sentence, which was chosen to include a range of linguistic variables known to be variable or important in South East England, such as (T)-glottalling, (ING), (H)-dropping, (L)-vocalisation and variation in the vowel system. This paper does not have the scope to investigate which linguistic variables and variants lead speakers to be identified as a certain class, but future research could do so. The sentence selected was:
"The sky is falling", cried Chicken Little. His head hurt and he could feel a big painful bump on it. "I'd better warn the others", and off he raced in a panicked cloud of fluff.
Analysis
A consideration with LanguageARC is that each participant could complete as many or as few of the 227 judgements as they wished. The task did not have to be completed in one sitting, and participants could return to the task at any point and pick up where they left off. In fact, Citizen Linguists at LanguageARC are encouraged to dip into tasks even if they only wish to spare a few minutes. Though this approach encourages active engagement, it also means that there will almost always be an imbalance in the datapoints collected for each participant. Also, as participants do not have to complete the task in full, not all speakers are heard by all participants.
There was a total of 146 datapoints, excluding the 19 instances in which participants skipped a speaker rather than attempting to identify their class. In addition, upper-class speakers, of whom there were only four, were heard a combined total of only three times. As a result, identifications made of the four upper-class speakers were not included in the analysis. In spite of this, participants could identify speakers' class from the 6-way distinction (i.e., including "lower-upper" and "upper-upper" class). This means that, in this analysis, on any instance that a participant considered a speaker to be either lower- or upper-upper class, they were not correct. However, it is still of interest to know which speakers, if any, were considered to be upper class, as this provides insights into participants' perceptual representation of the class system.
Of the 227 speakers in the corpus of speech stimuli, at least one identification was made for 115 speakers. Of the 146 judgements, 28 were made of lower-working speakers, 38 of upper working, 55 of lower middle, and 25 of upper middle. This pattern roughly matched the distribution of speakers' social classes. For instance, as mentioned, more speakers identified as lower-middle class than any other class and correspondingly, more lower-middle class speakers were heard by participants than any other class. In addition, there was an imbalance in the contribution of each participant. Of the 146 judgements, 67, 19, 20, 32 and 8 identifications were made by the participants located in South East England, South Africa, Scotland, Canada and the USA respectively.
The analysis was split into three parts. Firstly, it was tested whether participants' accuracy at identifying speakers' social class was better than chance. A one-sample Wilcoxon test was selected due to the non-parametric distribution of the datapoints. This test compared participants' average accuracy against the 1/6 probability of choosing the correct category by chance.
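As an illustration of this test (a sketch with invented per-participant accuracy values, using SciPy rather than whatever software the study itself used):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical accuracy proportions, one per participant.
accuracy = np.array([0.22, 0.18, 0.25, 0.20, 0.25])

# One-sample Wilcoxon signed-rank test against the 1/6 chance level:
# tests whether the differences (accuracy - 1/6) are centred on zero.
stat, p = wilcoxon(accuracy - 1 / 6)
print(f"W = {stat:.1f}, p = {p:.3f}")
```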
Secondly, a logistic regression was run in R using the glm function to test whether the gender or social class of speakers or the location of participants predicted the accuracy of the class identifications. The dependent variable in the model was the participants' accuracy for each judgement: a two-level categorical variable coded as either "yes" or "no". Lower-working class was the reference level for the class variable as the extreme of the scale. South East England was the reference level for the participant location variable as the obvious baseline of comparison and due to the hypothesis that this participant would perform with highest rates of accuracy. For all comparisons, α was set at 0.05.
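A rough Python analogue of the R glm set-up just described (the study used R's glm; the data frame below is randomly generated and the column names are our own):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 146  # one row per class judgement, as in the study

# Hypothetical long-format data: one row per judgement.
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),  # 1 = class identified correctly
    "gender": rng.choice(["F", "M"], n),
    "class_": rng.choice(["lower_working", "upper_working",
                          "lower_middle", "upper_middle"], n),
    "location": rng.choice(["SE_England", "Scotland", "USA",
                            "Canada", "South_Africa"], n),
})

# Treatment(...) sets the reference levels: lower-working class and
# South East England, mirroring the model described above.
model = smf.logit(
    "correct ~ gender"
    " + C(class_, Treatment(reference='lower_working'))"
    " + C(location, Treatment(reference='SE_England'))",
    data=df,
).fit(disp=False)
print(model.summary())
```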
Thirdly, a Kendall's correlation was run to test the ordinal association between two ranked variables for each participant: speakers' actual social class and the social class the participant classified them as. If a participant considers a lower-working class speaker to be upper-working class, this is arguably a more accurate judgement than considering the same speaker to be upper-middle class. The Kendall's correlation test established whether there were positive correlations in participants' performance: that is, did they tend to consider lower-class speakers to be of a lower class than higher-class speakers?
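A minimal sketch of this correlation test, with invented ordinal codings (1 = lower working ... 6 = upper upper):

```python
from scipy.stats import kendalltau

# Speakers' self-determined class vs. the class one participant assigned them.
actual_class = [1, 2, 3, 3, 4, 2, 1, 3]
assigned_class = [2, 2, 4, 3, 4, 4, 1, 5]

tau, p = kendalltau(actual_class, assigned_class)
print(f"tau = {tau:.2f}, p = {p:.3f}")
```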
Results
Did Participants Perform Better than Chance?
Participants made relatively balanced selections between the six choices: there were 18, 27, 29, 43, 17 and 12 selections for "lower working", "upper working", "lower middle", "upper middle", "lower upper" and "upper upper" respectively. Participants were more likely to consider speakers to be middle class, particularly upper-middle class, compared to any other class group.
Participants had relatively low rates of accuracy when identifying the class of speakers, averaging 21.9% (32/146) across all judgements and all participants. As a point of comparison, based on the same speech stimuli and LanguageARC project, previous research (Cole 2020) explored participants' accuracy at identifying the ethnicity of speakers according to the main "ethnic" groups in Britain in the UK Census: White British, Black British and Asian British. In that study, participants perceived linguistic differences between speakers of all three ethnicities, averaging 80.7% accuracy at the task. The highest rate of accuracy (96%) was for identifying the ethnicity of Black British speakers from London, whose speech seems to form a distinct perceptual category. It is not the case, then, that there is no or very limited linguistic variation present in the speech stimuli; rather, participants in this present study could not identify class with the same accuracy with which ethnicity was previously identified from the same stimuli.
On the whole, a one-sample Wilcoxon test did not find participants' rates of accuracy to be significantly greater than chance. It seems that participants do not have a six-way class distinction, or at least, not one that translates into accurate linguistic identifications. However, when responses were amassed into three classes (working, middle and upper), a one-sample Wilcoxon test found that accuracy rates were significantly greater than chance, averaging 47.3% (69/146) (p=0.03) (see Figure 1).
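The three-class amalgamation amounts to a simple relabelling before re-scoring; a hypothetical sketch:

```python
# Collapse the six class labels into three before re-computing accuracy.
COLLAPSE = {
    "lower working": "working", "upper working": "working",
    "lower middle": "middle", "upper middle": "middle",
    "lower upper": "upper", "upper upper": "upper",
}

responses = ["upper middle", "lower working", "upper upper"]
speaker_class = ["lower middle", "upper working", "upper middle"]

correct = [COLLAPSE[r] == COLLAPSE[s] for r, s in zip(responses, speaker_class)]
accuracy = sum(correct) / len(correct)  # 2/3 on this toy data
```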
Which Factors Predict Participants' Accuracy?
There were no significant effects in the logistic regression model. There was a trend for women's class to be identified more accurately than men's (26.3% and 17.1% accuracy for female and male speakers respectively), but the effect was not significant (p=0.057) (Figure 2). In addition, accuracy was not greater when identifying any specific social class. The rates of accuracy for identifying speakers from each class were 21.4%, 21%, 20% and 28% for lower-working, upper-working, lower-middle and upper-middle class speakers respectively (Figure 3).
There were no significant differences in accuracy rates between participants; they performed with similar rates of accuracy when identifying the class of speakers (see Figure 1). This is with the exception of the participant in the USA, who performed with a higher rate of accuracy than the others; however, this difference was not significant, and this participant contributed many fewer datapoints than the other participants. Though it was hypothesised that the participant located in South East England would perform significantly better than other participants, this was not found to be the case. The lack of significant effects in the model for the gender and class of speakers, as well as for the location of participants, also held when the test was re-run with a three-class distinction.
Is there Correlation between Speakers' Class and how they are Classified?
A Kendall's correlation test explored the relationship between speakers' social class and the classifications made by the participants. A significant correlation was only found for the South East participant and no others. For this participant there was a weak, yet significant correlation (p = 0.021; Tau = 0.23).
Figure 2: Speakers' gender and the accuracy with which their social class was identified from speech stimuli. Though women's social class was accurately identified more often than men's, the effect was not significant (p=0.057).
Figure 3: Speakers' social class and how accurately their class was identified from speech stimuli. There were no significant effects.

Figure 1: Participant location (one participant per location) and their accuracy at identifying speakers' social class from speech stimuli. Participants' average performance was significantly greater than chance when identifying class from a 3-way distinction (working, middle, upper). Compared to the baseline of South East England, there were no significant differences in participants' rates of accuracy.
For instance, as shown in Figure 4, this participant accurately classified lower-middle class speakers as lower-middle class on six instances and inaccurately as upper-middle class on 10 instances. They very infrequently considered these speakers to be working class (one and two instances for lower and upper respectively) or upper class (four and two instances for lower and upper respectively).
In contrast, a lower-working class speaker was only correctly identified as lower-working class on two instances, but most frequently (on five instances) they were thought to be upper working class. These results further indicate that the participant's linguistic representation of the class system is more closely aligned with a three-way class system than a six-way system.
This trend mostly held, with the exception of upper-working class speakers. The class of these speakers was accurately identified on only two instances, and they were considered lower-working class on three instances. They were most often considered to be middle class (four and six instances for lower-middle and upper-middle class respectively). It may be that upper-working class speakers do not speak in a way that allows them to be accurately identified as working class; instead, their speech is more similar to participants' perceptual representation of middle-class speech. This is reminiscent of Labov's (1966, 1972) previous assertions that lower-middle and upper-working class speakers have the most social and linguistic 'insecurity' and, consequently, use standard features to a greater extent than would be expected relative to their bordering classes, reflecting their aspirations of upward social mobility. Further research could explore this in more detail with greater participant numbers.
Discussion
Participants' accuracy was significantly better than chance when identifying speakers' class in a three-way distinction (working, middle, upper) but not for a six-way distinction (lower working, upper working, lower middle, upper middle, lower upper, upper upper). When exploring the effect of social factors on patterns of linguistic variation and change, sociolinguists typically divide social class with a two-way distinction within each class (e.g., working class is split into upper- and lower-working). Though sociolinguists have often found variation within this fine-grained class system, it does not seem that participants were attuned to this variation, as they did not make accurate class identifications in the six-way class division. Given that sociolinguists' class system apparently does not resonate with contributors, it may be that, in future research, alternative comparisons could provide interesting insights into how class is perceived and categorised from linguistic stimuli. For example, participants could judge the relative class position of speakers, e.g., whether they are the same class or whether one speaker is of a higher or lower class than the other(s).
Rates of accuracy at the task were not significantly affected by either speakers' gender or social class. In addition, there were no significant differences in rates of accuracy between the five participants. In contrast to the paper's prediction, the participant located in South East England did not perform significantly better than the other participants, even though it was predicted that this participant would have greater familiarity with sociolinguistic variation and social class structures in South East England. This finding is reminiscent of the results of the previously mentioned study in which, based on the same speech stimuli as this present study, participants were asked to identify the ethnicity of speakers from South East England (Cole, 2020); there, the five participants located in Britain did not perform significantly better than the five participants in the US.
Both ethnicity and class are macro social categories, and perhaps a geographic proximity effect would be found for more locally-meaningful, micro categories. As discussed, the structure of sociolinguistic variation in Britain is strongly related to social class i.e., the higher the social class, the lesser the regional variation. Following this, in order to complete this task, participants only needed to be attuned to the general principle of sociolinguistic variation in Britain: the closer a speaker is to RP, the higher their class. Previous work has shown that people in the US are familiar with RP and the accent is associated with notions of prestige and correctness (Stewart et al., 1985). It was perhaps not necessary to be familiar with south-eastern varieties but instead, to be able to discern the degree of difference from RP for each speaker, which may explain the lack of significant differences in participants' performance.
Nonetheless, there was an important difference in the performance of the participant located in South East England compared to other participants. For this participant, and no other, there was a significant correlation between the speakers' class and the class they were classified as by the participant. Therefore, to some extent, this participant did perform more accurately than the others, but this difference was not found when accuracy was considered as a binary outcome. The South East England participant was somewhat attuned to the general trend of the relative class position of the person whose speech they heard, but this did not translate into a clear ability to pinpoint which specific class a speaker belonged to.
As discussed, the results of a social class identification task are of interest to sociolinguists for two main reasons. Firstly, if a person's social or demographic characteristics can be identified from speech, then this provides insights into the ways that profiling and discriminatory practices can take place based on a person's speech (see Purnell et al., 1999). Accuracy at the class identification task was relatively low and was only significantly greater than chance for a three-way class distinction. Nonetheless, this does not mean that, based on speech stimuli, people of different classes face equal evaluations. As discussed, there is much previous evidence that in southern England, based on their speech, speakers of working-class accents are disadvantaged (Cole, 2021; Levon et al., 2022).
Nonetheless, linguistic variation is perhaps not overtly linked to social class in the minds of listeners. When participants heard speech that was strongly regionally marked, this may not have overtly and explicitly indexed the label "working class", and even less so "lower-working class". In fact, this is perhaps why prejudice and negative attitudes towards working-class speech patterns are so pervasive in British society; there is not a salient awareness that these ideas contribute towards and bolster societal inequalities related to a person's social class. Instead, speech that is heavily regionally marked may be framed in other ways, such as incorrect, not proper or lazy, rather than as a marker of a person's social class, despite the objective linguistic reality of linguistic variation by class.
This links with the other previously mentioned reason why social identification tasks are of importance to sociolinguists. These tasks can go some way to revealing if social labels are meaningful categories for participants and to what extent participants have accurate linguistic representations of these social groupings. Participants did not seem generally attuned with the linguistic make-up of the class groupings used in this study. Participants performed with higher accuracy for the three-way class distinction than the six-way distinction, but accuracy was relatively low across the task. Generally, the labels were not accurately referenced in participants' minds by the combinations of linguistic features they heard produced by the speakers.
However, these findings do not rule out the possibility that participants do explicitly associate specific ways of speaking with these class labels. Firstly, this paper tested participants' ability to identify a person's class identity and not their class per se. It may be that there is not a clear alignment between social class as determined by objective criteria and social class identity. It is possible that rates of accuracy at the class identification task would have been different if class had been determined and defined differently. Secondly, it may be that the linguistic features which index social class labels were not present in the stimuli presented to participants. However, as mentioned, there was sufficient linguistic variation in the speech stimuli that in a previous study based on the same stimuli (Cole 2020), participants could identify speakers' ethnicity with much greater accuracy (averaging 80.7%). Thirdly, it may be that participants do indeed associate the linguistic features present in the speech stimuli with specific class labels, but that this did not translate into accuracy at the task. Buchstaller (2006) has previously shown that British participants overtly associate quotative go with the working class. However, when played matched-guise audio clips with variable rates of go, the participants did not believe that speakers with higher rates of go were more likely to be working class. What participants overtly associate with a label is not necessarily equitable with how they actually perceive and categorise speech stimuli.
In sum, this paper has presented the results of a pilot study testing the extent to which participants can identify another person's social class from their speech, and which factors condition accuracy. This study has shown the potential for collecting sociolinguistic data through crowdsourcing, specifically using LanguageARC. This is a pilot study with a small number of participants, so results are necessarily tentative. However, some interesting results have emerged. Firstly, accuracy at identifying social class is relatively low, for instance when compared to other factors in comparable studies (e.g., ethnicity: Cole 2020). Secondly, participants could not identify speakers' social class significantly better than chance from a six-class distinction, but they could for a three-class distinction. Thirdly, though there were some different patterns of responses, the participant located in South East England did not perform with significantly greater accuracy than other participants, suggesting that familiarity with sociolinguistic variation in the region may not have been very advantageous. Finally, there is a distinction to be made between participants' ability to pinpoint a speaker's exact social class membership and their ability to identify their relative class position. This paper has discussed these results in the context of how social identification tasks can illuminate patterns in how speech is categorised and interpreted.
Figure 4: Results of a participant located in South East England when identifying the social class of speakers from this region. The social class selected by the participant and the social class of speakers are weakly but significantly correlated (p-value = 0.021; Tau = 0.23).
Beal, J. (2018). Dialect as heritage. In A. Creese & A. Blackledge (Eds.), The Routledge Handbook of Language and Superdiversity. Abingdon, Oxon./New York: Routledge, pp. 165-180.
Buchstaller, I. (2006). Social stereotypes, personality traits and regional perception displaced: Attitudes towards the 'new' quotatives in the UK. Journal of Sociolinguistics, 10(3), pp. 362-381.
Campbell-Kibler, K. (2010). Sociolinguistics and perception. Language and Linguistics Compass, 4(6), pp. 377-389.
Cieri, C., Fiumara, J., Liberman, M., Callison-Burch, C. & Wright, J. (2018). Introducing NIEUW: Novel Incentives and Workflows for Eliciting Linguistic Data. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC 2018), pp. 151-155, Miyazaki, May 7-12.
Cieri, C., Wright, J., Fiumara, J., Shelmire, A. & Liberman, M. (2019). LanguageARC: Using Citizen Science to Augment Sociolinguistic Data Collection and Coding. NWAV48: New Ways of Analyzing Variation, Eugene, October 10-12.
Clopper, C. G. & Pisoni, D. B. (2004). Homebodies and army brats: Some effects of early linguistic experience and residential history on dialect categorization. Language Variation and Change, 16(1), pp. 31-48.
Cole, A. (forthcoming). Perceptions and Class. In C. Montgomery & E. Moore (Eds.), Oxford Handbook of British Englishes. Oxford University Press.
Cole, A. (2020). Identifications of Speaker Ethnicity in South-East England: Multicultural London English as a Divisible Perceptual Variety. In J. Fiumara, C. Cieri, M. Liberman & C. Callison-Burch (Eds.), Proceedings of the LREC 2020 Workshop on Citizen Linguistics in Language Resource Development, pp. 49-57.
Cole, A. (2021). Disambiguating language attitudes held towards socio-demographic groups and geographic areas in South East England. Journal of Linguistic Geography, 9(1), pp. 13-27.
Cole, A. & Tieken-Boon van Ostade, I. (2021). Haagse Harry, a Dutch chav from The Hague? The enregisterment of similar social personas in different speech communities. International Journal of Language and Culture.
Dailey-O'Cain, J. (2000). The sociolinguistic distribution of and attitudes toward focuser like and quotative like. Journal of Sociolinguistics, 4(1), pp. 60-80.
Eckert, P. (1989). Jocks and burnouts: Social categories and identity in the high school. Teachers College Press.
Eckert, P. (2008). Variation and the indexical field. Journal of Sociolinguistics, 12(4), pp. 453-476.
Hay, J., Warren, P. & Drager, K. (2006). Factors influencing speech perception in the context of a merger-in-progress. Journal of Phonetics, 34(4), pp. 458-484.
Holliday, N.R. & Jaggers, Z. (2015). Influence of suprasegmental features on perceived ethnicity of American politicians. In Proceedings of ICPhS 2015.
Johnstone, B. (2009). Pittsburghese shirts: Commodification and the enregisterment of an urban dialect. American Speech, 84(2), pp. 157-175.
Labov, W. (1966). The social stratification of English in New York City. Washington, DC: Centre for Applied Linguistics.
Labov, W. (1972). Sociolinguistic Patterns. Philadelphia: University of Pennsylvania Press.
Levon, E. (2014). Categories, stereotypes, and the linguistic perception of sexuality. Language in Society, 43(5), pp. 539-566.
Levon, E., Sharma, D., Watt, D.J., Cardoso, A. & Ye, Y. (2021). Accent Bias and Perceptions of Professional Competence in England. Journal of English Linguistics, 49(4), pp. 355-388.
McKenzie, R.M. (2015). The sociolinguistics of variety identification and categorisation: Free classification of varieties of spoken English amongst non-linguist listeners. Language Awareness, 24(2), pp. 150-168.
Munson, B. (2007). The acoustic correlates of perceived masculinity, perceived femininity, and perceived sexual orientation. Language and Speech, 50(1), pp. 125-142.
Purnell, T., Idsardi, W. & Baugh, J. (1999). Perceptual and phonetic experiments on American English dialect identification. Journal of Language and Social Psychology, 18(1), pp. 10-30.
Silverstein, M. (2003). Indexical order and the dialectics of sociolinguistic life. Language & Communication, 23(3-4), pp. 193-229.
Stewart, M.A., Ryan, E.B. & Giles, H. (1985). Accent and social class effects on status and solidarity evaluations. Personality and Social Psychology Bulletin, 11(1), pp. 98-105.
Trudgill, P. (2001). Sociolinguistic Variation and Change. Edinburgh: Edinburgh University Press. |
44,155,936 | KOI at SemEval-2018 Task 5: Building Knowledge Graph of Incidents | We present KOI (Knowledge of Incidents), a system that given news articles as input, builds a knowledge graph (KOI-KG) of incidental events. KOI-KG can then be used to efficiently answer questions such as "How many killing incidents happened in 2017 that involve Sean?" The required steps in building the KG include: (i) document preprocessing involving word sense disambiguation, named-entity recognition, temporal expression recognition and normalization, and semantic role labeling; (ii) incidental event extraction and coreference resolution via document clustering; and (iii) KG construction and population. | [
1977529,
44166420
] | KOI at SemEval-2018 Task 5: Building Knowledge Graph of Incidents
June 5-6. 2018
Paramita Mirza paramita@mpi-inf.mpg.de
Max Planck Institute for Informatics
Germany
Fariz Darari
Faculty of Computer Science
Universitas Indonesia
Indonesia
Rahmad Mahendra rahmad.mahendra@cs.ui.ac.id
Faculty of Computer Science
Universitas Indonesia
Indonesia
KOI at SemEval-2018 Task 5: Building Knowledge Graph of Incidents
Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)
The 12th International Workshop on Semantic Evaluation (SemEval-2018), New Orleans, Louisiana, June 5-6, 2018
We present KOI (Knowledge of Incidents), a system that given news articles as input, builds a knowledge graph (KOI-KG) of incidental events. KOI-KG can then be used to efficiently answer questions such as "How many killing incidents happened in 2017 that involve Sean?" The required steps in building the KG include: (i) document preprocessing involving word sense disambiguation, named-entity recognition, temporal expression recognition and normalization, and semantic role labeling; (ii) incidental event extraction and coreference resolution via document clustering; and (iii) KG construction and population.
Introduction
SemEval-2018 Task 5: Counting Events and Participants in the Long Tail 1 (Postma et al., 2018) addresses the problem of referential quantification that requires a system to answer numerical questions about events such as (i) "How many killing incidents happened in June 2016 in San Antonio, Texas?" or (ii) "How many people were killed in June 2016 in San Antonio, Texas?"

Subtasks S1 and S2. For questions of type (i), which are asked by the first two subtasks, participating systems must be able to identify the type (e.g., killing, injuring), time, location and participants of each event occurring in a given news article, and establish within- and cross-document event coreference links. Subtask S1 focuses on evaluating systems' performances on identifying answer incidents, i.e., events whose properties fit the constraints of the questions, by making sure that there is only one answer incident per question.

Subtask S3. In order to answer questions of type (ii), participating systems are also required to identify participant roles in each identified answer incident (e.g., victim, subject-suspect), and use such information along with victim-related numerals ("three people were killed") mentioned in the corresponding answer documents, i.e., documents that report on the answer incident, to determine the total number of victims.
Datasets. The organizers released two datasets: (i) test data, stemming from three domains of gun violence, fire disasters and business, and (ii) trial data, covering only the gun violence domain. Each dataset contains (i) an input document (in CoNLL format) that comprises news articles, and (ii) a set of questions (in JSON format) to evaluate the participating systems. 2

This paper describes the KOI (Knowledge of Incidents) system submitted to SemEval-2018 Task 5, which constructs and populates a knowledge graph of incidental events mentioned in news articles, to be used to retrieve answer incidents and answer documents given numerical questions about events. We propose a fully unsupervised approach to identify events and their properties in news texts, and to resolve within- and cross-document event coreference, which will be detailed in the following section.
System Description
Document Preprocessing
Given an input document in CoNLL format (one token per line), for each news article, we first split the sentences following the annotation of: (i) whether a token is part of the article title or content; (ii) the sentence identifier; and (iii) whether a token is a newline character. We then ran several tools on the tokenized sentences to obtain the following NLP annotations.
Word sense disambiguation (WSD). We ran Babelfy 3 (Moro et al., 2014) to get disambiguated concepts (excluding stop-words), which can be multi-word expressions, e.g., gunshot wound. Each concept is linked to a sense in BabelNet 4 (Navigli and Ponzetto, 2012), which subsequently is also linked to a WordNet sense and a DBpedia entity (if any).
Named-entity recognition (NER). We relied on spaCy 5 for statistical entity recognition, specifically for identifying persons and geopolitical entities (countries, cities, and states).
Time expression recognition and normalization
We used HeidelTime 6 (Strötgen and Gertz, 2013) for recognizing textual spans that indicate time, e.g., this Monday, and normalizing the time expressions according to a given document creation time, e.g., 2018-03-05.
Semantic role labeling (SRL). Senna 7 (Collobert et al., 2011) was used to run semantic parsing on the input text, for identifying sentence-level events (i.e., predicates) and their participants.
Event Extraction and Coreference Resolution
Identifying document-level events. Sentence-level events, i.e., predicates recognized by the SRL tool, were considered as the candidates for the document-level events. Note that predicates containing other predicates as their patient argument, e.g., 'says' with arguments 'police' as its agent and 'one man was shot to death' as its patient, were not considered as candidate events. Given a predicate, we simultaneously determined whether it is part of document-level events and also identified its type, based on the occurrence of BabelNet concepts that are related to the four event types of interest stated in the task guidelines: killing, injuring, fire burning and job firing. A predicate is automatically labeled as a sentence-level event with one of the four types if such related concepts occur either in the predicate itself or in one of its arguments. For example, a predicate 'shot', with arguments 'one man' as its patient and 'to death' as its manner, will be considered as a killing event because of the occurrence of the 'death' concept. 8 Concept relatedness was computed via path-based WordNet similarity (Hirst et al., 1998) of a given BabelNet concept, which is linked to a WordNet sense, with a predefined set of related WordNet senses for each event type (e.g., wn30:killing.n.02 and wn30:kill.v.01 for the killing event), setting 5.0 as the threshold. Related concepts were also annotated with the corresponding event types, to be used for the mention-level event coreference evaluation.
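For illustration, a minimal sketch of this relatedness test using NLTK's WordNet interface. The system's path-based measure in the spirit of Hirst and St-Onge (threshold 5.0) is not available in NLTK, so the sketch substitutes Leacock-Chodorow similarity with an illustrative threshold; the abbreviated seed lists and the threshold value are therefore assumptions, not the system's exact configuration.

from nltk.corpus import wordnet as wn

# Seed senses per event type; wn30:killing.n.02 and wn30:kill.v.01 are named in
# the text, the remaining seed lists are omitted here.
SEED_SENSES = {
    "killing": [wn.synset("killing.n.02"), wn.synset("kill.v.01")],
    "injuring": [wn.synset("injure.v.01")],
}
THRESHOLD = 2.0  # illustrative; the paper's measure uses 5.0 on a different scale

def event_type(concept_lemmas):
    """Return an event type if any concept in the predicate or its arguments
    is sufficiently WordNet-related to a seed sense, else None."""
    for lemma in concept_lemmas:
        for syn in wn.synsets(lemma):
            for etype, seeds in SEED_SENSES.items():
                for seed in seeds:
                    if syn.pos() != seed.pos():
                        continue  # LCH similarity is only defined within one POS
                    sim = syn.lch_similarity(seed)
                    if sim is not None and sim >= THRESHOLD:
                        return etype
    return None

# 'shot' with the argument concept 'death' is labeled 'killing', provided the
# relatedness of one of the concepts to a killing seed exceeds the threshold
print(event_type(["shot", "death"]))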
We then regarded all identified sentence-level events of the same event type in a news article as one document-level event, meaning that each article may contain at most four document-level events (i.e., at most one event per event type).
Identifying document-level event participants
Given a predicate as an identified event, its participants were simply extracted from the occurrence of named entities of type person, according to both Senna and spaCy, in the agent and patient arguments of the predicate. Furthermore, we determined the role of each participant as victim, perpetrator or other, based on its mention in the predicate. For example, if 'Randall' is mentioned as the agent argument of the predicate 'shot', then he is a perpetrator. Note that a participant can have multiple roles, as is the case for a person who kills himself.
Taking into account all participants of a set of identified events (per event type) in a news article, we extracted document-level event participants by resolving name coreference. For instance, 'Randall', 'Randall R. Coffland', and 'Randall Coffland' all refer to the same person.
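For illustration, a minimal sketch of this surface-level name merging, assuming case-insensitive token-set containment as the matching rule; this is an illustrative reconstruction rather than KOI's exact rule.

def coreferent(name_a, name_b):
    # Two mentions corefer when the token set of one contains the other's
    a, b = set(name_a.lower().split()), set(name_b.lower().split())
    return a <= b or b <= a

def merge_participants(mentions):
    """Greedily group person mentions into coreference clusters."""
    clusters = []
    for m in mentions:
        for c in clusters:
            if any(coreferent(m, other) for other in c):
                c.append(m)
                break
        else:
            clusters.append([m])
    return clusters

print(merge_participants(["Randall", "Randall R. Coffland", "Randall Coffland"]))
# -> [['Randall', 'Randall R. Coffland', 'Randall Coffland']]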
Identifying document-level number of victims
For each identified predicate in a given document, we extracted the first numeral occurring in the patient argument of the predicate, e.g., one in 'one man'. The normalized value of the numeral was then taken as the number of victims, as long as the predicate is not a suspect-related predicate such as suspected or charged. The document-level number of victims is simply the maximum of the numbers of victims identified per predicate.
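For illustration, a minimal sketch of this victim-count heuristic; the small number-word map and the input representation are simplifying assumptions.

import re

WORD_TO_NUM = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
SUSPECT_PREDICATES = {"suspected", "charged"}  # per the examples above

def first_numeral(text):
    # Return the normalized value of the first numeral in the text, if any
    for token in re.findall(r"[a-z]+|\d+", text.lower()):
        if token.isdigit():
            return int(token)
        if token in WORD_TO_NUM:
            return WORD_TO_NUM[token]
    return None

def doc_victims(predicates):
    """predicates: list of (predicate_lemma, patient_argument_text) pairs."""
    counts = [first_numeral(patient)
              for pred, patient in predicates
              if pred not in SUSPECT_PREDICATES]
    counts = [c for c in counts if c is not None]
    return max(counts) if counts else 0

print(doc_victims([("shot", "one man"), ("killed", "three people")]))  # -> 3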
Identifying document-level event locations. To retrieve candidate event locations given a document, we relied on disambiguated DBpedia entities resulting from the Babelfy annotation. We utilized SPARQL queries over the DBpedia SPARQL endpoint 9 to identify whether a DBpedia entity is a city or a state, and whether it is part of or located in a city or a state. Specifically, an entity is considered to be a city whenever it is of type dbo:City or one of its equivalent types (e.g., schema:City). Similarly, it is considered to be a state whenever it is either of type yago:WikicatStatesOfTheUnitedStates, has a senator (via the property dbp:senators), or has dbc:States_of_the_United_States as a subject.
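These type tests can be issued as ASK queries against the public endpoint. Below is a hedged sketch using the SPARQLWrapper Python library; the exact queries in KOI are not reproduced in the paper, so this query body is an assumption mirroring only the first criterion (dbo:City).

from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"

def is_city(entity_uri):
    # ASK whether the entity is typed as dbo:City
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(
        "PREFIX dbo: <http://dbpedia.org/ontology/> "
        f"ASK {{ <{entity_uri}> a dbo:City }}"
    )
    sparql.setReturnFormat(JSON)
    return sparql.query().convert()["boolean"]

print(is_city("http://dbpedia.org/resource/San_Antonio"))  # True, at time of writing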
Assuming that document-level events identified in a given news article happen at one certain location, we simply ranked the candidate event locations, i.e., pairs of city and state, based on their frequencies, and took the one with the highest frequency.
Identifying document-level event times. Given a document D, suppose we have dct as the document creation time and T as the list of normalized time expressions returned by HeidelTime whose types are either date or time. We considered a time expression t_i ∈ T as one of the candidate event times T' ⊆ T if dct − t_i is a non-negative integer (in days) less than n. 10 We hypothesize that the event reported in a news article may have happened several days before the news is published.

Assuming that the document-level events identified in a given news article happen at one certain time, we determine the document-level event time from the set of candidates T' by applying two heuristics: a time expression t_j ∈ T' is considered as the event time if (i) t_j is mentioned in sentences containing event-related concepts, and (ii) t_j is the earliest time expression in the candidate set.
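For illustration, a minimal sketch of these two time heuristics; the input representation (pairs of a normalized date and a flag for mention in an event-related sentence) is a simplifying assumption.

from datetime import date

N_DAYS = 7  # per footnote 10, tuned on the trial data

def event_time(dct, time_mentions):
    """time_mentions: list of (date, in_event_sentence) pairs."""
    candidates = [(t, flagged) for t, flagged in time_mentions
                  if 0 <= (dct - t).days < N_DAYS]
    if not candidates:
        return None
    # heuristic (i): prefer expressions in event-related sentences
    preferred = [t for t, flagged in candidates if flagged] or \
                [t for t, _ in candidates]
    # heuristic (ii): take the earliest remaining candidate
    return min(preferred)

dct = date(2016, 6, 21)
print(event_time(dct, [(date(2016, 6, 19), True), (date(2016, 6, 20), False)]))
# -> 2016-06-19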
Cross-document event coreference resolution
We approached cross-document event coreference by clustering similar document-level events that are of the same type, via their provenance, i.e., news articles where they were mentioned. From each news article we derived TF-IDF-based vectors of (i) BabelNet senses and (ii) spaCy's persons and geopolitical entities, which are then used to compute cosine similarities among the articles. Two news articles will be clustered together if (i) the computed similarity is above a certain threshold, which was optimized using the trial data, and (ii) the event time distance of document-level events found in the articles does not exceed a certain threshold, i.e., 3 days. All document-level events belonging to the same document cluster are assumed to be coreferring events and to have properties resulting from the aggregation of locations, times and participants of contributing events, with the exception of the number of victims, where the maximum value was taken instead.
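For illustration, a minimal sketch of this clustering step, assuming scikit-learn. Documents are represented here by plain bags of concept/entity identifiers; pairs above an illustrative similarity threshold are linked and connected components form the clusters. The time-distance check and the tuned threshold value are omitted as assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cluster_documents(doc_features, threshold=0.3):
    """doc_features: list of space-separated concept/entity ID strings."""
    tfidf = TfidfVectorizer().fit_transform(doc_features)
    sim = cosine_similarity(tfidf)
    n = len(doc_features)
    parent = list(range(n))  # union-find over documents

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

docs = ["bn:shooting bn:death SanAntonio Randall",
        "bn:gunshot bn:death SanAntonio Randall",
        "bn:fire bn:blaze Chicago"]
print(cluster_documents(docs))  # -> [[0, 1], [2]]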
Constructing, Populating and Querying the Knowledge Graph
We first built an OWL ontology 11 to capture the knowledge model of incidental events and documents. We rely on reification (Noy and Rector, 2006) for modeling entities, that is, incident events, documents, locations, participants and dates are all resources of their own. Each resource is described through its corresponding properties, as shown in Table 1. An incident event can be of type injuring, killing, fire burning, and job firing. Documents are linked to incident events through the property event, and different documents may refer to the same corresponding incident event. We borrow URIs from DBpedia for values of the properties city and state. Participant roles can be either victim, perpetrator or other. A date has a unified literal value of the format "yyyy-mm-dd", as well as separated values for the day, month, and year.
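For illustration, a hedged sketch of populating this reified model, written here with Python's rdflib rather than the Java/Jena stack the system actually uses (see below); the koi property names follow the description of Table 1 but are partly assumed.

from rdflib import Graph, Namespace, Literal, URIRef

KOI = Namespace("https://koi.cs.ui.ac.id/ns#")
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
event = URIRef("https://koi.cs.ui.ac.id/incidents#event22409")
doc = URIRef("https://koi.cs.ui.ac.id/incidents#doc1")
person = URIRef("https://koi.cs.ui.ac.id/incidents#p1")
d = URIRef("https://koi.cs.ui.ac.id/incidents#d20160619")

g.add((doc, KOI.event, event))              # documents link to incident events
g.add((event, KOI.eventType, Literal("killing")))
g.add((event, KOI.city, DBR.San_Antonio))   # DBpedia URIs as city/state values
g.add((event, KOI.participant, person))
g.add((person, KOI.role, Literal("victim")))
g.add((event, KOI.date, d))
g.add((d, KOI.value, Literal("2016-06-19")))  # unified "yyyy-mm-dd" literal
g.add((d, KOI.year, Literal(2016)))           # plus separated day/month/year

print(g.serialize(format="turtle"))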
To build the KOI knowledge graph (KOI-KG) we relied on Apache Jena, 12 a Java-based Semantic Web framework. The output of the previously explained event extraction and coreference resolution steps was imported into the Jena TDB triple store as RDF triples. This facilitates SPARQL querying, which can be done using the Jena ARQ module. The whole dump of KOI-KG is available for download at https://koi.cs.ui.ac.id/incidents.

Figure 1: A SPARQL query over KOI-KG for "Which killing events happened in 2017 that involve persons with Sean as first name?"
Given a question in JSON format, we applied mapping rules to transform it into a SPARQL query, which was then used to retrieve corresponding answer incidents and answer documents. Constraints of questions such as event type, participant, date, and location were mapped into SPARQL join conditions (that is, triple patterns). Figure 1 shows a SPARQL representation for the question "Which killing events happened in 2017 that involve persons with Sean as first name?". The prefix koi is for the KOI ontology namespace (https://koi.cs.ui.ac.id/ns#). In the SPARQL query, the join conditions are over the event type killing, the date '2017' (as year) and the participant 'Sean' (as firstname).
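For illustration, a hedged reconstruction of the Figure 1 query, executed here with rdflib against a toy graph; the koi property names (eventType, year, firstname) are plausible readings of the description above, not verbatim from the system.

from rdflib import Graph, Namespace, Literal, URIRef

KOI = Namespace("https://koi.cs.ui.ac.id/ns#")
g = Graph()
ev, d, p = (URIRef(f"urn:ex:{x}") for x in ("ev1", "d1", "p1"))
g.add((ev, KOI.eventType, Literal("killing")))
g.add((ev, KOI.date, d))
g.add((d, KOI.year, Literal(2017)))
g.add((ev, KOI.participant, p))
g.add((p, KOI.firstname, Literal("Sean")))

QUERY = """
PREFIX koi: <https://koi.cs.ui.ac.id/ns#>
SELECT DISTINCT ?event WHERE {
  ?event koi:eventType "killing" ;
         koi:date ?d ;
         koi:participant ?p .
  ?d koi:year 2017 .
  ?p koi:firstname "Sean" .
}"""
for row in g.query(QUERY):
    print(row.event)  # -> urn:ex:ev1
# A SELECT (COUNT(DISTINCT ?event) AS ?n) variant of the same pattern serves
# the counting queries of Subtask S2, as described next.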
For Subtask S2, we extended the SPARQL query with a counting feature to retrieve the total number of unique events. Analogously, for Subtask S3, we retrieve the number of victims by counting event participants having victim as their role, and by getting the value of the numOfVictims property (if any). The value of the numOfVictims property was preferred as the final value for an incident if it exists; otherwise, KOI relied on counting event participants.
We also provide a SPARQL query interface for KOI-KG at https://koi.cs.ui.ac.id/dataset.html?tab=query&ds=/incidents.
Results and Discussion
Evaluation results. Participating systems were evaluated according to three evaluation schemes: (i) mention-level evaluation, for resolving cross-document coreference of event mentions, (ii) document-level evaluation (doc-f1), for identifying events and their properties given a document, and (iii) incident-level evaluation, for combining event extraction and within-/cross-document event coreference resolution to answer numerical questions in terms of exact matching (inc-acc) and Root Mean Square Error (inc-rmse). Furthermore, the percentage of questions in each subtask that can be answered by the systems (%ans) also contributes to the final ranking.
Regarding the mention-level evaluation, KOI achieves an average F1-score of 42.8% (a 36.3 percentage point increase over the baseline) across several established metrics for evaluating coreference resolution systems. For the document-level and incident-level evaluation schemes, we report in Table 2 the performance of three different system runs of KOI:

v1: the version of KOI submitted during the evaluation period.

v2: same as v1, except that instead of giving no answer when no matching answer incident is found, KOI simply returns zero as the numerical answer with an empty list of answer documents.

v3: the version of KOI submitted during the post-evaluation period, which incorporates an improvement in document-level event time identification, leading to enhanced cross-document event coreference. 13
Compared to the baseline provided by the task organizers, the performance of KOI is considerably better; for subtask S2 in particular, KOI v3 achieves roughly twice the baseline's doc-f1 and inc-acc. Hereafter, our quantitative and qualitative analyses are based on KOI v3, and mentions of the KOI system refer to this run.
Subtask S1. We detail in Table 3 the performance of KOI on retrieving relevant answer documents given questions with event constraints, in terms of micro-averaged and macro-averaged scores. Note that the official doc-f1 scores reported in Table 2 correspond to macro-averaged F1-scores. We first analyzed the system performance only on answered questions, i.e., for which KOI returns the relevant answer documents (55.1% of all questions), yielding 79.8% and 85.7% micro-averaged and macro-averaged F1-scores, respectively.
In order to have a fair comparison with systems that are able to answer all questions, we also report the performance of KOI when it returns empty sets of answer documents for unanswered questions. In this evaluation scheme, the macro-averaged precision is significantly lower than the micro-averaged one (51.7% vs 86.6%), because systems are heavily penalized for not retrieving relevant answer documents per question, i.e., they are given a zero precision score, which brings the average over all questions down. Meanwhile, the micro-averaged precision measures the systems' ability to return relevant documents for all questions regardless of whether the questions were answered or not. KOI focuses on yielding high-quality answer documents, which is reflected by a high micro-averaged precision of above 80% in general. The following result analyses are based on the all questions scheme. By analyzing the document retrieval per event type, we found that KOI can identify fire burning events in documents quite well, yielding the highest recall among all event types, but the contrary holds for job firing events. With respect to event constraints, answering questions with a location constraint results in the worst performance, meaning that our method is still lacking in identifying and/or disambiguating event locations from news documents. Specifically, questions with a city constraint are more difficult to answer compared to the ones with a state constraint (49.6% vs 61.5% micro-averaged F1-scores, respectively).
Subtask S2
The key differences between Subtasks S1 and S2 are: (i) questions with zero as an answer are included, and (ii) there can be more than one answer incident per question; hence, systems must be able to cluster answer documents into the correct number of clusters, i.e., incidents.
As shown in Table 4, KOI is able to answer questions with zero as the true answer with 96.3% accuracy. Meanwhile, for questions with a non-zero number of incidents as the answer, KOI gives numerical answers with 18.9% accuracy, resulting in an overall accuracy (inc-acc) of 27.4% and an RMSE (inc-rmse) of 5.3.
We also analyzed questions (with non-zero answer incidents) for which KOI yields perfect sets of answer documents with 100% F1-score, i.e., 7.7% of all questions. For 61.8% of such answered questions, KOI returns the perfect number of incidents. For the rest, KOI tends to overestimate the number of incidents, i.e., in 30.9% of the cases KOI fails to establish cross-document event coreference links with the current document clustering method.

Table 5: An identified 'killing' event by KOI for "Which killing incidents happened in June 2016 in San Antonio, Texas?" with two supporting documents.

Event ID: 22409

Document 1 (2016-06-19): "Man playing with gun while riding in a car fatally shoots, kills driver". A man was fatally shot early Sunday morning after the passenger in the car he was driving accidentally discharged the gun, according to the San Antonio Police Department. The shooting occurred about 3 a.m. when a group of four men were driving out of the Iron Horse Apartments at 8800 Village Square on the Northeast Side. The passenger in the front seat was playing with a gun and allegedly shot himself in the hand, according to officers at the scene. The bullet went through his hand and struck the driver in the abdomen. The men then drove to Northeast Baptist Hospital, which was nearby, but the driver was pronounced dead at the hospital, according to investigators. Police believe the driver and passenger to be related and are still investigating the incident. The other two men in the vehicle were detained. No charges have been filed.

Document 2 (2016-06-19): "41-year-old man killed in overnight shooting". SAN ANTONIO - A 41-year-old man is dead after a shooting police say may have been accidental. The victim died after another man drove him to Northeast Baptist Hospital for treatment of that gunshot wound. Police say they got a call at around 2:45 a.m. for the shooting in the 8800 block of Village Drive. The man told them he and the victim were in a pickup when he fired the shot, but police say it's not known why the men were in the truck. Investigators say the man told them he fired the shot accidentally and struck the victim. Police say the shooter took the victim to the emergency room at Northeast Baptist, where hospital personnel pronounced him dead. Police are questioning the man who did the shooting.

Subtask S3. We also show in Table 4 the KOI performance on answering numerical questions about the number of victims. KOI is able to answer correctly 55.2% of questions with zero answers, and 11.9% of the ones with non-zero answers.
Analyzing the questions with zero as the true answer which KOI answers correctly: in 41.1% of the cases, KOI is able to identify the non-existence of victims even though the set of answer documents is not empty. In 40.0% of the cases, the correctly predicted zero answers are actually correct by chance, i.e., because KOI fails to identify relevant answer documents.
Meanwhile, for questions with gold numerical answers greater than zero, KOI returns wrong answers in 88.1% of the cases. Among these answers, 66.9% are lower than the true number of victims, and 33.1% are higher. This means that KOI tends to underestimate the number of victims, with an RMSE of 6.6.
For 22.5% of all questions, KOI is able to identify the perfect sets of answer documents with 100% F1-score. Among these questions, 34.3% were answered correctly with the exact number of victims, for which: 52.7% of correct answers result from solely counting participants (as victims), 35.3% were inferred only from numeral mentions, and the remaining 12.0% were answered by combining both victim counting and numeral mentions.
Qualitative Analysis
Recalling the example questions mentioned at the beginning of Section 1: for the first question, KOI is able to perfectly identify 2 killing incidents with 5 supporting documents satisfying the event time and location constraints. One of the identified answer incidents, with its two supporting documents, is shown in Table 5, which illustrates how well the system is able to establish cross-document event coreference given overlapping concepts and entities. However, in answering the second question, KOI returns one victim fewer than the true count, since it cannot identify the killed victim in the answer incident shown in Table 5, due to the lack of numeral mentions and named event participants as victims.
Conclusion
We have introduced a system called KOI (Knowledge of Incidents) that is able to build a knowledge graph (KG) of incidental events by extracting relevant event information from news articles. The resulting KG can then be used to efficiently answer numerical questions about events such as "How many people were killed in June 2016 in San Antonio, Texas?" We submitted KOI as a participating system to SemEval-2018 Task 5, where it achieved competitive results. A live demo of our system is available at https://koi.cs.ui.ac.id/. Future directions of this work include the incorporation of supervised (or semi-supervised) approaches for specific steps of KOI, such as the extraction of numeral information (Mirza et al., 2017), as well as the investigation of applying our approach to other domains such as disease outbreaks and natural disasters.
Table 2: KOI performance results at SemEval-2018 Task 5 (in percentages) for three subtasks; the baseline was provided by the task organizers; *) denotes the system run that we submitted during the evaluation period.

Table 3: KOI performance results for subtask S1, on answer document retrieval (p for precision, r for recall and f1 for F1-score).

                      micro-averaged         macro-averaged
                      p      r      f1       p      r      f1
Overall
  answered questions  86.6   74.0   79.8     94.2   83.6   85.7
  all questions       86.6   41.6   56.2     51.7   45.9   47.1
Event type
  killing             88.5   43.2   58.1     56.8   48.6   50.3
  injuring            82.8   37.4   51.5     46.4   40.1   41.4
  job firing          100.0  8.7    16.0     15.4   15.4   15.4
  fire burning        96.9   66.2   78.7     65.5   66.2   65.7
Event constraint
  participant         84.8   43.0   57.0     61.1   51.1   53.2
  location            89.1   39.4   54.6     46.7   42.8   43.6
  time                86.0   42.4   56.8     51.7   46.3   47.4

Table 4: KOI performance results for subtasks S2 and S3, on answering numerical questions, i.e., number of incidents and number of victims.
* Both share the same amount of work.
1 http://alt.qcri.org/semeval2018/
2 https://competitions.codalab.org/competitions/17285
3 http://babelfy.org/
4 http://babelnet.org/
5 https://spacy.io/
6 https://github.com/HeidelTime/heideltime
7 https://ronan.collobert.com/senna/
8 We assume that a predicate that is labeled as a killing event cannot be labeled as an injuring event even though an injuring-related concept such as 'shot' occurs.
9 https://dbpedia.org/sparql
10 Based on our empirical observations on the trial data, we found n = 7 to be the best parameter.
11 Available at https://koi.cs.ui.ac.id/ns
12 http://jena.apache.org/
13 Submissions v1 and v2 did not consider heuristic (i) that we discussed in Section 2.2.
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537.
Hirst, G., St-Onge, D., et al. (1998). Lexical chains as representations of context for the detection and correction of malapropisms. WordNet: An electronic lexical database, 305:305-332.
Mirza, P., Razniewski, S., Darari, F., and Weikum, G. (2017). Cardinal virtues: Extracting relation cardinalities from text. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30-August 4, Volume 2: Short Papers, pages 347-351.
Moro, A., Raganato, A., and Navigli, R. (2014). Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2:231-244.
Navigli, R. and Ponzetto, S. P. (2012). BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217-250.
Noy, N. and Rector, A., editors (2006). Defining N-ary Relations on the Semantic Web. W3C Working Group Note. Retrieved Jan 10, 2017 from https://www.w3.org/TR/2006/NOTE-swbp-n-aryRelations-20060412/.
Postma, M., Ilievski, F., and Vossen, P. (2018). SemEval-2018 Task 5: Counting events and participants in the long tail. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018). Association for Computational Linguistics.
Strötgen, J. and Gertz, M. (2013). Multilingual and cross-domain temporal tagging. Language Resources and Evaluation, 47(2):269-298.
5,321,936 | An exchange format for multimodal annotations | This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools. | [
16648418,
16204924,
18212263
] | An exchange format for multimodal annotations
Thomas Schmidt
University of Hamburg
Susan Duncan
University of Chicago
Oliver Ehmer
University of Freiburg
Jeffrey Hoyt
Michael Kipp
Dan Loehr loehr@mitre.org
MITRE Corporation
DFKI Saarbrücken
Magnus Magnusson
Human Behavior Laboratory
Reykjavik
Travis Rose
Virginia Tech
Han Sloetjes
MPI for Psycholinguistics
Nijmegen
An exchange format for multimodal annotations
This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools.
Introduction
This paper presents the results of a joint effort of a group of multimodality researchers and tool developers (see [16], [17]) to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools. The paper is structured as follows: section 2 gives an overview of the multimodal annotation tools involved. Section 3 discusses the main commonalities and differences in these tools' data models and formats. Section 4 describes the main characteristics of the exchange format. Section 5 is concerned with the implementation of conversion routines from the tools' formats to the exchange format and vice versa. Section 6, finally, discusses some possible future improvements or extensions of the exchange format. The appendix contains an exemplary, commentated annotation file in the multimodal annotation exchange format.
Tools
The following tools were considered in this effort:
ANVIL, a video annotation tool, developed by Michael Kipp at the DFKI in Saarbrücken (see [1], [11] and [12]).
C-BAS, a tool for coding events on video or audio tracks, developed by Kevin Moffit at the University of Arizona (see [7]).
ELAN, a tool for multi-level annotation of video and audio, developed by the Max-Planck-Institute for Psycholinguistics in Nijmegen (see [6], [9] and [25]).
EXMARaLDA Partitur-Editor, a tool for transcription of audio or video recordings of spoken language, developed by Thomas Schmidt at the University of Hamburg (see [10], [21], [22] and Figure 1).
MacVisSTA, a tool for annotation and visualization of multiple time-synchronized videos, developed by Travis Rose, Francis Queck and Chreston Miller at Virginia Tech (see [19] and [20]).
Theme, a commercial software tool for finding patterns in temporally annotated data, developed by Magnus Magnusson for Noldus (see [23]).
Transformer, a tool for editing and converting between several formats for linguistic transcription, developed by Oliver Ehmer at the University of Freiburg (see [24]).
These tools differ greatly in their technical details, in their intended target audiences, in the design of their user interfaces and in the specific tasks they help to solve. They have in common, however, that they all allow the creation of or the work with analytic textual data which is time-aligned to a video recording. They are therefore all used to carry out multi-modal annotation and analysis. It is not uncommon, though, that a researcher wants to exchange data between two or more of them, because no single tool offers all the functionality required for a given task. For instance, a typical processing pipeline could look like this: 1) EXMARaLDA is used to transcribe verbal behavior of an interaction, 2) ELAN's advanced video support is needed to add detailed annotation of multimodal behavior, 3) Theme is used to carry out an analysis of the annotated data, 4) Transformer is used to generate a visualization of the annotated data.
Up to a certain point, the interoperability needed for this kind of task was already provided before our effort by some of the tools in the form of import and export routines converting between the tools' own data format and that of another. However, this was an inefficient and unreliable solution because it meant that each tool developer had to keep track of and react to changes in all other tools. The obvious solution therefore was to agree on a common exchange format which can accommodate all the information contained in the individual tools' formats.
Comparison of data formats
As a first step towards this goal, a thorough analysis and comparison of the different tool formats was carried out. All formats have in common that their basic building blocks are annotation tuples consisting of a start and an end point (with, typically, a temporal interpretation) and one or more text labels (with no fixed interpretation). Since this is precisely the principle on which the annotation graph formalism (AG, see [4]) is based, it was natural to choose AGs as a general framework for our task. However, there are differences between the formats (1) with respect to the way the basic building blocks are organised into larger structures and (2) with respect to semantic specifications of and structural constraints on the basic and larger structural entities.
The following section discusses the differences in the formats which were identified to be relevant in terms of interoperability.
General organisation of the data structure
Tier-based data formats vs. non-tier-based formats
In ANVIL, ELAN, EXMARaLDA and Transformer, all annotations are partitioned into a number of tiers such that each annotation is part of exactly one tier, and no two annotations within a tier overlap. These tiers are usually used to group annotations which belong to one level of analysis (e.g. verbal vs. non-verbal behaviour, hand movements vs. facial expression) or to one participant (in multi-party interaction). By contrast, C-BAS and MacVisSTA do not have the concept of a tier; they keep all annotations in a single list. When converting from a non-tier-based format to a tier-based format, a partition of this list into tiers must be found.
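For illustration, a minimal sketch of one way such a partition can be found: greedily assign each annotation, sorted by start time, to the first tier in which it does not overlap the annotations already placed there. This is one reasonable strategy, not a rule prescribed by any of the tools or by the exchange format.

def partition_into_tiers(annotations):
    """annotations: list of (start, end, label) tuples, times in seconds."""
    tiers = []  # each tier is a list of non-overlapping annotations
    for ann in sorted(annotations):
        for tier in tiers:
            if tier[-1][1] <= ann[0]:  # no overlap with the tier's last annotation
                tier.append(ann)
                break
        else:
            tiers.append([ann])  # open a new tier
    return tiers

anns = [(0.0, 2.0, "hello"), (1.0, 3.0, "nods"), (2.5, 4.0, "yes")]
print(partition_into_tiers(anns))
# -> [[(0.0, 2.0, 'hello'), (2.5, 4.0, 'yes')], [(1.0, 3.0, 'nods')]]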
Single vs. multiple labels
Annotations consist of a single label in ELAN, EXMARaLDA, MacVisSTA and Transformer, while ANVIL and C-BAS can have multiple (typed) labels for one and the same annotation. When converting from the latter formats to one of the former, each multi-label annotation has to be split into a corresponding number of single-label annotations.
Implicit vs. explicit timeline
In ANVIL, C-BAS, MacVisSTA and Transformer, the timestamps of annotations refer directly to media times in the recording. By contrast, ELAN and EXMARaLDA define an explicit external timeline, i.e. an ordered set of anchors to which annotations refer. Anchors in this external timeline can, but need not, be assigned an absolute timestamp which links them to the media signal. It is thus possible in ELAN and EXMARaLDA to leave the media offsets of certain annotations unspecified. 1 In terms of interoperability, the difference between implicit and explicit timelines poses two problems: First, the former do not permit unspecified media offsets. When going from a format with an explicit to a format with an implicit timeline, missing offsets therefore have to be calculated. The simplest way to achieve this is through (linear) interpolation. Second, in going from an implicit to an explicit timeline, the question arises of how to treat points with identical offsets. If two such points are mapped to different anchors (the EXMARaLDA and ELAN formats allow this), there is no straightforward way of ordering them and contradictory data structures (i.e. annotations whose endpoint precedes their startpoint in the timeline) may result. It therefore seems more practical to map them to a single anchor.
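For illustration, a minimal sketch of filling unspecified anchor offsets by linear interpolation between the nearest anchors with known offsets, assuming anchors are given in timeline order with None marking an unspecified offset.

def interpolate_offsets(offsets):
    """offsets: ordered list of anchor offsets, None where unspecified."""
    out = list(offsets)
    known = [i for i, v in enumerate(out) if v is not None]
    for left, right in zip(known, known[1:]):
        span, steps = out[right] - out[left], right - left
        for k in range(left + 1, right):
            # place intermediate anchors at equal distances
            out[k] = out[left] + span * (k - left) / steps
    return out

print(interpolate_offsets([0.0, None, None, 3.0, None, 5.0]))
# -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]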
Semantic specifications and constraints
The properties of the multimodal annotation formats discussed so far, just like the AG framework in general, are on a relatively high level of abstraction. That is, they concern very general structural characteristics of annotation data, and they do not say very much about their semantics. 2 While a high level of abstraction is beneficial to interoperability in many ways, actual applications profit from more concrete semantic specifications. All of the tools considered in our effort introduce such specifications as a part of their data models and formats, thereby often imposing further structural constraints on annotations which may have to be taken into account in data conversion.

1 In other words: the annotator can freely determine the degree of precision of the alignment between annotations and recordings. By the same logic, it becomes possible to have a completely non-temporal interpretation of start and end points, for instance when the annotated object is not a recording, but a written text.
2 It is in this sense that Bird/Liberman (2001:55) call their AG framework "ontologically parsimonious (if not positively miserly!)"

Figure 1: User interface of the EXMARaLDA Partitur-Editor
Speaker assignment of tiers
In EXMARaLDA and ELAN, a tier can be assigned to one member of an externally defined set of speakers. In all other tools, speaker assignment can only be expressed on the surface, e.g. by using appropriate tier names, but speakers and speaker assignment are not an integral part of the semantics of the respective data model.
Parent/Child relations between tiers
In ANVIL, ELAN and Transformer, tiers can be explicitly assigned to a parent tier. Tied to this assignment is the constraint that annotations in child tiers must have a (chain of) annotation(s) in the parent tier with identical start and end points. This relationship can be used, for example, to ensure that annotating elements (e.g. POS tags) always have an annotated element (e.g. a word) they refer to. In fact, in ELAN, certain annotations in child tiers do not even get an immediate reference to the timeline. Instead, they inherit their start and end points from the corresponding annotation in the parent tier.
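For illustration, a minimal sketch of checking this constraint, assuming annotations are (start, end) pairs and the parent tier is sorted by start time; this is an illustrative reconstruction, not code from any of the tools.

def satisfies_parent_constraint(child, parent_tier):
    """child: (start, end); parent_tier: sorted list of (start, end)."""
    # collect the parent annotations that fall inside the child's span
    chain = [a for a in parent_tier if child[0] <= a[0] and a[1] <= child[1]]
    if not chain or chain[0][0] != child[0] or chain[-1][1] != child[1]:
        return False  # the chain must share the child's start and end points
    # and it must be gapless
    return all(a[1] == b[0] for a, b in zip(chain, chain[1:]))

parent = [(0.0, 1.0), (1.0, 2.5), (2.5, 4.0)]
print(satisfies_parent_constraint((0.0, 2.5), parent))  # -> True
print(satisfies_parent_constraint((0.5, 2.5), parent))  # -> False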
Tier types
All tier-based tools define some kind of tier typology, i.e. ways of classifying individual tiers with respect to their semantic or structural properties. Thus, tiers in ANVIL can be of type 'primary', 'singleton' or 'span', reflecting structural properties related to the parent/child distinction described above. Similarly, ELAN distinguishes between the 'symbolic' types 'time subdivision', 'included in', 'symbolic subdivision' and 'symbolic association' (Transformer uses the same distinctions), and EXMARaLDA has the tier types 'transcription', 'description' and 'annotation'. All these type distinctions address a similar issue: they tell 'their' application about meaningful operations that can be carried out on the annotation data. However, the very fact that the typologies serve an application-specific purpose makes it difficult to map between them when it comes to data conversion.
Restrictions on label content
Besides classifying annotations according to their structural properties, some tools also provide a way of prescribing permissible values for annotation labels. ANVIL has the most far-reaching functionality in this respect: it allows the definition of possible annotation values in a separate 'specification file'. A similar purpose is fulfilled by a so-called 'controlled vocabulary' in ELAN. There was some discussion in our group as to whether these specifications are to be considered part of the tools' formats at all. In any case, the fact that not every tool format provides a place for specifying such restrictions on label content makes this kind of data problematic for data exchange.
Exchange Format
Given that AG had been chosen as the general framework, we decided to develop the exchange format on the basis of AG's XML-based file format, which is identical to level 0 of the Atlas Interchange Format (AIF, see [2], [13]). We agreed to use the following strategy: First, we would define the greatest common denominator of all tool formats and make sure that we achieve lossless exchange of this information. Second, we would devise a way of uniformly encoding all information which goes beyond the common denominator. In that way, the exchange format will at least capture all the available information, and each tool's import routine can decide whether and how it can make use of that information. While this manner of proceeding does not guarantee lossless round-tripping between different tools, it should at least make it possible for the user to work with a chain of tools with increasingly complex data formats without losing any information in the conversion process(es).
Essentially, the greatest common denominator consists in the basic building blocks (i.e. labels with start and end times) plus the additional structural entities (tiers and timeline) discussed in section 3.1. The concepts discussed in section 3.2., on the other hand, go beyond the common denominator information. Consequently, the main characteristics of the exchange format are as follows:
Annotations and timeline
As prescribed by AIF, annotations are represented in <Annotation> elements which refer to external <Anchor> elements via start and end attributes. The annotation text is represented in one or more 3 <Feature> elements underneath the <Annotation> element, e.g.:
<Anchor id="T6" offset="10" unit="milliseconds"/> <Anchor id="T7" offset="30" unit="milliseconds"/>
[…]
<Annotation type="TIE1" start="T6" end="T7"> <Feature name="description"> And so hee </Feature>
</Annotation>
As mentioned above, for tools without an explicit timeline, the <Anchor> elements have to be generated from timestamps within annotations.
Tier assignment
AIF's <MetadataElement> element is used to record the existence of and information about tiers. We prescribe a fixed name 'Tier' for this kind of information and a nested <MetadataElement> element with the fixed name 'TierIdentifier' to provide each tier with a unique identifier. This identifier is then referred to from the type attribute in the <Annotation> element. Tools with non-tier-based data formats can ignore this information when importing from the exchange format, but need to generate it from appropriate other elements of the data structure (i.e. from other categorizations of annotations) when exporting to the exchange format.
<MetadataElement name="Tier">
  <MetadataElement name="TierIdentifier">TIE1</MetadataElement>
</MetadataElement>
[…]
<Annotation type="TIE1" start="T6" end="T7">
Additional information
Further information about tiers is stored in nested <MetadataElement> elements with the fixed name 'TierAttribute'. Each tier attribute is represented by the fixed triple Source-Name-Value, where 'Source' describes the defining instance (i.e. the tool), 'Name' the name given by the tool for that attribute and 'Value' its value.
Conversion Routines
All participating tool developers were asked to write routines which would convert between their tools' formats and the exchange format. The technology for implementing these routines could be freely chosen. Thus, the ANVIL and ELAN conversions are done using the AG programming library, the EXMARaLDA conversion is based on XSL stylesheets, the Theme converter is written in Perl, the Transformer converter is written in Visual Basic, and MacVisSTA uses Python scripts for the task. At this point in time, we see no disadvantage in this diversity. Rather, we think that the fact that all these technologies have led to working conversion routines can be seen as a proof of the flexibility of our solution. Partly, the new conversion routines have been integrated into the respective tools. Partly, they can be used as standalone converters.
Conclusion and Outlook
Our effort so far has resulted in a format via which the common denominator information can be reliably exchanged between the tools and which stores additional information in a standardized way. The interoperability can be extended to other tools like Praat [5], Transcriber [3] or the TASX Annotator [15] by making use of existing import and export routines which some of the tools offer.
The new data exchange options and the fact that we have a systematic analysis of the tool formats' differences and commonalities are a major step forward towards the interoperability that many multimodality researchers expect from their tools. Further developments will concern the information which goes beyond the common denominator. There are two areas in which we plan to extend the current specification of the exchange format within our approach:

Simple partial correspondences: Some bits of information, although they do not exist in every format, are nevertheless easily mappable between those formats in which they are defined. An example is the speaker assignment of tiers which is done through a 'participant' attribute in ELAN and a 'speaker' attribute in EXMARaLDA. Mapping between these two is therefore simply a matter of agreeing that their semantics are identical and specifying a unique name for them to be used in a <MetadataElement> in AIF.

Complex partial correspondences: Other bits of information are also present in several formats, but are encoded in non-isomorphic ways. For example, the parent-child relation between tiers is encoded in both ANVIL and ELAN as an explicit attribute which points from the child tier to the parent tier via a unique ID. In EXMARaLDA, there is no such attribute, but the relation of a tier of type 'Transcription' to all tiers of type 'Annotation' which carry the same speaker assignment is also to be interpreted as a parent-child relation. If the exchange format defines a reliable way of recording information about parent-child relations between tiers, the EXMARaLDA export could transform this information accordingly and thus make it accessible to ANVIL and ELAN for import (and vice versa).
Going beyond our current approach, we see two ways of further enhancing tool interoperability. The first is to reduce incompatibilities by modifying and assimilating the tools' data formats themselves. However, given that the diversity in tool formats is to a great part motivated by the different specializations of the respective tools, we do not expect (nor do we think it is desirable) to fully standardize the representation of multimodal annotations in that way. Another strategy has been proposed by a working group at the EMELD/TILR workshop 2007 at which our proposal had been presented: wherever a format-based approach to interoperability such as ours meets its limits, it might be worthwhile considering process-based methods for data exchange. In such an approach, "interoperability is achieved by having the various annotation tools interact with each other via a well-defined process which mediates the interaction among the tools. Within this process would be the requisite information regarding the data models of each tool that would interact it with as well as methods for detecting (and ideally also for resolving) annotation conflicts." (see [8]). In other words: a higher degree of interoperability could be achieved by letting a third component memorize and restore information which was lost in a conversion between two tools. Such a third component could also act on the basis of the proposed exchange format. We intend to explore this possibility in the future.
Acknowledgements
The initiative described in this paper was sponsored in part by funding from The MITRE Corporation Technology Program. We also gratefully acknowledge the Annotation Graph and Annotation Graph Toolkit projects.
More than one <Feature> element is used whenever an annotation consists of more than one label (cf. section 3.1.2.)
Appendix: Commented example of an instance of the multimodal annotation exchange format

This example was generated by the EXMARaLDA export mechanism. It corresponds to the annotation file illustrated in the screenshot in Figure 1.

<?xml version="1.0" encoding="UTF-8"?>
<AGSet xmlns="http://www.ldc.upenn.edu/atlas/ag/" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0" id="exmaralda">
  <Metadata>
    <!-- Each tier is defined in a MetadataElement with name 'Tier' -->
    <MetadataElement name="Tier">
      <!-- A child MetadataElement with name 'TierIdentifier' specifies a unique ID for this tier -->
      <!-- This element is obligatory -->
      <MetadataElement name="TierIdentifier">TIE0</MetadataElement>
      <!-- Further child MetadataElements with name 'TierAttribute' define further properties of the respective tier -->
      <!-- These elements are optional -->
      <!-- this tier property says that the tier is assigned to the speaker with the (unique) ID 'SPK0' -->
      <MetadataElement name="TierAttribute">
        <!-- This MetadataElement specifies the tool which defined the property - in this case EXMARaLDA -->
        <MetadataElement name="Source">EXMARaLDA</MetadataElement>
        <!-- This MetadataElement specifies the name of the property in the tool's format -->
        <MetadataElement name="Name">speaker</MetadataElement>
        <!-- This MetadataElement specifies the value of the property -->
        <MetadataElement name="Value">SPK0</MetadataElement>
      </MetadataElement>
      <!-- Another tier property defined by EXMARaLDA: the tier is of category 'sup' (for 'suprasegmental') -->
      <MetadataElement name="TierAttribute">
        <MetadataElement name="Source">EXMARaLDA</MetadataElement>
        <MetadataElement name="Name">category</MetadataElement>
        <MetadataElement name="Value">sup</MetadataElement>
      </MetadataElement>
      <!-- Another tier property defined by EXMARaLDA: the tier is of type 'a' (for 'annotation') -->
      <MetadataElement name="TierAttribute">
        <MetadataElement name="Source">EXMARaLDA</MetadataElement>
        <MetadataElement name="Name">type</MetadataElement>
        <MetadataElement name="Value">a</MetadataElement>
      </MetadataElement>
    </MetadataElement>
    <!-- another tier definition -->
    <MetadataElement name="Tier">
      <MetadataElement name="TierIdentifier">TIE1</MetadataElement>
      <!-- follows another set of EXMARaLDA-specific tier attributes -->
    </MetadataElement>
    <!-- yet another tier definition -->
    <MetadataElement name="Tier">
      <MetadataElement name="TierIdentifier">TIE2</MetadataElement>
      <!-- follows another set of EXMARaLDA-specific tier attributes -->
    </MetadataElement>
    <!-- follow more tier definitions -->
  </Metadata>
  <!-- The Timeline to which Anchors refer -->
  <Timeline id="exmaralda_Timeline1">
    <!-- The Signal element specifies the media file for this annotation -->
    <Signal id="exmaralda_Timeline1_Signal1" unit="milliseconds" mimeClass="" mimeType="video/quicktime" encoding="" xlink:href="pear.mov"/>
  </Timeline>
References

[1] Anvil website. http://www.anvil-software.de/
[2] ATLAS website. http://sourceforge.net/projects/jatlas/
[3] Barras, C.; Geoffrois, E.; Wu, Z. & Liberman, M. (2000). Transcriber: Development and Use of a Tool for Assisting Speech Corpora Production. Speech Communication 33, 5-22.
[4] Bird, S. & Liberman, M. (2001). A formal framework for linguistic annotation. Speech Communication 33, 23-60.
[5] Boersma, P. & Weenik, D. (1996). PRAAT, a system for doing phonetics by computer, version 3.4. Institute of Phonetic Sciences of the University of Amsterdam, Report 132. 182 pages.
[6] Brugman, H. & Russel, A. (2004). Annotating Multimedia/Multi-modal resources with ELAN. Proceedings of LREC 2004, Fourth International Conference on Language Resources and Evaluation.
[7] C-BAS website. http://www.cmi.arizona.edu/go.spy?xml=cbas.xml
[8] Cochran, M.; Good, J.; Loehr, D.; Miller, S.A.; Stephens, S.; Williams, B. & Udoh, I. (2007). Report from TILR Working Group 1: Tools interoperability and input/output formats. [http://tilr.mseag.org/wiki/index.php?title=Working_Group_1]
[9] ELAN website. http://www.lat-mpi.eu/tools/tools/elan
[10] EXMARaLDA website. http://www.exmaralda.org
[11] Kipp, M. (2001). Anvil - A generic annotation tool for multimodal dialogue. Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech), Aalborg, 1367-1370.
[12] Kipp, M. (2004). Gesture Generation by Imitation - From human behavior to computer character animation. Boca Raton, Florida: Dissertation.com.
[13] Laprun, C.; Fiscus, J.; Garofolo, J. & Pajot, S. (2002). Recent Improvements to the ATLAS Architecture. Proceedings of HLT 2002, Second International Conference on Human Language Technology, San Francisco.
[14] MacVissta website. http://sourceforge.net/projects/macvissta/
[15] Milde, J.-T. & Gut, U. (2002). The TASX Environment: An XML-Based Toolset for Time Aligned Speech Corpora. Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), Gran Canaria.
[16] Website of the multimodal annotation workshop 2007. [http://www.multimodal-annotation.org]
[17] Rohlfing, K.; Loehr, D.; Duncan, S.; Brown, A.; Franklin, A.; Kimbara, I.; Milde, J.-T.; Parrill, F.; Rose, T.; Schmidt, T.; Sloetjes, H.; Thies, A. & Wellinghoff, S. (2006). Comparison of multimodal annotation tools: workshop report. Gesprächsforschung - Online-Zeitschrift zur verbalen Interaktion (7), 99-123.
[18] Rose, T. (2007). MacVisSTA: A System for Multimodal Analysis of Human Communication and Interaction. Master's thesis, Virginia Tech.
[19] Rose, T.; Quek, F. & Shi, Y. (2004). MacVisSTA: A System for Multimodal Analysis. Proceedings of the 6th International Conference on Multimodal Interfaces.
[20] Schmidt, T. (2005). Time-Based data models and the TEI guidelines for transcriptions of speech. Working papers in Multilingualism (56), Hamburg.
[21] Schmidt, T. & Wörner, K. (2008). EXMARaLDA - Creating, analysing and sharing spoken language corpora for pragmatic research. To appear in Jens Allwood (ed.): Corpus based pragmatics.
[22] Transformer website. http://www.oliverehmer.de/transformer/
[23] Wittenburg, P.; Brugman, H.; Russel, A.; Klassmann, A. & Sloetjes, H. (2006). ELAN: a Professional Framework for Multimodality Research. Proceedings of LREC 2006, Fifth International Conference on Language Resources and Evaluation.
|
2,468,776 | Sentence Parsing with Double Sequential Labeling in Traditional Chinese Parsing Task | In this paper, we propose a new sequential labeling scheme, double sequential labeling, which we apply to Chinese parsing. The parser is built with conditional random field (CRF) sequential labeling models: one focuses on the beginning of a phrase and the phrase type, while the other focuses on the end of a phrase. Our system, CYUT, participated in the 2012 second CIPS-SIGHAN conference Bake-off Task 4, the traditional Chinese parsing task, and obtained promising results on the sentence parsing task. | [
16690392,
2706681,
1739888,
17643319
] | Sentence Parsing with Double Sequential Labeling in Traditional Chinese Parsing Task
DEC. 2012
Shih-Hung Wu shwu@cyut.edu.tw
Institute for Information Industry Taipei
Chaoyang University of Technology Wufeng
TaichungTaiwan, ROC., Taiwan, ROC
Hsien-You Hsieh
Institute for Information Industry Taipei
Chaoyang University of Technology Wufeng
TaichungTaiwan, ROC., Taiwan, ROC
Liang-Pu Chen
Institute for Information Industry Taipei
Chaoyang University of Technology Wufeng
TaichungTaiwan, ROC., Taiwan, ROC
Sentence Parsing with Double Sequential Labeling in Traditional Chinese Parsing Task
Proceedings of the Second CIPS-SIGHAN Joint Conference on Chinese Language Processing
the Second CIPS-SIGHAN Joint Conference on Chinese Language ProcessingTianjin, ChinaDEC. 2012
In this paper, we propose a new sequential labeling scheme, double sequential labeling, which we apply to Chinese parsing. The parser is built with conditional random field (CRF) sequential labeling models: one focuses on the beginning of a phrase and the phrase type, while the other focuses on the end of a phrase. Our system, CYUT, participated in the 2012 second CIPS-SIGHAN conference Bake-off Task 4, the traditional Chinese parsing task, and obtained promising results on the sentence parsing task.
Introduction
Parsing identifies the syntactic role of each word in a sentence, which is the starting point of natural language understanding; a parser is thus an important technology in many natural language processing (NLP) applications. Theoretically, given a correct grammar, a parser can parse any valid sentence. In the real world, however, each writer might have a different grammar in mind, and it is hard to parse all the sentences in a corpus without a commonly accepted grammar. The PARSEVAL measures help to evaluate parsing results from different systems in English (Harrison et al., 1991).
Parsing Chinese is even harder, since Chinese lacks morphological markers for different part-of-speech (POS) tags, not to mention the competing standards for word segmentation and POS tag sets. At the 2012 CIPS-SIGHAN Joint Conference on Chinese Language Processing, a traditional Chinese parsing task was proposed. The task is similar to the previous simplified Chinese parsing task (Zhou and Zhu, 2010), but with a different evaluation set and standard. In this task, systems should recognize the phrase labels (S, VP, NP, GP, PP, XP, and DM), corresponding to Clause, Verb Phrase, Noun Phrase, Geographic Phrase, Preposition Phrase, Conjunction Phrase, and Determiner Measure phrase, all of which are defined in the User Manual of Sinica Treebank v3.0 1. The goal of the task is to evaluate the ability of automatic parsers on complete sentences in real texts. The task organizers provide a segmented corpus and standard parse trees. Thus, participants can bypass the problems of word segmentation and POS tag set choice and focus on identifying phrase boundaries and types. The test set is 1,000 segmented sentences, each with more than 7 words, for example: 他 刊登 一則 廣告 在 報紙 上.
(He published an advertisement on newspaper in)
The system should recognize the syntactic structure in the given sentences, such as: S(agent:NP(Nh:他)|Head:VC:刊登|theme:NP(DM:一則|Na:廣告)|location:PP(P:在|GP(NP(Na:報紙)|Ng:上))).
In addition to the sentence parsing task, there is a semantic role labeling task, which aims to find the semantic role of each syntactic constituent. Participants can use either only the training data provided by the organizers (the closed track) or additional data (the open track).
In the following sections we will report how we use sequential labeling models on sentence chunking in the sentence parsing task in the closed track.
Methodology
Sequential labeling is a machine learning method that can train a tagger to tag a sequence of data.
The method is widely used in various NLP applications such as word segmentation, POS tagging, named entity recognition, and parsing. Applying the method to different tasks requires different adjustments; first of all, the tag set must be defined. In POS tagging, the tag set arises naturally, since each word receives a tag from the POS tag set. In other tasks, the tag set is more complex, usually encoding the beginning, the end, and the outside of a sub-sequence. With an appropriate tag set, the tag sequence can indicate the boundary and type of a constituent correctly.
Our parsing approach is based on chunking (Abney, 1991), as in previous Chinese parsing works (Wu et al., 2005; Zhou et al., 2010). Finkel et al. (2008) suggested CRFs for training parsing models for English. Since chunking provides only one level of parsing, not full parsing, several approaches have been proposed to achieve full parsing. Tsuruoka et al. (2009) proposed a bottom-up approach in which the smallest phrases are constructed first and then merged into larger phrases. Zhou et al. (2010) proposed another approach in which maximal noun phrases are recognized first and decomposed into basic noun phrases later. Since one large NP often contains smaller NPs in Chinese, this approach can simplify many Chinese sentences. In this paper, we define a double sequential labeling scheme to deal with the problem in a simpler way.
Sequential labeling
Many NLP applications can be achieved by sequential labeling. Input X is a data sequence to be labeled, and output Y is a corresponding label sequence, where each label in Y is taken from a specific tag set. The model can be defined as:

$$p(Y|X) = \frac{1}{Z(X)} \exp\Big(\sum_{k} \lambda_k f_k\Big) \qquad (1)$$

where Z(X) is the normalization factor, f_k is a set of features, and λ_k is the corresponding weight.
Many machine learning methods have been used to train sequential labeling models, such as Hidden Markov Models, Maximum Entropy (Berger, 1996), and CRF (Lafferty, 2001). These models can be trained on a corpus with correct labels and then used as taggers to label new input. The performance is proportional to the size of the training set and inversely proportional to the size of the tag set. Therefore, if a large training set is not available, reducing the tag set can be a way to improve performance. In this task, we define two small tag sets for the closed track.
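As a concrete illustration (ours; the paper itself uses the CRF++ toolkit described below), a CRF tagger of this kind can be trained and applied with the python-crfsuite package. The feature extraction here is reduced to the surrounding words, and the sentence, tags, and file name are all hypothetical toy values.

```python
import pycrfsuite  # assumes the python-crfsuite package is installed

def word_features(sent, i):
    """Features for position i: previous, current and next word."""
    feats = [f"W0={sent[i]}"]
    if i > 0:
        feats.append(f"W-1={sent[i-1]}")
    if i < len(sent) - 1:
        feats.append(f"W+1={sent[i+1]}")
    return feats

# Toy training data: one segmented sentence and its POS tags.
sentence = ["他", "刊登", "一則", "廣告"]
tags = ["Nh", "VC", "DM", "Na"]

trainer = pycrfsuite.Trainer(verbose=False)
trainer.append([word_features(sentence, i) for i in range(len(sentence))], tags)
trainer.train("pos.model")  # hypothetical model file

tagger = pycrfsuite.Tagger()
tagger.open("pos.model")
print(tagger.tag([word_features(sentence, i) for i in range(len(sentence))]))
```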
Double sequential labeling scheme
Sequential tagging can be used to label a series of words as a chunk by tagging each word as the Beginning or Intermediate of the chunk, or Outside any chunk; this tagging scheme is called the B-I-O scheme. For the parsing task, we have to define two tags for each type of phrase, such as B-NP and I-NP for noun phrases. The B-I-O scheme works well for labeling non-overlapping chunks. However, it cannot specify overlapping chunks, such as nested named entities or long NPs that include short NPs.
In order to specify overlapping chunks, we define a double sequential tagging scheme, which consists of two taggers: one tags the input sequence with IB tags, and the other tags the input sequence with IE tags, where E marks the ending word of a chunk. The first tagger gives the type and beginning position of each phrase in the sentence, while the second tagger indicates the ending point of each phrase. Thus, many overlapping phrases can be specified clearly with this technique.
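To make the scheme concrete, the following snippet (ours, not the paper's) lists the two label sequences for the training sentence that also appears later in Tables 3 and 4; note how one IE sequence supplies ending positions for phrases opened by different B tags.

```python
# Double labeling of 他 的 作品 與 生活 情形 被 拍成 了 電影 (cf. Tables 3 and 4):
words = ["他", "的", "作品", "與", "生活", "情形", "被", "拍成", "了", "電影"]

# IB tagger output: phrase type and beginning position.
ib = ["B-NP", "I-NP", "B-NP", "I-NP", "B-NP", "I-NP", "B-PP", "I-S", "I-S", "B-NP"]

# IE tagger output: E marks a word that ends some phrase.
ie = ["I", "I", "E", "I", "I", "E", "E", "I", "I", "E"]

# The large NP starting at 他 and the small NP starting at 生活 both
# end at 情形 - an overlap a single B-I-O sequence could not express.
for w, b, e in zip(words, ib, ie):
    print(f"{w}\t{b}\t{e}")
```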
The Parsing Technology
The architecture of our system is shown in Figure 1. The system consists of three tagging modules and one post-processing module. The POS tagger labels each word in the input sentence with a POS tag. Then the sentence and the corresponding POS tags are double-labeled with a beginning-or-intermediate-of-a-type tag and an ending-or-not tag by the IB and IE taggers. A post-processing module then produces the final boundary and phrase type tags for the sentence. Each component is described in the following subsections.

Figure 1. System architecture
Part-of-Speech tagging
The POS tagging in our system is done by sequential labeling technology with CRF, as in Lafferty (2001). We use the CRF++ toolkit 2 as our POS tagging tool. The model is trained from the official training set. We use the reduced POS tag set provided by CKIP; the complete set of POS tags is defined by CKIP 3. Figure 2 shows the architecture of the CRF tagger. For different applications, system developers have to update the tag set, the feature set, and the preprocessing module, and then run the training process of the CRF model. Once the model is trained, it can be used to process input sentences in the same format. The feature set for POS tagging is the word itself, the word preceding it, and the word following it. The training sentences have to be preprocessed before they can be used as input to the CRF++ toolkit. Table 1 shows an example of the input format for training a CRF tagger. The original sentence in the training corpus is: S(NP(Nh:他|DE:的|NP(NP(Na:作品)|Caa:與|NP(Na:生活|Na:情形)))|PP(P:被)|VG:拍成|Di:了|NP(Na:電影)).

Figure 2. CRF tagger architecture

2 http://crfpp.googlecode.com/svn/trunk/doc/
3 http://ckipsvr.iis.sinica.edu.tw/cat.htm
The first column shows the words in the sentence; the second column, which is reserved for additional features, is not used in this case; and the third column is the POS tag. Since words in DM phrases do not have POS tags in the training set, the tag DM itself is regarded as their POS tag.

Table 1. A POS tagging training example
他  Nh
的  DE
作品  Na
與  Caa
生活  Na
情形  Na
被  P
拍成  VG
了  Di
電影  Na

Table 2 shows the features used to train the POS tagger. In our system, due to time limitations, the features are only the word itself, the word preceding it, and the word following it. Zhou et al. (2010) suggested that more features, such as more context words or prefixes and suffixes of the context words, might improve the accuracy of POS tagging.

Table 2. Features used to train the POS tagger
Word unigrams: W-1, W0, W1
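As an aside (ours, not the paper's), rows like those in Table 1 can be derived from the bracketed training trees with a few lines of Python; the regular expression below assumes that leaf items always have the form "POS:word", which holds for the example sentence above.

```python
import re

# Extract (POS, word) pairs from a bracketed parse; leaf-level items
# are the "tag:token" fragments between phrase brackets and '|'.
tree = ("S(NP(Nh:他|DE:的|NP(NP(Na:作品)|Caa:與|NP(Na:生活|Na:情形)))"
        "|PP(P:被)|VG:拍成|Di:了|NP(Na:電影))")

pairs = re.findall(r"([A-Za-z_]+):([^|()]+)", tree)
for pos, word in pairs:
    # CRF++ expects one token per line, columns separated by whitespace,
    # with the tag to be learned in the last column.
    print(f"{word}\t{pos}")
```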
Boundaries and types of constituents tagging
POS tagging is not evaluated in this task; it is regarded as feature preparation for parsing. The parsing result is based on both words and POS tags. In our double sequential labeling scheme, every sentence is labeled with two tags from two tag sets. The first is the IB set, which consists of B, for the beginning word, and I, for an intermediate word, of each phrase type in the task, i.e., S, NP, VP, and PP. Note that DM and GP are processed separately. The second is the IE set, which consists of E, for the ending word of any phrase, and I, for all other words.
The training sentences also have to be preprocessed before they can be used as input to the CRF++ toolkit. Table 3 shows an example of the input format for training the IB CRF tagger. The first column shows the words in the sentence, the second column is the corresponding POS, and the third column is the IB tag.

Table 3. An IB tagging training example
他  Nh  B-NP
的  DE  I-NP
作品  Na  B-NP
與  Caa  I-NP
生活  Na  B-NP
情形  Na  I-NP
被  P  B-PP
拍成  VG  I-S
了  Di  I-S
電影  Na  B-NP

Table 4 shows an example of the input format for training the IE CRF tagger. The first column shows the words in the sentence, the second column is the corresponding POS, and the third column is the IE tag.

Table 4. An IE tagging training example
他  Nh  I
的  DE  I
作品  Na  E
與  Caa  I
生活  Na  I
情形  Na  E
被  P  E
拍成  VG  I
了  Di  I
電影  Na  E

Table 5 shows the features used to train the double sequential labeling taggers. In our system, also due to time limitations, the features are the unigrams and bigrams of the word itself, the word preceding it, and the word following it, together with the unigrams, bigrams, and trigrams of the corresponding POS tags of the context words. Zhou et al. (2010) suggested that tagging accuracy might be improved by more features, such as more context words and combinations of POS tags and words in the context.

Table 5. Features used to train the double sequential labeling taggers
Word unigrams and bigrams over W-1, W0, W1; POS unigrams, bigrams, and trigrams over the same context window
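The snippet below is one plausible rendering of the feature template just described (ours, not the paper's); the out-of-range padding symbol "_" and the feature names are assumptions.

```python
def double_labeling_features(words, pos, i):
    """Context features described for the IB/IE taggers: word unigrams
    and bigrams plus POS unigrams, bigrams and a trigram around i."""
    w = lambda j: words[j] if 0 <= j < len(words) else "_"
    p = lambda j: pos[j] if 0 <= j < len(pos) else "_"
    return [
        # word unigrams and bigrams: previous, current, next
        f"W-1={w(i-1)}", f"W0={w(i)}", f"W+1={w(i+1)}",
        f"W-1W0={w(i-1)}|{w(i)}", f"W0W+1={w(i)}|{w(i+1)}",
        # POS unigrams, bigrams and trigram
        f"P-1={p(i-1)}", f"P0={p(i)}", f"P+1={p(i+1)}",
        f"P-1P0={p(i-1)}|{p(i)}", f"P0P+1={p(i)}|{p(i+1)}",
        f"P-1P0P+1={p(i-1)}|{p(i)}|{p(i+1)}",
    ]
```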
Post-processing to determine the boundaries and the types of constituents
After each word in the sentence is tagged with two tags, one from the IB set and one from the IE set, the post-processing module integrates the information from both labels to determine the boundary and type of each phrase in the sentence.
Step 1: Combine the two labels to determine boundaries. A B tag indicates the beginning of a phrase, and the following I tags of the same phrase type indicate its intermediate words. An I tag of a different type, or an E tag, indicates the end of a phrase. The type of an I tag that differs from its B tag is stored for the next step.
Step 2: Put back the phrases whose B tags were missing during step 1. A phrase containing an I tag of a different type is labeled as a larger phrase of that I tag's type.
Step 3: Add the GP phrase label according to the presence of the Ng POS tag. Table 6 shows examples of how the post-processing works on GP. Phrases without ending tags are tagged as ending at the last word.
Table 7 (at the end of the paper) shows a complete example.

Table 6. When there is a word labeled Ng, our system treats that phrase as GP:
S(agent:NP(Nh:我)|time:D:原本|Head:VF:打算|goal:VP(PP(P:在|GP(NP(Na:自然|Na:科學類)|Ng:中))|VC:找|NP(Na:答案)))
PP(Head:P:當|DUMMY:GP(VP(VC:教|goal:NP(Nh:她)|NP(Na:水|Na:字))|Ng:時))
VP(concession:Cbb:雖|Head:VD:帶給|theme:NP(Na:人們)|goal:NP(GP(NP(Na:生活)|Ng:上)|VP(Dfa:很|VH:大)|DE:的|Nv:方便))
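The following sketch is our simplified reading of the post-processing, covering steps 1 and 3 only; how the system resolves an E tag shared by several open phrases, and the whole of step 2, are not fully specified in the paper and are therefore simplified here.

```python
def decode(words, pos, ib, ie):
    """Simplified post-processing: open a phrase at each B tag, close
    the most recently opened phrase at each E tag, close leftovers at
    the last word, and add a GP label for spans containing the POS Ng.
    Step 2 (restoring phrases whose B tags are missing) is omitted."""
    phrases, open_stack = [], []
    for i, tag in enumerate(ib):
        if tag.startswith("B-"):
            open_stack.append((tag[2:], i))       # (phrase type, start index)
        if ie[i] == "E" and open_stack:
            ptype, start = open_stack.pop()
            phrases.append((ptype, start, i))
    for ptype, start in open_stack:               # phrases without an end tag
        phrases.append((ptype, start, len(words) - 1))
    # Step 3: a span containing a word tagged Ng also receives a GP label.
    gp = [("GP", s, t) for _, s, t in phrases if "Ng" in pos[s:t + 1]]
    return phrases + gp
```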
Experiment results
The training set size is 5.8 MB, about 65,000 parsed sentences. The test set size is 55.4 KB, consisting of 1,000 sentences. The closed-test accuracy of our POS tagging system is 96.80%. Since the official test does not evaluate POS tags, we cannot report POS accuracy on the open test.
Official test result
The official-run result of our system in the 2012 SIGHAN Traditional Chinese Sentence Parsing task is shown in Table 8, and the detail for each phrase type is shown in Table 9. The Precision, Recall, and F1 are all above the baseline. The official evaluation requires that the boundary and phrase label of a syntactic constituent be completely identical with the standard. The performance metrics are similar to those of PARSEVAL as suggested in (Black et al., 1991); Precision, Recall, and F1 are defined as follows:
Precision = # of correctly recognized constituents / # of all constituents in the automatic parse.
Recall = # of correctly recognized constituents / # of all constituents in the gold standard parse.
F1 = 2*P*R / (P + R).

Table 9. Detailed result of our system
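The metric definitions above translate directly into code; in this sketch (ours), each constituent is represented as a (label, start, end) triple and the spans are hypothetical.

```python
def prf(system, gold):
    """Constituent-level precision, recall and F1."""
    correct = len(set(system) & set(gold))
    p = correct / len(system) if system else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical example: 2 of 4 system constituents also occur among
# the 4 gold constituents, giving P = R = F1 = 0.5.
print(prf([("S", 0, 9), ("NP", 0, 2), ("VP", 3, 9), ("PP", 3, 5)],
          [("S", 0, 9), ("NP", 0, 3), ("VP", 4, 9), ("PP", 3, 5)]))
```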
Error analysis on the official test result
In the official test, there were 87 sentences for which our system gave a correct full parse. We find that most of these sentences contain large NP chunks; since our system tends to chunk large NPs, such sentences are parsed best by our system. For example, sentence no. 339: {S(最好康贈品包括買筆電送液晶螢幕), NP(最好康贈品), VP(最好康), VP(買筆電送液晶螢幕), NP(筆電), VP(送液晶螢幕), NP(液晶螢幕)} and sentence no. 580: {S(台中日光溫泉會館執行董事張榮福表示), NP(台中日光溫泉會館執行董事張榮福), NP(台中日光溫泉會館執行董事), NP(台中日光溫泉會館)}. In the formal run, there were 14 sentences that our system labeled wrong. We analyze the causes below in order to find ways to improve, especially for the sentences with a missing S, GP errors, and PP errors.
Error analysis on the missing S tag sentences
Our system will give an S tag if there is at least one word tagged B-S or I-S. Therefore, if there is no word tagged with S, our system will miss the S tag. Consider sentence no. 97, the parsing result of our system is:
VP(VC:摩根富林明|NP(Nc:台灣|Na:增長|Na:基金|Na:經理人|Na:葉鴻儒)|VC:分析)
System result:
{VP(摩根富林明台灣增長基金經理人葉鴻儒分析), NP(台灣增長基金經理人葉鴻儒)}
Ground Truth:
{S(摩根富林明台灣增長基金經理人葉鴻儒分析), NP(摩根富林明台灣增長基金經理人葉鴻儒), NP(摩根富林明台灣增長基金經理人), NP(摩根富林明台灣增長基金)}
The precision, recall, and F1 are all 0. The main reason our system failed to chunk the right NP is that it could not tag the POS of the named entity 摩根富林明 as Nb. Also, since the NP is not complete and the last word of the sentence is a verb, our system failed to label the S. Named entity recognition is a crucial component of word segmentation, POS tagging, and parsing.
Error analysis on GP
Consider sentence no. 13, the parsing result of our system is:
S(GP(D:然後|NP(Nh:我)|Ng:後)|VC:排|NP(DM:一個|Na:青年|Na:男子|Na:飛躍)|VP(Cbb:而|VC:起))
System result:
{S(然後我後排一個青年男子飛躍而起), GP(然後我後), NP(我), NP(一個青年男子飛躍), VP(而起)}
Ground Truth:
{S(然後我後排一個青年男子飛躍而起), NP(我後排一個青年男子), NP(我後排), VP(而起)}
The precision, recall, and F1 are 0.4, 0.5, and 0.4444 respectively. Our system reported an extra GP(然後我後). In this case, the error is caused by a wrong POS tag: the POS of '後' is not Ng. This case is hard to solve, since the CKIP online POS tagger also tags it as Ng. Our system tags a phrase as GP once the POS Ng appears.
Consider sentence no. 43, the parsing result of our system is:
S(NP(Na:司法院|DM:多年)|VP(GP(Ng:來)|VL:持續|VP(VC:選派|NP(Na:法官)|PP(P:到|NP(Nc:國外))|VC:進修|VC:學習)))
System result:
{S(司法院多年來持續選派法官到國外進修學習), NP(司法院多年), VP(來持續選派法官到國外進修學習), GP(來), VP(選派法官到國外進修學習), NP(法官), PP(到國外), NP(國外)}
Ground Truth:
{S(司法院多年來持續選派法官到國外進修學習), NP(司法院), GP(多年來), VP(選派法官到國外進修學習), NP(法官), VP(到國外進修學習), NP(國外), VP(進修學習)}
The precision, recall, and F1 are 0.5, 0.5, and 0.5. Our system found a wrong boundary for GP(多年來). This is caused by another wrong boundary of a VP.

Consider sentence no. 69, the parsing result of our system is:
VP(NP(S(NP(Na:總裁|Nb:莊秀石)|VE:預估|VP(Dfa:最|VH:快)|NP(Na:一○二年)|Ncd:底)|VB:完工))
System result:
{VP(總裁莊秀石預估最快一○二年底完工), NP(總裁莊秀石預估最快一○二年底完工), S(總裁莊秀石預估最快一○二年底), NP(總裁莊秀石), VP(最快), NP(一○二年)}
Ground Truth:
{S(總裁莊秀石預估最快一○二年底完工), NP(總裁莊秀石), VP(最快一○二年底完工), VP(最快), GP(一○二年底), NP(一○二年)}
The precision, recall, and F1 are 0.5, 0.5, and 0.5. Our system missed the GP(一○二年底), because the POS of '底' is wrongly tagged as Ncd; it should be Ng. This case is hard: the CKIP online system segments and tags it differently, as 一○二(Neu) 年底(Nd).
Error analysis on PP
Consider sentence no. 53, the parsing result of our system is:
VP(NP(PP(P:如|NP(Na:簡易|Na:餐飲)|Neqa:部分|D:可|VC:分包|PP(P:給|NP(VH:專業|Na:餐飲|Na:業者))|VC:經營)))
System result:
{PP(如簡易餐飲部分可分包給專業餐飲業者經營), PP(給專業餐飲業者)}
Ground Truth:
{PP(如簡易餐飲部分), PP(給專業餐飲業者)}
The precision, recall, and F1 are 0.5, 0.5, and 0.5. In this case, the error is caused by the missing ending tag of the first PP.
Consider sentence no. 237, the parsing result of our system is:
S(NP(NP(Na:周傑倫)|VA:前進|Nc:好萊塢|Na:首作|Na:青蜂俠)|D:仍|PP(P:在|NP(VC:拍攝|Na:階段)))
System result:
{PP(在拍攝階段)}
Ground Truth:
{no PP}
The precision, recall, and F1 are 0.6, 0.6, and 0.6. In this case, the ground truth does not include PP(在拍攝階段), because here the POS of '在' is not P; it should be VCL. This case is hard to solve, since the CKIP online POS tagger also tags it as P.
Consider sentence no. 673, the parsing result of our system is:
S(S(Nd:目前|NP(DM:這波|Na:物價|Na:跌勢)|VH:主要)|V_11:是|NP(Cbb:因|Nc:全球|Na:金融|Na:危機)|VP(Cbb:而|VC:起))
System result:
{no PP} Ground Truth:
{PP(因全球金融危機)}
The precision, recall, and F1 are 0.4, 0.5, and 0.4444 respectively. In this case, our system missed the PP(因全球金融危機), because the POS of '因' is tagged as Cbb instead of P. This case is also hard to solve, since the CKIP online POS tagger also tags it as Cbb.
                  #     %
Wrong boundary    24    25%
Wrong IB type     27    28%
Missing POS P     48    50%
Correct PP        24    25%

Table 11. Result analysis on the 96 PPs in the official test
5.4 Error analysis on NP and VP
We find that there are five types of errors in the NP or VP chunking of our system's results:
1. Error on the right boundary
2. Error on the left boundary
3. Missing the NP or VP type
4. A large phrase covers two or more small phrases with exactly the same substring.
5. Exchange of type labeling: NP into VP or VP into NP
Causes of the errors:
1. An error on the right boundary is caused by an error in IE tagging: an end tag is missing or is labeled at a wrong word.
2. An error on the left boundary is caused by an error in IB tagging: a begin tag is labeled at a wrong word, or an additional tag is produced.
3. A missing type is caused by a missing begin tag of NP or VP.
4. In many sentences, two small NPs form a large NP. In this case, our system can only recognize the large NP, so the short NPs are missing.
5. The type of the begin tag is wrong.

In the following examples, the first bracketing is the output of our system and the second is the ground truth.

NP error type examples:
{...投資新時代), NP(富蘭克林華美), VP(日前舉辦迎接投資新時代), VP(迎接投資新時代), NP(投資新時代)}
{S(富蘭克林華美投信日前舉辦迎接投資新時代), NP(富蘭克林華美投信), VP(迎接投資新時代), NP(投資新時代)...}

{...起人口販運集團案), S(基隆市警察局外事課今年破獲一起人口販運集團案), NP(基隆市警察局外事課今年破獲), NP(基隆市警察局外事課), NP(販運集團)}
{S(基隆市警察局外事課今年破獲一起人口販運集團案), NP(基隆市警察局外事課), NP(一起人口販運集團案), NP(人口販運集團案), NP(人口販運集團), NP(人口販運)...}

{...(詳情可上神乎科技官網瞭解), NP(詳情), NP(神乎科技官網)}
{S(詳情可上神乎科技官網瞭解), NP(詳情), NP(神乎科技官網), NP(神乎科技)...}

{...縣迎曙光), NP(黨主席蔡英文元旦當天), NP(黨主席蔡英文), PP(到台東縣), NP(台東縣), NP(曙光)}
{S(黨主席蔡英文元旦當天將到台東縣迎曙光), NP(黨主席蔡英文), NP(元旦當天), PP(到台東縣), NP(台東縣), NP(曙光)...}

{...不景氣時期舉債), NP(易債), NP(子孫)}
{S(不景氣時期舉債反易債留子孫), VP(不景氣時期舉債), NP(不景氣時期), S(債留子孫), NP(債), NP(子孫)} 0.5 0.3333 0.4

VP error type examples:
Error type 1: 31
{S(各球團需補助才請洋將實在說不過去), NP(各球團), VP(才請洋將), NP(洋將)}
{S(各球團需補助才請洋將實在說不過去), NP(各球團), NP(補助), VP(才請洋將實在說不過去), NP(洋將), VP(實在說不過去)}

{...消防人員), VP(才能讓災損減到最低), NP(災損減到)}
{S(消防人員才能讓災損減到最低), NP(消防人員), NP(災損), VP(...}

The error analysis on NP: We manually analyze the error cases and show the percentage of each error type in the following tables.
The percentage in Table 12 is defined as: # of error cases / total # of NP in the gold standard.

Error type    #      %
1             265    8.92%
2             415    13.96%
3             673    22.63%
4             31     1.05%
5             59     1.99%
Correct       1730   58.41%

Table 12. Error distribution on NP

The error analysis on VP: We manually analyze the error cases and show the percentage of each error type in the following table. The percentage in Table 13 is defined as: # of error cases / total # of VP in the gold standard.

Error type    #      %
1             31     4.57%
2             154    22.69%
3             362    53.32%
4             0      0%
5             59     8.06%
Correct       187    27.54%

Table 13. Error distribution on VP

By observing the two tables, we find that missing the begin tag is the major cause of error. To overcome this shortcoming, IB tagging accuracy is the most important issue. Since the wrong-type labeling error is not very frequent, our system should label more begin tags in the future.
Conclusion and Future work

This paper reports our approach to the traditional Chinese sentence parsing task in the 2012 CIPS-SIGHAN evaluation. We proposed a new labeling method, the double sequential labeling scheme, for applying linear-chain CRF models to the full parsing task. The experimental results show that our approach is much better than the baseline result and achieves average performance on each phrase type.

According to the error analysis above, many of our system's errors were caused by wrong POS tags and wrong PP phrase boundaries. POS tagging accuracy can be improved by adding more effective features, as in previous works, and by enlarging the training set. The determination of PP phrase boundaries can also be improved by a larger training set and by rules. Our system works best on S, and worst on PP and VP. The main reason for missing VPs and PPs is POS tagging errors; therefore, a better POS tagger will improve the weakest part significantly. Complicated NPs are known to be the most frequent phrases in Chinese and cannot be fully represented in a linear-chain CRF model, and our system still fails to recognize many NPs. The system's performance on NP can be improved by defining a better tag set representation.

Due to the limitation of time and resources, our system was not tested under different experimental settings. In the future, we will test our system with more feature combinations on both POS labeling and sentence parsing.

Table 7. A complete example of the post-processing steps
Step 1: (Nh:他|DE:的|NP(Na:作品)|Caa:與|NP(Na:生活|Na:情形)|PP(P:被)|VG:拍成|Di:了|NP(Na:電影)|@S
Step 2: S(NP(Nh:他|DE:的|NP(NP(Na:作品)|Caa:與|NP(Na:生活|Na:情形)))|PP(P:被)|VG:拍成|Di:了|NP(Na:電影)|@
Step 3: S(NP(Nh:他|DE:的|NP(NP(Na:作品)|Caa:與|NP(Na:生活|Na:情形)))|PP(P:被)|VG:拍成|Di:了|NP(Na:電影)))

1 http://turing.iis.sinica.edu.tw/treesearch, page 6
Acknowledgments

This study was conducted under the "III Innovative and Prospective Technologies Project" of the Institute for Information Industry, which is subsidized by the Ministry of Economic Affairs, R.O.C.
References

Abney, S. (1991). Parsing by chunks. In Principle-Based Parsing. Kluwer Academic Publishers.
Berger, A. L.; Della Pietra, S. A. & Della Pietra, V. J. (1996). A Maximum Entropy approach to Natural Language Processing. Computational Linguistics 22(1), 39-71.
Black, E.; Abney, S.; Flickenger, D.; Gdaniec, C.; Grishman, R.; Harrison, P.; Hindle, D.; Ingria, R.; Jelinek, F.; Klavans, J.; Liberman, M.; Marcus, M.; Roukos, S.; Santorini, B. & Strzalkowski, T. (1991). A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. In Speech and Natural Language Workshop, Pacific Grove, California, USA, February 1991.
Finkel, J. R.; Kleeman, A. & Manning, C. D. (2008). Efficient, Feature-based, Conditional Random Field Parsing. In Proceedings of ACL-08: HLT, 959-967, Columbus, Ohio, USA, June 2008.
Harrison, P.; Abney, S.; Black, E.; Flickinger, D.; Grishman, R.; Gdaniec, C.; Hindle, D.; Ingria, R.; Marcus, M.; Santorini, B. & Strzalkowski, T. (1991). Evaluating Syntax Performance of Parser/Grammars of English. In Jeannette G. Neal and Sharon M. Walter (eds.), Natural Language Processing Systems Evaluation Workshop, Technical Report RL-TR-91-362, 71-77.
Lafferty, J.; McCallum, A. & Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning.
Tsuruoka, Y.; Tsujii, J. & Ananiadou, S. (2009). Fast Full Parsing by Linear-Chain Conditional Random Fields. In Proceedings of EACL'09, 790-798.
Wu, S.-H.; Shih, C.-W.; Wu, C.-W.; Tsai, T.-H. & Hsu, W.-L. (2005). Applying Maximum Entropy to Robust Chinese Shallow Parsing. In Proceedings of ROCLING 2005, 257-271.
Zhou, Q. & Zhu, J. (2010). Chinese Syntactic Parsing Evaluation. In Proceedings of the CIPS-SIGHAN Joint Conference on Chinese Language Processing, August 28-29, Beijing, China.
Zhou, Q.; Lang, W.; Wang, Y.; Wang, Y. & Cai, D. (2010). The SAU Report for the 1st CIPS-SIGHAN-ParsEval-2010. In Proceedings of the CIPS-SIGHAN Joint Conference on Chinese Language Processing, August 28-29, Beijing, China.
|
10,776,665 | Automating Temporal Annotation with TARSQI | We present an overview of TARSQI, a modular system for automatic temporal annotation that adds time expressions, events and temporal relations to news texts. | [
3266019,
14027861,
2711126
] | Automating Temporal Annotation with TARSQI
June 2005
Marc Verhagen
Inderjeet Mani
Computational Linguistics
Georgetown University
WashingtonDCUSA
Roser Sauri
Robert Knippen knippen@cs.brandeis.edu
Seok Bae Jang
Computational Linguistics
Georgetown University
WashingtonDCUSA
Jessica Littman jlittman@cs.brandeis.edu
Anna Rumshisky
John Phillips
Computational Linguistics
Georgetown University
WashingtonDCUSA
James Pustejovsky jamesp@cs.brandeis.edu
Department of Computer Science
Brandeis University
02254WalthamMAUSA
Automating Temporal Annotation with TARSQI
Proceedings of the ACL Interactive Poster and Demonstration Sessions
the ACL Interactive Poster and Demonstration SessionsAnn ArborJune 2005
We present an overview of TARSQI, a modular system for automatic temporal annotation that adds time expressions, events and temporal relations to news texts.
Introduction
The TARSQI Project (Temporal Awareness and Reasoning Systems for Question Interpretation) aims to enhance natural language question answering systems so that temporally-based questions about the events and entities in news articles can be addressed appropriately. In order to answer those questions we need to know the temporal ordering of events in a text. Ideally, we would have a total ordering of all events in a text. That is, we want an event like "marched" in "ethnic Albanians marched Sunday in downtown Istanbul" to be not only temporally related to the nearby time expression Sunday but also ordered with respect to all other events in the text. We use TimeML (Saurí et al., 2004) as an annotation language for temporal markup. TimeML marks time expressions with the TIMEX3 tag, events with the EVENT tag, and temporal links with the TLINK tag. In addition, syntactic subordination of events, which often has temporal implications, can be annotated with the SLINK tag.
A complete manual TimeML annotation is not feasible due to the complexity of the task and the sheer amount of news text that awaits processing. The TARSQI system can be used stand-alone or as a means to alleviate the tasks of human annotators. Parts of it have been intergrated in Tango, a graphical annotation environment for event ordering (Verhagen and Knippen, Forthcoming). The system is set up as a cascade of modules that successively add more and more TimeML annotation to a document. The input is assumed to be part-of-speech tagged and chunked. The overall system architecture is laid out in the diagram below. In the following sections we describe the five TARSQI modules that add TimeML markup to news texts.
GUTime
The GUTime tagger, developed at Georgetown University, extends the capabilities of the TempEx tagger (Mani and Wilson, 2000). TempEx, developed at MITRE, is aimed at the ACE TIMEX2 standard (timex2.mitre.org) for recognizing the extents and normalized values of time expressions. TempEx handles both absolute times (e.g., June 2, 2003) and relative times (e.g., Thursday) by means of a number of tests on the local context. Lexical triggers like today, yesterday, and tomorrow, when used in a specific sense, as well as words which indicate a positional offset, like next month, last year, this coming Thursday are resolved based on computing direction and magnitude with respect to a reference time, which is usually the document publication time.
GUTime extends TempEx to handle time expressions based on the TimeML TIMEX3 standard (timeml.org), which allows a functional style of encoding offsets in time expressions. For example, last week could be represented not only by the time value but also by an expression that could be evaluated to compute the value, namely, that it is the week preceding the week of the document date. GUTime also handles a variety of ACE TIMEX2 expressions not covered by TempEx, including durations, a variety of temporal modifiers, and European date formats. GUTime has been benchmarked on training data from the Time Expression Recognition and Normalization task (timex2.mitre.org/tern.html) at .85, .78, and .82 F-measure for timex2, text, and val fields respectively.
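As a loose illustration (ours, not GUTime's actual code), the following sketch shows the kind of reference-time arithmetic involved in resolving an expression like "last week" to a week value; the function name and the output format are assumptions.

```python
from datetime import date, timedelta

def last_week(reference: date) -> str:
    """Return the ISO week preceding the week of the reference date,
    in a TIMEX3-style value such as '2005-W23'."""
    prev = reference - timedelta(weeks=1)
    year, week, _ = prev.isocalendar()
    return f"{year}-W{week:02d}"

# The reference time is usually the document publication time.
print(last_week(date(2005, 6, 29)))  # hypothetical document date
```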
EVITA
Evita (Events in Text Analyzer) is an event recognition tool that performs two main tasks: robust event identification and analysis of grammatical features, such as tense and aspect. Event identification is based on the notion of event as defined in TimeML. Different strategies are used for identifying events within the categories of verb, noun, and adjective. Event identification of verbs is based on a lexical look-up, accompanied by minimal contextual parsing, in order to exclude weak stative predicates such as be or have. Identifying events expressed by nouns, on the other hand, involves a disambiguation phase in addition to lexical lookup. Machine learning techniques are used to determine when an ambiguous noun is used with an event sense. Finally, identifying adjectival events takes the conservative approach of tagging as events only those adjectives that have been lexically pre-selected from TimeBank 1, whenever they appear as the head of a predicative complement. For each element identified as denoting an event, a set of linguistic rules is applied in order to obtain its temporally relevant grammatical features, like tense and aspect. Evita relies on preprocessed input with part-of-speech tags and chunks. Current performance of Evita against TimeBank is .75 precision, .87 recall, and .80 F-measure. The low precision is mostly due to Evita's over-generation of generic events, which were not annotated in TimeBank.
GUTenLINK
Georgetown's GUTenLINK TLINK tagger uses hand-developed syntactic and lexical rules. It handles three different cases at present: (i) the event is anchored without a signal to a time expression within the same clause, (ii) the event is anchored without a signal to the document date speech time frame (as in the case of reporting verbs in news, which are often at or offset slightly from the speech time), and (iii) the event in a main clause is anchored with a signal or tense/aspect cue to the event in the main clause of the previous sentence. In case (iii), a finite state transducer is used to infer the likely temporal relation between the events based on TimeML tense and aspect features of each event. For example, a past tense non-stative verb followed by a past perfect non-stative verb, with grammatical aspect maintained, suggests that the second event precedes the first.
GUTenLINK uses default rules for ordering events; its handling of successive past tense nonstative verbs in case (iii) will not correctly order sequences like Max fell. John pushed him. GUTenLINK is intended as one component in a larger machine-learning based framework for ordering events. Another component which will be developed will leverage document-level inference, as in the machine learning approach of (Mani et al., 2003), which required annotation of a reference time (Reichenbach, 1947;Kamp and Reyle, 1993) for the event in each finite clause.
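To make the transducer idea concrete, here is a toy, hand-rolled lookup that is ours, not the actual GUTenLINK rule set: only the past followed by past-perfect entry reflects the rule described above, and the plain past/past default is exactly the kind of heuristic that fails on "Max fell. John pushed him."

```python
# Illustrative only: ordering two events by their tense features.
ORDER_RULES = {
    # past then past perfect: the second event precedes the first
    ("PAST", "PASTPERF"): "AFTER",
    # successive past tenses: default narrative ordering (an assumption
    # that, as noted above, does not always hold)
    ("PAST", "PAST"): "BEFORE",
}

def relate(tense1: str, tense2: str) -> str:
    return ORDER_RULES.get((tense1, tense2), "VAGUE")

print(relate("PAST", "PASTPERF"))  # -> "AFTER"
```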
An early version of GUTenLINK was scored at .75 precision on 10 documents. More formal Precision and Recall scoring is underway, but it compares favorably with an earlier approach developed at Georgetown. That approach converted event-event TLINKs from TimeBank 1.0 into feature vectors where the TLINK relation type was used as the class label (some classes were collapsed). A C5.0 decision rule learner trained on that data obtained an accuracy of .54 F-measure, with the low score being due mainly to data sparseness.
Slinket
Slinket (SLINK Events in Text) is an application currently being developed. Its purpose is to automatically introduce SLINKs, which in TimeML specify subordinating relations between pairs of events, and classify them into factive, counterfactive, evidential, negative evidential, and modal, based on the modal force of the subordinating event. Slinket requires chunked input with events.
SLINKs are introduced by a well-delimited subgroup of verbal and nominal predicates (such as regret, say, promise and attempt), and in most cases are clearly signaled by the context of subordination. Slinket thus relies on a combination of lexical and syntactic knowledge. Lexical information is used to pre-select events that may introduce SLINKs. Predicate classes are taken from (Kiparsky and Kiparsky, 1970; Karttunen, 1971; Hooper, 1975) and subsequent elaborations of that work, as well as induced from the TimeBank corpus. A syntactic module is applied in order to properly identify the subordinated event, if any. This module is built as a cascade of shallow syntactic tasks such as clause boundary recognition and subject and object tagging. Such tasks are informed by both linguistic-based knowledge (Papageorgiou, 1997; Leffa, 1998) and corpora-induced rules (Sang and Déjean, 2001); they are currently being implemented as sequences of finite-state transducers along the lines of (Aït-Mokhtar and Chanod, 1997). Evaluation results are not yet available.
SputLink
SputLink is a temporal closure component that takes known temporal relations in a text and derives new implied relations from them, in effect making explicit what was implicit. A temporal closure component helps to find those global links that are not necessarily derived by other means. SputLink is based on James Allen's interval algebra (1983) and was inspired by (Setzer, 2001) and (Katz and Arosio, 2001) who both added a closure component to an annotation environment.
Allen reduces all events and time expressions to intervals and identifies 13 basic relations between the intervals. The temporal information in a document is represented as a graph where events and time expressions form the nodes and temporal relations label the edges. The SputLink algorithm, like Allen's, is basically a constraint propagation algorithm that uses a transitivity table to model the compositional behavior of all pairs of relations. For example, if A precedes B and B precedes C, then we can compose the two relations and infer that A precedes C. Allen allowed unlimited disjunctions of temporal relations on the edges and he acknowledged that inconsistency detection is not tractable in his algebra. One of SputLink's aims is to ensure consistency, therefore it uses a restricted version of Allen's algebra proposed by (Vilain et al., 1990). Inconsistency detection is tractable in this restricted algebra.
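The following sketch (ours; SputLink itself covers the full restricted Allen algebra) illustrates the propagation loop with a toy composition table containing a single rule: BEFORE composed with BEFORE yields BEFORE.

```python
# Toy composition table; the real system models all relation pairs.
COMPOSE = {
    ("BEFORE", "BEFORE"): "BEFORE",
}

def close(edges):
    """edges: dict mapping (x, y) -> relation. Repeatedly compose pairs
    of edges until no new relation can be inferred."""
    edges = dict(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), r1 in list(edges.items()):
            for (c, d), r2 in list(edges.items()):
                if b == c and (a, d) not in edges:
                    r = COMPOSE.get((r1, r2))
                    if r:
                        edges[(a, d)] = r
                        changed = True
    return edges

print(close({("e1", "e2"): "BEFORE", ("e2", "e3"): "BEFORE"}))
# -> also contains ("e1", "e3"): "BEFORE"
```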
A SputLink evaluation on TimeBank showed that SputLink more than quadrupled the amount of temporal links in TimeBank, from 4200 to 17500. Moreover, closure adds non-local links that were systematically missed by the human annotators. Experimentation also showed that temporal closure allows one to structure the annotation task in such a way that it becomes possible to create a complete annotation from local temporal links only. See (Verhagen, 2004) for more details.
Conclusion and Future Work
The TARSQI system generates temporal information in news texts. The five modules presented here are held together by the TimeML annotation language and add time expressions (GUTime), events (Evita), subordination relations between events (Slinket), local temporal relations between times and events (GUTenLINK), and global temporal relations between times and events (SputLink).
In the near future, we will experiment with more strategies to extract temporal relations from texts. One avenue is to exploit temporal regularities in SLINKs, in effect using the output of Slinket as a means to derive even more TLINKs. We are also compiling more annotated data in order to provide more training data for machine learning approaches to TLINK extraction. SputLink currently uses only qualitative temporal information; it will be extended to use quantitative information, allowing it to reason over durations.
TimeBank is a 200-document news corpus manually annotated with TimeML tags. It contains about 8000 events, 2100 time expressions, 5700 TLINKs and 2600 SLINKs. See (Day et al., 2003) and www.timeml.org for more details.
References

Aït-Mokhtar, S. & Chanod, J.-P. (1997). Subject and Object Dependency Extraction Using Finite-State Transducers. In Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, ACL/EACL-97 Workshop Proceedings, 71-77, Madrid, Spain.
Allen, J. (1983). Maintaining Knowledge about Temporal Intervals. Communications of the ACM 26(11), 832-843.
Day, D.; Ferro, L.; Gaizauskas, R.; Hanks, P.; Lazo, M.; Pustejovsky, J.; Saurí, R.; See, A.; Setzer, A. & Sundheim, B. (2003). The TimeBank Corpus. Corpus Linguistics.
Hooper, J. (1975). On Assertive Predicates. In John Kimball (ed.), Syntax and Semantics, volume IV, 91-124. Academic Press, New York.
Kamp, H. & Reyle, U. (1993). From Discourse to Logic, chapter 5, Tense and Aspect, 483-546.
Karttunen, L. (1971). Some Observations on Factivity. Papers in Linguistics 4, 55-69.
Katz, G. & Arosio, F. (2001). The Annotation of Temporal Information in Natural Language Sentences. In Proceedings of ACL-EACL 2001, Workshop for Temporal and Spatial Information Processing, 104-111, Toulouse, France. Association for Computational Linguistics.
Kiparsky, P. & Kiparsky, C. (1970). Fact. In Manfred Bierwisch and Karl Erich Heidolph (eds.), Progress in Linguistics. A collection of Papers, 143-173. Mouton, Paris.
Leffa, V. (1998). Clause Processing in Complex Sentences. In Proceedings of the First International Conference on Language Resources and Evaluation, volume 1, 937-943, Granada, Spain. ELRA.
Mani, I. & Wilson, G. (2000). Robust Temporal Processing of News. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL 2000), 69-76.
Mani, I.; Schiffman, B. & Zhang, J. (2003). Inferring Temporal Ordering of Events in News. Short Paper. In Proceedings of the Human Language Technology Conference (HLT-NAACL'03).
Papageorgiou, H. (1997). Clause Recognition in the Framework of Alignment. In Ruslan Mitkov and Nicolas Nicolov (eds.), Recent Advances in Natural Language Processing. John Benjamins, Amsterdam, The Netherlands.
Pustejovsky, J.; Castaño, J.; Ingria, R.; Saurí, R.; Gaizauskas, R.; Setzer, A. & Katz, G. (2003). TimeML: Robust Specification of Event and Temporal Expressions in Text. In IWCS-5, Fifth International Workshop on Computational Semantics.
Reichenbach, H. (1947). Elements of Symbolic Logic. MacMillan, London.
Tjong Kim Sang, E. & Déjean, H. (2001). Introduction to the CoNLL-2001 Shared Task: Clause Identification. In Proceedings of the Fifth Workshop on Computational Language Learning (CoNLL-2001), 53-57, Toulouse, France. ACL.
Saurí, R.; Littman, J.; Knippen, R.; Gaizauskas, R.; Setzer, A. & Pustejovsky, J. (2004). TimeML Annotation Guidelines. http://www.timeml.org.
Setzer, A. (2001). Temporal Information in Newswire Articles: an Annotation Scheme and Corpus Study. Ph.D. thesis, University of Sheffield, Sheffield, UK.
Verhagen, M. & Knippen, R. (Forthcoming). TANGO: A Graphical Annotation Environment for Ordering Relations. In James Pustejovsky and Robert Gaizauskas (eds.), Time and Event Recognition in Natural Language. John Benjamins Publications.
Verhagen, M. (2004). Times Between The Lines. Ph.D. thesis, Brandeis University, Waltham, Massachusetts, USA.
Vilain, M.; Kautz, H. & van Beek, P. (1990). Constraint propagation algorithms: A revised report. In D. S. Weld and J. de Kleer (eds.), Qualitative Reasoning about Physical Systems, 373-381. Morgan Kaufman, San Mateo, California.
|
2,505,580 | A Computational Analysis of the Language of Drug Addiction | We present a computational analysis of the language of drug users when talking about their drug experiences. We introduce a new dataset of over 4,000 descriptions of experiences reported by users of four main drug types, and show that we can predict with an F1-score of up to 88% the drug behind a certain experience. We also perform an analysis of the dominant psycholinguistic processes and dominant emotions associated with each drug type, which sheds light on the characteristics of drug users. | [
38166371,
4975368,
6210216,
2307601,
995282,
96824,
16050554,
10186140
] | A Computational Analysis of the Language of Drug Addiction
Association for Computational Linguistics. Copyright Association for Computational Linguistics. April 3-7, 2017.
Carlo Strapparava
FBK-irst
Trento, Italy
Rada Mihalcea mihalcea@umich.edu
University of Michigan
Ann Arbor, USA
A Computational Analysis of the Language of Drug Addiction
Short Papers
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2, Valencia, Spain, April 3-7, 2017. Association for Computational Linguistics.
We present a computational analysis of the language of drug users when talking about their drug experiences. We introduce a new dataset of over 4,000 descriptions of experiences reported by users of four main drug types, and show that we can predict with an F1-score of up to 88% the drug behind a certain experience. We also perform an analysis of the dominant psycholinguistic processes and dominant emotions associated with each drug type, which sheds light on the characteristics of drug users.
Introduction
The World Drug Report globally estimated that in 2012, between 162 million and 324 million people, corresponding to between 3.5 per cent and 7.0 per cent of the world population aged 15-64, had used an illicit drug (United Nations Office, 2014). Moreover, in recent years, drug users have started to share their experiences on Web forums. 1 The availability of this new and very large form of data presents new opportunities to analyse and understand the "drug use phenomenon." Recent studies have shown how by processing these data with language processing techniques, it is possible to perform tasks of toxicovigilance, e.g., finding new drugs trends, adverse reactions, geographic and demographic characterizations (Chary et al., 2013). Other studies have also focused on the phenomenon of intoxication (Schuller et al., 2014). However, despite the interest around these topics, as far as we know, textual corpora of drug addicts experiences are not yet available.
In this paper we introduce a corpus that can be exploited as a basis for a number of computational explorations on the language of drug users. One of the most controversial and interesting issues in addictionology studies is to understand why drug consumers prefer a particular type of drug over another. Actually differentiating drugs with respect to their subjective effects can have an important impact on clinical drug treatment, since it can allow clinicians to better characterize the patient in therapy, with regard to the effect they seek through the drugs they use.
The paper is organized as follows. We first review the related work, followed by a description of the dataset of drug addict experiences that we constructed. Next, we present a classification experiment on predicting the drug behind an experience. We then present specific analyses of the language of drug users, i.e. their psycholinguistic processes and the emotions associated with an experience. Lastly, we conclude the paper and present some directions for future work.
Related Work
An important piece of research on texts from social media was the platform PreDOSE (Cameron et al., 2013), designed to facilitate the epidemiological study of prescription (and related) drug abuse practices, or its successors: eDrugTrends 2 and iN3. 3 Another significant work was that of Paul and Dredze (2012; 2013). They developed a new version of Blei's LDA, factorial LDA, and for each drug, they were able to collect multiple topics (route of administration, culture, chemistry, etc.) over posts collected from the website www.drugs-forum.com. The main directions of research on the state of consciousness are focused on alcoholic intoxication and mostly performed on the Alcohol Language Corpus (Schiel et al., 2012), only available in German: for example, speech analysis (Wang et al., 2013; Bone et al., 2014) and a text based system (Jauch et al., 2013) were used to analyse this data. Regarding alcohol intoxication detection, (Joshi et al., 2015) developed a system for automatic detection of drunk people by using their posts on Twitter. (Bedi et al., 2014) performed their analysis on transcriptions from a free speech task, in which the participants were volunteers previously administered with a dose of MDMA (3,4-methylenedioxy-methamphetamine). Even if this is an ideal case study for analyzing cognitively the intoxication state, it is difficult to replicate on a large scale. Finally, as far as we know, the only attempt to classify and characterize experiences over different kinds of drugs was the project of (Coyle et al., 2012). Using a random-forest classifier over 1,000 randomly collected reports of the website www.erowid.org they identified subsets of words differentiated by drugs.
Our research is also related to the broad theme of latent user attribute prediction, which is an emerging task within the natural language processing community, having recently been employed in fields such as public health (Coppersmith et al., 2015) and politics (Conover et al., 2011; Cohen and Ruths, 2013). Some of the attributes targeted for extraction focus on demographic related information, such as gender/age (Koppel et al., 2002; Mukherjee and Liu, 2010; Burger et al., 2011; Van Durme, 2012), race/ethnicity (Pennacchiotti and Popescu, 2011; Eisenstein et al., 2011; Rao et al., 2011), location (Bamman et al., 2014), yet other aspects are mined as well, among them emotion and sentiment, personality types (Schwartz et al., 2013), user political affiliation (Cohen and Ruths, 2013; Volkova and Durme, 2015), mental health diagnosis (Coppersmith et al., 2015) and even lifestyle choices such as coffee preference (Pennacchiotti and Popescu, 2011). The task is typically approached from a machine learning perspective, with data originating from a variety of user generated content, most often microblogs (Pennacchiotti and Popescu, 2011; Coppersmith et al., 2015), article comments to news stories or op-ed pieces (Riordan et al., 2014), social posts (originating from sites such as Facebook, MySpace, Google+) (Gong et al., 2012), or discussion forums on particular topics (Gottipati et al., 2014). Classification labels are then assigned either based on manual annotations, self identified user attributes (Pennacchiotti and Popescu, 2011), affiliation with a given discussion forum type, or online surveys set up to link a social media user identification to the responses provided (Schwartz et al., 2013). Learning has typically employed bag-of-words lexical features (ngrams) (Van Durme, 2012; Filippova, 2012; Nguyen et al., 2013), with some works focusing on deriving additional signals from the underlying social network structure (Pennacchiotti and Popescu, 2011; Yang et al., 2011; Gong et al., 2012; Volkova and Durme, 2015), syntactic and stylistic features (Bergsma et al., 2012), or the intrinsic social media generation dynamic (Volkova and Durme, 2015). We should note that some works have also explored unsupervised approaches for demographic dimensions extraction, among them large-scale clustering (Bergsma et al., 2013) and probabilistic graphical models (Eisenstein et al., 2010).
Dataset
A corpus of drug experiences was collected from the user forum section of the www.erowid.org website. The data collection was performed semi-automatically, considering the most well-known drugs and those with a large number of reports. The corpus consists of 4,636 documents, any user ID removed, split into four main categories according to their main effects (U.S. Department of Justice, 2015): (1) Empathogens (EMP), covering the following substances: MDA, MDAI, MDE, MBDB, MDMA; (2) Hallucinogens (HAL), which include 5-MeO-DiPT, ayahuasca, peyote, cacti (trichocerus pachanoi, peruvianus, terschekcii, cuzcoensis, bridgesi and calea zachatechichi), mescaline, cannabis, LSD, belladonna, DMT, ketamine, salvia divinorum, hallucinogen mushrooms (psilocybe cubensis, semilanceata, 'magic mushrooms'), PCP, 2C-B and its derivatives (2C-B-FLY, 2C-E, 2C-I, 2C-T-2, 2C-T-7); (3) Sedatives (SED), which include alcohol, barbiturates, buprenorphine, heroin, morphine, opium, oxycodone, oxymorphone, hydrocodone, hydromorphone, methadone, nitrous-oxide, DXM (dextromethorphan) and benzodiazepines (alprazolam, clonazepam, diazepam, flunitrazepam, flurazepam, lorazepam, midazolam, phenazepam, temazepam); (4) Stimulants (STI), including cocaine, caffeine, catha edulis, nicotine, tobacco, methamphetamines, amphetamines.
In the scientific literature about drug users, "purists" (i.e., consumers of only one specific substance) are rare. Nonetheless, when collecting the data, we decided to consider only reports describing one single drug in order to avoid the presence of a report in multiple categories, as well as to avoid descriptions of the interaction of multiple drugs, which are hard to characterize and still mostly unknown.
Predicting the Drug behind an Experience
To determine if an automatic classifier is able to identify the drug behind a certain reported experience, we create a document classification task using Multinomial Naïve Bayes, and use the default information gain feature weighting associated with this classifier. Each document corresponds to a report labelled with its corresponding drug category. Only minimal preprocessing was applied, i.e., part-of-speech tagging and lemmatization. No particular feature selection was performed; only stopwords were removed, keeping nouns, adjectives, verbs, and adverbs. Since the major class in the experiment was the hallucinogens category, we set the baseline to its percentage: 61%. In evaluating the system we perform a five-fold cross-validation, with an overall F1-score (micro-average) of 88%, indicating that good separation can be obtained by an automatic classifier (see Table 3). Not surprisingly, the hallucinogen experiences are the easiest to classify, probably due to the larger amount of data available for this drug. Table 4 shows a sample of the most informative features for the four categories. For example, we can observe that those using empathogens are more "night"-oriented, while those addicted to sedatives and stimulants are "day"-oriented. Instead, the use of hallucinogens seems to be associated with a perceptual visual experience (i.e., see#v).
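For illustration, the classification setup above can be approximated in a few lines of scikit-learn. This is only a sketch under stated assumptions: raw bag-of-word counts stand in for the information gain weighting mentioned above, and a tiny synthetic corpus replaces the real reports.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy stand-in for the corpus: lemmatized content words of each report,
# labelled with its drug category (EMP, HAL, SED, STI).
reports = ["pill people night start good", "see trip look experience back",
           "day hour effect still good", "coke day want really start"] * 5
labels = ["EMP", "HAL", "SED", "STI"] * 5

clf = make_pipeline(CountVectorizer(), MultinomialNB())
# Five-fold cross-validation with micro-averaged F1, as in the evaluation above.
scores = cross_val_score(clf, reports, labels, cv=5, scoring="f1_micro")
print("micro-averaged F1: %.2f" % scores.mean())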
Understanding Drug Users
Psycholinguistic Processes
To gain a better understanding of the characteristics of drug users, we analyse the distribution of psycholinguistic word classes according to the Linguistic Inquiry and Word Count (LIWC) lexicon -a resource developed by Pennebaker and colleagues (Pennebaker and Francis, 1999). The 2015 version of LIWC includes 19,000 words and word stems grouped into 73 broad categories relevant to psychological processes. The LIWC lexicon has been validated by showing significant correlation between human ratings of a large number of written texts and the rating obtained through LIWC-based analyses of the same texts.
For each drug type T, we calculate the dominance score associated with each LIWC class C (Mihalcea and Strapparava, 2009). This score is calculated as the ratio between the percentage of words that appear in T and belong to C, and the percentage of words that appear in any other drug type but T and belong to C. A score significantly higher than 1 indicates a LIWC class that is dominant for the drug type T, and thus likely to be a characteristic of the experiences reported by users of this drug. Table 5 shows the top five dominant psycholinguistic word classes associated with each drug type. Interestingly, descriptions of experiences reported by users of empathogens are centered around people (e.g., Affiliation - which includes words such as club, companion, collaborate; We; Friend). Hallucinogens result in experiences that relate to the human senses (e.g., See, Hear, Perception). The experiences of users of sedatives and stimulants appear to be more concerned with mundane topics (e.g., Money, Work, Health).
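A minimal sketch of this dominance computation, under our reading of the definition above; the word class and the token lists are toy examples, not the actual LIWC data:

def dominance(words_T, words_rest, word_class):
    # Share of drug type T's tokens that fall in the class, divided by the
    # corresponding share for all other drug types taken together.
    in_T = sum(w in word_class for w in words_T) / len(words_T)
    in_rest = sum(w in word_class for w in words_rest) / len(words_rest)
    return in_T / in_rest

FRIEND = {"friend", "companion", "buddy"}          # toy LIWC-style class
emp_tokens = ["friend", "pill", "night", "friend", "people"]
other_tokens = ["see", "trip", "day", "friend", "hour", "coke"]
print(dominance(emp_tokens, other_tokens, FRIEND))  # > 1: dominant for EMP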
To quantify the similarity of the distributions of psycholinguistic processes across the four drug types, we also calculate the Pearson correlation between the dominance scores for all LIWC classes. As seen in Table 6, empathogens appear to be the most dissimilar with respect to the other drug types. Hallucinogens instead seem to be most similar to stimulants and sedatives.
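The correlation step itself is straightforward once the dominance scores are arranged as aligned vectors over the LIWC classes; a sketch with toy values (not the actual scores behind Table 6):

from scipy.stats import pearsonr

# Dominance scores of the same three LIWC classes for two drug types (toy values).
emp = [1.30, 0.80, 1.10]
hal = [0.70, 1.40, 0.90]
r, _ = pearsonr(emp, hal)
print(round(r, 2))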
Emotions and Drugs
Another interesting dimension to explore in relation to drug experiences is the presence of various emotions. To quantify this dimension, we use a methodology similar to the one described above, and calculate the dominance score for each of six emotion word classes: anger, disgust, fear, joy, sadness, and surprise (Ortony et al., 1987; Ekman, 1993). As a resource, we use WordNet Affect (Strapparava and Valitutti, 2004), in which words from WordNet are annotated with several emotions. As before, the dominance scores are calculated for the experiences reported for each drug type when compared to the other drug types. Table 7 shows the scores for the four drug types and the six emotions. A score significantly higher than 1 indicates a class that is dominant in that category. Clearly, interesting differences emerge from this table: the use of empathogens leads to experiences that are high on joy and surprise, whereas the dominant emotion in the use of hallucinogens as compared to the other drugs is fear. Sedatives lead to an increase in disgust, while stimulants have a mix of anger and joy.
Conclusions
Automating language assessment of drug addict experiences has a potentially large impact on both toxicovigilance and prevention. Drug users are inclined to underreport symptoms to avoid negative consequences, and they often lack the self awareness necessary to report a drug abuse problem. In fact, often times people with drug misuse problems are reported on behalf of a third party (social services, police, families), when the situation is no longer ignorable.
In this paper, we introduced a new dataset of drug use experiences, which can facilitate additional research in this space. We have described preliminary classification experiments, which showed that we can predict the drug behind an experience with a performance of up to 88% F1-score. To better understand the characteristics of drug users, we have also presented an analysis of the psycholinguistic process and emotions associated with different drug types.
We would like to continue the present work along the following directions: (i) Extend the corpus with texts written by people who supposedly do not ordinarily make use of drugs, using patient submitted forum posts when talking about ordinary medicines. The style of such patient submitted posts is expected to be similar to the one of drug experience reports, since both address writing about an experience with some particular substance; (ii) Explore the association between drug preferences and personality types. Following Khantzian's hypothesis (Khantzian, 1997), certain personalities may be more prone to a particular drug with respect to its subjective effects. Characterizing subjects by their potential drug preferences could enable clinicians, like in a reversed "recommender system," to explicitly warn their patients to avoiding particular kind of substances since they could become addictive.
The dataset introduced in this paper is available for research purposes upon request to the authors.
Table 1 presents statistics on the dataset, while Table 2 shows excerpts from experiences reported for each drug type. 4

Table 1: Corpus statistics.

Drug type   Number reports   Total words
EMP         399              378,478
HAL         2,806            3,494,223
SED         954              692,121
STI         480              449,596

Table 2: Sample entries in the drug dataset.

EMP: I found myself witnessing an argument between a man and a woman whom I've never met. I felt empathetic towards both of them, recognizing their struggle, he meant well, but couldn't find the right words, she, obviously cared a great deal for him but was doubtful of his intentions. The Argument escalated and I became very disturbed... I had to open my eyes again. My heart rate was up, my breathing was heavy, I had found a window to my own fears, to see what frustrates you the most, and not be able to do anything about it.

HAL: After watching TV for a bit I looked around the room and was suddenly jerked awake, I felt vibrant, alive and aware of my entire physical body. The friction of blood in my veins, the movement of my diaphragm, the tensing of muscles, the clenching of my heart. I looked down at my hands and was acutely aware of the bones within, I could feel the flesh sliding over the bone internally while my normal sense of touch was reduced so every thing felt like cold chrome.

SED: Feeling kind of nausea, but I'm not worried about throwing up. Shooting great pool, I'm making several shots in a row. I'm so happy right now, I would like to be like this all day. I'm begining to notice that I'm having slight audio hallucinations, like hearing small noises that aren't there. Also some slight visual hallucinations, thinking I see something move nearby but nothing alive is even close to me.

STI: I get up in the morning for work and do about two lines while I'm getting ready and somehow manage to make it through work without a line. Not that I don't want to only because of the fear of getting caught. I can say that it takes the edge off things at work though. Through the evening I do a line whenever I feel like it. At bedtime I tell myself over and over that it's time to go to sleep. Sometimes I sleep but if I can't I know I have my friend to help me through the next day.

Table 3: Naïve Bayes classification performance.

               Prec.   Rec.   F1
EMP            0.84    0.71   0.77
HAL            0.93    0.92   0.92
SED            0.86    0.86   0.86
STI            0.73    0.85   0.78
micro-average                 0.88

Table 4: Most informative features (words and parts-of-speech).

EMP   experience#n good#a pill#n people#n about#r drug#n night#n start#v
HAL   see#v experience#n trip#n look#v back#r say#v try#v down#r as#r
SED   day#n drug#n start#v about#r try#v good#a hour#n still#r effect#n
STI   day#n drug#n coke#n good#a try#v start#v about#r want#v really#r

Table 5: Psycholinguistic word classes dominant for each drug type.

Table 6: Pearson correlations of the LIWC dominance scores.

      EMP    HAL    SED    STI
EMP   1.00   0.34   0.03   0.15
HAL          1.00   0.80   0.83
SED                 1.00   0.67
STI                        1.00

Table 7: Emotion word classes dominant for each drug type. Dominance scores larger than 1.10 are shown in bold face.

          EMP    HAL    SED    STI
Anger     1.09   0.91   1.01   1.13
Disgust   0.82   0.53   2.68   0.94
Fear      0.89   1.26   0.78   0.84
Joy       1.26   0.85   1.07   1.11
Sadness   1.08   0.95   0.96   1.09
Surprise  1.46   0.92   0.94   0.90
www.erowid.org: 95,000 unique visitors per day; www.drugs-forum.com: 210,000 members with 3.6 million unique visitors per month; www.psychonaut.com: 46,000 members.
2 http://medicine.wright.edu/citar/edrugtrends
3 http://medicine.wright.edu/citar/nida-national-earlywarning-system-network-in3-an-innovative-approach
Note that each report is annotated with a set of metadata attributes, such as gender, age at time of experience, dose and number of views; these attributes are not used in the experiments reported in this paper, but we plan to use them for additional analyses in the future.
Acknowledgments

We would like to thank Samuele Garda for his insight and enthusiasm in the initial phase of the work. We also thank Dr. Marialuisa Grech, executive psychiatrist and psychotherapist at Serd (Service for Pathological Addiction) APSS, Trento, who helped us to better understand the drug consumption and drug-addicted world. This material is based in part upon work supported by the National Science Foundation (#1344257), the John Templeton Foundation (#48503), and the Michigan Institute for Data Science. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the John Templeton Foundation, or the Michigan Institute for Data Science.
David Bamman, Chris Dyer, and Noah A. Smith. 2014. Distributed representations of geographically situated language. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 828-834, Baltimore, Maryland, June.

Gillinder Bedi, Guillermo A. Cecchi, Diego F. Slezak, Facundo Carrillo, Mariano Sigman, and Harriet de Wit. 2014. A window into the intoxicated mind? Speech as an index of psychoactive drug effects. Neuropsychopharmacology, 39(10):2340-8.

Shane Bergsma, Matt Post, and David Yarowsky. 2012. Stylometric analysis of scientific articles. In Proceedings of the North American Association of Computational Linguistics, pages 327-337, Montreal, CA.

Shane Bergsma, Mark Dredze, Benjamin Van Durme, Theresa Wilson, and David Yarowsky. 2013. Broadly improving user classification via communication-based name and location clustering on Twitter. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1010-1019.

Daniel Bone, Ming Li, Matthew P. Black, and Shrikanth S. Narayanan. 2014. Intoxicated speech detection: A fusion framework with speaker-normalized hierarchical functionals and GMM supervectors. Computer Speech and Language, 28:375-391.

John D. Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating gender on Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, pages 1301-1309.

Delroy Cameron, Gary A. Smith, Raminta Daniulaityte, Amit P. Sheth, Drashti Dave, Lu Chen, Gaurish Anand, Robert Carlson, Kera Z. Watkins, and Russel Falck. 2013. PreDOSE: A semantic web platform for drug abuse epidemiology using social media. Journal of Biomedical Informatics, 46:985-997.

Michael Chary, Nicholas Genes, Andrew McKenzie, and Alex F. Manini. 2013. Leveraging social networks for toxicovigilance. Journal of Medical Toxicology, 9:184-191.

Raviv Cohen and Derek Ruths. 2013. Classifying political orientation on Twitter: It's not easy! In Proceedings of the Seventh International AAAI Conference on Weblogs and Social Media (ICWSM 2013).

Michael Conover, Bruno Gonçalves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011. Predicting the political alignment of Twitter users. In Proceedings of the 3rd IEEE Conference on Social Computing (SocialCom).

Glen Coppersmith, Mark Dredze, Craig Harman, and Kristy Hollingshead. 2015. From ADHD to SAD: Analyzing the language of mental health on Twitter through self-reported diagnoses. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 1-10.

Jeremy R. Coyle, David E. Presti, and Matthew J. Baggott. 2012. Quantitative analysis of narrative reports of psychedelic drugs. arXiv:1206.0312.

Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 1277-1287.

Jacob Eisenstein, Noah A. Smith, and Eric P. Xing. 2011. Discovering sociolinguistic associations with structured sparsity. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, pages 1365-1374.

Paul Ekman. 1993. Facial expression of emotion. American Psychologist, 48:384-392.

Katja Filippova. 2012. User demographics and language in an implicit social network. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1478-1488.

Neil Zhenqiang Gong, Ameet Talwalkar, Lester W. Mackey, Ling Huang, Eui Chul Richard Shin, Emil Stefanov, Elaine Shi, and Dawn Song. 2012. Predicting links and inferring attributes using a social-attribute network (SAN). In The 6th SNA-KDD Workshop.

Swapna Gottipati, Minghui Qiu, Liu Yang, Feida Zhu, and Jing Jiang. 2014. An integrated model for user attribute discovery: A case study on political affiliation identification. In Vincent S. Tseng, Tu Bao Ho, Zhi-Hua Zhou, Arbee L. P. Chen, and Hung-Yu Kao, editors, Advances in Knowledge Discovery and Data Mining, volume 8443 of Lecture Notes in Computer Science, pages 434-446. Springer International Publishing.

Andreas Jauch, Paul Jaehne, and David Suendermann. 2013. Using text classification to detect alcohol intoxication in speech. In Proceedings of the 7th Workshop on Emotion and Computing (in conjunction with the 36th German Conference on Artificial Intelligence), Koblenz, Germany, September.

Aditya Joshi, Abhijit Mishra, Balamurali AR, Pushpak Bhattacharyya, and Mark James Carman. 2015. A computational approach to automatic prediction of drunk-texting. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (short papers), Beijing, China, July.

Edward J. Khantzian. 1997. The self-medication hypothesis of substance use disorders: A reconsideration and recent applications. Harvard Review of Psychiatry, 4(5):231-44.

Moshe Koppel, Shlomo Argamon, and Anat Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and Linguistic Computing, 4(17):401-412.

Rada Mihalcea and Carlo Strapparava. 2009. The lie detector: Explorations in the automatic recognition of deceptive language. In Proceedings of the Association for Computational Linguistics (ACL 2009), Singapore.

Arjun Mukherjee and Bing Liu. 2010. Improving gender classification of blog authors. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 207-217.

Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and Theo Meder. 2013. "How old do you think I am?" A study of language and age in Twitter. In Proceedings of the AAAI Conference on Weblogs and Social Media (ICWSM), pages 439-448.

Andrew Ortony, Gerald Clore, and Mark Foss. 1987. The referential structure of the affective lexicon. Cognitive Science, (11).

Michael J. Paul and Mark Dredze. 2012. Experimenting with drugs (and topic models): Multi-dimensional exploration of recreational drug discussions. In Proceedings of the AAAI Fall Symposium: Information Retrieval and Knowledge Discovery in Biomedical Text. AAAI Publications, November.

Michael J. Paul and Mark Dredze. 2013. Drug extraction from the web: Summarizing drug experiences with multi-dimensional topic models. In Proceedings of HLT-NAACL 2013, pages 168-178.

Marco Pennacchiotti and Ana-Maria Popescu. 2011. Democrats, Republicans and Starbucks afficinados: User classification in Twitter. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2011), pages 430-438.

James Pennebaker and Martha Francis. 1999. Linguistic Inquiry and Word Count: LIWC. Erlbaum Publishers.

Delip Rao, Michael Paul, Clay Fink, David Yarowsky, Timothy Oates, and Glen Coppersmith. 2011. Hierarchical Bayesian models for latent attribute detection in social media. Pages 598-601.

Brian Riordan, Heather Wade, and Afzal Upal. 2014. Detecting sociostructural beliefs about group status differences in online discussions. In Proceedings of the Joint Workshop on Social Dynamics and Personal Attributes in Social Media, pages 1-6.

Florian Schiel, Christian Heinrich, and Sabine Barfusser. 2012. Alcohol Language Corpus: The first public corpus of alcoholized German speech. Language Resources and Evaluation, 46(3):503-521, September.

Björn Schuller, Stefan Steidl, Anton Batliner, Florian Schiel, Jarek Krajewski, Felix Weninger, and Florian Eyben. 2014. Medium-term speaker states - a review on intoxication, sleepiness and the first challenge. Computer Speech and Language, 28:346-374.

H. Andrew Schwartz, Johannes C. Eichstaedt, Margaret L. Kern, Lukasz Dziurzynski, Stephanie M. Ramones, Megha Agrawal, Achal Shah, Michal Kosinski, David Stillwell, Martin E. P. Seligman, and Lyle H. Ungar. 2013. Personality, gender, and age in the language of social media: The open vocabulary approach. PLOS ONE, 8(9):1-16, Sept.

Carlo Strapparava and Alessandro Valitutti. 2004. WordNet-Affect: An affective extension of WordNet. In Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon.

United Nations Office, editor. 2014. World Drug Report. United Nations, New York.

U.S. Department of Justice. 2015. Drug of Abuse. Drug Enforcement Administration - U.S. Department of Justice.

Benjamin Van Durme. 2012. Streaming analysis of discourse participants. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 48-58.

Svitlana Volkova and Benjamin Van Durme. 2015. Online Bayesian models for personal analytics in social media.

Svitlana Volkova, Yoram Bachrach, Michael Armstrong, and Vijay Sharma. 2015. Inferring latent user properties from texts published in social media. In AAAI Conference on Artificial Intelligence, pages 4296-4297.

William Yang Wang, Fadi Biadsy, Andrew Rosenberg, and Julia Hirschberg. 2013. Automatic detection of speaker state: Lexical, prosodic, and phonetic approaches to level-of-interest and intoxication classification. Computer Speech and Language, 27:168-189.

Shuang-Hong Yang, Bo Long, Alex Smola, Narayanan Sadagopan, Zhaohui Zheng, and Hongyuan Zha. 2011. Like like alike: Joint friendship and interest propagation in social networks. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, pages 537-546. |
6,963,628 | Disambiguation of morphological analysis in Bantu languages | The paper describes problems in disambiguating the morphological analysis of Bantu languages by using Swahili as a test language. The main factors of ambiguity in this language group can be traced to the noun class structure on one hand and to the bi-directional word-formation on the other. In analyzing word-forms, the system applied utilizes SWATWOL, a morphological parsing program based on two-level formalism. Disambiguation is carried out with the latest version (April 1996) of the Constraint Grammar Parser (CGP). Statistics on ambiguity are provided. Solutions for resolving different types of ambiguity are presented and they are demonstrated by examples from corpus text. Finally, statistics on the performance of the disambiguator are presented. | [] | Disambiguation of morphological analysis in Bantu languages
Arvi Hurskainen
Department of Asian
Disambiguation of morphological analysis in Bantu languages
The paper describes problems in disambiguating the morphological analysis of Bantu languages by using Swahili as a test language. The main factors of ambiguity in this language group can be traced to the noun class structure on one hand and to the bi-directional word-formation on the other. In analyzing word-forms, the system applied utilizes SWATWOL, a morphological parsing program based on two-level formalism. Disambiguation is carried out with the latest version (April 1996) of the Constraint Grammar Parser (CGP). Statistics on ambiguity are provided. Solutions for resolving different types of ambiguity are presented and they are demonstrated by examples from corpus text. Finally, statistics on the performance of the disambiguator are presented.
Introduction
There are five principal factors in Bantu languages which contribute to ambiguous analysis of word-forms. First, nouns are grouped into more than ten marked noun classes. The marking of these classes extends across the noun phrase, whereby the noun governs the choice of markers in dependent constituents. Second, verbs inflect stem-initially and mark the subject, object, and relative referent by prefixes, whereby the actual form of each prefix is governed by the noun class of the noun it refers to. In addition, verb derivation also adds to the complexity of verbal morphology. Third, reduplication is a productive phenomenon. Because its accurate description in lexicon is not possible, alternative ways in handling it are discussed. Fourth, the majority of Bantu languages have a tone system, but rarely this is indicated in writing. This adds to morphological ambiguity. Fifth, various semantic functions of word-forms are also a source of ambiguity.
In this paper I shall discuss the points one and two by using Swahili as a test language.
1 Morphological analysis
The morphological analysis of Swahili is carried out by SWATWOL, which is based on the two-level formalism (Koskenniemi 1983). The application of this formalism to Swahili has been under process since 1987, and it has now, after having been tested with a corpus of one million words, reached a mature phase with a recall of 99.8% in average running text, and precision of close to 100%. The performance of SWATWOL corresponds to what is reported of ENGTWOL, the morphological parser of English (Voutilainen et al 1992; Tapanainen and Järvinen 1994), and SWETWOL, the morphological analyzer of Swedish (Karlsson 1992). SWATWOL uses a two-level rule system for describing morphophonological variation, as well as a lexicon with 288 sub-lexicons. Unlike in languages with right-branching word formation, where word roots can be grouped together into a root lexicon, here word roots have been divided into several sub-lexicons. Because SWATWOL has been described in detail elsewhere (Hurskainen 1992), only a sketchy description of its parts is given here.
SWATWOL rules
Two-level rules have been written mainly for handling morphophonological processes, which occur principally at morpheme boundaries. Part of such processes take place also in verbal extensions, whereby the quality of the stem vowel(s) defines the surface form of the suffix. The total number of rules is 18, part of them being combined rules. An example of a combined rule: Change lexical 'U' to surface 'w' iff there is 'k' on the left and a surface character belonging to the set 'Vo' on the right; or there is 't' on the left and a lexical diacritic '/1' on the right followed by a lexical 'a'.
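For illustration only, the surface effect of this combined rule can be mimicked in Python; the actual SWATWOL rule is written in the declarative two-level notation, and the handling of the diacritic '/1' and of the right-hand vowel context here is a simplifying assumption:

VO = set("aeiou")  # stand-in for the vowel set 'Vo' of the rule

def realize_U(lexical):
    # Realize lexical 'U' as surface 'w' in the two contexts named above;
    # elsewhere the symbol is left untouched.
    out = []
    for i, ch in enumerate(lexical):
        if ch == "U":
            left = lexical[i - 1] if i > 0 else ""
            right = lexical[i + 1:]
            if (left == "k" and right[:1] in VO) or \
               (left == "t" and right.startswith("/1a")):
                out.append("w")
                continue
        out.append(ch)
    return "".join(out)

print(realize_U("kUa"))    # -> kwa
print(realize_U("tU/1a"))  # -> tw/1a (other symbols handled elsewhere)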
SWATWOL lexicon
The SWATWOL lexicon is a tree, where the morphemes of Swahili are located so that each route from the root lexicon leads to a well-formed word-form.
The most complicated part of the lexicon is the description of verb-forms, which requires a total of 125 sub-lexicons. For describing verbs, there are a number of consecutive prefix and suffix 'slots', which may or may not be filled by morphemes. The verb root is in the middle, and verbal extensions used mainly for derivation are suffixed to the root.
A noun is composed of a class prefix and root. Noun roots are located in 22 separate sub-lexicons, and access to them is permitted from the corresponding class prefix(es). Adjectives are grouped according to whether they take class prefixes or not. Also numerals are grouped according to the same principle. The lexicon has a total of about 27,000 'words'.
Here is a simplified example of a sub-lexicon: this is a sub-lexicon with the name 'M/MI', containing prefixes of the noun classes 3 and 4. Each entry may have three parts, but only the middle part is compulsory. In the first entry, 'mU' is the lexical representation of a morpheme, and 'M/MIr' is the name of the sub-lexicon where the processing will continue. The third part within quotes is the output string.
In constructing the lexicon, underspecification of analysis was avoided. Although it may be used for decreasing the number of ambiguous readings (cf. Karlsson 1992), it leaves ambiguity within readings themselves in the form of underspecification, and it has to be resolved later in any case.
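To make the continuation-lexicon idea concrete, a toy Python model of the lookup is given below; the lexicon names, morphs and output tags are purely illustrative, not the actual SWATWOL entries:

# Each sub-lexicon maps a lexical morph to (continuation lexicon, output tag);
# a word-form is accepted if some route from ROOT to END consumes it fully.
LEXICON = {
    "ROOT":  {"m": ("M/MIr", "3/4-SG"), "mi": ("M/MIr", "3/4-PL")},
    "M/MIr": {"ti": ("END", "ti N")},   # a hypothetical noun root of classes 3/4
    "END":   {},
}

def analyses(form, lexicon="ROOT", tags=()):
    # Yield the tag sequence of every route that consumes `form` completely.
    if form == "" and lexicon == "END":
        yield " ".join(tags)
    for morph, (cont, tag) in LEXICON[lexicon].items():
        if form.startswith(morph):
            yield from analyses(form[len(morph):], cont, tags + (tag,))

print(list(analyses("mti")))   # -> ['3/4-SG ti N']
print(list(analyses("miti")))  # -> ['3/4-PL ti N']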
2 Extent of morphological ambiguity

For the purposes of writing and testing disambiguation rules, a corpus of about 10,000 words of prose text was compiled (Corpus 1). The text was analyzed with SWATWOL, and the results in regard to ambiguity are given in Table 1.
As can be seen in Table 1, about half of word-form tokens in Swahili are at least two-ways ambiguous. About one fifth of tokens are precisely two-ways ambiguous, and the share of three-ways and four-ways ambiguous tokens is almost equal, about 10%. The share of five-ways ambiguous tokens is 5.68%, but the number of still more ambiguous tokens decreases drastically. There are word-forms with more than 20 readings, the largest number in the corpus being 60 readings.
If we compare these numbers with those in Table 2 we note significant differences and similarities. Table 2 was constructed exactly in the same manner as Table 1, only the source text being different. Whereas in Table 1 a corpus of running text (Corpus 1) was used, in Table 2 the source text was a list of unique word-forms (Corpus 2).
The number of word-forms with more than one reading is almost equal in both corpora, slightly over 50%. The percentages in Table 2 decrease rather systematically the more readings a word-form has.
While there were more four-ways ambiguous word-forms (10.97%) than three-ways ones (9.12%) in Table 1, in Table 2 the numbers are as expected. The only unexpected result is the share of six-ways ambiguous words (3.44%), which is higher than the share of the five-ways ambiguous ones (2.94%). In Corpus 2, the high percentage of four-ways ambiguous readings found in Corpus 1 does not exist.
The ambiguity rate in Swahili is somewhat lower than in Swedish (60%, Berg 1978). It seems to correspond to that of English (Voutilainen et al 1992:5), although DeRose (1988) gives somewhat rpi 1979). While the reported ambiguity counted from word-form tokens is generally much higher than that counted from word-form types, in Swahili the difference is small. This is due to the fact that in addition to ambiguity found in several of the most common words, verb-forms are typically ambiguous, as are almost half of the nouns.

Karlsson (1994:23) suggests an inverse correlation between the number of unique word-forms and rate of ambiguity. Therefore, heavily inflecting languages would tend to produce unambiguous word-forms. Swahili does not seem to fully support this hypothesis, although the numbers in Tables 1 and 2 are not directly comparable with results of other studies. In the Swahili lexicon, underspecification was avoided, which adds to ambiguity.
Disambiguation with Constraint Grammar Parser
Morphological disambiguation as well as syntactic mapping is carried out with the Constraint Grammar Parser (CGP). Descriptions of its development phases are found in several publications (e.g. Karlsson 1990; Karlsson 1994a, 1994b; Karlsson et al 1994; Voutilainen et al 1992; Voutilainen and Tapanainen 1993; Tapanainen 1996). It sets off from the idea that rather than trying to write rules by pointing out the conditions necessary for the acceptance of a reading in an ambiguous case, it allows the writing of such rules that discard a certain reading as illegitimate. The rule system is typically a combination of deletion and selection rules. The morphological analyzer SWATWOL was so designed that it would be ideal for further processing with CGP. The output of SWATWOL contains such information as part-of-speech features, features for adjectives, verbs, adverbs, nouns, numerals, and pronouns, as well as information on noun class marking (also zero marking) wherever it occurs, etc. In the present application also syntactic tags are included into the morphological lexicon as far as the marking can be done unambiguously. The syntactic mapping of context-sensitive word-forms is left to the CGP.
In order to simplify disambiguation, fixed phrases, idioms, multi-word prepositions and non-ambiguous collocations are joined together already in the preprocessing phase of the text (e.g. mbele ya > mbele_ya 'in front of'), and the same constructions are written into the lexicon with corresponding analysis.
Constraint Grammar rule formalism
The subsequent discussion of the Constraint Grammar Parser is based on the formalism of Tapanainen (1996). A detailed description of an earlier version of CGP is in Karlsson (1994b). The CGP rule file has the following sections (optional ones in parentheses):
DELIMITERS
(PREFERRED-TARGET)
(SETS)
(MAPPINGS)
CONSTRAINTS
END
In DELIMITERS, those tags are listed which mark the boundary of context conditions. If the rule system tries to remove all readings of a cohort, the target listed in the section PREFERRED-TARGET is the one which survives. SETS is a section where groups of tags are defined. Syntactic parsing is carried out with rules located under the heading MAPPINGS. CONSTRAINTS contains constraint rules with the following schema:
[WORDFORM] OPERATION (target) [(context condition(s))]
WORDFORM can be any surface word-form, for which a rule will be written. OPERATION may have two forms: REMOVE and SELECT. These are self-explanatory. In TARGET is defined the concrete morphological tag (or sequence of tags), to which the operation is applied. A target may be also a set, which is defined in the SETS section. If the target is left without parentheses it is interpreted as a set. CONTEXT CONDITIONS is an optional part, but in most cases necessary. In it, conditions for the application of the rule are defined in detail. Context conditions are defined in relation to the target reading, which has the default position 0. Positive integers refer to the number of words to the right, and the negative ones to the left. In context conditions, reference can be made to any of the features or tags found in the unambiguous reading, e.g. (1C ADJ), or in the whole cohort, e.g. (1 ADJ). These references can be made either directly to a tag or indirectly through sets, which are defined in a special section (SETS) of the rule formalism.
Any context may also be negated by placing the key-word NOT at the beginning of the context clause. It is also possible to refer to more than one context in the same position.
If there is a need to define further conditions for a reading found by scanning (by using position markers *-1 or *1), the linking mechanism may be used. This can be done by adding the key-word LINK to the context, whereafter the new context follows. For example, the context condition (*-1 N LINK 1 PRON LINK 1 ADJ) reads: 'there is a noun (N) on the left followed by a pronoun (PRON) followed by an adjective (ADJ)'.
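As a rough illustration of how such constraints prune cohorts, consider the toy Python rendering below; real CGP rules also support sets, linking and scanning, all of which are omitted here, and the tags are simplified:

def apply_rule(sentence, i, op, target, offset, ctx_tag):
    # Apply one simplified rule to the cohort at position i: the rule fires
    # only if some reading at the context position carries ctx_tag.
    j = i + offset
    if not (0 <= j < len(sentence)) or \
       not any(ctx_tag in r for r in sentence[j]["readings"]):
        return
    if op == "SELECT":
        kept = [r for r in sentence[i]["readings"] if target in r]
    else:  # REMOVE
        kept = [r for r in sentence[i]["readings"] if target not in r]
    if kept:  # a rule may never empty a cohort
        sentence[i]["readings"] = kept

sentence = [
    {"form": "washiriki", "readings": [["1/2-PL", "N"]]},
    {"form": "wa", "readings": [["1/2-PL", "GEN-CON"], ["3/4-SG", "GEN-CON"],
                                ["11-SG", "GEN-CON"], ["1/2-SG", "GEN-CON"]]},
]
# "<wa>" SELECT (1/2-PL) (-1 NCL-2); with the set NCL-2 reduced to one tag.
apply_rule(sentence, 1, "SELECT", "1/2-PL", -1, "1/2-PL")
print(sentence[1]["readings"])  # -> [['1/2-PL', 'GEN-CON']]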
Order of rules
The algorithm allows a sequential rule order. This can be done by grouping the rules into separate sections. The sequential order of rules within a section does not guarantee that the rules are applied in the order where they appear. The rules of the first section are applied first. Any number of consecutive sections can be used. There are presently four sections of constraint rules in the rule file. Certain types of rules should be applied first, without giving a possibility to other, less clearly stated, rules to interfere. Typical of such first-level rules are those where disambiguation is done within a phrase structure. In intermediate sections there are rules which use larger structures for disambiguation. By first disambiguating noun phrases and genitive constructions, the use of otherwise too permissive rules becomes possible, when clear cases are already disambiguated. The disambiguation of verb-forms belongs to these middle levels. The risk of wrong interpretations decreases substantially by first disambiguating noun phrases and other smaller units.
The CGP of Swahili has presently a total of 656 rules in four different sections for disambiguation and 50 rules for syntactic mapping. So far about 600 hours have been used for writing and testing rules.
Disambiguation of a sample sentence

Below is a Swahili sample sentence after morphological analysis and after CG disambiguation. The sentence is:
Washiriki wa semina zote walitoka katika nchi za Afrika. (Participants of all seminars came from African countries.)
Sample sentence 1: after morphological analysis with SWATWOL, before disambiguation:

"<*washiriki>"
  "*shiriki" SBJN VFIN 1/2-PL2 OBJ V
  "*shiriki" SBJN VFIN 1/2-PL3 OBJ V
  "*shiriki" SBJN VFIN 1/2-PL3-SP V
  "*shiriki" 1/2-SG2-SP VFIN PR:a V
  "*shiriki" 3/4-SG-SP VFIN PR:a V
  "*shiriki" 11-SG-SP VFIN PR:a V
  "*shiriki" 1/2-PL3-SP VFIN PR:a V
  "*mshiriki" 1/2-PL N
"<wa>"
  "wa" SELFSTANDING SP
  "wa" 3/4-SG GEN-CON
  "wa" 11-SG GEN-CON
  "wa" 1/2-SG GEN-CON
  "wa" 1/2-PL GEN-CON
"<semina>"
  "semina" 9/10-O-SG N
  "semina" 9/10-O-PL N
"<zote>"
  "ote" 9/10-PL-SP PRON:ote
"<walitoka>"
  "toka" 1/2-SG2-SP VFIN PR:a 5/6-SG OBJ V SVO
  "toka" 3/4-SG-SP VFIN PR:a 5/6-SG OBJ V SVO
  "toka" 11-SG-SP VFIN PR:a 5/6-SG OBJ V SVO
  "toka" 1/2-PL3-SP VFIN PAST V SVO
  "toka" 1/2-PL3-SP VFIN PR:a 5/6-SG OBJ V SVO
"<katika>"
  "katika" IMP V SVO STAT
  "tika" NARR-COLLOQ:ka-a VFIN V SVO STAT
  "tika" NARR-COLLOQ:ka-a VFIN V STAT
  "katika" PREPOS
"<nchi>"
  "nchi" 9/10-NI-SG N
  "nchi" 9/10-NI-PL N
"<za>"
  "za" 9/10-PL GEN-CON
"<*afrika>"
  "afrika" PROPN SG
"<$.>"

Sample sentence 1 after disambiguation with CGP:

"<*washiriki>" S:816, 1099
  "*mshiriki" 1/2-PL N
"<wa>" S:412
  "wa" 1/2-PL GEN-CON
"<semina>" S:1433
  "semina" 9/10-O-PL N
"<zote>"
  "ote" 9/10-PL-SP PRON:ote
"<walitoka>" S:534
  "toka" 1/2-PL3-SP VFIN PAST V SVO
"<katika>" S:244
  "katika" PREPOS @ADVL
"<nchi>" S:1155
  "nchi" 9/10-NI-PL N
"<za>"
  "za" 9/10-PL GEN-CON
"<*afrika>"
  "afrika" PROPN SG
"<$.>"
The CG rules reduce the number of multiple readings so that optimally only one reading survives. Rule S:816 removes an object reading of the word-form. After that, a selection rule S:1099 is applied.
SELECT (1/2-PL N)
(1 NCL-2 + GEN-CON) ;
Select the noun reading of Ncl 1/2-PL if followed immediately by a genitive connector belonging to the set NCL-2. This description is equal to the grammatical rule. Other rules follow the same principle. E.g., the reading 1/2-PL GEN-CON is chosen for the analysis of wa on the basis of the Ncl of the preceding noun. The rule states:

"<wa>" SELECT (1/2-PL)
(-1 NCL-2) ;
Select Ncl 1/2-PL of the word 'wa' if in the preceding cohort there is a feature belonging to the set NCL-2.
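A SELECT rule of this kind can be pictured as discarding all competing readings once its context condition holds; the following Python fragment is a hypothetical illustration (cohorts, as in the sketch above, are lists of tag sets):

def select(cohort, target_tags, context_holds):
    # Keep only readings carrying all target tags, but never empty
    # the cohort; return True if any reading was removed.
    if not context_holds:
        return False
    kept = [reading for reading in cohort if target_tags <= reading]
    if kept and len(kept) < len(cohort):
        cohort[:] = kept
        return True
    return False

# "<wa>" SELECT (1/2-PL) (-1 NCL-2): keep the 1/2-PL reading of "wa"
# when the preceding cohort carries a feature from the set NCL-2.
wa = [{'wa', 'SELFSTANDING', 'SP'}, {'wa', '1/2-PL', 'GEN-CON'}]
select(wa, {'1/2-PL'}, context_holds=True)   # wa now has one reading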
Although both washiriki and wa are initially ambiguous, and the context reference in the rules does not extend beyond this pair of words, we get the correct result. This is because in both of the cohorts there is only one reading which refers to the same noun class.
The word semina is both SG and PL, and the following pronoun zote, which has the PL reading, solves the problem. The word nchi is disambiguated with a rule relying on the Ncl of the following genitive connector (GEN-CON).
The word katika has four readings. The grammatically correct way of disambiguating it is by referring to the following word:

"<katika>" SELECT (PREPOS)
(1 N OR INF OR PRON) ;

Select the reading PREPOS of "katika" if there is a noun or an infinitive of a verb or a pronoun in the following cohort.

Success rate and remaining problems of disambiguation
The CGP of Swahili was tested with two text corpora, which had not been used as test material in writing the rules: E. Kezilahabi's novel Mzingile (22,984 word-form tokens), and a collection of newspaper texts from the weekly paper Mzalendo, 1994 (49,969 word-form tokens). Test results are in Table 3. The parser performed best with newspaper texts, leaving ambiguity in 4.9% of tokens. The overall result has to be considered promising, given that the parser is still under development and that the rules are almost solely grammar-based.
The most common types of ambiguity still remaining are: noun vs. adverb, adjective vs. adverb, noun vs. conjunction, verb (imperative) vs. noun, and verb (infinitive) vs. noun. These typically occur in positions in a sentence where writing reliable rules is difficult. A fairly large part of the remaining ambiguity concerns the genitive connectors ya and wa, and possessive pronouns. They are generally in positions where the governing noun is beyond the current clause or sentence boundary on the left. For such cases, the rule syntax should allow the use of more distantly located information.
The vast majority of constraints are selection rules for resolving ambiguity based on homographic noun class agreement markers. It is possible to resolve most of this ambiguity by using contextual information.
Conclusion
The morphological analysis of Swahili tends to produce a comparatively large number of ambiguous readings. The noun class structure, coupled with class agreement marking in dependent constituents, contributes significantly to ambiguity. The phenomenon is particularly evident in verb structures, where different sets of noun class markers add to the ambiguity of the same verb-form. It is assumed that the solutions suggested here apply also to other Bantu languages.
The ambiguity resolution is based on the Constraint Grammar formalism, which allows the use of grammatically motivated rules. The maximal context in the present application is a sentence, but there is a need for extending it over sentence boundaries. Constraint rules are grouped into sections, so that the most obvious cases are disambiguated first. A parser with only grammar-based rules disambiguates about 95% of Swahili word-forms from running text, which initially has about 50% of the tokens ambiguous. The remaining ambiguity is hard to resolve fully safely, but probabilistic and heuristic techniques are likely to still improve the performance.
Table 1: Number of readings of word-forms in the Swahili test corpus (Corpus 1). N(r) = number of readings, N(t) = number of word-form tokens, % = percent of the total, cum-% = cumulative percentage.

N(r)         N(t)    %      cum-%
1            4653    48.74  48.74
2            2061    21.59  70.33
3             871     9.12  79.55
4            1047    10.97  90.52
5             542     5.68  96.20
6             162     1.70  97.90
7              49     0.51  98.41
8              22     0.23  98.64
9              34     0.36  99.00
10             33     0.35  99.35
11 or more     72     0.75  100.00
Table 2: Number of readings of word-forms in the Swahili list of unique word-forms (Corpus 2). N(r) = number of readings, N(t) = number of word-form tokens, % = percent of the total, cum-% = cumulative percentage.

N(r)         N(t)    %      cum-%
1            4960    48.13  48.13
2            2294    23.99  72.12
3            1031    10.78  82.90
4             568     5.94  88.84
5             281     2.94  91.78
6             329     3.44  95.22
7             102     1.07  96.29
8              88     0.92  97.21
9              85     0.89  98.10
10             34     0.36  98.46
11 or more    148     1.54  100.00

lower figures, 11% for word-form types and 40% for word-form tokens. In Finnish the corresponding figures are still lower, 3.3% for word-form types and 11.2% for word-form tokens (Niemikorpi, 1979).
Table 3: Ambiguity after processing with the Swahili CGP. N(t) = number of word-form tokens, N(w) = number of unique word-forms, amb-(t) = ambiguity in tokens, amb-(w) = ambiguity in unique word-forms.

Ambiguity   Mzingile   Mzalendo
N(t)        22,984     49,968
N(w)         5,914      9,359
amb-(t)      1,837      2,463
%             7.99       4.93
amb-(w)        721        831
%            12.19       8.88
Berg, Sture. 1978. Olika lika ord. Svenskt homograflexikon. [Different, similar words. Dictionary of Swedish homographs.] Stockholm: Almqvist and Wiksell International.
DeRose, Steven. 1988. Grammatical Category Disambiguation by Statistical Optimization. Computational Linguistics, 14:31-39.
Hurskainen, Arvi. 1992. A Two-Level Computer Formalism for the Analysis of Bantu Morphology: An Application to Swahili. Nordic Journal of African Studies 1(1):87-122.
Karlsson, Fred. 1990. Constraint Grammar as a framework for parsing running text. In Hans Karlgren (ed.), COLING-90. Papers presented to the 13th International Conference on Computational Linguistics, volume 3, pp. 168-173, Helsinki.
Karlsson, Fred. 1992. SWETWOL: A comprehensive morphological analyzer for Swedish. Nordic Journal of Linguistics, 15:1-45.
Karlsson, Fred. 1994a. Designing a parser for unrestricted text. In Karlsson et al. (eds.), Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text, pp. 1-40. Mouton de Gruyter, Berlin.
Karlsson, Fred. 1994b. The formalism and environment of Constraint Grammar Parsing. In Karlsson et al. (eds.), Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text, pp. 41-88. Mouton de Gruyter, Berlin.
Karlsson, F., A. Voutilainen, J. Heikkilä, and A. Anttila (eds.). 1994. Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter, Berlin.
Koskenniemi, Kimmo. 1983. Two-level morphology: A general computational model for word-form recognition and production. Publications No. 11, Department of General Linguistics, University of Helsinki.
Niemikorpi, Antero. 1979. Automatic Data Processing in the Compilation of Word Lists. In Kaisa Häkkinen and Fred Karlsson (eds.), Suomen kielitieteellisen yhdistyksen julkaisuja [Publications of the Linguistic Association of Finland], 2:117-126.
Tapanainen, Pasi. 1996. The Constraint Grammar Parser CG-2. Publications No. 27, Department of General Linguistics, University of Helsinki. (ISBN 951-45-7331-5)
Tapanainen, P. and Järvinen, T. 1994. Syntactic analysis of natural language using linguistic rules and corpus-based patterns. In COLING-94. Papers presented to the 15th International Conference on Computational Linguistics, Vol. 1, pp. 629-634, Kyoto.
Voutilainen, A., J. Heikkilä, and A. Anttila. 1992. Constraint Grammar of English - A Performance-Oriented Introduction. Publications No. 21, Department of General Linguistics, University of Helsinki.
Voutilainen, A. and Tapanainen, P. 1993. Ambiguity resolution in a reductionistic parser. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics (EACL-93), pp. 394-403, Utrecht, Netherlands.
933,983 | CSNIPER Annotation-by-query for non-canonical constructions in large corpora | We present CSNIPER (Corpus Sniper), a tool that implements (i) a web-based multiuser scenario for identifying and annotating non-canonical grammatical constructions in large corpora based on linguistic queries and (ii) evaluation of annotation quality by measuring inter-rater agreement. This annotationby-query approach efficiently harnesses expert knowledge to identify instances of linguistic phenomena that are hard to identify by means of existing automatic annotation tools. | [] | CSNIPER Annotation-by-query for non-canonical constructions in large corpora
Association for Computational Linguistics, July 2012

Richard Eckart de Castilho, Iryna Gurevych, Sabine Bartsch
Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt
English Linguistics, Department of Linguistics and Literary Studies, Technische Universität Darmstadt

CSNIPER Annotation-by-query for non-canonical constructions in large corpora

Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju, Republic of Korea, July 2012
http://www.linglit.tu-darmstadt.de
We present CSNIPER (Corpus Sniper), a tool that implements (i) a web-based multiuser scenario for identifying and annotating non-canonical grammatical constructions in large corpora based on linguistic queries and (ii) evaluation of annotation quality by measuring inter-rater agreement. This annotationby-query approach efficiently harnesses expert knowledge to identify instances of linguistic phenomena that are hard to identify by means of existing automatic annotation tools.
Introduction
Linguistic annotation by means of automatic procedures, such as part-of-speech (POS) tagging, is a backbone of modern corpus linguistics; POS tagged corpora enhance the possibilities of corpus query. However, many linguistic phenomena are not amenable to automatic annotation and are not readily identifiable on the basis of surface features. Non-canonical constructions (NCCs), which are the use-case of the tool presented in this paper, are a case in point. NCCs, of which cleft-sentences are a well-known example, raise a number of issues that prevent their reliable automatic identification in corpora. Yet, they warrant corpus study due to the relatively low frequency of individual instances, their deviation from canonical construction patterns and frequent ambiguity. This makes them hard to distinguish from other, seemingly similar constructions. Expert knowledge is thus required to reliably identify and annotate such phenomena in sufficiently large corpora like the 100 mil. word British National Corpus (BNC Consortium, 2007). This necessitates manual annotation which is time-consuming and error-prone when carried out by individual linguists.
To overcome these issues, CSNIPER implements a web-based multi-user annotation scenario in which linguists formulate and refine queries that identify a given linguistic construction in a corpus and assess the query results to distinguish instances of the phenomenon under study (true positives) from such examples that are wrongly identified by the query (false positives). Each expert linguist thus acts as a rater rather than an annotator. The tool records assessments made by each rater. A subsequent evaluation step measures the inter-rater agreement. The actual annotation step is deferred until after this evaluation in order to achieve high annotation confidence.

Figure 1: Annotation-by-query workflow

CSNIPER implements an annotation-by-query approach which entails the following interlinking functionalities (see fig. 1):
Query development: Corpus queries can be developed and refined within the tool. Based on query results which are assessed and labeled by the user, queries can be systematically evaluated and refined for precision. This transfers some of the ideas of relevance feedback, which is a common method of improving search results in information retrieval, to a linguistic corpus query system.

Assessment: Query results are presented to the user as a list of sentences with optional additional context; the user assesses and labels each sentence as representing or not representing an instance of the linguistic phenomenon under study. The tool implements a function that allows the user to comment on decisions and to temporarily mark sentences with uncertain assessments for later review.
Evaluation: Evaluation is a central functionality of CSNIPER serving three purposes. 1) It integrates with the query development by providing feedback to refine queries and improve query precision.
2) It provides information on sentences not labeled consistently by all users, which can be used to review the assessments. 3) It calculates the inter-rater agreement which is used in the corpus annotation step to ensure high annotation confidence.
Corpus annotation: By assessing and labeling query results as correct or wrong, raters provide the tool with their annotation decisions. CSNIPER annotates the corpus with those annotation decisions that exceed a certain inter-rater agreement threshold.
This annotation-by-query approach of querying, assessing, evaluating and annotating allows multiple distributed raters to incrementally improve query results and achieve high quality annotations. In this paper, we show how such an approach is well-suited for annotation tasks that require manual analysis over large corpora. The approach is generalizable to any kind of linguistic phenomena that can be located in corpora on the basis of queries and require manual assessment by multiple expert raters.
In the next two sections, we provide a more detailed description of the use-case driving the development of CSNIPER (sect. 2) and discuss why existing tools do not provide viable solutions (sect. 3). Sect. 4 discusses CSNIPER and sect. 5 draws some conclusions and offers an outlook on the next steps.
Non-canonical grammatical constructions
The initial purpose of CSNIPER is the corpus-based study of so-called non-canonical grammatical constructions (NCC) (examples (2) -(5) below):
1. The media was now calling Reagan the frontrunner. (canonical)
2. It was Reagan whom the media was now calling the frontrunner. (it-cleft)
3. It was the media who was now calling Reagan the frontrunner. (it-cleft)
4. It was now that the media were calling Reagan the frontrunner. (it-cleft)
5. Reagan the media was now calling the frontrunner. (inversion)
NCCs are linguistic constructions that deviate in characteristic ways from the unmarked lexicogrammatical patterning and informational ordering in the sentence. This is exemplified by the constructions of sentences (2) -(5) above. While expressing the same propositional content, the order of information units available through the permissible grammatical constructions offers interesting insights into the constructional inventory of a language. It also opens up the possibility of comparing seemingly closely related languages in terms of the sets of available related constructions as well as the relations between instances of canonical and noncanonical constructions.
In linguistics, a cleft sentence is defined as a complex sentence that expresses a single proposition where the clefted element is co-referential with the following clause. E.g., it-clefts are comprised of the following constituents:
dummy subject (it) + main verb (to be) + clefted element + clause
The NCCs under study pose interesting challenges both from a linguistic and a natural language processing perspective. Due to their deviation from the canonical constructions, they come in a variety of potential construction patterns as exemplified above. Non-canonical constructions can be expected to be individually rarer in any given corpus than their canonical counterparts. Their patterns of usage and their discourse functions have not yet been described exhaustively, especially not in representative corpus studies because they are notoriously hard to identify without suitable software. Their empirical distribution in corpora is thus largely unknown.
A major task in recognizing NCCs is distinguishing them from structurally similar constructions with default logical and propositional content. An example of a particular difficulty from the domain of it-clefts are anaphoric uses of it as in (6) below that do not refer forward to the following clause, but are the antecedents of entities previously introduced in the context of preceding sentences. Other issues arise in cases of true relative clauses as exemplified in (7) below:

6. London will be the only capital city in Europe where rail services are expected to make a profit,' he added. It is a policy that could lead to economic and environmental chaos. [BNC: A9N-s400]
7. It is a legal manoeuvre that declined in currency in the '80s. [BNC: B1L-s576]

Further examples of NCCs apart from the it-clefts addressed in this paper are wh-clefts and their subtypes, all-clefts, there-clefts, if-because-clefts and demonstrative clefts as well as inversions. All of these are as hard to identify in a corpus as it-clefts.
The linguistic aim of our research is a comparison of non-canonical constructions in English and German. Research on these requires very large corpora due to the relatively low frequency of the individual instances. Due to the ambiguous nature of many NCC candidates, automatically finding them in corpora is difficult. Therefore, multiple experts have to manually assess candidates in corpora.
Our approach does not aim at the exhaustive annotation of all NCCs. The major goal is to improve the understanding of the linguistic properties and usage of NCCs. Furthermore, we define a gold standard to evaluate algorithms for automatic NCC identification. In our task, the total number of NCCs in any given corpus is unknown. Thus, while we can measure the precision of queries, we cannot measure their recall. To address this, we exhaustively annotate a small part of the corpus and extrapolate the estimated number of total NCC candidates.
In summary, the requirements for a tool to support multi-user annotation of NCCs are as follows:
1. querying large linguistically pre-processed corpora and query refinement
2. assessment of sentences that are true instances of NCCs in a multi-user setting
3. evaluation of inter-rater agreement and query precision
In the following section, we review previous work to support linguistic annotation tasks.
Related work
We differentiate three categories of linguistic tools which all partially fulfill our requirements: querying tools, annotation tools, and transformation tools.
Linguistic query tools: Such tools allow to query a corpus using linguistic features, e.g. part-ofspeech tags. Examples are ANNIS2 (Zeldes et al., 2009) and the IMS Open Corpus Workbench (CWB) (Christ, 1994). Both tools provide powerful query engines designed for large linguistically annotated corpora. Both are server-based tools that can be used concurrently by multiple users. However, they do not allow to assess the query results.
Linguistic annotation tools: Such tools allow the user to add linguistic annotations to a corpus. Examples are MMAX2 (Müller and Strube, 2006) and the UIMA CAS Editor 1 . These tools typically display a full document for the user to annotate. As NCCs appear only occasionally in a text, such tools cannot be effectively applied to our task, as they offer no linguistic query capabilities to quickly locate potential NCCs in a large corpus.
Linguistic transformation tools: Such tools allow the creation of annotations using transformation rules. Examples are TextMarker (Kluegl et al., 2009) and the UAM CorpusTool (O'Donnell, 2008). A rule has the form category := pattern and creates new annotation of the type category on any part of a text matching pattern. A rule for the annotation of passive clauses in the UAM CorpusTool could be passive-clause := clause + containing be% participle. These tools do not support the assessment of the results, though. In contrast to the querying tools, transformation tools are not specifically designed to operate efficiently on large corpora. Thus, they are hardly productive for our task, which requires the analysis of large corpora.
CSNIPER
We present CSNIPER, an annotation tool for noncanonical constructions. Its main features are: Annotation-by-query -Sentences potentially containing a particular type of NCC are retrieved using a query. If the sentence contains the NCC of interest, the user manually labels it as correct and otherwise wrong. Annotations are generated based on the users' assessments.
Distributed multi-user setting -Our web-based tool supports multiple users concurrently assessing query results. Each user can only see and edit their own assessments and has a personal query history.
Evaluation -The evaluation module provides information on assessments, number of annotated instances, query precision and inter-rater agreement.
Implementation and data
CSNIPER is implemented in Java and uses the CWB as its linguistic search engine (cf. sect. 3). Assessments are stored in a MySQL database. Currently, the British National Corpus (BNC) is used in our study. Apache UIMA and DKPro Core 2 are used for linguistic pre-processing, format conversion, and to drive the indexing of the corpora. In particular, DKPro Core includes a reader for the BNC and a writer for the CWB. As the BNC does not carry lemma annotations, we add them using the DKPro TreeTagger (Schmid, 1994) module.
Query (Figure 2)
The user begins by selecting a 1 corpus and a 2 construction type (e.g. It-Cleft). A query can be chosen from a 3 list of examples, from the 4 personal query history, or a new 5 query can be entered. The query is applied to find instances of that construction (e.g. "It" /VCC[] /PP[] /RC[]). After pressing the 6 Submit query button, the tool presents the user with a KWIC view of the query results ( fig. 3). At this point, the user may choose to refine and re-run the query. As each user may use different queries, they will typically assess different sets of query results. This can yield a set of sentences labeled by a single user only. Therefore, the tool can display those sentences for assessment that other users have assessed, but the current user has not. This allows getting labels from all users for every NCC candidate.
Assessment (Figure 3)
If the query results match the expectation, the user can switch to the assessment mode by clicking the 7 Begin assessment button. At this point, an AnnotationCandidate record is created in the database for each sentence unless a record is already present. These records contain the offsets of the sentence in the original text, the sentence text and the construction type. In addition, an AnnotationCandidateLabel record is created for each sentence to hold the assessment to be provided by the user. In the assessment mode, an additional 8 Label column appears in the KWIC view. Clicking in this column cycles through the labels correct, wrong, check and nothing. When the user is uncertain, the label check can be used to mark candidates for later review. The view can be 9 filtered for those sentences that need to be assessed, those that have been assessed, or those that have been labeled with check.
A 10 comment can be left to further describe difficult cases or to justify decisions. All changes are immediately saved to the database, so the user can stop assessing at any time and resume the process later.
The proper assessment of a sentence as an instance of a particular construction type sometimes depends on the context found in the preceding and following sentences. For this purpose, clicking on the 11 book icon in the KWIC view displays the sentence in its larger context ( fig. 4). POS tags are shown in the sentence to facilitate query refinement.
Evaluation (Figure 5)
The evaluation function provides an overview of the current assessment state (fig. 5). We support two evaluation views: by construction type and by query.
By construction type: In this view, one or more 12 corpora, 13 types, and 14 users can be selected for evaluation. For these, all annotation candidates and the respective statistics are displayed. It is possible to 15 filter for correct, wrong, disputed, incompletely assessed, and unassessed candidates. A candidate is disputed if it is not labeled consistently by all selected users. A candidate is incompletely assessed if at least one of the selected users labeled it and at least one other did not. Investigating disputed cases and 16 inter-rater agreement per type using Fleiss' Kappa (Fleiss, 1971) are the main uses of this view. The inter-rater agreement is calculated using only candidates labeled by all selected users.

By query: In this view, query precision and assessment completeness are calculated for a set of 17 queries and 18 users. The query precision is calculated from the labeled candidates as:
precision = |TP| / (|TP| + |FP|)
We treat a candidate as a true positive (TP) if: 1) the number of correct labels is larger than the number of wrong labels; 2) the ratio of correct labels compared to the number of raters exceeds a given 19 threshold. Candidates are conversely treated as false positives (FPs) if the number of wrong labels is larger and the threshold is exceeded. The threshold controls the confidence of the TP and, thus, of the annotations generated from them (cf. sect. 4.5). If a candidate is neither TP nor FP, it is unknown (UNK). When calculating precision, UNK candidates are counted as FP. The estimated precision is the precision to be expected if TP and FP are equally distributed over the set of candidates. It takes into account only the currently known TP and FP and ignores the UNK candidates. Both values are the same once all candidates have been labeled by all users.
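A small sketch of this thresholded majority vote and the two precision figures (a simplified illustration with our own names, not CSNIPER's internals):

def classify(labels, n_raters, threshold):
    # labels: the 'correct'/'wrong' assessments gathered for one candidate
    correct, wrong = labels.count('correct'), labels.count('wrong')
    if correct > wrong and correct / n_raters > threshold:
        return 'TP'
    if wrong > correct and wrong / n_raters > threshold:
        return 'FP'
    return 'UNK'

def precision_scores(candidates, n_raters, threshold):
    counts = {'TP': 0, 'FP': 0, 'UNK': 0}
    for labels in candidates:
        counts[classify(labels, n_raters, threshold)] += 1
    tp, fp, unk = counts['TP'], counts['FP'], counts['UNK']
    # UNK counts as FP for precision, and is ignored for the estimate
    precision = tp / (tp + fp + unk) if tp + fp + unk else 0.0
    estimated = tp / (tp + fp) if tp + fp else 0.0
    return precision, estimated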
Annotation
When the assessment process is complete, corpus annotations can be generated from the assessed candidates. Here, we employ the thresholded majority vote approach that we also use to determine the TP/FP in sect. 4.4. Annotations for the respective NCC type are added directly to the corpus. The augmented corpus can be used in further exploratory work. Alternatively, a file with all assessed candidates can be generated to serve as training data for identification methods based on machine learning.
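For completeness, the inter-rater agreement used in the evaluation view can be computed with the standard Fleiss' Kappa formula (Fleiss, 1971); the sketch below is our own illustration, not code from the tool, and assumes every item was labeled by all n raters:

def fleiss_kappa(ratings, categories=('correct', 'wrong')):
    # ratings: one list of n labels per item
    n, N = len(ratings[0]), len(ratings)
    totals = {c: 0 for c in categories}
    p_sum = 0.0
    for labels in ratings:
        counts = {c: labels.count(c) for c in categories}
        for c in categories:
            totals[c] += counts[c]
        # per-item agreement P_i
        p_sum += (sum(k * k for k in counts.values()) - n) / (n * (n - 1))
    p_bar = p_sum / N
    p_e = sum((totals[c] / (N * n)) ** 2 for c in categories)
    return 1.0 if p_e == 1.0 else (p_bar - p_e) / (1.0 - p_e)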
Conclusions
We have presented CSNIPER, a tool for the annotation of linguistic phenomena whose investigation requires the analysis of large corpora due to a relatively low frequency of instances and whose identification requires expert knowledge to distinguish them from other similar constructions. Our tool integrates the complete functionality needed for the annotation-by-query workflow. It provides distributed multi-user annotation and evaluation. The feedback provided by the integrated evaluation module can be used to systematically refine queries and improve assessments. Finally, high-confidence annotations can be generated from the assessments. The annotation-by-query approach can be generalized beyond non-canonical constructions to other linguistic phenomena with similar properties. An example could be metaphors, which typically also appear with comparatively low frequency and require expert knowledge to be annotated. We plan to integrate further automatic annotations and query possibilities to support such further use-cases.
Figure 2: Search form

Figure 4: Sentence context view with POS tags

Figure 5: Evaluation by query and by NCC type
Figure 3: KWIC view of query results and assessments
1 http://uima.apache.org/
2 http://www.ukp.tu-darmstadt.de/research/current-projects/dkpro/
Acknowledgments

We would like to thank Erik-Lân Do Dinh, who assisted in implementing CSNIPER, as well as Gert Webelhuth and Janina Rado for testing and providing valuable feedback. This work has been supported by the Hessian research excellence program "Landes-Offensive zur Entwicklung Wissenschaftlich-ökonomischer Exzellenz" (LOEWE) as part of the research center "Digital Humanities" and by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806. Data cited herein have been extracted from the British National Corpus, distributed by Oxford University Computing Services on behalf of the BNC Consortium. All rights in the texts cited are reserved.
BNC Consortium. 2007. The British National Corpus, version 3 (BNC XML Edition). Distributed by Oxford University Computing Services p.p. the BNC Consortium, http://www.natcorp.ox.ac.uk/.
Oliver Christ. 1994. A modular and flexible architecture for an integrated corpus query system. In Proc. of the 3rd Conference on Computational Lexicography and Text Research (COMPLEX'94), pages 23-32, Budapest, Hungary, Jul.
Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378-381. American Psychological Association, Washington, DC.
Peter Kluegl, Martin Atzmueller, and Frank Puppe. 2009. TextMarker: A tool for rule-based information extraction. In Christian Chiarcos, Richard Eckart de Castilho, and Manfred Stede, editors, Proc. of the Biennial GSCL Conference 2009, 2nd UIMA@GSCL Workshop, pages 233-240. Gunter Narr Verlag, Sep.
Christoph Müller and Michael Strube. 2006. Multi-level annotation of linguistic data with MMAX2. In Sabine Braun, Kurt Kohn, and Joybrato Mukherjee, editors, Corpus Technology and Language Pedagogy: New Resources, New Tools, New Methods, pages 197-214. Peter Lang, Frankfurt am Main, Germany, Aug.
Mick O'Donnell. 2008. The UAM CorpusTool: Software for corpus annotation and exploration. In Carmen M. Bretones Callejas et al., editors, Applied Linguistics Now: Understanding Language and Mind / La Lingüística Aplicada Hoy: Comprendiendo el Lenguaje y la Mente, pages 1433-1447. Universidad de Almería, Almería.
Helmut Schmid. 1994. Improvements in part-of-speech tagging with an application to German. In Proc. of Int. Conference on New Methods in Language Processing, pages 44-49, Manchester, UK, Sep.
Amir Zeldes, Julia Ritz, Anke Lüdeling, and Christian Chiarcos. 2009. ANNIS: A search tool for multi-layer annotated corpora. In Proc. of Corpus Linguistics 2009, Liverpool, UK, Jul.
13,244,912 | Chinese Word Segmentation Based on Contextual Entropy | Chinese is written without word delimiters so word segmentation is generally considered a key step in processing Chinese texts. This paper presents a new statistical approach to segment Chinese sequences into words based on contextual entropy on both sides of a bigram. It is used to capture the dependency with the left and right contexts in which a bigram occurs. Our approach tries to segment by finding the word boundaries instead of the words. Experimental results show that it is effective for Chinese word segmentation. | [
977159,
8032424,
3864841,
13539149,
5651543
] | Chinese Word Segmentation Based on Contextual Entropy
Jin Hu Huang
David Powers
School of Informatics and Engineering, Flinders University of South Australia, GPO Box 2100, Adelaide 5001, South Australia
Chinese Word Segmentation Based on Contextual Entropy
Chinese is written without word delimiters so word segmentation is generally considered a key step in processing Chinese texts. This paper presents a new statistical approach to segment Chinese sequences into words based on contextual entropy on both sides of a bigram. It is used to capture the dependency with the left and right contexts in which a bigram occurs. Our approach tries to segment by finding the word boundaries instead of the words. Experimental results show that it is effective for Chinese word segmentation.
Introduction
Unlike English, there is no explicit word boundary in Chinese text. Chinese words can comprise one, two, three or more characters without delimiters. But almost all techniques for Chinese language processing, including machine translation, information retrieval and natural language understanding, are based on words. Word segmentation is a key step in Chinese language processing.
Several approaches have been developed for Chinese word segmentation. In general two main approaches are widely used: the statistical approach (Lua and Gan, 1994; Sproat and Shih, 1990; Teahan, Wen, McNab and Witten, 2000; Peng and Schuurmans, 2001) and the lexicon-based approach (Yeh and Lee, 1991; Palmer, 1997; Cheng, Yong and Wong, 1999).
Some statistical approaches are based on mutual information (Sproat and Shih, 1990), which only captures the dependency among the characters of a word. Some need a large pre-tagged corpus for training (Teahan, Wen, McNab and Witten, 2000), which is too expensive to construct at present. Rule-based approaches require a pre-defined word list (dictionary, or lexicon); the coverage of the dictionary is critical for these approaches. Many researchers use a combination of approaches (Nie, Jin and Hannan, 1994). These are supervised approaches that require extensive human involvement. Some (Sproat and Shih, 1990; de Marcken, 1996; Peng and Schuurmans, 2001) used unsupervised approaches and required little human intervention.
It has long been known that contextual information can be used for segmentation (Harris 1955). Dai, Khoo and Loh (1999) used weighted document frequency as contextual information for Chinese word segmentation. Zhang, Gao and Zhou (2000) used context dependency for word extraction. Tung and Lee (1994) used contextual entropy to identify unknown Chinese words. Chang, Lin & Su (1995) and Ponte & Croft (1996) used contextual entropy for automatic lexical acquisition. Hutchens & Alder (1998) and Kempe (1999) used contextual entropy to detect the separator in English and German corpora.
In this paper we present a simple, purely statistical approach using contextual entropy for word segmentation. Details about our approach are given in sections 2 and 3.
Contextual Entropy
We use a Markov model to estimate the probabilities of symbols of a corpus. The probability of a symbol w with respect to this model M and to a context c can be estimated by:
p(w | M, c) = f(w, M, c)
The information of a symbol w with respect to the model M and to a context c is defined by:
I(w | M, c) = -\log_2 p(w | M, c)
The entropy of a context c with respect to this model M is defined by:
H(M, c) = \sum_{w \in \Sigma} p(w | M, c) I(w | M, c)
This entropy measures the uncertainty about the next symbol after having seen the context c. It will be low if one particular symbol is expected to occur with a high probability. Otherwise it will be high if the model has no "idea" what kind of symbol will follow the context. Figure 1 plots the contextual entropy over an example Chinese sentence glossed as "The two world wars that happened this century brought great disasters to human beings, including China."
Monitoring the entropy in Figure 1 above shows that regions of high entropy correspond with word boundaries. Given the left context, a word boundary will follow the context; given the right context, a boundary is followed by the context. In other words, the beginning and the end of a word are often marked by high entropy, as any symbol can follow a boundary and any symbol can occur before a boundary.
Contextual entropy finds a left boundary if there is a high branching factor (perplexity & choice) to the left, and a right boundary if there is a high branching factor to the right.
Algorithm
Contextual Entropy
To find Chinese words we look for character sequences that are stable in the corpus in the sense that the components of a word are strongly correlated but appear in various contexts in the corpus. Contextual entropy among components of a word is low. High entropy appears at word boundaries.
We calculate both left and right contextual entropy values for each bigram occurring in the corpus.
LH(x_1, x_2) = -\sum_{x_3 \in \Sigma} p(x_3 | x_1, x_2) \log_2 p(x_3 | x_1, x_2)

RH(x_2, x_3) = -\sum_{x_1 \in \Sigma} p(x_1 | x_2, x_3) \log_2 p(x_1 | x_2, x_3)
We only store positive contextual entropy values. An entropy of zero indicates that there is no boundary before or after the context. We assume the value for bigrams which do not appear in the corpus is zero, as we can still predict the boundary according to the left or right adjacent context. This saves a lot of space, since bigrams with zero value need not be stored.
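As an illustration of the definitions above (our sketch, not the authors' code), the two entropy tables can be built from raw text roughly as follows:

import math
from collections import Counter, defaultdict

def entropy(counter):
    total = sum(counter.values())
    return -sum((c / total) * math.log2(c / total) for c in counter.values())

def contextual_entropy_tables(text):
    # LH[(x1, x2)]: entropy of the character following the bigram
    # (bigram as left context); RH[(x2, x3)]: entropy of the character
    # preceding the bigram (bigram as right context).
    right_of, left_of = defaultdict(Counter), defaultdict(Counter)
    for x1, x2, x3 in zip(text, text[1:], text[2:]):
        right_of[x1, x2][x3] += 1
        left_of[x2, x3][x1] += 1
    LH = {bg: h for bg, c in right_of.items() if (h := entropy(c)) > 0}
    RH = {bg: h for bg, c in left_of.items() if (h := entropy(c)) > 0}
    return LH, RH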
From Figure 1 above we know that there is a word boundary at a peak for both entropy values. On the contrary, there is no boundary at a trough. For a punctuation mark or a Chinese word marker, there is a peak preceding it given the right context and a peak following it given the left context. In other words, after having seen a punctuation mark or a word marker we do not know what occurs before and after it. This is very useful for detecting punctuation marks and word markers. Most other work did not treat punctuation as an unknown character (Peng and Schuurmans, 2001; Dai, Khoo and Loh, 1999) or could not detect word markers well based on statistical methods (Ge, Pratt and Smyth, 1999). They treated punctuation marks or characters as separators for sentences.
In order to segment the text we simply need to find the word boundaries. Across a word boundary there is a significant change in the contextual entropy. We apply the following thresholds to determine whether there is a word boundary between C and D for a string ABCDEF.
1. LH_BC - LH_AB > h1
2. LH_BC - LH_CD > h2
3. RH_DE - RH_EF > h3
4. RH_DE - RH_CD > h4
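Using the entropy tables sketched earlier, the boundary test between C and D could be combined as in the best-performing OR(AND(1,2), AND(3,4)) setting reported in the experiments; the threshold values and lookup defaults here are illustrative:

def is_boundary(LH, RH, a, b, c, d, e, f, h1=2, h2=2, h3=2, h4=2):
    # unseen bigrams default to entropy 0, as assumed in the paper
    lh = lambda x, y: LH.get((x, y), 0.0)
    rh = lambda x, y: RH.get((x, y), 0.0)
    cond1 = lh(b, c) - lh(a, b) > h1
    cond2 = lh(b, c) - lh(c, d) > h2
    cond3 = rh(d, e) - rh(e, f) > h3
    cond4 = rh(d, e) - rh(c, d) > h4
    return (cond1 and cond2) or (cond3 and cond4)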
For each word marker or punctuation mark, there is a boundary before and after it. We call these function characters and apply the following thresholds to determine whether C is a function character in the string ABCDE.

5. LH_BC - LH_AB > h5
6. LH_BC - LH_CD > h6
7. RH_CD - RH_DE > h7
8. RH_CD - RH_BC > h8

where LH is the left contextual entropy, RH is the right contextual entropy, and h1, h2, ..., h8 are the threshold values.

For a boundary between C and D, the contextual entropy given the left context BC or the right context DE is very high. We also test whether there is a single threshold separating boundaries from non-boundaries:

11. LH_BC > h9
12. RH_DE > h10
13. LH_BC + RH_DE > h11
Mutual Information
The work by Sproat and Shih (1990) has a similar goal using a different measure, Mutual Information.
MI(x, y) = \log_2 \frac{p(x, y)}{p(x) p(y)}
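A small sketch of estimating this pointwise mutual information from corpus counts (our illustration):

import math
from collections import Counter

def mutual_information(text):
    unigrams = Counter(text)
    bigrams = Counter(zip(text, text[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    return {
        (x, y): math.log2((c / n_bi) /
                          ((unigrams[x] / n_uni) * (unigrams[y] / n_uni)))
        for (x, y), c in bigrams.items()
    }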
From Fig. 1 we know that there is high mutual information between characters in a word and low mutual information across a boundary. They found that a pair of adjacent characters with mutual information greater than some threshold (2.5) is a word and grouped them together, iterating until there were no more characters to group.
We formulate this in our model as well and consider it on its own and in combination with contextual entropy. Instead of grouping characters together as a word, we try to find the boundary between characters. We use conditions (9) and (10), where MI is the mutual information and m1, m2 are the threshold values.
Experiment Results
We trained the bi-directional 2nd order Markov model on 220MB corpora mainly news from People Daily (91-95). We obtained about 1M pairs of bigrams with positive entropy. We stored the mutual information for the bigram at the same time.
In order to validate variations on our algorithm, we used a small corpus of 100 articles out of the 325 articles from People Daily (94-98) included in the Penn Treebank Tagged Chinese Corpus (3.3M) to set the thresholds h1..h11 and m1, m2 and to find the best way of combining them. Then we tested on the rest of the articles. We used recall and precision to measure our performance both on discovering word boundaries and on discovering words. A word is considered correctly segmented only if there is a word boundary in front of and at the end of the word and there is no boundary within the word. Table 1 shows the results for the different threshold settings and combinations (e.g., AND(1,2), AND(3,4)). From Table 1 we know there is a significant change in contextual entropy across a word boundary; either side of the contextual entropy change is useful to detect the word boundary. If we use the F-measure

F = (2 * p * r) / (p + r)

as a testing metric, then using a threshold value around 2 with an "OR" relationship among Eq. (1)-(4) we achieve the best result for the validation corpus. Table 2 shows that properties (5)-(8) are useful to detect a single-character word marker or punctuation in Chinese; we obtained the highest precision under these four conditions. Table 3 shows that using equation (13), the sum of both left and right contextual entropy, is better than using either the left (Eq. (11)) or the right (Eq. (12)) contextual entropy alone. Table 4 shows the best threshold for grouping characters together is 4 for the Penn Treebank corpus, greater than the 2.5 that Sproat and Shih (1990) used in their work.

From the results above, the following conditions and thresholds give the best results on the validation corpus (100 articles):

1. OR(AND(1,2), AND(3,4)), h1, h2, h3, h4 = 2
2. (13), h11 = 9
3. AND(5,6,7,8), h5, h6, h7, h8 = 0
4. AND(9,10), m1, m2 = 3

We obtained 93.2% precision with 93.1% recall on discovering word boundaries and 81.2% precision with 81.1% recall on discovering words. And we got 93.3% precision with 92.4% recall on discovering word boundaries and 81.3% precision with 80.4% recall on discovering words. We tested on another corpus tagged by Beijing University from People Daily (Jan 1998, 8.8M). We obtained 89.4% precision with 91.5% recall on discovering word boundaries and 75.0% precision with 76.8% recall on discovering words. Peng and Schuurmans (2001) used successive EM phases to learn a probabilistic model of character sequences and pruned the model with a mutual information selection criterion. They achieved 75.1% precision with 74.0% recall on discovering words by repeatedly applying lexicon pruning to an improved EM training. Their results are tested on the same corpus as ours. Compared with their approach, ours is simpler, faster and achieved better results.
From the results above, the following conditions and thresholds we get the best results on the validation corpus (100 articles We had the same errors as Peng and Schuurmans (2001) mentioned and had the same errors as most segmenters had to recognise the Chinese names. Most errors caused with our approaches relate to numbers and dates. In the training corpus, numbers written in full-width Arabic digits were replaced by a special character but in Penn corpus numbers are written in Chinese character. The other main kind of errors concerns compound nouns. We segmented "F I" as "ffAIK " . But note that there is no standard definition for Chinese words. It should be noted that there is poor agreement on word segmentation amongst human annotators and at least three relative widespread conventions (China, Taiwan, Penn Treebank). Our results are as expected lower than those judged by hand (which can bias judgements) and tested on non-standard corpora.
Although our approach only used a 2nd order Markov model, we can still find words longer than 2 characters, as we only used our model to identify the word boundaries rather than the words.
Conclusion
This paper describes a new approach for Chinese word segmentation based on contextual entropy from an unsegmented corpus. Contextual entropy is used to capture the dependency with both contexts in which a word occurs. We used a relatively short-order Markov model to train our model and tried to identify the word boundaries rather than the words. Our approach is simple and fast, and although it is unsupervised it gives very competitive results.
Chang, J. S., Y. C. Lin and K. Y. Su. 1995. Automatic construction of a Chinese electronic dictionary. In Proceedings of the Third Workshop on Very Large Corpora.
Dai, Y. B., C. Khoo and T. Loh. 1999. A new statistical formula for Chinese text segmentation incorporating contextual information. In SIGIR'99, Berkeley, CA, USA.
de Marcken, C. 1996. Unsupervised Language Acquisition. Ph.D. thesis, MIT.
Ge, X., W. Pratt and P. Smyth. 1999. Discovering Chinese Words from Unsegmented Text. In SIGIR'99, Berkeley.
Harris, Z. S. 1955. From phoneme to morpheme. Language, 31(2).
Hutchens, J. and M. Alder. 1998. Finding structure via compression. In D. Powers (ed.), NeMLaP3/CoNLL98, Sydney, Australia.
Kempe, A. 1999. Experiments in unsupervised entropy-based corpus segmentation. In Workshop of the Ninth Conference of the European Chapter of the Association for Computational Linguistics, Bergen, Norway.
Lua, K. T. and K. W. Gan. 1994. An application of information theory in Chinese word segmentation. Computer Processing of Chinese & Oriental Languages, Vol. 8, No. 1:115-124.
Nie, J. Y., W. Y. Jin and M. L. Hannan. 1994. A hybrid approach to unknown word detection and segmentation of Chinese. In ICCC'94, Singapore.
Palmer, D. 1997. A trainable rule-based algorithm for word segmentation. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL97), Madrid.
Peng, F. and D. Schuurmans. 2001. Self-supervised Chinese word segmentation. In F. Hoffman et al. (eds.), Advances in Intelligent Data Analysis, Proceedings of the Fourth International Conference (IDA-01), Cascais, Portugal.
Ponte, J. and W. B. Croft. 1996. USeg: a retargetable word segmentation procedure for information retrieval. In Symposium on Document Analysis and Information Retrieval (SDAIR'96).
Sproat, R. and C. Shih. 1990. A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese & Oriental Languages, 4(4).
Sproat, R., C. Shih, W. Gale and N. Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3).
Teahan, W. J., Y. Wen, R. McNab and I. Witten. 2000. A compression-based algorithm for Chinese word segmentation. Computational Linguistics, 26(3).
Tung, C. H. and H. J. Lee. 1994. Identification of unknown words from a corpus. Computer Processing of Chinese & Oriental Languages, Vol. 8 (supplement).
Yeh, C. L. and H. J. Lee. 1991. Rule-based word identification for Mandarin Chinese sentences: a unification approach. Computer Processing of Chinese and Oriental Languages, Vol. 5, No. 2.
Zhang, J., J. Gao and M. Zhou. 2000. Extraction of Chinese compound words: an experimental study on a very large corpus. In The Second Chinese Language Processing Workshop attached to ACL2000, Hong Kong.
868,646 | CUNI System for the WMT17 Multimodal Translation Task | In this paper, we describe our submissions to the WMT17 Multimodal Translation Task. For Task 1 (multimodal translation), our best scoring system is a purely textual neural translation of the source image caption to the target language. The main feature of the system is the use of additional data that was acquired by selecting similar sentences from parallel corpora and by data synthesis with back-translation. For Task 2 (cross-lingual image captioning), our best submitted system generates an English caption which is then translated by the best system used in Task 1. We also present negative results, which are based on ideas that we believe have potential of making improvements, but did not prove to be useful in our particular setup. | [
15349458,
1325523,
905565,
13420142
] | CUNI System for the WMT17 Multimodal Translation Task
Jindřich Helcl (helcl@ufal.mff.cuni.cz)
Jindřich Libovický (libovicky@ufal.mff.cuni.cz)
Institute of Formal and Applied Linguistics, Faculty of Mathematics and Physics, Charles University, Malostranské náměstí 25, 118 00 Prague, Czech Republic
CUNI System for the WMT17 Multimodal Translation Task
In this paper, we describe our submissions to the WMT17 Multimodal Translation Task. For Task 1 (multimodal translation), our best scoring system is a purely textual neural translation of the source image caption to the target language. The main feature of the system is the use of additional data that was acquired by selecting similar sentences from parallel corpora and by data synthesis with back-translation. For Task 2 (cross-lingual image captioning), our best submitted system generates an English caption which is then translated by the best system used in Task 1. We also present negative results, which are based on ideas that we believe have potential of making improvements, but did not prove to be useful in our particular setup.
Introduction
Recent advances in deep learning allowed inferring distributed vector representations of both textual and visual data. In models combining text and vision modalities, this representation can be used as a shared data type. Unlike the classical natural language processing tasks where everything happens within one language or across languages, multimodality tackles how the language entities relate to the extra-lingual reality. One of these tasks is multimodal translation whose goal is using cross-lingual information in automatic image captioning.
In this system-description paper, we describe our submission to the WMT17 Multimodal Translation Task. In particular, we discuss the effect of mining additional training data and usability of advanced attention strategies. We report our results on both the 2016 and 2017 test sets and discuss efficiency of tested approaches.
The rest of the paper is organized as follows. Section 2 introduces the tasks we handle in this paper and the datasets provided for them. Section 3 summarizes the state-of-the-art methods applied to the task. In Section 4, we describe our models and the results we have achieved. Section 5 presents the negative results, and Section 6 concludes the paper.
Task and Dataset Description
The challenge of the WMT Multimodal Translation Task is to exploit cross-lingual information in automatic image caption generation. The state-of-the-art models in both machine translation and automatic image caption generation use similar architectures for generating the target sentence. The simplicity with which the learned representations of various inputs can be combined in a single deep learning model inevitably raises the question of whether combining the modalities can lead to interesting results. In the shared task, this is explored in two subtasks with different roles for the visual and textual modalities.
In the multimodal translation task (Task 1), the input of the model is an image and its caption in English. The system should then output a German or French translation of the caption. The system output is evaluated using the METEOR (Denkowski and Lavie, 2011) and BLEU (Papineni et al., 2002) scores computed against a single reference sentence. The question this task tries to answer is whether and how it is possible to use visual information to disambiguate the translation.
In the cross-lingual captioning task (Task 2), the input to the model at test time is the image alone. However, in addition to the image, the model is supplied with the English (source) caption during training. The evaluation method differs from Task 1 in using five reference captions instead of a single one. In Task 2, German is the only target language. The motivation of Task 2 is to explore ways of easily creating an image captioning system in a new language once a system exists for another language, assuming that the information transfer is less complex across languages than between the visual and textual modalities.
Data
The participants were provided with the Multi30k dataset (Elliott et al., 2016), a multilingual extension of the Flickr30k dataset (Plummer et al., 2017), for both training and evaluation of their models. The data consists of 31,014 images. In Flickr30k, each image is described with five independently acquired captions in English. Images in the Multi30k dataset are enriched with five crowdsourced German captions. Additionally, a single German translation of one of the English captions was added for each image.
The dataset is split into training, validation, and test sets of 29,000, 1,014, and 1,000 instances respectively. The statistics on the training and validation part are tabulated in Table 1.
For the 2017 round of the competition, an additional French translation was included for Task 1, and new test sets were developed. Two test sets were provided for Task 1: the first one consists of 1,000 instances and is similar to the test set used in the previous round of the competition (and to the training and validation data). The second one consists of images, captions, and their translations taken from the MSCOCO image captioning dataset (Lin et al., 2014). A new single test set containing 1,071 images with five reference captions was added for Task 2.
The style and structure of the reference sentences in the Flickr- and MSCOCO-based test sets differ. Most of the sentences in the Multi30k dataset have a similar structure with a relatively simple subject, an active verb in the present tense, a simple object, and location information (e.g., "Two dogs are running on a beach."). In contrast, the captions in the MSCOCO dataset are less formal and capture the annotator's uncertainty about the image content (e.g., "I don't know, it looks like a lemon.").
Related Work
Several promising neural architectures for the multimodal translation task have been introduced since the first competition in 2016.
In our last year's submission (Libovický et al., 2016), we employed a neural system that combined multiple inputs: the image, the source caption, and an SMT-generated caption. We used the attention mechanism over the textual sequences and concatenated the context vectors in each decoder step.
The overall results of the WMT16 multimodal translation task did not prove the visual features to be particularly useful (Specia et al., 2016; Caglayan et al., 2016).
To our knowledge, Huang et al. (2016) were the first to show an improvement over a text-only neural system, with a model utilizing distributed features from explicit object recognition. Calixto et al. (2017) improved the state of the art using a model that initializes the decoder state with the image vector while keeping the rest of the neural architecture unchanged. Promising results were also shown by Delbrouck and Dupont (2017), who made a small improvement using bilinear pooling. Elliott and Kádár (2017) brought further improvements by introducing an "imagination" component to the neural network architecture. Given the source sentence, the network is trained to output the target sentence jointly with predicting the image vector. The model uses the visual information only as regularization and is thus able to use additional parallel data without accompanying images.
Experiments
All models are based on the encoder-decoder architecture with attention mechanism (Bahdanau et al., 2014) as implemented in Neural Monkey (Helcl and Libovický, 2017), available at https://github.com/ufal/neuralmonkey. The decoder uses conditional GRUs (Firat and Cho, 2016) with 500 hidden units and word embeddings with a dimension of 300. The target sentences are decoded using beam search with beam size 10 and an exponentially weighted length penalty (Wu et al., 2016), with the α parameter empirically estimated as 1.5 for German and 1.0 for French. Because of the low OOV rate (see Table 1), we used vocabularies of at most 30,000 tokens and did not use sub-word units. The textual encoder is a bidirectional GRU network with 500 units in each direction and word embeddings with a dimension of 300. For image processing, we use the last convolutional layer of the VGG-16 network (Simonyan and Zisserman, 2014), of dimensionality 14 × 14 × 512. The model is optimized using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 10^-4 and early stopping based on the validation BLEU score.
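For concreteness, the length penalty of Wu et al. (2016) divides a hypothesis' log-probability by lp(Y) = ((5 + |Y|) / (5 + 1))^α. Below is a minimal sketch of applying it when selecting the best hypothesis from a decoded beam; the function names are illustrative and not part of Neural Monkey's API.

```python
# Sketch of beam rescoring with the exponentially weighted length penalty
# of Wu et al. (2016). Alpha was tuned to 1.5 for German and 1.0 for French.

def length_penalty(length: int, alpha: float) -> float:
    """lp(Y) = ((5 + |Y|) / (5 + 1)) ** alpha."""
    return ((5.0 + length) / 6.0) ** alpha

def best_hypothesis(beam, alpha=1.5):
    """Pick the hypothesis maximizing log P(Y|X) / lp(Y).

    `beam` is a list of (tokens, log_prob) pairs produced by beam search.
    """
    return max(beam, key=lambda h: h[1] / length_penalty(len(h[0]), alpha))

# Toy usage: the longer hypothesis wins despite a lower raw log-probability.
beam = [(["ein", "Hund", "läuft"], -4.2),
        (["ein", "Hund", "läuft", "am", "Strand"], -5.8)]
tokens, _ = best_hypothesis(beam, alpha=1.5)
print(" ".join(tokens))
```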
Task 1: Multimodal Translation
We tested the following architectures with different datasets (see Section 4.3 for details):
• purely textual (disregarding the visual modality);
• multimodal with context vector concatenation in the decoder (Libovický et al., 2016);
• multimodal with hierarchical attention combination (Libovický and Helcl, 2017): context vectors are computed independently for each modality and are then combined using another attention mechanism, as depicted in Figure 1 and sketched in the code after this list.
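The following NumPy sketch illustrates the hierarchical combination only in outline: per-modality contexts are computed first, and a second attention then weights the two contexts. For brevity it uses dot-product scoring and assumes both modalities are projected to the same dimension; the actual model uses feed-forward (Bahdanau-style) attention energies.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(states, query, W):
    """One attention: score projected states against the decoder query."""
    alpha = softmax((states @ W) @ query)   # normalized coefficients (n,)
    return alpha @ states                   # context vector (d,)

def hierarchical_attention(text_states, image_states, query, W_txt, W_img, W_mod):
    """1st level: one context per modality; 2nd level: attention over contexts."""
    c_txt = attend(text_states, query, W_txt)     # over encoder RNN states
    c_img = attend(image_states, query, W_img)    # over 14x14 convnet positions
    contexts = np.stack([c_txt, c_img])           # (2, d)
    beta = softmax((contexts @ W_mod) @ query)    # 2nd-level weights (2,)
    return beta @ contexts                        # combined context (d,)
```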
Task 2: Cross-lingual Captioning
We conducted two sets of experiments for this subtask. In both of them, we used an attentive image captioning model (Xu et al., 2015) for the cross-lingual captioning, with the same decoder as for the first subtask. The first idea we experimented with was a multilingual decoder provided with the image and a language identifier. Based on the identifier, the decoder generates the caption either in English or in German. We speculated that the information transfer from the visual to the language modality is the most difficult part of the task and might be similar for both English and German.
The second approach we tried has two steps. First, we trained an English image captioning system, for which we can use larger datasets. Second, we translated the generated captions with the multimodal translation system from the first subtask.
Acquiring Additional Data
In order to improve the textual translation, we acquired additional data. We used the following technique to select in-domain sentences from both parallel and monolingual data.
We trained a neural character-level language model on the German sentences available in the training part of the Multi30k dataset. We used a GRU network with 512 hidden units and a character embedding size of 128.
Using the language model, we selected the 30,000 best-scoring German sentences from the SDEWAC corpus (Faaß and Eckart, 2013), which were both semantically and structurally similar to the sentences in the Multi30k dataset.
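The selection step amounts to ranking candidate sentences by their language-model score. A rough sketch follows; `char_log_prob` is a placeholder for the trained GRU character LM, and the per-character length normalization is our assumption, as the paper does not specify the exact scoring.

```python
def lm_score(sentence: str, char_log_prob) -> float:
    """Average per-character log-probability under the character-level LM.

    `char_log_prob(prefix, char)` stands in for the trained GRU LM; averaging
    over characters keeps long sentences from being unfairly penalized.
    """
    total = sum(char_log_prob(sentence[:i], ch) for i, ch in enumerate(sentence))
    return total / max(len(sentence), 1)

def select_top_k(candidates, char_log_prob, k=30000):
    """Keep the k best-scoring sentences, as done for the SDEWAC corpus."""
    return sorted(candidates, key=lambda s: lm_score(s, char_log_prob),
                  reverse=True)[:k]
```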
We also tried to use the language model to select sentence pairs from parallel data. By scoring the German side of several parallel corpora (EU Bookshop (Skadiņš et al., 2014), News Commentary (Tiedemann, 2012), and CommonCrawl (Smith et al., 2013)), we were only able to retrieve a few hundred in-domain sentences. For that reason, we also included sentences with lower scores, which we filtered using the following rules: sentences must have between 2 and 30 tokens, must be in the present tense, must not contain non-standard punctuation, numbers with multiple digits, acronyms, or named entities, and must have an OOV rate of at most 15% w.r.t. the Multi30k training vocabulary. We extracted an additional 3,000 in-domain parallel sentences using these rules; a sketch of such a filter follows below. Examples of the additional data are given in Table 2.
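The rules translate into a cascade of cheap checks. The sketch below is illustrative rather than the exact implementation: the regular expressions are our assumptions, and `is_present_tense` and `has_named_entity` are placeholders for tagger-based checks.

```python
import re

def oov_rate(tokens, vocabulary):
    """Fraction of tokens not in the Multi30k training vocabulary."""
    return sum(t.lower() not in vocabulary for t in tokens) / max(len(tokens), 1)

def keep_sentence(sentence, vocabulary, is_present_tense, has_named_entity):
    """Apply the heuristic filters described above (sketch)."""
    tokens = sentence.split()
    if not 2 <= len(tokens) <= 30:
        return False
    if re.search(r"\d\d", sentence):                 # multi-digit numbers
        return False
    if re.search(r"\b[A-ZÄÖÜ]{2,}\b", sentence):     # acronyms
        return False
    if re.search(r"[#@_^~|<>]", sentence):           # non-standard punctuation
        return False
    if not is_present_tense(sentence) or has_named_entity(sentence):
        return False
    return oov_rate(tokens, vocabulary) <= 0.15
```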
By applying the same approach to the French versions of the corpora, we were able to extract only a few additional in-domain sentences. We thus trained the English-to-French models in the constrained setup only.
Following Calixto et al. (2017), we back-translated (Sennrich et al., 2016) the German captions from the German side of the Multi30k dataset (i.e., 5+1 captions for each image) and the sentences retrieved from the SDEWAC corpus. We included these back-translated sentence pairs as additional training data for the textual and multimodal systems for Task 1. The back-translation system used the same architecture as the textual systems and was trained on the Multi30k dataset only. The additional parallel data and the data from the SDEWAC corpus (denoted as additional in Table 3) were used only for the text-only systems because they were not accompanied by images.
For Task 2, we also used the MSCOCO (Lin et al., 2014) dataset, which consists of 300,000 images with 5 English captions each.
SDEWAC Corpus (with back-translation)
zwei Männer unterhalten sich → two men are talking to each other .
ein kleines Mädchen sitzt auf einer Schaukel . → a little girl is sitting on a swing .
eine Katze braucht Unterhaltung . → a cat is having a discussion .
dieser Knabe streichelt das Schlagzeug . → this professional is petting the drums .

Parallel Corpora
Menschen bei der Arbeit → People at work
Männer und Frauen → Men and women
Sicherheit bei der Arbeit → Safety at work
Personen in der Öffentlichkeit → Members of the public
Results
In Task 1, our best performing system was the text-only system trained with additional data, which was acquired both by the data selection method described above and by back-translation. Results of all setups for Task 1 are given in Table 3. Surprisingly, including the data for Task 2 in the training set decreased the METEOR score on both of the 2017 test sets. This might have been caused by domain mismatch. However, in the case of the additional parallel and SDEWAC data, this problem was likely outweighed by the advantage of having more training data.
In the case of the multimodal systems, adding approximately the same amount of data increased performance more than in the case of the text-only system. This suggests that, with a sufficient amount of data (a rather unrealistic assumption), the multimodal system would eventually outperform the textual one.
The hierarchical attention combination brought major improvements over the concatenation approach on the 2017 test sets. On the 2016 test set, the concatenation approach yielded better results, which can be considered a somewhat strange result given the similarity of the Flickr test sets.
The baseline system was Nematus (Sennrich et al., 2017) trained on the textual part of Multi30k only. However, due to its low score, we suspect the model was trained with suboptimal parameters, because it is in principle identical to our constrained textual submission.

In Task 2, none of the submitted systems outperformed the baseline, a captioning system (Xu et al., 2015) trained directly on the German captions in the Multi30k dataset. The results of our systems on Task 2 are shown in Table 4.
For the English captioning, we trained two models. The first was trained on the Flickr30k data only; in the second, we also included the MSCOCO dataset. Although the captioning system trained on more data achieved better performance on the English side (Table 5), it led to extremely low performance when plugged into our multimodal translation systems (Table 4, rows labeled "en captioning + translation"). We hypothesize this is caused by the different styles of the sentences in the training datasets.
Our hypothesis about sharing information between the languages in a single decoder was not confirmed in this setup and the experiments led to relatively poor results.
Interestingly, our systems for Task 2 scored poorly in the BLEU score and relatively well in the METEOR score. We attribute this to the fact that, unlike BLEU, which puts more emphasis on precision, METEOR also strongly considers recall.
Negative Results
In addition to our submitted systems, we tried a number of techniques without success. We describe these techniques because we believe they might be relevant for future developments in the field, despite the current negative results.
Beam Rescoring
Similarly to Lala et al. (2017), our oracle experiments on the validation data showed that rescoring a decoded beam of width 100 has the potential to improve the score by up to 3 METEOR points. In the oracle experiment, we always chose the hypothesis with the highest sentence-level BLEU score. Motivated by this observation, we conducted several experiments with beam rescoring.
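Such an oracle can be computed directly from the n-best lists. Below is a sketch using NLTK's smoothed sentence-level BLEU; the particular smoothing method is our assumption (NLTK's SmoothingFunction implements the techniques of Chen and Cherry, 2014).

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method4  # one of the Chen & Cherry smoothings

def oracle_hypothesis(beam, reference):
    """Pick the beam hypothesis with the highest sentence-level BLEU.

    `beam` is a list of token lists, `reference` a reference token list;
    the score of this pick upper-bounds what a perfect reranker could do.
    """
    return max(beam, key=lambda hyp: sentence_bleu([reference], hyp,
                                                   smoothing_function=smooth))
```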
We trained a classifier predicting whether a given sentence is a suitable caption for a given image. The classifier had one hidden layer with 300 units and two inputs: the last layer of the VGG-16 network processing the image, and the last state of a bidirectional GRU network processing the text. We used the same hyperparameters for the bidirectional GRU network as for the textual encoders in the other experiments. Training data were taken from both parts of the Multi30k dataset, with negative examples randomly sampled from the dataset so that the classes were represented equally. The classifier achieved a validation accuracy of 87% for German and 74% for French. During the rescoring of the 100 hypotheses in the beam, we selected the one with the highest predicted probability of being the image's caption.
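A schematic PyTorch version of this classifier is given below. The hidden layer size and the GRU dimensions follow the text; the flattened image feature size (here 4096) and the ReLU activation are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class CaptionClassifier(nn.Module):
    """Predicts whether a sentence is a plausible caption for an image."""

    def __init__(self, vocab_size, img_dim=4096, emb_dim=300, rnn_dim=500):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, rnn_dim, bidirectional=True, batch_first=True)
        self.hidden = nn.Linear(img_dim + 2 * rnn_dim, 300)  # 300-unit hidden layer
        self.out = nn.Linear(300, 1)

    def forward(self, image_features, token_ids):
        _, h_n = self.rnn(self.embed(token_ids))    # h_n: (2, batch, rnn_dim)
        text = torch.cat([h_n[0], h_n[1]], dim=-1)  # final states, both directions
        joint = torch.cat([image_features, text], dim=-1)
        return torch.sigmoid(self.out(torch.relu(self.hidden(joint))))
```

At rescoring time, the hypothesis with the highest predicted probability among the 100 beam entries would be kept.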
In other experiments, we tried to train a regression model predicting the score of a given output sentence. Unlike the previous experiment, we built the training data from scored hypotheses in the output beams obtained by translating the training part of the Multi30k dataset. We tested two architectures: the first concatenates the terminal states of bidirectional GRU networks encoding the source and hypothesis sentences together with an image vector; the second performs attentive average pooling over the hidden states of the RNNs and the image CNN, using the other encoders' terminal states as queries, and concatenates the resulting context vectors. The regression estimated either the sentence-level BLEU score (Chen and Cherry, 2014) or the chrF3 score (Popović, 2015).
Contrary to our expectations, all the rescoring techniques decreased the performance by 2 METEOR points.
Reinforcement Learning
Another technique we tried without any success was self-critical sequence training (Rennie et al., 2016). This modification of the REINFORCE algorithm (Williams, 1992) for sequence-to-sequence learning uses the reward of the sequence decoded at training time as the baseline. The systems were pre-trained with the word-level cross-entropy objective, and we hoped to fine-tune them using REINFORCE towards the sentence-level BLEU score and the GLEU score (Wu et al., 2016).
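In self-critical training, the baseline subtracted from the sampled sequence's reward is the reward of the greedily decoded sequence, so no separate baseline estimator is needed. A schematic loss computation follows; decoding and the reward function (e.g., sentence BLEU or GLEU) are assumed to happen outside this snippet.

```python
import torch

def self_critical_loss(sample_log_probs, sampled_reward, greedy_reward):
    """REINFORCE with the greedy decode as baseline (Rennie et al., 2016).

    sample_log_probs: (batch,) sums of token log-probs of sampled sequences.
    sampled_reward, greedy_reward: (batch,) sequence-level rewards.
    """
    advantage = (sampled_reward - greedy_reward).detach()
    return -(advantage * sample_log_probs).mean()

def mixed_loss(ce_loss, rl_loss, mix):
    """Simple weighting of the cross-entropy and RL losses; finding a good
    `mix` (and the moment to switch objectives) proved difficult in practice."""
    return (1.0 - mix) * ce_loss + mix * rl_loss
```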
It proved difficult to find the right moment at which to switch the optimization criterion, as well as an optimal mixing factor between the cross-entropy and REINFORCE losses. We hypothesize that a more complex objective mixing strategy (like MIXER (Ranzato et al., 2015)) could lead to better results than simple objective weighting.
Conclusions
In our submission to the 2017 Multimodal Task, we tested the advanced attention combination strategies in a more challenging context and achieved competitive results compared to other submissions. We explored ways of acquiring additional data for the task and tested two promising techniques that did not bring any improvement to the system performance.
Figure 1: An overall picture of the multimodal model using hierarchical attention combination on the input. Here, α and β are normalized coefficients computed by the attention models, and w_i is the i-th input to the decoder.
Table 2: Examples of the collected additional training data.
Table 3: Results of Task 1 in BLEU / METEOR points. 'C' denotes the constrained configuration, 'U' unconstrained; '2016' is the 2016 test set, 'Flickr' and 'MSCOCO' denote the 2017 test sets. The two unconstrained textual models differ in using the additional textual data, which was not used for training the multimodal systems.
Table 4: Results of Task 2 in BLEU / METEOR points.

System                        Config   BLEU / METEOR
Baseline                      C        9.1 / 23.4
Bilingual captioning          C        2.3 / 17.6
en captioning + translation   C        4.2 / 22.1
en captioning + translation   U        6.5 / 20.6
other participant             C        9.1 / 19.8

Table 5: Results of the English image captioning systems on the Flickr30k test set in BLEU / METEOR points.

System                        BLEU / METEOR
Xu et al. (2015)              19.1 / 18.5
ours: Flickr30k               15.3 / 18.7
ours: Flickr30k + MSCOCO      17.9 / 16.6
Acknowledgments

This research has been funded by the Czech Science Foundation grant no. P103/12/G084, the EU grant no. H2020-ICT-2014-1-645452 (QT21), and Charles University grant no. 52315/2014 and SVV project no. 260 453. This work has been using language resources developed and/or stored and/or distributed by the LINDAT-Clarin project of the Ministry of Education of the Czech Republic (project LM2010013).
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.
Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes García-Martínez, Fethi Bougares, Loïc Barrault, and Joost van de Weijer. 2016. Does multimodality help human and machine for translation and image captioning? In Proceedings of the First Conference on Machine Translation, pages 627-633, Berlin, Germany. Association for Computational Linguistics. http://www.aclweb.org/anthology/W16-2358.
Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Incorporating global visual features into attention-based neural machine translation. CoRR abs/1701.06521. http://arxiv.org/abs/1701.06521.
Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362-367, Baltimore, Maryland, USA. Association for Computational Linguistics. http://www.aclweb.org/anthology/W14-3346.
Jean-Benoit Delbrouck and Stéphane Dupont. 2017. Multimodal compact bilinear pooling for multimodal neural machine translation. CoRR abs/1703.08084. http://arxiv.org/abs/1703.08084.
Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 85-91, Edinburgh, United Kingdom. Association for Computational Linguistics. http://www.aclweb.org/anthology/W11-2107.
Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual English-German image descriptions. CoRR abs/1605.00459. http://arxiv.org/abs/1605.00459.
Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation. CoRR abs/1705.04350. http://arxiv.org/abs/1705.04350.
Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC - a corpus of parsable sentences from the web. In Language Processing and Knowledge in the Web, pages 61-68. Springer.
Orhan Firat and Kyunghyun Cho. 2016. Conditional gated recurrent unit with attention mechanism. https://github.com/nyu-dl/dl4mt-tutorial/blob/master/docs/cgru.pdf. Published online, version adbaeea.
Jindřich Helcl and Jindřich Libovický. 2017. Neural Monkey: An open-source tool for sequence learning. The Prague Bulletin of Mathematical Linguistics (107):5-17. https://doi.org/10.1515/pralin-2017-0001.
Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multimodal neural machine translation. In Proceedings of the First Conference on Machine Translation, pages 639-645, Berlin, Germany. Association for Computational Linguistics. http://www.aclweb.org/anthology/W/W16/W16-2360.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980.
Chiraag Lala, Pranava Madhyastha, Josiah Wang, and Lucia Specia. 2017. Unraveling the contribution of image captioning and neural machine translation for multimodal machine translation. The Prague Bulletin of Mathematical Linguistics (108):197-208. https://doi.org/10.1515/pralin-2017-0020.
Jindřich Libovický and Jindřich Helcl. 2017. Attention strategies for multi-source sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Vancouver, Canada. Association for Computational Linguistics.
Jindřich Libovický, Jindřich Helcl, Marek Tlustý, Ondřej Bojar, and Pavel Pecina. 2016. CUNI system for WMT16 automatic post-editing and multimodal translation tasks. In Proceedings of the First Conference on Machine Translation, pages 646-654, Berlin, Germany. Association for Computational Linguistics. http://www.aclweb.org/anthology/W/W16/W16-2361.
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. CoRR abs/1405.0312. http://arxiv.org/abs/1405.0312.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. https://doi.org/10.3115/1073083.1073135.
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2017. Flickr30k Entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. International Journal of Computer Vision 123(1):74-93. https://doi.org/10.1007/s11263-016-0965-7.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics. http://aclweb.org/anthology/W15-3049.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. CoRR abs/1511.06732. http://arxiv.org/abs/1511.06732.
Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning. CoRR abs/1612.00563. http://arxiv.org/abs/1612.00563.
Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65-68, Valencia, Spain. Association for Computational Linguistics. http://aclweb.org/anthology/E17-3017.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics. http://www.aclweb.org/anthology/P16-1009.
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556. http://arxiv.org/abs/1409.1556.
Raivis Skadiņš, Jörg Tiedemann, Roberts Rozis, and Daiga Deksne. 2014. Billions of parallel words for free: Building and using the EU Bookshop corpus. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC-2014), Reykjavik, Iceland. European Language Resources Association (ELRA).
Jason R. Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the Common Crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1374-1383, Sofia, Bulgaria. Association for Computational Linguistics. http://www.aclweb.org/anthology/P13-1135.
Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In Proceedings of the First Conference on Machine Translation, pages 543-553, Berlin, Germany. Association for Computational Linguistics. http://www.aclweb.org/anthology/W16-2346.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA).
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8(3-4):229-256.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2048-2057, Lille, France. http://jmlr.org/proceedings/papers/v37/xuc15.pdf.
252,411,758 | BILinMID: A Spanish-English Corpus of the US Midwest | This paper describes the Bilinguals in the Midwest (BILinMID) Corpus, a comparable text corpus of the Spanish and English spoken in the US Midwest by various types of bilinguals. Unlike other areas within the US where language contact has been widely documented (e.g., the Southwest), Spanish-English bilingualism in the Midwest has been understudied despite an increase in its Hispanic population. The BILinMID Corpus contains short stories narrated in Spanish and in English by 72 speakers representing different types of bilinguals: early simultaneous bilinguals, early sequential bilinguals, and late second language learners. All stories have been transcribed and annotated using various natural language processing tools. Additionally, a user interface has also been created to facilitate searching for specific patterns in the corpus as well as to filter out results according to specified criteria. Guidelines and procedures followed to create the corpus and the user interface are described in detail in the paper. The corpus is fully available online and it might be particularly interesting for researchers working on language variation and contact. | [
17954486,
246647
] | BILinMID: A Spanish-English Corpus of the US Midwest
June 2022
Irati Hurtado ihurta3@illinois.edu
Department of Spanish & Portuguese
University of Illinois Urbana-Champaign
707 S Mathews Ave, 4142 FLB, Urbana, IL 61801, USA
BILinMID: A Spanish-English Corpus of the US Midwest
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0, page 5511. Keywords: Bilingualism, Language contact, Bilingual corpora
This paper describes the Bilinguals in the Midwest (BILinMID) Corpus, a comparable text corpus of the Spanish and English spoken in the US Midwest by various types of bilinguals. Unlike other areas within the US where language contact has been widely documented (e.g., the Southwest), Spanish-English bilingualism in the Midwest has been understudied despite an increase in its Hispanic population. The BILinMID Corpus contains short stories narrated in Spanish and in English by 72 speakers representing different types of bilinguals: early simultaneous bilinguals, early sequential bilinguals, and late second language learners. All stories have been transcribed and annotated using various natural language processing tools. Additionally, a user interface has also been created to facilitate searching for specific patterns in the corpus as well as to filter out results according to specified criteria. Guidelines and procedures followed to create the corpus and the user interface are described in detail in the paper. The corpus is fully available online and it might be particularly interesting for researchers working on language variation and contact.
Introduction
The Hispanic population currently makes up 15-20% of the United States' total population, and it is expected to continue growing in the next few decades (Pew Research Center, 2008). Due to the region's proximity to the US-Mexico border, Hispanics have a greater presence in the Southwest of the country (US Census, 2015), a situation that has inspired numerous studies on language contact and bilingualism in that area (e.g., Silva-Corvalán, 1994; Teschner, 2009; Travis et al., 2017). The predominance of the Southwest is also reflected in the few available corpora that document Spanish-English bilingualism in the United States (Gironzetti, 2021, 2022), which leave other US areas underrepresented in those datasets. The Bilinguals in the Midwest (BILinMID) Corpus, available at https://go.illinois.edu/BILinMID-corpus, constitutes an attempt to document the Spanish and English spoken in an area that has not received much attention in the literature but where the presence of Hispanics is certainly considerable, especially in recent years (Potowski, 2020). The creation of a text corpus such as BILinMID has important implications. On the one hand, the corpus serves as a pedagogical resource, since it can help learners value local language varieties and view them as a relevant source of learning. This is especially important in the case of Spanish, as US varieties of this language are often stigmatized (Hill, 1998). On the other hand, because the Midwest has a smaller bilingual population than other US regions, the corpus allows researchers interested in language contact, variation, and acquisition to better understand how the degree of societal bilingualism influences linguistic phenomena. For example, researchers could compare data from BILinMID to data from speakers living in highly bilingual regions (documented in other corpora from the US). This need to compare Spanish-English speakers across the nation living in different bilingual settings has already been pointed out by some scholars (Fuller and Leeman, 2020; Lynch, 2017). Likewise, the use of corpora for sociolinguistic research has also been underscored (Díaz-Campos and Torres, 2018). To this end, the BILinMID Corpus has a functional user interface that allows researchers without a technical background to easily explore the corpus. This paper describes the process of creating the BILinMID Corpus as well as its user interface. It also reviews other available Spanish-English corpora from the US.
Related Work
Even though some corpora have been created to document the speech of Spanish-English bilinguals in the US (Table 1), most of them were compiled to investigate the use of Spanish in the country. Thus, in these corpora, English is only present in those cases where speakers code-switch. Out of the available corpora, the largest is the Spanish in Texas Corpus (Bullock and Toribio, 2014), which includes 96 individual sociolinguistic interviews with bilinguals from Texas. The corpus contains the video, audio, and transcription of each interview. Additionally, POS-tagged annotations are provided separately and can be explored using Google Data Studio. The second largest corpus is the Corpus of Spanish in Southern Arizona (Carvalho, 2012), comprised of 76 individual sociolinguistic interviews. Both the audios and the transcriptions are available to researchers, as well as the meta-data. A third important corpus is the Miami Corpus (Deuchar, 2011), which includes 56 audio recordings of 84 Spanish-English bilinguals having conversations. Both the audios and the transcriptions can be downloaded in different formats for further analysis. A smaller corpus is the Corpus of Mexican Spanish in Salinas (Brown, 2022), containing only 11 individual interviews with first-generation immigrants from Mexico who currently live in California. Transcriptions have been annotated using TreeTagger and can be explored using a simple user interface. Besides these corpora, there are two others which are more general but which also include speech samples from the speakers under consideration here. The first is the New England Corpus of Heritage and Second Language Speakers (Amaral and Gubitosi, 2013), which, unlike the other corpora, also includes data from second language learners. This corpus includes both oral and written speech samples. Lastly, the Polinsky Language Sciences Lab Dataverse (Polinsky, 2015) is a data repository of speech samples from multiple types of speakers and languages. Some of them are Spanish-English bilinguals whose oral narratives were collected for different research projects and which are now available online to other researchers.
Corpus
Speakers and Data Collection
The BILinMID Corpus is a bilingual comparable corpus of approximately 35,000 tokens which contains short stories narrated in English and in Spanish. Even though the stories were narrated orally by the speakers, only the texts corresponding to the transcriptions are part of the corpus (not the audio recordings). To create a corpus representative of the bilingual population in the Midwest, three groups of bilinguals were targeted: early simultaneous bilinguals, early sequential bilinguals, and late second language learners. The first two groups are usually referred to in the literature as 'heritage speakers' (Montrul, 2016; Valdés, 2000). However, this term is not used in the corpus due to being too broad. The terminology chosen instead better reflects the specific language developmental pattern of each group of speakers, which is more informative. In order to collect data for the corpus, speakers were contacted through ads on social media, local organizations, and personal contacts. The data collection process took place in two sessions with at least two weeks in between (Figure 1). This was done to prevent the first narrative from influencing the second narrative. Speakers were recorded in a quiet and familiar space, usually in their office or at home. Speakers saw images depicting a famous children's fairy tale, Little Red Riding Hood. As they saw the images, they had to narrate the story orally in their own words. This fairy tale was chosen because it has been frequently used in research projects examining Spanish-English bilingualism in the US (Cuza et al., 2013; Montrul, 2004, 2010). Storytelling tasks such as this one promote natural speech and are a good source of linguistic phenomena of interest (Kisselev, 2021; Schmid, 2011). In the first session, speakers narrated the story in one language, and in the second session, they narrated the story in the other language (this was counterbalanced across participants). However, there were several instances of code-switching in the narratives. Lastly, even though a researcher was always present in the room, speakers were instructed not to interact with them at all while narrating the story. All narratives were recorded on the researcher's laptop using the Audacity software (Audacity Team, 2021), and speakers had full control of the images while narrating the story, since they could go forward or backward whenever they wanted to. In addition to narrating the story, in the first session speakers were also asked to fill out a questionnaire to gather more information about the demographics and linguistic background of each group. Furthermore, the questionnaire included a section where speakers completed the Bilingual Language Profile (BLP) (Birdsong et al., 2012), a standardized test to measure language dominance in the two languages. The BLP test consists of several multiple-choice questions that produce a final score ranging from -218 to 218. A score of 0 indicates balanced bilingualism (i.e., the speaker is equally dominant in Spanish and in English). A negative score indicates the speaker is more dominant in English than in Spanish, and a positive score indicates the speaker is more dominant in Spanish than in English. The inclusion of the BLP test responds to the need to report some sort of language proficiency or dominance metric in corpora containing speech from bilinguals (Kisselev, 2021).
The full questionnaire was hosted online on Qualtrics (Qualtrics, 2005) so that speakers could fill it out conveniently from their smartphone or tablet. Based on the answers to the questionnaire, there were 7 early simultaneous bilinguals (5 females, 2 males), 40 early sequential bilinguals (31 females, 9 males), and 25 late second language learners (18 females, 7 males). Speakers from the first two groups were all second-generation immigrants (i.e., either born in the US to at least one parent from a Spanish-speaking country, or born in a Spanish-speaking country and moved to the US with their parents before the age of 5). Simultaneous bilinguals had only one parent of Hispanic origin, which explains why they started speaking both Spanish and English simultaneously from birth. All sequential bilinguals, on the other hand, started speaking Spanish before English. In this latter case, either both their parents were from a Spanish-speaking country and continued speaking only Spanish at home while in the US, or the speaker was born in a Spanish-speaking country and moved to the US at a very young age. Regarding the group of late second language learners, all speakers were first-generation immigrants. Speakers in this group were raised monolingually in a Spanish-speaking country and moved to the US after puberty. They learned English as adults and, at the time they were recorded, they had all lived in the US for at least ten years (mean length of residence in the US: 17.68 years) and used English frequently on a daily basis. The linguistic background of the speakers is also reflected in their BLP scores, with the group of early simultaneous bilinguals being clearly more dominant in English than in Spanish, and the group of late second language learners being more dominant in Spanish than in English, overall (Table 2).
Transcription Process
All audio recordings of the short stories were transcribed using CLAN (MacWhinney, 2000). CLAN is an open-source software package widely used in the language sciences. It consists of an editor into which audio files can be imported, segmented, and easily transcribed thanks to its many features. Some of the corpora reviewed above also used CLAN in their transcription process (e.g., the Miami Corpus). The software uses the CHAT format for transcription (producing .cha files), which involves adding special symbols to the transcribed text in order to provide more detail (e.g., pauses, hesitations, repetitions), as well as a header with information about the speaker (Figure 2). For the BILinMID Corpus, the April 2021 version of the CHAT manual was used.
Figure 2: A transcription in the CHAT format
The CHAT format also allows errors to be marked as such. However, for this corpus, errors were left unmarked unless the transcribed word (or group of words) was difficult to interpret (e.g., if the speaker made up a new word). The motivation behind this choice is that the notion of 'error' is prescriptive and might offend some of the speakers whose narratives were recorded for this corpus. When working with these populations, this terminology should be avoided (see footnote 2). Furthermore, since the goal of this corpus is simply to provide a descriptive account of how Spanish-English bilinguals speak in the Midwest, error marking is not necessary. Three researchers participated in the transcription process, all of them Spanish-English bilinguals familiar with the language varieties of the Midwest and with a solid background in linguistics. Even though all researchers followed the same guidelines and often discussed transcription issues together, a quantitative analysis was carried out to ensure inter-rater reliability. The analysis chosen was similar to the one used for the Miami Corpus (Deuchar, 2011), described in Deuchar et al. (2014). In this case, 15% of all audio recordings were independently transcribed by two of the three researchers. Then, inter-rater reliability was assessed for each transcription file by calculating percent agreement, which reached an average of 92%. Most discrepancies had to do with the use of CHAT special symbols rather than with language issues (e.g., one researcher marked something as a pause whereas another marked it as a hesitation). When the transcription process was over, and before starting the annotation process, each file was proofread by a researcher who had not worked on it. This last check was done to make sure there were no spelling or formatting issues that could interfere with the annotation process.
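For reference, percent agreement between two transcriptions of the same recording can be computed over aligned tokens. The sketch below is an illustration only: the paper follows Deuchar et al. (2014) without spelling out the alignment procedure, so the difflib-based matching is our assumption.

```python
import difflib

def percent_agreement(transcript_a: str, transcript_b: str) -> float:
    """Share of matching tokens between two independent transcriptions."""
    a, b = transcript_a.split(), transcript_b.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 100.0 * 2 * matched / (len(a) + len(b))

# e.g. percent_agreement("el lobo &-uh corre", "el lobo (.) corre") -> 75.0
```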
Data Annotation
All .cha files with the transcriptions were converted into .txt files and then imported into R (R Core Team, 2021). An R function was created to go over the transcription files and generate a dataframe (i.e., a table) linking the full transcriptions to basic information about the speakers. This was done as a first step towards annotating the transcriptions, as dataframes allow working with data more efficiently. The dataframe contained 6 columns:
• id: a number to identify each row
• text: the speaker's transcription as a single paragraph
• speaker: a unique code to identify each speaker anonymously
• generation: whether the speaker was a first- or second-generation immigrant
• gender: whether the speaker was a male or a female
• language: the language of the narrative (either English or Spanish)

All this information was extracted from the .txt files, either from the header or from the transcription lines. The dataframe was exported as a .csv file. Once all the transcriptions were conveniently stored in a dataframe, the annotation process began. The .csv file was imported into R and the udpipe R package (Straka et al., 2016; Wijffels, 2021) was loaded. udpipe is an R package designed for doing natural language processing directly in R without having to rely on other programming languages such as Python or Java. It allows tokenization, POS-tagging, lemmatization, and dependency parsing. udpipe uses pre-trained models which are available in many languages and which follow the Universal Dependencies (UD) framework (Nivre et al., 2016), a project that aims to develop consistent treebank annotation for multiple languages. Together with the R package, the pre-trained Spanish and English models were downloaded and loaded into R. A new R function was created to iterate over each row (i.e., each full transcription) in the main dataframe and annotate the text using the appropriate language model. This generated a new, larger dataframe with one word per row. This dataframe contained 10 columns:
• id: a number to identify each row
• sentence: individual sentences from the transcriptions
• token_id: a number to identify each token in a transcription
• token: each token
• lemma: the lemma corresponding to each token
• upos: the POS tag for each token based on the UD framework
• speaker: a unique code to identify each speaker anonymously
• generation: whether the speaker was a first- or second-generation immigrant
• gender: whether the speaker was a male or a female
• language: the language of the narrative (either English or Spanish)

This dataframe containing the annotations was also exported as a .csv file. Since the annotation process was done automatically, relying on the udpipe package and the pre-trained models, the .csv file was manually checked by three researchers to ensure all the information was correct. These were the same researchers that had previously worked on the transcription process. For this manual check, a set of guidelines was created describing how to deal with some of the most common issues found in the annotated dataframe (e.g., how to annotate passages with code-switching, how to annotate CHAT special transcription symbols). All researchers followed the guidelines to ensure the annotations were consistent and reliable, and any discrepancies were discussed as a group. This revised dataset constitutes the basis for the user interface that was developed to explore the corpus.
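The same explode-to-one-token-per-row step can be sketched outside R. In the Python/pandas sketch below, `annotate(text, lang)` is a hypothetical placeholder for a UDPipe call (analogous to the udpipe R models) yielding (sentence, token, lemma, upos) tuples.

```python
import pandas as pd

def build_token_table(transcripts: pd.DataFrame, annotate) -> pd.DataFrame:
    """Turn one-transcription-per-row data into one-token-per-row data.

    `transcripts` has columns text, speaker, generation, gender, language;
    `annotate(text, lang)` stands in for the UDPipe model of that language.
    """
    rows = []
    for rec in transcripts.itertuples():
        for i, (sent, token, lemma, upos) in enumerate(annotate(rec.text, rec.language)):
            rows.append({
                "sentence": sent, "token_id": i, "token": token,
                "lemma": lemma, "upos": upos, "speaker": rec.speaker,
                "generation": rec.generation, "gender": rec.gender,
                "language": rec.language,
            })
    return pd.DataFrame(rows)

# build_token_table(transcripts, annotate).to_csv("annotated.csv", index=False)
```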
User Interface
The user interface developed for the BILinMID Corpus was built as an R-Shiny app using the shiny R package (Chang et al., 2021). R-Shiny apps are interactive web applications that can be built directly in R and further customized with HTML, CSS, and JavaScript. More specifically, these apps are useful for interacting with data in tabular format. Since the datasets (i.e., the .csv files) followed that format, developing an R-Shiny app was a good option. The user interface is simple and intuitive. It has a horizontal navigation bar at the top with several tabs, as follows:

• Home: the BILinMID Corpus homepage
• About this corpus: a general description of the corpus and the research team
• How to use this corpus: information about what users can find in the different tabs
• The speakers: information about the speakers based on responses from the Qualtrics questionnaire
• Search by KWIC: to search for a specific keyword in the corpus
• Search by lemma: to search for a specific lemma in the corpus
• Search full transcriptions: to search for a transcription given a speaker and a language

To navigate the app, users simply click on a tab and they are taken to that page. Important for our purposes are the four last tabs. The 'the speakers' page displays a table through the DT R package, which is an R interface to the JavaScript library DataTables. The table displayed on the app comes from the .csv file containing the information about the speakers' background and demographics (e.g., gender, age, generation, BLP score) (Figure 3). Thus, this table provides the relevant meta-data for the BILinMID Corpus. The pages 'search by KWIC' and 'search by lemma' are very similar to one another and provide an easy way to query the corpus. These pages contain a browser on the left together with filters for language, generation, and gender. Users can type in a word (either a keyword or a lemma) and adjust the filters, and a JavaScript table will appear on the right of the page containing any sentences from the corpus that match the query. The table will also list the speaker who produced the sentence. The dataset searched during these queries is the annotated .csv dataframe that was created with udpipe. When the user executes a query, the keyword or lemma is found in the dataframe and the sentences containing the match are selected together with their speakers (Figure 4). Lastly, the 'search full transcriptions' page has a language filter and a speaker filter on the left. Here, users will see the full transcription of the speaker and language they choose (Figure 5). This transcription can be copied and exported elsewhere for further analysis.
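The query logic behind the 'search by KWIC' and 'search by lemma' pages reduces to filtering the annotated token table. The Shiny app itself is written in R; the pandas sketch below only illustrates the equivalent filtering over the annotated .csv described in Section 5.

```python
import pandas as pd

def search(tokens: pd.DataFrame, query: str, by: str = "token",
           language=None, generation=None, gender=None) -> pd.DataFrame:
    """Return the sentences (with their speakers) containing the query.

    `by` is "token" (KWIC search) or "lemma"; the optional arguments mirror
    the language/generation/gender filters of the user interface.
    """
    hits = tokens[tokens[by].str.lower() == query.lower()]
    for column, value in (("language", language),
                          ("generation", generation),
                          ("gender", gender)):
        if value is not None:
            hits = hits[hits[column] == value]
    return hits[["sentence", "speaker"]].drop_duplicates()

# tokens = pd.read_csv("annotated.csv")
# search(tokens, "lobo", by="lemma", language="Spanish")
```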
Conclusions and Future Work
This paper has introduced the BILinMID Corpus, a comparable corpus of the Spanish and English spoken in the US Midwest by different types of bilinguals. The corpus contains the transcriptions of short stories narrated by the speakers together with relevant meta-data. The corpus also includes a practical user interface developed with R-Shiny to facilitate exploring the datasets. To my knowledge, this is the first corpus documenting the speech of bilinguals in this region of the US. In terms of future work, the BILinMID Corpus is still in progress. Therefore, more speakers will be recorded in the near future, especially from the early simultaneous bilingual group and the late second language learner group. The goal is to have roughly the same number of speakers per group (~40-50 speakers). Likewise, the user interface will also be enhanced to support other types of queries and to provide some analytics. Even though this requires more processing of the udpipe output, these new features will make the user interface more functional.
Acknowledgements
I thank A. Andino and A. Seielstad for their assistance transcribing the audio recordings and annotating the corpus' main dataset; P. Renteria and J. Vazquez for their help during the data collection process; and J. Canty for his assistance developing the user interface.
Figure 1: Data collection process

Figure 3: Page with information from the speakers
Figure 4: The 'search by KWIC' page
Figure 5: The 'search full transcriptions' page
Table 1: Corpora of Spanish-English bilinguals in the US

Out of the available corpora, the largest is the Spanish in Texas Corpus.
Table 2: Information about the speakers

Group                          N    Mean age (SD)   Mean BLP (SD)
Early simultaneous bilinguals   7   20 (2.29)       -58.42 (23.13)
Early sequential bilinguals    40   20 (1.57)       -12.54 (31.89)
Late second language learners  25   45 (6.30)        38.58 (25.35)

It is also important to mention that there is a high variability of scores among the second-generation speakers, as indicated by the large standard deviations. Information about the speakers' linguistic background and demographics was stored in a table format in a .csv file. This file was later used for the user interface (see section 6).
For a discussion on this, see Klee and Lynch (2009), Potowski and Lynch (2014), and Valdés (1995), among others.
Bibliographical References

Audacity Team (2021). Audacity(R): Free audio editor and recorder (computer application). Available at https://audacityteam.org/
Birdsong, D., Gertken, L., and Amengual, M. (2012). Bilingual Language Profile: An easy to use instrument to assess bilingualism. COERLL, University of Texas, Austin, TX.
Fuller, J., and Leeman, J. (2020). Speaking Spanish in the US: The sociopolitics of language. Bristol, UK: Multilingual Matters.
Chang, W., Cheng, J., Allaire, J. J., Sievert, C., Schloerke, B., Xie, Y., Allen, J., McPherson, J., Dipert, A., and Borges, B. (2021). Shiny: Web application framework for R. CRAN. Available at https://CRAN.R-project.org/package=shiny
Cuza, A., Pérez-Tattam, R., Barajas, E., Miller, L., and Sadowski, C. (2013). The development of tense and aspect morphology in child and adult heritage speakers. In J. Schwieter (Ed.), Innovative research and practices in second language acquisition and bilingualism. Amsterdam, The Netherlands: John Benjamins, pp. 192-220.
Deuchar, M., Davies, P., Herring, J. R., Parafita Couto, M. C., and Carter, D. (2014). Building bilingual corpora. In E. Thomas, and I. Mennen (Eds.), Advances in the study of bilingualism. Bristol, UK: Multilingual Matters, pp. 93-110.
Díaz-Campos, M., and Torres, J. E. (2018). Corpus approaches to the study of language, variation, and change. In K. Geeslin (Ed.), The Cambridge handbook of Spanish linguistics. Cambridge, UK: Cambridge University Press, pp. 121-141.
Gironzetti, E. (2021). Pragmática y multimodalidad en el español como lengua de herencia. In D. Pascual y Cabo, and J. Torres (Eds.), Aproximaciones al estudio del español como lengua de herencia. London, UK: Routledge, pp. 66-78.
Gironzetti, E. (2022). Corpus del español como lengua de herencia. In G. Parodi, P. Cantos-Gómez, and C. Howe (Eds.), The Routledge handbook of Spanish corpus linguistics. London, UK: Routledge.
Hill, J. (1998). Language, race, and white public space. American Anthropologist, 100(3): 680-689.
Kisselev, O. (2021). Corpus-based methodologies in the study of heritage languages. In S. Montrul, and M. Polinsky (Eds.), The Cambridge handbook of heritage languages and linguistics. Cambridge, UK: Cambridge University Press, pp. 520-544.
Klee, C., and Lynch, A. (2009). El español en contacto con otras lenguas. Washington, DC: Georgetown University Press.
Lynch, A. (2017). The 'in-between' paradigm in Spanish as a heritage language. Talk given at the 2017 Heritage Spanish Workshop. University of Texas, Austin, TX.
MacWhinney, B. (2000). The CHILDES project: Tools for analyzing talk. Mahwah, NJ: Lawrence Erlbaum Associates.
Montrul, S. (2004). Subject and object expression in Spanish heritage speakers: A case of morpho-syntactic convergence. Bilingualism: Language and Cognition, 7(2): 125-142.
Montrul, S. (2010). How similar are adult second language learners and Spanish heritage speakers? Spanish clitics and word order. Applied Psycholinguistics, 31(1): 167-207.
Montrul, S. (2016). The acquisition of heritage languages. Cambridge, UK: Cambridge University Press.
Nivre, J., de Marneffe, M-C., Ginter, F., Goldberg, Y., Hajič, J., Manning, C., McDonald, R., Petrov, S., Pyysalo, S., Silveira, N., Tsarfaty, R., and Zeman, D. (2016). Universal dependencies v1: A multilingual treebank collection. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, and S. Piperidis (Eds.), Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). Paris, France: European Language Resources Association (ELRA), pp. 1659-1666.
Pew Research Center (2008). US Population Projections: 2005-2050. Available at https://www.pewresearch.org/hispanic/2008/02/11/us-population-projections-2005-2050/ (last accessed January 3, 2022).
Potowski, K., and Lynch, A. (2014). La valoración del habla bilingüe en los Estados Unidos: Fundamentos lingüísticos y pedagógicos en 'Hablando bien se entiende la gente'. Hispania, 97(1): 32-46.
Potowski, K. (2020). Spanish in the Midwest: Hablando in the Heartland. In F. Salgado-Robles, and E. Lamboy (Eds.), Spanish across domains in the United States: Education, public space, and social media. Boston, MA: Brill, pp. 65-93.
Qualtrics (2005). Qualtrics. Provo, UT. Available at https://www.qualtrics.com/
R Core Team (2021). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Available at https://www.R-project.org/
Schmid, M. (2011). Language attrition. Cambridge, UK: Cambridge University Press.
Silva-Corvalán, C. (1994). Language contact and change: Spanish in Los Angeles. New York City, NY: Oxford University Press.
Straka, M., Hajič, J., and Straková, J. (2016). UDPipe: Trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, and S. Piperidis (Eds.), Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). Paris, France: European Language Resources Association (ELRA), pp. 4290-4297.
Teschner, R. (1995). Beachheads, islands, and conduits: Spanish monolingualism and bilingualism in El Paso, Texas. International Journal of the Sociology of Language, 114: 93-105.
Travis, C., Torres-Cacoullos, R., and Kidd, E. (2017). Cross-language priming: A view from bilingual speech. Bilingualism: Language and Cognition, 20(2): 283-298.
US Census (2015). 2011-2015 ACS 5-year estimates. Available at https://www.census.gov/programs-surveys/acs/technical-documentation/table-and-geography-changes/2015/5-year.html (last accessed January 3, 2022).
Valdés, G. (1995). The teaching of minority languages as academic subjects: Pedagogical and theoretical challenges. Modern Language Journal, 79: 299-328.
Valdés, G. (2000). Introduction. In AATSP (Ed.), Spanish for native speakers. Fort Worth, TX: Harcourt College, pp. 1-20.
Wijffels, J. (2021). Package 'udpipe'. CRAN. Available at https://CRAN.R-project.org/package=udpipe
Xie, Y., Cheng, J., and Tan, X. (2021). DT: A wrapper of the JavaScript library 'DataTables'. CRAN. Available at https://CRAN.R-project.org/package=DT

10. Language Resources
Amaral, P., and Gubitosi, P. (2013). New England Corpus of Heritage and Second Language Speakers. Available at https://digitalhumanities.umass.edu/projects/new-england-corpus-heritage-and-second-language-speakers
Brown, E. (2022). Corpus of Mexican Spanish in Salinas, California. Available at http://itcdland.csumb.edu/~eabrown
Bullock, B., and Toribio, J. (2014). Spanish in Texas Corpus. Available at https://spanishintexas.org
Carvalho, A. (2012). Corpus del Español en el Sur de Arizona (CESA). Available at https://cesa.arizona.edu
Deuchar, M. (2011). The Miami Corpus. Available at http://bangortalk.org.uk
Polinsky, M. (2015). The Polinsky Language Sciences Lab Dataverse. Available at https://dataverse.harvard.edu/dataverse/polinsky |
378,229 | Modelling Valence and Arousal in Facebook posts | Access to expressions of subjective personal posts increased with the popularity of Social Media. However, most of the work in sentiment analysis focuses on predicting only valence from text and usually targeted at a product, rather than affective states. In this paper, we introduce a new data set of 2895 Social Media posts rated by two psychologically-trained annotators on two separate ordinal nine-point scales. These scales represent valence (or sentiment) and arousal (or intensity), which define each post's position on the circumplex model of affect, a well-established system for describing emotional states (Russell, 1980; Posner et al., 2005). The data set is used to train prediction models for each of the two dimensions from text which achieve high predictive accuracy -correlated at r = .65 with valence and r = .85 with arousal annotations. Our data set offers a building block to a deeper study of personal affect as expressed in social media. This can be used in applications such as mental illness detection or in automated large-scale psychological studies. | [
38166371,
14021168,
15590323,
13845267,
17175925
] | Modelling Valence and Arousal in Facebook posts
Daniel Preoţiuc-Pietro (danielpr@sas.upenn.edu)
H. Andrew Schwartz
Gregory Park
Johannes C. Eichstaedt
Margaret Kern
Lyle Ungar (ungar@cis.upenn.edu)
Elizabeth P. Shulman (eshulman@brocku.ca)

Positive Psychology Center, University of Pennsylvania
Department of Computer Science, Stony Brook University
Centre for Positive Psychology, University of Melbourne
Computer & Information Science, University of Pennsylvania
Department of Psychology, Brock University

Modelling Valence and Arousal in Facebook posts
Proceedings of NAACL-HLT 2016, San Diego, California, June 12-17, 2016
Access to expressions of subjective personal posts increased with the popularity of Social Media. However, most of the work in sentiment analysis focuses on predicting only valence from text and usually targeted at a product, rather than affective states. In this paper, we introduce a new data set of 2895 Social Media posts rated by two psychologically-trained annotators on two separate ordinal nine-point scales. These scales represent valence (or sentiment) and arousal (or intensity), which define each post's position on the circumplex model of affect, a well-established system for describing emotional states (Russell, 1980; Posner et al., 2005). The data set is used to train prediction models for each of the two dimensions from text which achieve high predictive accuracy -correlated at r = .65 with valence and r = .85 with arousal annotations. Our data set offers a building block to a deeper study of personal affect as expressed in social media. This can be used in applications such as mental illness detection or in automated large-scale psychological studies.
Introduction
Sentiment analysis is a very active research area that aims to identify, extract and analyze subjective information from text (Pang and Lee, 2008). This generally includes identifying if a piece of text is subjective or objective, what sentiment it expresses (positive or negative; often referred to as valence), what emotion it conveys (Strapparava and Mihalcea, 2007) and towards which entity or aspect of the text, i.e., aspect-based sentiment analysis (Brody and Elhadad, 2010). Downstream applications are mostly interested in automatically inferring public opinion about products or actions. Besides expressing attitudes towards other objects, texts can also express the emotions of the ones writing them, something that has become increasingly common with the rise of Social Media usage (Rosenthal et al., 2015). This study focuses on presenting a gold standard data set, as well as a model trained on this data, in order to drive research in learning about the affective norms of people posting subjective messages. This is of great interest to applications in social science which study text at a large scale and with orders of magnitude more users than traditional studies.
Emotion classification is a widely debated topic in psychology (Gendron and Barrett, 2009). Two main theories about emotions exist: the first posits a discrete and finite set of emotions, while the second suggests that emotions are a combination of different scales. Research in Natural Language Processing (NLP) has focused mostly on Ekman's model of emotion (Ekman, 1992), which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise (Strapparava and Valitutti, 2004; Strapparava and Mihalcea, 2008; Calvo and D'Mello, 2010). In this study, we focus on the most popular dimensional model of emotion: the circumplex model introduced in (Russell, 1980). This model suggests that all affective states are represented in a two-dimensional space with two independent neurophysiological systems: valence (or sentiment) and arousal. Any affective experience is a linear combination of these two independent systems, which is then interpreted as representing a particular emotion. For example, fear is a state involving the combination of negative valence and high arousal (Posner et al., 2005). Previous research in NLP focused mostly on valence or sentiment, either binary or having a strength component coupled with sentiment (Wilson et al., 2005; Thelwall et al., 2010; Thelwall et al., 2012).
In this paper we build a new data set consisting of 2895 anonymized Facebook posts labeled with both valence and arousal by two annotators with psychology training. The ratings are made on two independent nine point scales, reaching high inter-annotator agreement correlations of .768 for valence and .827 for arousal. Data set statistics suggest that while the dimensions of valence and arousal are associated, they present distinct information, especially in posts with a clear positive or negative valence.
Further, we train a bag-of-words linear regression model to predict ratings of new messages. This model achieves high correlation with actual mean ratings, reaching Pearson r = .85 correlation on the arousal dimension and r = .65 on the valence dimension without using any other sentiment analysis resources. Comparing our method to other established lexicons for valence and arousal and methods from sentiment analysis, we demonstrate that these methods are not able to handle well the type of posts present in our data set. We further illustrate the most correlated words with both dimensions and identify opportunities for improvement. The data set and annotations are freely available online. 1
Data set
We create a new data set with annotations on two independent scales:
• Valence (or sentiment) represents the polarity of the affective content in a post, rated on a nine point scale from 1 (very negative) to 5 (neutral/objective) to 9 (very positive);
• Arousal (or intensity) represents the intensity of the affective content, rated on a nine point scale from 1 (neutral/objective post) to 9 (very high).
Our corpus comprises Facebook status updates shared by participants as part of the MyPersonality Facebook application (Kosinski et al., 2013), in which they also took a variety of questionnaires. All authors have explicitly given permission to include their information in a corpus for research purposes. We have manually anonymized the entire corpus by removing any references to names of persons, addresses, telephone numbers, emails and URLs, and replacing them with placeholders.
In order to reduce biases due to our participant demographics, the data set sample was stratified by gender and age, and we have not rated more than two messages written by the same person. Research is inconclusive about whether females express more emotions in general (Wester et al., 2002). With regards to age, an age positivity bias has been found, where positive emotion expression increases with age (Mather and Carstensen, 2005; Kern et al., 2014).
The data originally consisted of 3120 posts. All of these posts were annotated by the same two independent raters with a training in psychology. The raters performed the coding in a similar environment without any distractions (e.g., no listening to music, no watching TV/videos) as these could have influenced the emotions of raters, and therefore the coding.
The annotators were instructed to sparingly rate messages as un-ratable when they were written in languages other than English or offered no cues for an accurate rating (only characters with no meaning). The annotators were instructed to rate a message if they could judge at least a part of it. Then, the raters were asked to rate the two dimensions, valence and arousal, after having explicitly been briefed that these should be independent of each other. The raters were provided with anchors with specified valence and arousal and were instructed to rate neutral messages at the middle of the scale in terms of valence and at 1 if they lacked arousal.

Figure 2: Variation in valence and arousal with age in our data set using a LOESS fit. Data is split by gender: Male (coral orange) and Female (mint green).
In total, 2895 messages were rated by both raters in both dimensions. Table 1 shows examples of posts rated in all quadrants of the circumplex model.
The correlation between the raters and the mean and standard deviation for each rater are presented in Table 2. The inter-annotator agreement on deciding un-ratable posts is measured by Cohen's Kappa of κ = .93. The histograms of ratings are presented in Figure 1. The data set is released with the scores of both individual raters.
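As a reading aid, the agreement figures reported here (Pearson correlations between raters, and Cohen's kappa on the un-ratable decision) can be recomputed from the released per-rater scores along the following lines; this is an illustrative sketch with placeholder variable names, not the authors' own analysis code.

```python
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

def agreement(r1_scores, r2_scores, r1_unratable, r2_unratable):
    """Inter-annotator agreement on one dimension plus the un-ratable decision.

    r1_scores, r2_scores: the two raters' scores for the same posts
    (e.g. valence); r1_unratable, r2_unratable: per-post boolean decisions.
    """
    ia_corr, _ = pearsonr(r1_scores, r2_scores)            # reported: .768 / .827
    kappa = cohen_kappa_score(r1_unratable, r2_unratable)  # reported: .93
    return ia_corr, kappa
```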
We study the correlation between the valence and arousal scores for posts in Table 3. We chose to split values based on different valence thresholds in order to remove posts rated as neutral in valence (5) from the analysis, as they are expected to be low in intensity (1). We observed an overall correlation between the valence and arousal ratings, which holds for both positive and negative valence posts when the neutral posts are removed (.222 and .226 correlation). However, when the posts are more strongly positive or negative in valence, arousal is only mildly correlated (.047 and .085). This highlights that the presence of either positive or negative valence is correlated with an arousal score different from 1, but this correlation is weaker once the positive or negative valence passes a certain threshold (i.e., 3.5 and 6.5 respectively). We also note that the high overall correlation is partly due to higher mean arousal for positive valence posts compared to negative posts (4.68 cf. 3.85).

Figure 2 displays the relationship between the age of the user at posting time and the valence and arousal of their posts in our data set, further divided by gender. We notice some patterns emerge in our data. Valence increases with age for both genders, especially at the start and end of our age intervals (13-16 and 30-35), confirming the aging positivity bias (Mather and Carstensen, 2005). Valence is higher for females across almost the entire age range. Posts written by females are also significantly higher in arousal for all age groups. Age does not have a significant effect on post arousal, although there is a slight increase with age, especially for females. Overall, these figures again illustrate the importance of age and gender as factors to be considered in these types of applications (Volkova et al., 2013; Hovy, 2015).
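The band-wise figures in Table 3 amount to restricting the post set by valence before correlating valence with arousal. A minimal sketch of this analysis, assuming the ratings are available as a dataframe with valence and arousal columns (column names are assumptions):

```python
from scipy.stats import pearsonr

def arousal_corr(df, lo, hi):
    """Valence-arousal correlation for posts with lo <= valence <= hi."""
    band = df[(df["valence"] >= lo) & (df["valence"] <= hi)]
    r, _ = pearsonr(band["valence"], band["arousal"])
    return r, band["arousal"].mean()

# The valence bands used in Table 3: all posts (1-9), negative-only
# (1-3.5 and 1-4) and positive-only (6-9 and 6.5-9) bands.
# for lo, hi in [(1, 9), (1, 3.5), (1, 4), (6, 9), (6.5, 9)]:
#     print((lo, hi), arousal_corr(df, lo, hi))
```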
Predicting Valence and Arousal
To study the linguistic differences of both dimensions, we build a bag-of-words prediction model of valence and arousal from our corpus. 2 We train two linear regression models with ℓ2 regularisation on the posts and test their predictive power in a 10-fold cross-validation setup. Results for predicting the two scores are presented in Table 4.
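A minimal sketch of such a model, assuming the posts and their mean ratings are available as parallel lists; this uses scikit-learn for illustration and is not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def bow_regression(posts, scores, n_folds=10):
    """L2-regularised linear regression over unigram counts, 10-fold CV."""
    X = CountVectorizer(lowercase=True).fit_transform(posts)
    y = np.asarray(scores)                       # mean of the two raters
    pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=n_folds)
    r, _ = pearsonr(y, pred)                     # reported: .65 valence, .85 arousal
    return r

# r_valence = bow_regression(posts, valence_means)
# r_arousal = bow_regression(posts, arousal_means)
```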
We compare to a number of different existing general purpose lexicons. First, we use the ANEW (Bradley and Lang, 1999) weighted dictionary to compute a valence and arousal score as the weighted sum of individual word valence and arousal scores. Similarly, we use the affective norms of words obtained by extending ANEW with human ratings for ∼14,000 words (Warriner et al., 2013). We also benchmark with standard methods for estimating valence from sentiment analysis. First, we use the MPQA lexicon (Wilson et al., 2005), which contains 7,629 words rated for positive or negative sentiment, to obtain a score based on the difference between positive and negative words in the post. Second, we use the NRC Hashtag Sentiment Lexicon (Mohammad et al., 2013), which obtained the best performance on the SemEval Twitter Sentiment Analysis tasks. 3

Table 4: Prediction results for valence and arousal of posts reported in Pearson correlation on 10-fold cross-validation for the BOW model.

Our method achieves very high correlations with the target score. Arousal is easier to predict, reaching r = 0.85 correlation between predicted and rater score. ANEW obtains significant correlations with both of our ratings, but these are significantly lower than those of our model. The extended list of affective norms obtains, perhaps surprisingly, lower correlation for valence, but stronger correlation with arousal than ANEW. For valence, both sentiment analysis lexicons provide better performance than the affective norms lexicons, albeit lower than our model trained on parts of the same data set. The performance improvement is most likely driven by the domain of the data set. While our method is trained on held-out data from the same domain in a cross-validation setup, the other methods suffer from a lack of adaptation to this domain. The NRC lexicon, trained for predicting sentiment on Twitter, obtains the highest performance of the established models, due to the fact that it is trained on a more similar domain. The lower performance of the existing models can also be explained by the fact that they predict a score used for classification into positive vs. negative, while our target score represents the strength of the positive or negative expression. Moreover, the affective norms scores are handcrafted dictionaries where the weights assigned to words are derived in isolation of context, with no adaptation to new words, spellings, and the language use found on Facebook.
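The lexicon baselines reduce to simple per-word lookups. The sketch below illustrates the two scoring schemes described here (an ANEW-style score from per-word norms, and an MPQA-style positive-minus-negative count), with the lexicons represented as plain dictionaries; the toy entries are invented for illustration.

```python
def norm_score(tokens, norms, default=None):
    """ANEW-style score: mean of per-word valence (or arousal) ratings."""
    vals = [norms[t] for t in tokens if t in norms]
    return sum(vals) / len(vals) if vals else default

def polarity_score(tokens, pos_words, neg_words):
    """MPQA-style score: positive minus negative word count."""
    return (sum(t in pos_words for t in tokens)
            - sum(t in neg_words for t in tokens))

# Toy example (real lexicons rate thousands of words):
# anew_valence = {"happy": 8.2, "sad": 1.6}
# norm_score("i am happy today".split(), anew_valence)  # -> 8.2
```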
Qualitative Analysis
In this section we highlight the most important unigram features for each dimension, as well as the qualitative differences between the two dimensions of valence and arousal. To this end, we show the words with the highest univariate Pearson correlation with either of the two dimensions in Table 5. Each score is represented by the mean of the two ratings. The results show that both dimensions have similar top features as well as distinct ones. Tokens such as '!', 'Happy', 'Birthday', 'Thanks', 'Wishes' are indicative of both positive valence and high arousal, while tokens like 'Bored' and '...' are indicative of both negative valence and low arousal. We notice, however, tokens that are only indicative of positive valence ('Wonderful', 'Love'), high arousal ('Sunday', 'Yay'), negative valence ('Why', 'Stupid') or low arousal ('Life', 'Every', 'People'). The question mark is correlated with negative valence, together with the word 'Why', showing that questions on Facebook are usually negative in valence. Also in terms of punctuation, positive valence and high arousal are expressed through exclamation marks, while negative valence and especially low arousal are expressed through repeated periods. This behavior is specific to Social Media and is one that standard emotion lexicons usually do not capture.
Emoticons also exhibit an interesting pattern across the two dimensions. The smiley :) is the second most correlated feature with valence, but is not in the top 10 for arousal. Similarly, the frown emoticons (:(, :'() are amongst the top 10 features correlated with negative valence, but have no relationship with arousal. The only emoticon correlated highly with low arousal is the undecided emoticon (:/ ).
Conclusion
In this work, we introduced a new corpus of Social Media posts mapped to the circumplex model of affect. Each post is annotated on two independent nine point scales of valence and arousal by two annotators with a background in psychology, who were calibrated before rating the statuses. We described our annotation process and reviewed the annotation guidelines. In total, we annotated 2895 Facebook posts, discarding the un-ratable ones. The corpus and our valence and arousal bag-of-words prediction models are publicly available.
The results of the annotations have very high agreement. A linear regression model using a bag of words representation trained on this data achieves high correlations with the outcome annotations, especially when predicting arousal. Standard sentiment analysis lexicons predicted both dimensions with lower accuracies.
Our system can be further improved by leveraging the vast amount of available data for Twitter sentiment analysis. We consider this model extremely useful for computational social science research that aims to measure individual user valence and arousal, their relationship to demographic traits, and their changes over time or in relation to certain life events.
Table 1: Example of posts annotated with average valence (V) and arousal (A) ratings.

Figure 1: Histograms of average rating scores.
Table 2: Individual rater mean and standard deviation and inter-annotator correlation (IA Corr).
Table 3: Correlation with arousal and mean arousal values for different posts grouped by valence.
Table 5: Words most correlated positively and negatively with the two dimensions.
http://mypersonality.org/wiki/doku.php?id=download_databases
Available at http://wwbp.org/data.html
https://www.cs.york.ac.uk/semeval-2013/task2/
Acknowledgements

The authors acknowledge the support of the Templeton Religion Trust, grant TRT-0048.
References

Margaret Bradley and Peter Lang. 1999. Affective Norms for English Words (ANEW): Stimuli, Instruction Manual, and Affective Ratings. Technical report.
Samuel Brody and Noemie Elhadad. 2010. An Unsupervised Aspect-Sentiment Model for Online Reviews. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 804-812.
Rafael Calvo and Sidney D'Mello. 2010. Affect Detection: An Interdisciplinary Review of Models, Methods, and their Applications. IEEE Transactions on Affective Computing, 1(1):18-37.
Paul Ekman. 1992. An Argument for Basic Emotions. Cognition & Emotion, 6(3-4):169-200.
Maria Gendron and Lisa Feldman Barrett. 2009. Reconstructing the Past: A Century of Ideas about Emotion in Psychology. Emotion Review, 1(4):316-339.
Dirk Hovy. 2015. Demographic Factors Improve Classification Performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, ACL, pages 752-762.
Margaret L Kern, Johannes C Eichstaedt, H Andrew Schwartz, Greg Park, Lyle H Ungar, David J Stillwell, Michal Kosinski, Lukasz Dziurzynski, and Martin EP Seligman. 2014. From "sooo excited!!!" to "so proud": Using language to study development. Developmental Psychology, 50:178-188.
Michal Kosinski, David Stillwell, and Thore Graepel. 2013. Private Traits and Attributes are Predictable from Digital Records of Human Behavior. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 110(15):5802-5805.
Mara Mather and Laura L Carstensen. 2005. Aging and Motivated Cognition: The Positivity Effect in Attention and Memory. Trends in Cognitive Sciences, 9(10):496-502.
Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets. In Proceedings of the 7th International Workshop on Semantic Evaluation, SemEval, pages 321-327.
Bo Pang and Lillian Lee. 2008. Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.
Jonathan Posner, James A Russell, and Bradley S Peterson. 2005. The Circumplex Model of Affect: An Integrative Approach to Affective Neuroscience, Cognitive Development, and Psychopathology. Development and Psychopathology, 17(3):715-734.
Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. SemEval-2015 Task 10: Sentiment Analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval, pages 451-463.
James A. Russell. 1980. A Circumplex Model of Affect. Journal of Personality and Social Psychology, 39(6):1161-1178.
Carlo Strapparava and Rada Mihalcea. 2007. SemEval-2007 Task 14: Affective Text. In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval, pages 70-74.
Carlo Strapparava and Rada Mihalcea. 2008. Learning to Identify Emotions in Text. In Proceedings of the 2008 ACM Symposium on Applied Computing, SAC, pages 1556-1560.
Carlo Strapparava and Alessandro Valitutti. 2004. WordNet Affect: an affective extension of WordNet. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, volume 4 of LREC, pages 1083-1086.
Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text. Journal of the American Society for Information Science and Technology, 61(12):2544-2558.
Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. 2012. Sentiment Strength Detection for the Social Web. Journal of the American Society for Information Science and Technology, 63(1):163-173.
Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring Demographic Language Variations to Improve Multilingual Sentiment Analysis in Social Media. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1815-1827.
Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of Valence, Arousal, and Dominance for 13,915 English Lemmas. Behavior Research Methods, 45(4):1191-1207.
Stephen R Wester, David L Vogel, Page K Pressly, and Martin Heesacker. 2002. Sex Differences in Emotion: A Critical Review of the Literature and Implications for Counseling Psychology. The Counseling Psychologist, 30(4):630-652.
Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-level Sentiment Analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 347-354. |
7,305,269 | Encoding and Acquiring Meanings for Figurative Phrases * | The Task Domain. Here we address the problem of mapping phrase meanings into their conceptual representations. Figurative phrases are pervasive in human communication, yet they are difficult to explain theoretically. In fact, the ability to handle idiosyncratic behavior of phrases should be a criterion for any theory of lexical representation. Due to the huge number of such phrases in the English language, phrase representation must be amenable to parsing, generation, and also to learning. In this paper we demonstrate a semantic representation which facilitates, for a wide variety of phrases, both learning and parsing. | [] | Encoding and Acquiring Meanings for Figurative Phrases
Michael G. Dyer
Uri Zernik
Artificial Intelligence Laboratory, Computer Science Department, 3531 Boelter Hall, University of California, Los Angeles, California 90024
Encoding and Acquiring Meanings for Figurative Phrases
The Task Domain

Here we address the problem of mapping phrase meanings into their conceptual representations. Figurative phrases are pervasive in human communication, yet they are difficult to explain theoretically. In fact, the ability to handle idiosyncratic behavior of phrases should be a criterion for any theory of lexical representation. Due to the huge number of such phrases in the English language, phrase representation must be amenable to parsing, generation, and also to learning. In this paper we demonstrate a semantic representation which facilitates, for a wide variety of phrases, both learning and parsing.
Introduction
The phrasal approach to language processing [Becker75, Pawley83, Fillmore86] emphasizes the role of the lexicon as a knowledge source. Rather than maintaining a single generic lexical entry for each word, e.g.: take, the lexicon contains many phrases, e.g.: take over, take it or leave it, take it up with, take it for granted, etc. Although this approach proves effective in parsing and in generation [Wilensky84], there are three problems which require further investigation. First, phrase interaction: the lexicon provides representation for single phrases, such as take to task and make up one's mind. Yet it is required to analyze complex clauses such as he made up his mind to take her to task. The problem lies with the way the meanings of the two phrases interact to form the compound meaning. Second, phrase ambiguity: phrasal parsing [Zernik86] shifts the task from single-word selection to the selection of entire lexical phrases. When a set of lexical phrases appear syntactically equivalent, i.e.: he ran into a friend, he ran into a 1986 Mercedes, he ran into the store, and he ran into trouble again, disambiguation must be performed by semantic means. The conditions which facilitate phrase discrimination reside within each lexical entry itself. Third, phrase idiosyncrasy: the meaning representation of phrases such as lay down the law vs. put one's foot down must distinguish the special use of each phrase. This paper is concerned with the representation of phrase meanings and the process of acquiring these meanings from examples in context. * This research was supported in part by a grant from the ITA Foundation.
Consider the figurative phrases in the sentences below, as they are parsed by the program RINA [Zernik85a].
S1:
The Democrats in the house carried the water for Reagan's tax-reform bill.**
S2:
The famous mobster evaded prosecution for years. Finally, they threw the book at him for tax evasion.
Depending on the contents of the given lexicon, the program may interpret these sentences in one of two ways. On the one hand, assuming that the meaning of a phrase exists in the lexicon, the program applies that meaning in the comprehension of the sentence. In S1, the program understands that the Democratic representatives did the "dirty" work in passing the bill for Reagan. On the other hand, if the figurative phrase does not exist in the lexicon, an additional task is performed: the program must figure out the meaning of the new phrase, using existing knowledge: First, the meanings given for the single words carry and water are processed literally. Second, the context which exists prior to the application of the phrase, provides a hypothesis for the formation of the phrase meaning. A dialog with RINA proceeds as follows:
RINA: They moved water?
User: No. The Democrats carried the water for Reagan.
RINA: They helped him pass the bill?
Thus, RINA detects the metaphor underlying the phrase, and using the context, it learns that carry the water means helping another person do a hard job. Consider encounters with three other phrases:

Jenny wanted to go punk but her father
S3: laid down the law.
S4: put his foot down.
S5: read her the riot act.
In all these cases, it is understood from the context that Jenny's father objected to her plan of going punk (aided by the word but which suggests that something went wrong with Jenny's goals). However, what is the meaning of each one of the phrases, and in particular do all these phrases convey identical concepts?
** This sentence was recorded off the ABC television program Nightline, December 12, 1985.
The Issues
In encoding meanings of figurative phrases, we must address the following issues.
Underlying Knowledge
What is the knowledge required in order to encode the phrase throw the book? Clearly, this knowledge includes the situation and the events that take place in court, namely the judge punishing the defendant.
The phrase carry the water, for example, requires two kinds of knowledge: (a) Knowledge about the act of carrying water which can support the analysis of the phrase metaphor. (b) Knowledge about general plans and goals, and the way one person agrees to serve as an agent in the execution of the plans of another person. This knowledge supports the analysis of the context.
While the phrases above could be denoted in terms of plans and goals, other phrases, i.e.: rub one's nose in it, climb the walls, and have a chip on one's shoulder require knowledge about emotions, such as embarrassment and frustration. Unless the program maintains knowledge about resentment, the phrase have a chip on the shoulder, for example, cannot be represented. Thus, a variety of knowledge structures take place in encoding figurative phrases.
Representing Phrase Meanings and Connotations
The appearance of each phrase carries certain implications. For example, John put his foot down implies that John refused a request, and on the other hand, John read the riot act implies that he reacted angrily about a certain event in the past. John gave Mary a hard time implies that he refused to cooperate, and argued with Mary since he was annoyed, while John laid down the law implies that John imposed his authority in a discussion. The representation of each phrase must account for such implications.
Three different phrases in sentences S3-S5 are applied in the same context. However, not every phrase may be applied in every context. For example, consider the context established by this paragraph:
S6:
Usually, Mary put up with her husband's cooking, but when he served her cold potatoes for breakfast, she put her foot down.
Could the phrase in this sentence be replaced by the other two phrases: (a) lay down the law, or (b) read the riot act? While understandable, these two phrases are not appropriate in that context. The sentence she read him the riot act does not make sense in the context of debating food taste. The sentence she laid down the law does not make as much sense since there is no argument between individuals with non-equal authority. Thus, there are conditions for the applicability of each lexical phrase in various contexts. These conditions support phrase disambiguation, and must be included as part of a phrase meaning.
Phrase Acquisition
Phrase meanings are learned from examples given in context. Suppose the structure and meaning of put one's foot down is acquired through the analysis of the following sentences:
S6:
Usually, Mary put up with her husband's cooking, but when he served her cold potatoes for breakfast, she put her foot down.
S7:
Jenny was dating a new boyfriend and started to show up after midnight. When she came at 2am on a weekday, her father put his foot down: no more late dates.
S8:
From time to time I took money from John, and I did not always remember to give it back to him. He put his foot down yesterday when I asked him for a quarter.
Since each example contains many concepts, both appropriate and inappropriate, the appropriate concepts must be identified and selected. Furthermore, although each example provides only a specific episode, the ultimate meaning must be generalized to encompass further episodes.
Literal Interpretation
Single-word senses (e.g.: the sense of the particle into in run into another car), as well as entire metaphoric actions (e.g.: carry the water in the Democratic representatives carried the water for Reagan's tax-reform bill) take part in forming the meaning of unknown figurative phrases. Can the meaning of a phrase be acquired in spite of the fact that its original metaphor is unknown, as is the case with read the riot act (what act exactly?) or carry the water (carry what water)?
The Program
The program RINA [Zernik85b] is designed to parse sentences which include figurative phrases. When the meaning of a phrase is given, that meaning is used in forming the concept of the sentence. However, when the phrase is unknown, the figurative phrase should be acquired from the context. The program consists of three components: phrasal parser, phrasal lexicon, and phrasal acquisition module.
Phrasal Parser
A lexical entry, a phrase, is a triple associating a linguistic pattern with its concept and a situation. A clause in the input text is parsed in three steps:
(1) Matching the phrase pattern against the clause in the text.
(2) Validating in the context the relations specified by the phrase situation.
(3) If both (1) and (2) are successful then instantiating the phrase concept using variable bindings computed in (1) and (2).
For example, consider the sentence:
S9: Fred wanted to marry Sheila, but she ducked the issue for years. Finally he put her on the spot.
The figurative phrase is parsed relative to the context established by the first sentence. Assume that the lexicon contains a single phrase, described informally as:
phrase
  pattern:   Person1 put Person2 on the spot
  situation: Person2 avoids making a certain tough decision
  concept:   Person1 prompts Person2 to make that decision
The steps in parsing the clause using this phrase are:
(1) The pattern is matched successfully against the text. Consequently, Person1 and Person2 are bound to Fred and Sheila respectively.
(2) The situation associated with the pattern is validated in the context. After reading the first sentence, the context contains two concepts: (a) Fred wants to marry Sheila, and (b) she avoids a decision. The situation matches the input.
(3) Since both (1) and (2) are successful, the pattern itself is instantiated, adding to the context:
Fred prompted Sheila to make up her mind.
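Read as pseudocode, these three steps can be made concrete. The sketch below is illustrative only; RINA itself is built on GATE with unification rather than regular expressions. It matches a pattern containing ?-variables, checks the instantiated situation against context assertions, and instantiates the concept under the resulting bindings.

```python
import re

# A lexical phrase is a (pattern, situation, concept) triple.
PHRASE = {
    "pattern":   "?x put ?y on the spot",
    "situation": "?y avoids decision",
    "concept":   "?x prompts ?y to decide",
}

def match_pattern(pattern, clause):
    """Step (1): match the phrase pattern, binding ?-variables to words."""
    regex = "^" + re.sub(r"\?(\w+)", r"(?P<\1>\\w+)", pattern) + "$"
    m = re.match(regex, clause)
    return m.groupdict() if m else None

def substitute(template, bindings):
    """Fill ?-variables in a template with their bound values."""
    return re.sub(r"\?(\w+)",
                  lambda m: bindings.get(m.group(1), m.group(0)), template)

def parse(clause, context):
    bindings = match_pattern(PHRASE["pattern"], clause)
    if bindings is None:
        return None                       # step (1) failed: pattern mismatch
    if substitute(PHRASE["situation"], bindings) not in context:
        return None                       # step (2) failed: situation absent
    return substitute(PHRASE["concept"], bindings)   # step (3): instantiate

# parse("Fred put Sheila on the spot", {"Sheila avoids decision"})
# -> "Fred prompts Sheila to decide"
```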
Phrase situation, distinguished from phrase concept, is introduced in our representation, since it helps solve three problems:
(a) in disambiguation it provides a discrimination condition for phrase selection,
(b) in generation it determines if the phrase is applicable, and
(c) in acquisition it allows the incorporation of the input context as part of the phrase.
Phrasal Lexicon
RINA uses a declarative phrasal lexicon which is implemented through GATE [Mueller84] using unification [Kay79] as the grammatical mechanism. Below are some sample phrasal patterns.
P1: ?x <lay down> <the law>
P2: ?x throw <the book> <at ?y>
These patterns actually stand for the slot fillers given below:
P1: (subject ?x (class person))
    (verb (root lay) (modifier down))
    (object (determiner the) (noun law))
P2: (subject ?x (class person))
    (verb (root throw))
    (object ?z (marker at) (class person))
    (object (determiner the) (noun book))
This notation is described in greater detail in [Zernik85b].
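The slot-filler notation transcribes directly into nested structures. A possible rendering in Python, for illustration only:

```python
# P1: ?x <lay down> <the law>
P1 = {"subject": {"var": "?x", "class": "person"},
      "verb":    {"root": "lay", "modifier": "down"},
      "objects": [{"determiner": "the", "noun": "law"}]}

# P2: ?x throw <the book> <at ?y>
P2 = {"subject": {"var": "?x", "class": "person"},
      "verb":    {"root": "throw"},
      "objects": [{"var": "?y", "marker": "at", "class": "person"},
                  {"determiner": "the", "noun": "book"}]}
```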
Phrase Acquisition through Generalization and Refinement
Phrases are acquired in a process of hypothesis formation and error correction. The program generates and refines hypotheses about both the linguistic pattern and the conceptual meaning of phrases. For example, in acquiring the phrase carry the water, RINA first uses the phrase already existing in the lexicon, but it is too general a pattern and does not make sense in the context:

?x carry:verb ?z:phys-obj <for ?y>

Clearly, such a syntactic error stems from a conceptual error. Once corrected, the hypothesis is:

?x carry:verb <the water> <for ?y>
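Schematically, the correction above amounts to freezing an over-general slot to the literal words observed in the input. A minimal sketch, with hypothetical names:

```python
def freeze_slot(pattern, slot, words):
    """Specialize an over-general hypothesis by fixing one slot to literal words."""
    refined = dict(pattern)          # copy so the old hypothesis is kept
    refined[slot] = {"literal": words}
    return refined

# hypothesis = {"subject": "?x", "verb": "carry",
#               "object": "?z:phys-obj", "pp": "for ?y"}
# freeze_slot(hypothesis, "object", "the water")
# -> object slot now matches only the literal words "the water"
```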
The meaning of a phrase is constructed by identifying salient features in the context. Such features are given in terms of scripts, relationships, plan/goal situations and emotions. For example, carry the water is given in terms of agency goal situation (?x executes a plan for ?x) on the background of rivalry relationship (?x and ?y are opponents). Only by detecting these elements in the context can the program learn the meaning of the phrase.
Conceptual Representation
The key for phrase acquisition is appropriate conceptual representation, which accounts for various aspects of phrase meanings.
Consider the phrase to throw the book in the following paragraph:
S2: The famous mobster avoided prosecution for years. Finally they threw the book at him for tax evasion.
We analyze here the components in the representation of this phrase.
Scripts
Basically, the figurative phrase depicts the trial script, given below:

(a) The prosecutor says his arguments to the judge.
(b) The defendant says his arguments to the judge.
(c) The judge determines the outcome: either (1) to punish the defendant, or (2) not to punish the defendant.

This script involves a Judge, a Defendant, and a Prosecutor, and it describes a sequence of events. Within the script, the phrase points to a single event, the decision to punish the defendant. However, this event presents only a rough approximation of the real meaning, which requires further refinement.
(a) The phrase may be applied in situations that are more general than the trial script itself. For example:
S10: When they caught him cheating in an exam for the third time, the dean of the school decided to throw the book at him.
Although the context does not contain the specific trial script, the social authority which relates the judge and the defendant exists also between the dean and John.
(b) The phrase in S2 asserts not only that the mobster was punished by the judge, but also that a certain prosecution strategy was applied against him.
Specific Plans and Goals
In order to accommodate such knowledge, scripts incorporate specific planning situations. For example, in prosecuting a person, there are three options, a basic rule and two deviations:
(a) Basically, for each law violation, assign a penalty as prescribed in the book.
(b) However, in order to loosen a prescribed penalty, mitigating circumstances may be taken into account.
(c) And on the other hand, in order to toughen a prescribed penalty, additional violations may be thrown in.
In S2 the phrase conveys the concept that the mobster is punished for tax evasion since they cannot prosecute him for his more serious crimes. It is the selection of this particular prosecution plan which is depicted by the phrase. The phrase representation is given below:

phrase
pattern: ?x:person throw:verb <the book> <at ?y:person>
situation: ($trial (prosecution ?x) (defendant ?y))
concept: (act (select-plan (actor prosecution)
              (plan (ulterior-crime (crime ?c) (crime-of ?y)))))
         (result (thwart-goal (goal ?g) (goal-of ?y)))
where ulterior-crime is the third prosecution plan above.
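Transcribed into a plain data literal (our rendering, following the pattern/situation/concept triple introduced earlier, not the program's internal format), the entry reads:

throw_the_book = {
    "pattern": "?x:person throw:verb <the book> <at ?y:person>",
    "situation": ("$trial", ("prosecution", "?x"), ("defendant", "?y")),
    "concept": (
        ("act", ("select-plan",
                 ("actor", "prosecution"),
                 ("plan", ("ulterior-crime",
                           ("crime", "?c"), ("crime-of", "?y"))))),
        ("result", ("thwart-goal", ("goal", "?g"), ("goal-of", "?y"))),
    ),
}

Keeping situation and concept as separate fields is what lets disambiguation, generation, and acquisition each consult only the part they need.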
Relationships
The authority relationship [Schank78, Carbonell79] is pervasive in phrase meanings, appearing in many domains: judge-defendant, teacher-student, employer-employee, parent-child, etc. The existence of authority creates certain expectations: if X presents an authority for Y, then:
(a) X issues rules which Y has to follow.
(b) Y is expected to follow these rules.
(c) Y is expected to support goals of X.
(d) X may punish Y if Y violates the rules in (a).
(e) X cannot dictate actions of Y; X can only appeal to Y to act in a certain way.
(f) X can delegate his authority to Z, which becomes an authority for Y.
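These expectations can also be kept as data attached to the authority relation, so that a concrete pair instantiates them on demand. The encoding below is our sketch, not a structure taken from RINA:

# Expectation templates for (authority (high X) (low Y));
# Z stands for an optional delegate.
AUTHORITY_EXPECTATIONS = [
    "X issues rules which Y has to follow",
    "Y is expected to follow these rules",
    "Y is expected to support goals of X",
    "X may punish Y if Y violates the rules",
    "X cannot dictate actions of Y, only appeal to Y",
    "X can delegate his authority to Z, who becomes an authority for Y",
]

def expectations(high, low):
    """Instantiate the templates for a concrete authority pair."""
    return [rule.replace("X", high).replace("Y", low)
            for rule in AUTHORITY_EXPECTATIONS]

print(expectations("the-dean", "John")[0])
# the-dean issues rules which John has to follow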
In S10, the dean of the school presents an authority for John. The representation of the phrase take it up with, for example, is given below:

phrase
pattern: ?x:person <take:verb up> ?z:problem <with ?y:person>
situation: (authority (high ?y) (low ?x))
concept: (act (auth-appeal (actor ?x) (to ?y) (object ?z))
          (purpose (act (auth-decree (actor ?y) (to ?x) (object ?z)))
                   (result (support-plan (plan-of ?x)))))
The underlying situation is an authority relationship between X and Y. The phrase implies that X appeals to Y so that Y will act in favor of X.
Abstract Planning Situations
General planning situations, such as agency, agreement, goal-conflict and goal-coincidence [Wilensky83], are addressed in the examples below.

S1: The Democrats in the house carried the water for Reagan in his tax-reform bill.
The phrase in S1 is described using both rivalry and agency. In contrast to expectations stemming from rivalry, the actor serves as an agent in executing his opponent's plans. The representation of the phrase is given below:

phrase
pattern: ?x:person carry:verb <the water ?z:plan> <for ?y:person>
situation: (rivalry (actor1 ?x) (actor2 ?y))
concept: (agency (agent ?x) (plan ?z) (plan-of ?y))

Many other phrases describe situations at the abstract goal/plan level. Consider S14:

S14: I planned to do my CS20 project with Fred. I backed out of it when I heard that he had flunked CS20 twice in the past.
Back out of depicts an agreed plan which is cancelled by one party, in contradiction to expectations stemming from the agreement.

S15: John's strongest feature in arguing is his ability to fall back on his quick wit.
Fall back on introduces a recovery of a goal through an alternative plan, in spite of a failure of the originally selected plan.
S16: My standing in the tennis club deteriorated since I was bogged down with CS20 assignments the whole summer.
In bog down, a goal competition over the actor's time exists between a major goal (tennis) and a minor goal (CS20). The major goal fails due to the efforts invested in the minor goal.
Emotions and Attitudes
In text comprehension, emotions [Dyer83, Mueller85] and attitudes are accounted for in two ways: (a) they are generated by goal/planning situations, such as goal failure and goal achievement, and (b) they generate goals and influence plan selection. Some examples of phrases involving emotions are given below.

Humiliation is experienced by a person when other people achieve a goal which he fails to achieve. The phrase in S17 depicts humiliation, which is caused when John reminds the speaker of his goal situation:

S17: I failed my CS20 class. My friend John rubbed my nose in it by telling me that he got an A+.
Resentment is experienced by a person when a certain goal of his is not being satisfied. This goal situation causes the execution of plans by that person to deteriorate. The phrase in S18 depicts such an attitude:

S18: Since clients started to complain about John, his boss asked him if he had a chip on his shoulder.
Embarrassment is experienced by a person when his plan failure is revealed to other people. The phrase in S19 depicts embarrassment, which is caused when a person is prompted to make up his mind between several bad options:

S19: Ted Koppel put his guest on the spot when he asked him if he was ready to denounce apartheid in South Africa.
In all the examples above, it is not the emotion itself which is conveyed by the phrase. Rather, the concept conveys a certain goal situation which causes that emotion. For example, in S17 (rub one's nose) a person does something which causes the speaker to experience humiliation.
Learning Phrase Meanings
Consider the situation when a new phrase is first encountered by the program:

User: The Democrats in the house carried the water for Reagan's tax-reform bill.
RINA: They moved water?
User: No. They carried the water for him.
RINA: They helped him pass the bill.

Three sources take part in forming the new concept: (a) the linguistic clues, (b) the context, and (c) the metaphor.
The Context
The context prior to reading the phrase includes two concepts: (a) Reagan has a goal of passing a law. (b) The Democrats are Reagan's rivals; they are expected to thwart his goals, his legislation in particular. These concepts provide the phrase situation, which specifies the context required for the application of the phrase.
The Literal Interpretation
The literal interpretation of carried the water as "moved water" does not make sense given the goal/plan situation in the context. As a result, RINA generates the literal interpretation and awaits confirmation from the user. If the user repeats the utterance or generates a negation, then RINA generates a number of utterances, based on the current context, in hypothesizing a novel phrase interpretation.
The Metaphor
Since the action of moving water does not make sense literally, it is examined at the level of plans and goals: moving water from location A to B is a low-level plan which supports other high-level plans (i.e., using the water in location B). Thus, at the goal/plan level, the phrase is perceived as: "they executed a low-level plan as his agents" (the agency is suggested by the prepositional phrase for his tax-reform bill; i.e., they did an act for his goal). This is taken as the phrase concept.
The Constructed Meaning
The new phrase contains three parts:
(a) The phrase pattern is extracted from the example sentence:

?x carry:verb <the water> <for ?y>

(b) The phrase situation is extracted from the underlying context:

(rivalry (actor1 ?x) (actor2 ?y))

(c) The phrase concept is taken from the metaphor:

(plan-agency (actor ?x) (plan ?z) (plan-of ?y))

Thus, the phrase means that in a rivalry situation, an opponent served as an agent in carrying out a plan.
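Putting the three sources together, the assembly of the new lexical entry can be sketched as a single constructor; the names and nested structures are our illustration of the process described above:

def build_phrase(example_pattern, context_situation, metaphor_concept):
    """Assemble a lexical entry from the three acquisition sources."""
    return {
        "pattern": example_pattern,      # (a) from the example sentence
        "situation": context_situation,  # (b) from the underlying context
        "concept": metaphor_concept,     # (c) from the metaphor
    }

carry_the_water = build_phrase(
    example_pattern="?x carry:verb <the water> <for ?y>",
    context_situation=("rivalry", ("actor1", "?x"), ("actor2", "?y")),
    metaphor_concept=("plan-agency", ("actor", "?x"),
                      ("plan", "?z"), ("plan-of", "?y")),
)
print(carry_the_water["concept"])
# ('plan-agency', ('actor', '?x'), ('plan', '?z'), ('plan-of', '?y'))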
Future Work and Conclusions
The phrasal approach elevates language processing from interaction among single words to interaction among entire phrases. Although it increases substantially the size of the lexicon, this chunking simplifies the complexity of parsing since clauses in the text include fewer modules which interact in fewer ways. The phrasal approach does reduce the power of the program in handling non-standard uses of phrases. For example, consider the situation where a mobster kidnaps a judge, points the gun at him, and says: No funny book you could throw at me now would do you any good!*. Our current parser would certainly fail in matching the syntactic pattern and inferring the ironic meaning. The analysis of such a sentence would require that the program associate the two existing phrases, the general throw something and the figurative throw the book, and make inferences about the pun meant by the mobster. Such examples show that it is difficult to capture human behavior through a single parsing paradigm.
* This example is attributed to an anonymous referee.
Parsing text is a futile task unless it addresses the ultimate objective of language processing, namely mapping text into conceptual representation. To this end, we have shown the structure of a lexicon which provides the association of syntactic patterns with their semantic concepts. However, due to the huge size of the English language, not all phrases can be given at the outset. A parsing program is required to handle unknown phrases as they are encountered in the text. In RINA we have shown how new phrases can be acquired from examples in context.
Phrase acquisition from context raises questions regarding the volume of knowledge required for language processing. A phrase such as throw the book requires highly specialized knowledge involving sentencing strategies in court. Now, this is only one figurative phrase out of many. Thus, in order to handle figurative phrases in general, a program must ultimately have access to all the knowledge of a socially mature person. Fortunately, learning makes this problem more tractable. In the process of phrase acquisition, phrase meaning is elevated from the specific domain in which the phrase has originated to a level of abstract goal situations. For example, once throw the book is understood as the act of authority-decree, then knowledge of the trial situation no longer needs to be accessed. The phrase is well comprehended in other domains: my boss threw the book at me, his parents threw the book at him, her teacher threw the book at her, etc. At that level, a finite number of goal situations can support the application of figurative phrases across a very large number of domains.
References
[Becker75] Becker, Joseph D., "The Phrasal Lexicon," pp. 70-73 in Proceedings Interdisciplinary Workshop on Theoretical Issues in Natural Language Processing, Cambridge, Massachusetts (June 1975).

[Carbonell79] Carbonell, J. G., "Subjective Understanding: Computer Models of Belief Systems," TR-150, Yale University, New Haven CT (1979). Ph.D. Dissertation.

[Dyer83] Dyer, Michael G., In-Depth Understanding: A Computer Model of Integrated Processing for Narrative Comprehension, MIT Press, Cambridge, MA (1983).

[Fillmore86] Fillmore, C., P. Kay, and M. O'Connor, Regularity and Idiomaticity in Grammatical Constructions: The Case of Let Alone, UC Berkeley, Department of Linguistics (1986). Unpublished manuscript.

[Kay79] Kay, Martin, "Functional Grammar," pp. 142-158 in Proceedings 5th Annual Meeting of the Berkeley Linguistic Society, Berkeley, California (1979).

[Mueller84] Mueller, E. and U. Zernik, "GATE Reference Manual," UCLA-AI-84-5, UCLA Computer Science Department, AI Lab (1984).

[Mueller85] Mueller, E. and M. Dyer, "Daydreaming in Humans and Computers," in Proceedings 9th International Joint Conference on Artificial Intelligence, Los Angeles CA (1985).

[Pawley83] Pawley, A. and H. Syder, "Two Puzzles for Linguistic Theory: Nativelike Selection and Nativelike Fluency," in Language and Communication, ed. J. C. Richards and R. W. Schmidt, Longman, London (1983).

[Schank78] Schank, R. and J. Carbonell, "The Gettysburg Address: Representing Social and Political Acts," TR-127, Yale University, Department of Computer Science, New Haven CT (1978).

[Wilensky83] Wilensky, Robert, Planning and Understanding, Addison-Wesley, Massachusetts (1983).

[Wilensky84] Wilensky, R., Y. Arens, and D. Chin, "Talking to UNIX in English: an Overview of UC," Communications of the ACM 27(6), pp. 574-593 (June 1984).

[Zernik85a] Zernik, Uri and Michael G. Dyer, "Learning Phrases in Context," in Proceedings of the 3rd Machine Learning Workshop, New Brunswick NJ (June 1985).

[Zernik85b] Zernik, Uri and Michael G. Dyer, "Towards a Self-Extending Phrasal Lexicon," in Proceedings 23rd Annual Meeting of the Association for Computational Linguistics, Chicago IL (July 1985).

[Zernik86] Zernik, U. and M. G. Dyer, "Disambiguation and Acquisition using the Phrasal Lexicon," in Proceedings 11th International Conference on Computational Linguistics, Bonn, Germany (1986).